syn
This view is limited to 50 files because it contains too many changes.
- title_10K/test_title_short_2405.00715v1.json +18 -0
- title_10K/test_title_short_2405.00716v1.json +17 -0
- title_10K/test_title_short_2405.00718v1.json +17 -0
- title_10K/test_title_short_2405.00722v1.json +17 -0
- title_10K/test_title_short_2405.00738v1.json +18 -0
- title_10K/test_title_short_2405.00739v1.json +18 -0
- title_10K/test_title_short_2405.00747v1.json +17 -0
- title_10K/test_title_short_2405.00772v1.json +16 -0
- title_10K/test_title_short_2405.00791v1.json +18 -0
- title_10K/test_title_short_2405.00801v1.json +16 -0
- title_10K/test_title_short_2405.00824v1.json +17 -0
- title_10K/test_title_short_2405.00843v1.json +16 -0
- title_10K/test_title_short_2405.00853v1.json +17 -0
- title_10K/test_title_short_2405.00864v1.json +16 -0
- title_10K/test_title_short_2405.00899v1.json +19 -0
- title_10K/test_title_short_2405.00902v1.json +18 -0
- title_10K/test_title_short_2405.00954v1.json +16 -0
- title_10K/test_title_short_2405.00957v1.json +18 -0
- title_10K/test_title_short_2405.00958v1.json +20 -0
- title_10K/test_title_short_2405.00966v1.json +19 -0
- title_10K/test_title_short_2405.00970v1.json +18 -0
- title_10K/test_title_short_2405.00972v1.json +20 -0
- title_10K/test_title_short_2405.00977v1.json +17 -0
- title_10K/test_title_short_2405.00978v1.json +17 -0
- title_10K/test_title_short_2405.00981v1.json +17 -0
- title_10K/test_title_short_2405.00982v1.json +17 -0
- title_10K/test_title_short_2405.00987v1.json +16 -0
- title_10K/test_title_short_2405.00988v1.json +18 -0
- title_10K/test_title_short_2405.01008v2.json +16 -0
- title_10K/test_title_short_2405.01029v2.json +17 -0
- title_10K/test_title_short_2405.01035v1.json +18 -0
- title_10K/test_title_short_2405.01051v1.json +17 -0
- title_10K/test_title_short_2405.01063v1.json +18 -0
- title_10K/test_title_short_2405.01097v1.json +21 -0
- title_10K/test_title_short_2405.01102v1.json +17 -0
- title_10K/test_title_short_2405.01103v1.json +17 -0
- title_10K/test_title_short_2405.01116v1.json +16 -0
- title_10K/test_title_short_2405.01130v1.json +16 -0
- title_10K/test_title_short_2405.01143v1.json +16 -0
- title_10K/test_title_short_2405.01159v1.json +16 -0
- title_10K/test_title_short_2405.01175v1.json +17 -0
- title_10K/test_title_short_2405.01217v1.json +16 -0
- title_10K/test_title_short_2405.01229v1.json +20 -0
- title_10K/test_title_short_2405.01248v1.json +16 -0
- title_10K/test_title_short_2405.01266v1.json +17 -0
- title_10K/test_title_short_2405.01270v1.json +16 -0
- title_10K/test_title_short_2405.01280v1.json +16 -0
- title_10K/test_title_short_2405.01345v1.json +16 -0
- title_10K/test_title_short_2405.01350v1.json +17 -0
- title_10K/test_title_short_2405.01359v1.json +17 -0
title_10K/test_title_short_2405.00715v1.json
ADDED
@@ -0,0 +1,18 @@
{
"url": "http://arxiv.org/abs/2405.00715v1",
"title": "Towards Adapting Open-Source Large Language Models for Expert-Level Clinical Note Generation",
"abstract": "Large Language Models (LLMs) have shown promising capabilities in handling\nclinical text summarization tasks. In this study, we demonstrate that a small\nopen-source LLM can be effectively trained to generate high-quality clinical\nnotes from outpatient patient-doctor dialogues. We achieve this through a\ncomprehensive domain- and task-specific adaptation process for the LLaMA-2 13\nbillion parameter model. This process incorporates continued pre-training,\nsupervised fine-tuning, and reinforcement learning from both AI and human\nfeedback. We introduced an enhanced approach, termed DistillDirect, for\nperforming on-policy reinforcement learning with Gemini Pro serving as the\nteacher model. Our resulting model, LLaMA-Clinic, is capable of generating\nclinical notes that are comparable in quality to those authored by physicians.\nIn a blinded physician reader study, the majority (90.4%) of individual\nevaluations rated the notes generated by LLaMA-Clinic as \"acceptable\" or higher\nacross all three criteria: real-world readiness, completeness, and accuracy.\nNotably, in the more challenging \"Assessment and Plan\" section, LLaMA-Clinic\nscored higher (4.2/5) in real-world readiness compared to physician-authored\nnotes (4.1/5). Additionally, we identified caveats in public clinical note\ndatasets, such as ACI-BENCH. We highlight key considerations for future\nclinical note-generation tasks, emphasizing the importance of pre-defining a\nbest-practice note format. Overall, our research demonstrates the potential and\nfeasibility of training smaller, open-source LLMs to assist with clinical\ndocumentation, capitalizing on healthcare institutions' access to patient\nrecords and domain expertise. We have made our newly created synthetic clinic\ndialogue-note dataset and the physician feedback dataset publicly available to\nfoster future research in this field.",
"authors": "Hanyin Wang, Chufan Gao, Bolun Liu, Qiping Xu, Guleid Hussein, Mohamad El Labban, Kingsley Iheasirim, Hariprasad Korsapati, Jimeng Sun",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Towards Adapting Open-Source Large Language Models for Expert-Level Clinical Note Generation",
"main_content": "Introduction Recent breakthroughs in LLMs have transformed the landscape of natural language processing (NLP). Yet, we are only beginning to explore the use of LLMs in the medical field [1, 2]. Proprietary LLMs, such as GPT-4 and Med-PaLM, have demonstrated impressive capabilities in medical knowledge and Preprint. Under review. arXiv:2405.00715v1 [cs.CL] 25 Apr 2024 \fclinical NLP tasks [3\u20135]. However, most proprietary LLMs to date have limited flexibility for domainspecific fine-tuning, primarily due to restricted access to model weights. Additionally, proprietary LLMs raise several concerns pertinent to the healthcare sector, including HIPAA compliance, data security, cost, and transparency of training data [6\u201310]. The emergence of powerful open-source LLMs has opened up opportunities for domain-specific fine-tuning within the clinical field, yielding promising results [11\u201316]. For example, Meditron, a LLaMA-2 model pretrained on a vast corpus of curated medical literature, outperformed GPT-3.5 in the MedQA benchmark [16]. However, most research to date on open-source models has concentrated on medical knowledge injection rather than practical applications in real-world clinical workflow. In this work, we address a practical question faced by clinicians in their everyday routine: How can we best adapt an open-source LLM for the specific use case of clinical note generation? Clinical note documentation represents a significant burden for healthcare practitioners [17] and appears to be a natural application for LLMs, given their remarkable generative capabilities [18]. Recent research on LLMs for clinical text summarization found that LLM-generated outputs are preferred over human summaries for their completeness and accuracy [5]. The 2023 ACL ClinicalNLP and CLEFImage workshops explored the generation of clinical notes from patient-doctor conversations using the newly released Ambient Clinical Intelligence Benchmark (ACI-BENCH) dataset [19, 20]. The most notable results were achieved using GPT-4 along with few-shot in-context learning [21, 5]. However, these initial exploratory studies revealed significant limitations: (1) The evaluation relied solely on automatic metrics of lexical similarity to the \u201creference\u201d notes, such as ROUGE scores, which may not accurately reflect human preferences [22, 5]. (2) There was considerable variation in the quality, format, and style of \u201creference notes\u201d within ACI-BENCH, suggesting that a note similar to an ACI-BENCH reference might not necessarily be of high quality for real-world application. (3) Previous studies conducted only limited SFT of open-source LLMs using small training datasets, thereby not fully exploring their potential for domainand task-specific adaptation [19, 20, 5]. Additionally, the role of data augmentation and reinforcement learning remains unexplored. In this study, we revisit the task of outpatient note generation and hypothesize that, through optimal training strategies, an open-source LLM can achieve performance in clinical note generation that is comparable to that of physicians. We thoroughly evaluated techniques for domainand task-specific adaptation, ranging from continued pretraining and supervised fine-tuning (SFT) to reinforcement learning from AI and human feedback. 
We explored practical yet critical considerations for model training and deployment, drawing on our physician authors\u2019 expertise (such as the importance of defining a \u201cbest practice\u201d note format). Lastly, we conducted an expert evaluation of our model\u2019s outputs against those from state-of-the-art proprietary models and physician-generated notes. Our resulting model, LLaMA-Clinic, demonstrated the capacity to generate clinical notes that are comparable in quality to those authored by physicians. The majority (90.4%) of individual evaluations by reviewers rated the notes generated by LLaMA-Clinic as \u201cacceptable\u201d or higher across all three criteria: real-world readiness, completeness, and accuracy. Notably, in the more challenging \u201cAssessment and Plan\u201d section, LLaMA-Clinic scored higher (4.2/5) in real-world readiness compared to physician-authored notes (4.1/5). 2 Results 2.1 Problem Formation Given a recorded dialogue from a patient-doctor clinic encounter, we task LLMs to generate a high-quality outpatient note akin to one written by a clinician. This scenario is becoming increasingly prevalent due to the rising popularity of both in-person and virtual scribes, as well as automatic speech recognition devices in real-world settings [23]. Our research focuses solely on the generation of the \u201cSubjective\u201d and \u201cAssessment and Plan\u201d sections. This decision is informed by feedback from physician authors, who indicate that it would be unnatural or impractical during an actual clinical encounter to discuss all necessary information for the \u201cObjective\u201d part of a note, such as physical exam results. Moreover, much of the objective data, including lab and imaging results, are nowadays directly integrated into Electronic Medical Records (EMR). For each dialogue, two separate prompts are sent to the LLM to generate the \u201cSubjective\u201d and \u201cAssessment and Plan\u201d sections, respectively. 2 \f2.2 Experimental Design Model Selection: We selected Meta\u2019s LLaMA-2 model, with 13 billion parameters (LLaMA-2-13B), as our foundational model for training [24]. We conducted experiments using both the base and chat models of LLaMA-2. Gemini Pro by Google was chosen as the proprietary model for generating reference notes and acting as the teacher model [25]. Experiments Overview: We demonstrated the experiments pipeline in Figure 1. Initially, we undertook domain-specific adaptation of LLaMA-2-13B through continued pretraining on discharge summaries from MIMIC-IV. This was followed by task-specific fine-tuning with SFT through instruction tuning. Next, we conducted reinforcement learning from AI feedback (RLAIF) using DistillDirect, our enhanced approach to performing direct preference optimization (DPO) on a distilled dataset. We then selected the model that performed best after SFT and RLAIF, as evaluated by the ROUGE-1 score against reference notes, for reinforcement learning from human feedback (RLHF) through DPO. Finally, notes generated by our ultimate model, denoted as LLaMA-Clinic, were reviewed by a panel of physicians against notes authored by physicians and Gemini Pro, in a blinded evaluation. All training was performed using Low-Rank Adaptation (LoRA). Detailed experimental methodologies are provided in the Methods section. Additionally, we have chronicled our complete experimental process in Supplementary Method 1, offering readers a comprehensive view of this complex trial-and-error journey. 
Figure 1: Overview of study design. We conducted a comprehensive domainand task-specific adaptation process for the LLaMA-2-13B model. This process included continued pre-training, supervised fine-tuning, and reinforcement learning from AI and human feedback. Finally, we evaluated our model\u2019s outputs against those authored by physicians and Gemini Pro through a blinded expert evaluation. DPO: direct preference optimization. 2.3 Dataset and Preprocessing Dataset: We used three datasets for our experiments. \u2022 ACI-BENCH: ACI-BENCH represents the largest clinic dialogue-note dataset publicly available to date [23]. The dataset\u2019s dialogues were crafted by a team with medical expertise, and its clinical notes were initially generated using an automatic note generation system, then reviewed and revised by domain experts [23]. An observation from our study is the notable variation in the format, style, and quality of the \u201creference notes\u201d within ACI-BENCH, especially in the section of \u201cAssessment and Plan\u201d. While this diversity mirrors the reality of clinical practice, where different doctors may produce vastly different notes, it presents a challenge in using these notes as a \u201cgold standard\u201d for training an LLM to replicate. Consequently, we established a simple yet specific note format, recognized as \u201cbest practice\u201d by a panel of licensed internal medicine physicians, to standardize our training approach. For model training, we retained only the dialogue section from ACI-BENCH 3 \fand employed Gemini Pro to generate notes based on the \u201cbest practice\u201d format, serving as our reference notes. We demonstrated two examples of clinical notes before and after the change in Figure 2. The prompts to Gemini Pro for note generation are provided in Supplementary Method 3. Figure 2: Comparison of newly created reference notes with ACI-BENCH notes. We compared newly generated reference notes using Gemini Pro with the original reference notes from two cases in ACI-BENCH. The newly created reference notes adhere more closely to our defined \u201cbest practice\u201d format and contain more relevant medical information. \u2022 Dialogue-G: In our efforts to enhance model training through data augmentation, we created a synthetic dataset of clinical dialogue-note pairs using Gemini Pro, which we have named DialogueG. We began by compiling a set of transcribed outpatient notes from the publicly available synthetic MTSamples dataset [26, 27] and utilized Gemini Pro to transform these notes into dialogues. Subsequently, we used these dialogues as inputs for Gemini Pro once again, this time to generate clinical notes based on the \u201cbest practice\u201d format described above. We make the Dialogue-G dataset publicly available. The prompts to Gemini Pro for Dialogue-G creation are provided in Supplementary Method 2. \u2022 MIMIC-IV: MIMIC-IV encompasses 431,231 unique hospital admissions from 299,712 patients who were admitted to either the Intensive Care Unit or the Emergency Department of Beth Israel Deaconess Medical Center in Boston, Massachusetts [28]. We utilized discharge summaries from MIMIC-IV for continued pretraining. Notably, the \u201cbrief hospital course\u201d section of the discharge summaries is structurally akin to the \u201cassessment and plan\u201d section in outpatient notes. 
We compiled a subset of discharge summaries with only the \u201cbrief hospital course\u201d using methods detailed in [29], referred to as \u201cDischarge-short\u201d. We denoted the complete discharge summaries dataset as \u201cDischarge-long\u201d. Data Split: \u2022 For continued pretraining, we explored both the Discharge-long dataset (1.2 billion tokens) and the Discharge-short dataset (0.2 billion tokens). \u2022 We combined the training subsets from ACI-BENCH (dialogue n = 67) and Dialogue-G (dialogue n = 1291), then split this data equally for SFT and RLAIF, stratified by the data source. Note for each dialogue, there will be two separate reference notes for \u201cSubject\u201d and \u201cAssessment and Plan\u201d, respectively. 4 \f\u2022 For RLHF, we utilized dialogues from the training, task3, and taskC subsets of ACI-BENCH (dialogue n = 147) as input prompts, and sampled from our model\u2019s outputs for human preference labeling. We excluded data from Dialogue-G to ensure in-distribution training at this final stage of model development, given the observed perplexity of 2.79 for Dialogue-G compared to 5.62 from ACI-BENCH (as calculated using the LLaMA-2 chat model post-continued pretraining). \u2022 We examined results on the validation subset of ACI-BENCH throughout experiments. The final physician reader study was performed using the ACI-BENCH Test B subset (dialogue n = 40). 2.4 Analysis of Continued Pretraining We presented the training loss curve in Figure 3. Across all experiments, the training loss rapidly decreased after the initial few hundred steps, then leveled off, showing minimal improvement thereafter. The lowest training loss achieved with the Discharge-long dataset is approximately 0.9, whereas, with the Discharge-short dataset, it remained around 1.4. According to scaling laws, a lower training loss is generally expected to correlate with improved performance in downstream tasks [30, 31]. The trajectories of training loss were similar for both the chat and base models. When experimenting with various training strategies and hyperparameters, we frequently observed training loss spikes that were slow to recover, as shown in Supplementary Figure 1. This phenomenon is well-documented in LLM pretraining [32]. We proceeded with models that did not exhibit loss spikes for further studies (i.e. our SFT and RLAIF experiments). 2.5 Analysis of SFT and RLAIF As the primary objective of SFT and RLAIF is to align the output of LLaMA-2 with that of Gemini Pro, we evaluated the ROUGE-1 score, a measure of lexical similarity, against reference notes created by Gemini Pro. We reported the ROUGE-1 scores post-SFT and RLAIF, alongside those from the unmodified LLaMA-2, in Table 1. As anticipated, the vanilla chat model outperformed the base model out-of-box. However, continued pretraining with out-of-distribution corpora compromised the chat model\u2019s capacity to follow instructions. SFT significantly enhanced performance, particularly for the chat model over the base model. The model\u2019s performance was further boosted by RLAIF using DistillDirect. Notably, our training with DistillDirect frequently encountered instability\u2014a well-known challenge in reinforcement learning [33]. We have detailed our experiments, including ablation studies, to find a stable training setup in Supplementary Method 1.3. When comparing models pretrained with Discharge-long versus Discharge-short, the latter consistently scored higher, despite a higher training loss in the pretraining phase. 
The chat model pretrained with Discharge-short emerged as the top performer, and we selected this model checkpoint for the final RLHF stage. Interestingly, the vanilla LLaMA-13B models, directly subjected to SFT and RLAIF without prior continued pretraining, exhibited strong performances, including the highest score for \u201cSubjective\u201d. Qualitative analysis for a specific case is presented in Figure 4, aligning with the above quantitative findings. Continued pretraining effectively adopted the style and peculiarities from discharge summaries but at the expense of diminished instruction-following ability and increased hallucinations. The quality of outputs significantly improved post-SFT but remained overly verbose, while RLAIF effectively refined outputs to adhere to the format of reference notes, assisting in reducing hallucinations. Upon manually reviewing outputs from all model checkpoints post-RLAIF, our physician author noted that the \u201cSubjective\u201d sections were generally of high quality and nearly indistinguishable. However, there is room for improvement in the \u201cAssessment and Plan\u201d sections to more accurately and concisely reflect medical reasoning. 2.6 Analysis of RLHF We gathered quantitative feedback from three physician reviewers, who noted that the most frequent issues in the initial round of notes were inaccurate information or hallucinations. Two out of three reviewers observed that the quality of notes significantly improved with fewer hallucinations after one round DPO, describing notes as \u201doften indistinguishable.\u201d Consequently, we conducted only two rounds of DPO, mindful of its time-intensive nature. The example in Figure 4 illustrates that RLHF 5 \fFigure 3: Training loss curve from continued pretraining. A. Training with the Discharge-long dataset (1.2 billion tokens). B. Training with the Discharge-short dataset (0.2 billion tokens). The X-axis represents processed training tokens, and the Y-axis represents training loss. The figures illustrate results from mixed precision training with a cosine learning rate scheduler. All experiments were trained for 1 epoch on their respective training datasets. The loss curve in the solid line was smoothed with an exponential moving average and a window size of 250 steps. The original loss values are shown as the faded background. resulted in more granular changes, building on the results of RLAIF while maintaining the same output structures. We named the model after RLHF as LLaMA-Clinic for the physician reader study. 6 \fSubjective Assessment and Plan Models CP SFT DistillDirect CP SFT DistillDirect Models underwent continued pretraining, SFT and RLAIF 13B_short 0.1488 0.2463 0.4775 0.0899 0.2277 0.4956 13B_long 0.1612 0.2525 0.4494 0.1289 0.2565 0.4578 13B_chat_short 0.1520 0.3475 0.4878 0.1162 0.3055 0.5182 13B_chat_long 0.1708 0.3463 0.4601 0.1536 0.3620 0.4662 Models underwent SFT and RLAIF without continued pretraining 13B 0.2813 0.4994 0.2977 0.4941 13B_chat 0.2874 0.4783 0.3301 0.4893 Out-of-box models* 13B 0.0329 0.0100 13B_chat 0.3585 0.4543 Table 1: ROUGE-1 scores after SFT and RLAIF. All tests performed on validation subset of ACI-BENCH. Models including \u201cchat\u201d represent LLaMA-2 chat models, and otherwise represent LLaMA-2 base models. Models ended in \u201clong\u201d were pretrained using Discharge-long dataset, while models ended in \u201cshort\u201d were pretrained using Discharge-short dataset. Bolded scores denote the best performance. 
*: Out-of-box models did not undergo CP, SFT or RLAIF. CP: continued pretraining. RLAIF: reinforcement learning from AI feedback. SFT: supervised fine-tuning. 2.7 Analysis of Physician Reader Study We presented the results from the physician reader study in Figure 5. Two internal medicine physicians and one family medicine physician, in a blinded review, evaluated notes authored by physicians, LLaMA-Clinic, and Gemini Pro based on three criteria: real-world readiness, completeness, and accuracy (Figure 5). The median word counts and interquartile ranges (IQR) for notes authored by physicians, LLaMA-Clinic, and Gemini Pro were 118 (IQR: 94-150), 128 (IQR: 108-145), and 128 (IQR: 100-164), respectively. No statistically significant differences were observed among the three groups (Kruskal-Wallis H test: p = 0.292). We assessed inter-rater reliability (IRR) utilizing Gwet\u2019s AC2 statistics. The AC2 scores for the three metrics ranged from 0.80 to 0.83, signifying a high degree of agreement among reviewers. While minor score variances likely are not practically significant, Gemini Pro achieved the highest scores across all three criteria (Figure 5B). Remarkably, the majority (90.4%) of the individual evaluations rated the notes generated by LLaMA-Clinic as \u201cacceptable\u201d or higher across all three criteria (Figure 5C). Furthermore, the overall distribution of scores was also similar among the three groups (Figure 5C). This aligns with qualitative feedback from two out of three reviewers stating that the quality of notes was, for the \u201cmajority\u201d of the time, indistinguishable among the groups. The metric of real-world readiness yielded intriguing observations. Physicians were asked to evaluate the notes as though they were to be utilized in everyday clinical practice, under the assumption that they would proofread and make necessary edits. This inquiry was designed to assess the readiness of AI-generated notes in a practical workflow that places clinicians at the center and in full control. When compared to notes authored by physicians, LLaMA-Clinic received higher overall readiness scores, especially in the more challenging \u201cAssessment and Plan\u201d section, even though it scored slightly lower in accuracy (Figure 5b). This finding may suggest that in a real-world setting, physicians might be more tolerant of minor factual inaccuracies if the notes overall require fewer edits. To assess the potential consequences of factual errors, we employed a blinded review process. One physician evaluated the likelihood and severity of harm associated with notes that received an accuracy or completeness score below 5 in his review (n = 22, 31 and 39 for Gemini Pro, LLaMA-Clinic, and Physician, respectively). All notes within the LLaMA-clinic and physician groups were assigned scores of \u201cNone\u201d for the extent of harm and \u201cLow\u201d for the likelihood of harm. In contrast, the Gemini Pro group contained a single case rated with \u201cMild to Moderate\u201d extent of harm and a \u201cMedium\u201d likelihood of harm. 7 \fFigure 4: An example of model outputs progression with training steps. Case number D2N073 from ACI-BENCH validation set. 3 Discussion Potential of Training LLMs for the Medical Domain: Our study highlights the potential and feasibility of training a smaller open-source LLM for the important task of generating clinical notes. 
With domainand task-specific training steps, we have shown that even with a relatively small dataset and limited computing resources, it is possible to enhance an open-source LLM to perform comparably to physicians in this task. Distinguishing from prior research [19, 20, 5], our work differs in several aspects including the choice of model, training dataset, training strategies, and a focused alignment with real-world physician preferences. Open Source vs. Proprietary Models: Adapting an open-source LLM offers several benefits. The accessibility to model weights lays the groundwork for flexible domain adaptation, including continued pretraining, SFT, and reinforcement learning exercised in this study. Such models can be hosted within healthcare institutions\u2019 firewalls, allowing for full control over model weights and mitigating concerns around data leakage and HIPAA violations. Furthermore, at inference times, models with smaller weights could offer faster and potentially more cost-effective performance with the same computing infrastructure [8, 9]. Considerations for the Note-Generation Task: Our methodology to adapt LLM for clinical note generation revealed several key considerations. We initiated this process by establishing a \u201cbest practice\u201d note format, informed by a consensus among our physician authors. This approach tackles the issue of variability in the style, format, and quality of physician notes, which could otherwise compromise the training of LLMs. Instead of relying on the LLM to identify the most effective note 8 \fFigure 5: Physician Reader Study. A. Survey questions assessing each clinical note on three criteria: real-world readiness, completeness, and accuracy. B. Bar chart displaying the mean score among evaluators across different sections of the note. The error bars represent standard errors. The X-axis represents various metrics, and the Y-axis shows the mean scores. The subfigure labeled \u201cAll\u201d displays the combined results. C. Distribution of scores across all criteria. The X-axis represents different scores, and the Y-axis shows total counts. structure, we advocate for healthcare providers and institutions to set these standards themselves. It is important to acknowledge that the \u201cbest practice\u201d note format employed in this study reflects the consensus of a group of internal medicine physicians. This format may not be generalizable to other specialties, such as orthopedic surgery. Future work that explores specialty-specific best practices for clinical note documentation could be highly beneficial. Furthermore, we recommend a workflow in which providers critically review and refine AI-generated notes prior to their filing. Under such a workflow, physician preferences may shift toward notes that generally require fewer revisions and 9 \fedits, with a higher tolerance for minor inaccuracies. This strategy emphasizes the critical role of physicians in meticulously evaluating all AI-generated content, thereby maintaining accountability for the clinical records. Exploration of Best Practices to Adapt Open-souce LLM: The process of training a domainand task-specific open-source LLM is inherently complex, involving numerous engineering decisions without a clear consensus on best practices. 
These uncertainties span a range of decisions, including, but not limited to, selecting from various open-source models (e.g., LLaMA-2 [24], Mixtral [34], Gemma [35]), choosing between base and chat models, determining the most effective training corpora, selecting data for SFT and reinforcement learning, and exploring extensive combinations of hyperparameters. We have documented our complete experimental journey in Supplementary Method 1 to provide readers with insight into this trial-and-error process. Below, we discuss several key technical considerations encountered during model training. Continued Pretraining and SFT: Continued pretraining of an LLM using a domain-specific corpus is recognized for enhancing performance on downstream tasks [36, 37]. This phase is considered a knowledge injection process, given that LLMs acquire the vast majority of their knowledge during the pre-training phase [38, 39]. Several clinical LLMs that underwent continued pretraining with medical corpora, such as PubMed literature, combined with SFT have shown significant improvements in medical knowledge benchmarks [16, 14, 40]. However, a critical distinction exists between tasks focused on medical knowledge (e.g., answering USMLE questions) and those aimed at clinical note generation. Notably, GatorTronGPT [41], the only LLM trained from scratch using EMR data from real patients to date, performed lower in both MedQA and PubMedQA compared to other clinical LLMs [16, 14, 37]. This outcome indicates that EMR data alone may lack comprehensive medical knowledge. For the task of note generation, we hypothesized that continued pretraining with clinical notes could offer benefits by introducing greater lexical variance, unique semantic patterns, and diverse formatting similar to prior work [42]. Our experiments did not conclusively demonstrate the anticipated benefits of continued pretraining, as the out-of-the-box LLaMA-2 model without continued pretraining achieved the highest ROUGE-1 scores after SFT and RLAIF (Supplementary Table 3). We opted to proceed with the continued pretrained model for RLHF due to subtle peculiarities observed upon manual inspection, though these differences were minor (Supplementary Method 1.3). Given the significant time and computational resources required for continued pretraining, its utility, particularly with clinical notes, merits further exploration in future work. Data Selection for Continued Pretraining: Another potential factor in the less impressive improvement from continued pretraining may be attributed to the variance in data distribution between discharge summaries and outpatient notes. To address this, we performed experiments focused on a condensed version of discharge summaries, hypothesizing that the \u201cbrief hospital course\u201d section would contain data of higher quality than the complete discharge summary. Indeed, models trained on the \u201cbrief hospital course\u201d section outperformed those trained on the full summaries (Table 1). Interestingly, it was observed that during the pretraining stage, models trained on the full summaries achieved lower training loss (Figure 3), which, however, did not correspond to enhanced performance in the downstream task. 
We speculate that the structured nature of the full discharge summary, with its sections such as laboratory results, medications, and post-discharge instructions, presents more straightforward learning targets for the model, enabling it to achieve lower training losses but not necessarily translate to better task performance. This observation underscores the necessity of a thorough analysis of the data used for training. RLAIF and RLHF: The most notable performance improvement was observed during the RLAIF and RLHF stages. The advent of DPO has considerably streamlined the process of reinforcement learning by obviating the need for training an explicit reward model. However, best practices for implementing DPO are yet to be established. Our approach, which stages reinforcement learning using DPO into RLAIF and RLHF, draws inspiration from recent advancements in distilled DPO, adversarial preference optimization, and online AI feedback [43\u201345]. It offers several enhancements. Firstly, for each iteration of RLAIF with DistillDirect, we used the model\u2019s outputs as \u201creject samples\u201d to inform feedback, ensuring on-policy learning as opposed to the off-policy training in previous work [44]. Secondly, during the RLHF phase, we gathered human preferences on responses generated by the most recent model\u2019s outputs, promoting online and on-policy training. With carefully selected training hyperparameters, DistillDirect may achieve performance improvements without overfitting, 10 \fdespite the limited training data and repetitive use of the same prompts in each training cycle (see Supplementary Table 4). Prior work suggested that RLHF has robust generalization capabilities, even when facing distribution shifts between training and test data, albeit at the expense of output diversity [46]. Limitations: Our study serves as a proof of concept and encounters limitations, notably the scarcity of publicly available patient-physician dialogues for model training, with the latest ACI-BENCH dataset comprising fewer than 300 cases. Additionally, the effectiveness of outpatient note generation is fundamentally linked to the content of patient-physician dialogues. Although the ACI-BENCH data may have synthesized dialogues to encapsulate all necessary information for composing a comprehensive note, such ideal conditions may not always reflect real-world scenarios. For instance, time constraints may prevent physicians from discussing every detail of medical reasoning with patients, potentially degrading the quality of the generated notes. Our research was also constrained by limited academic computing resources and the availability of physician evaluators, which restricted our ability to conduct extensive hyperparameter searches or additional rounds of RLHF. Our final evaluation was conducted with only three physicians due to the task\u2019s time-intensive nature. Ideally, future developments would involve experiments with the latest powerful open-source models (e.g., Mixtral [34]), continued pretraining on in-domain and in-distribution data (e.g., outpatient notes rather than inpatient discharge summaries), SFT and RLAIF on larger amount of real patient-doctor dialogues, RLHF until a performance plateau is observed, and evaluation by a diverse group of medical professionals. Conclusions: Our research underscores the potential of training an open-source LLM for outpatient note generation with promising clinical applicability. 
The domainand task-specific adaptation processes, such as continued pretraining, SFT, and reinforcement learning, are uniquely feasible with open-source models. Healthcare institutions are in a privileged position to undertake such endeavors, given their access to extensive EMR data and a wealth of domain expertise critical for the implementation of RLHF. Notably, our work was based on fewer than 1,500 patient-doctor dialogues and limited physician preference data. When implementing a similar project in a healthcare institution, these training data could reasonably be scaled up by several orders of magnitude, which would likely boost performance further. Furthermore, the prospect of conducting similar work for other clinical note-generation tasks, such as creating discharge summaries for hospitalized patients, is particularly exciting. 4 Methods 4.1 Low-Rank Adaptation (LoRA) We used LoRA to train LLaMA-2-13B models for all phases of training. LoRA is a method that involves freezing the pre-trained model weights and only training a small percentage (<1%) of the model weights; i.e. by incorporating trainable rank decomposition matrices into each layer of the transformer architecture [47]. As a quick summary, let us assume that we have the original weight matrix W0 \u2208Rd\u00d7k. LoRA works by adding a low-rank matrix to the original weight matrix: \u2206W + W0, \u2206W = BA where B \u2208Rd\u00d7r and A \u2208Rr\u00d7k. r << d, so the matrices B, A are limited by a lower rank r, reducing the need to train all the parameters. Training is only performed on this \u2206W , and original model weights are kept the same. Training Details: In all steps of training, LoRA parameters were configured with r set to 8, an alpha value of 32, and a dropout rate of 0.05. All attention blocks were included in the Lora target modules. 4.2 Continued Pretraining Continued pretraining extends the initial pretraining process of an LLM, using the same autoregressive objective to maximize the likelihood of predicting the next token. For LLaMA-2-13B, this objective is next token prediction, based on all previously predicted tokens. Research from LIMA indicated that the majority of knowledge embedded in LLMs is acquired during the pretraining stage [39]. In the process of domain adaptation, it is a common practice to employ pretraining or continued pretraining using domain-specific corpora to improve performance in downstream tasks [16, 37]. In our work, we experimented with continued pretraining on the MIMIC-IV discharge summaries. 11 \fTraining Details: We followed the training scripts outlined in Meta\u2019s official LLaMA recipe repository [48]. We employed mixed-precision training with a batching strategy of packing and a context length of 4096 tokens. We utilized Fully Sharded Data Parallel (FSDP) on either 4 Nvidia A6000 or 4 Nvidia A100 GPUs. We maintained a batch size of 4 during training with a gradient accumulation step of 1. Consistent with LLaMA-2, we set a peak learning rate of 3e-4 for the continued pre-training stage. The AdamW optimizer with a cosine learning rate scheduler was used, and the model was trained for one epoch. The exponential moving average of training loss as shown in Figure 3 was calculated using the pracma package from R with a window size of 250 [49]. 4.3 Supervised Finetuning (SFT) We implemented SFT in the format of instruction tuning, utilizing the ACI-BENCH (with newly created reference notes) and Dialogue-G datasets. 
Instruction tuning plays a crucial role in the task-specific adaptation of an LLM, facilitating the model\u2019s learning of specific tasks through guided instructions [50]. In this approach, each sample consists of a prompt and an answer, with our study employing patient-doctor dialogues as prompts and clinical notes as answers. SFT employed the same autoregressive objective as continued pretraining, but losses on tokens from the prompt were zeroed out, consistent with the approach used in LLaMA-2 [24]. Training Details: We used a similar experiment setup as continued pretraining, including following LLaMA-recipes to perform mixed precision training on 4 GPUs using FSDP. For SFT, we selected the batching strategy of padding and trained on 3 epochs. Consistent with LLaMA-2, we set a peak learning rate of 2e-5. We truncate prompt (including instruction and dialogue) at a max length of 3000 tokens, and truncate note to a max length of 1000 tokens. We set a value of -100 for labels on prompt tokens to zero out losses from prompts. 4.4 Reinforcement Learning with AI Feedback (RLAIF) Here, we begin by reviewing related works on direct preference optimization and distilled direct preference optimization. Subsequently, we introduce our enhanced approach, termed DistillDirect, which is utilized in this study for RLAIF. Direct Preference Optimization (DPO): Reinforcement Learning with Human Feedback (RLHF) has been instrumental in the success of LLMs like InstructGPT [51]. However, conventional RLHF strategies, such as proximal policy optimization (PPO), are marked by their technical complexity and the high demand for computational resources [52]. The emergence of direct alignment techniques, notably DPO, has greatly streamlined this process by obviating the need for training a reward model [53]. DPO begins by collecting a preference dataset D = {(xi, y+ i , y\u2212 i )}N i=1, where for each prompt x, there is a preferred answer y+ and a rejected answer y\u2212. Following the notations as in [45], DPO optimizes the language model (target policy \u03c0\u03b8) using the following loss function: \u2212log \u03c3 \u0012 \u03b2 log \u03c0\u03b8(y+|x)\u03c0\u03b80(y\u2212|x) \u03c0\u03b80(y+|x)\u03c0\u03b8(y\u2212|x) \u0013 (1) Here, \u03c0\u03b80 represents the SFT baseline used as a reference, \u03c3 denotes the logistic function, and \u03b2 is introduced as a scalar hyperparameter. Intuitively, this loss function increases the probability of the latest model \u03c0\u03b8 generating preferred response y+ over the original model \u03c0\u03b80. Similarly, it decreases the probability of the latest model \u03c0\u03b8 generating rejected response y\u2212over the original model \u03c0\u03b80. Distilled Direct Preference Optimization (dDPO): The original dDPO methodology, applied in training Zepher, involves generating a collection of responses for each prompt from various LLMs [44]. These responses are then evaluated by a teacher model (GPT-4) to provide preference feedback. The application of RLAIF with dDPO has yielded positive outcomes in Zepher, notably enhancing intent alignment. However, dDPO\u2019s reliance on a pre-collected preference dataset renders it suboptimal due to its off-policy and offline characteristics, which are elaborated more in the next section. Online vs. Offline Training and On-policy vs. Off-policy Training: These terms describe the nuances in the creation of a preference dataset D. 
For any given prompt x, initially, two responses y1 and y2 are generated from an LLM denoted as \u03c1. These responses are then assessed for preference by humans or AI, being labeled as y+ (preferred) and y\u2212(rejected). In this context, training is 12 \fconsidered on-policy if \u03c1 = \u03c0\u03b8, or when the generated responses are sampled from the latest version of the LLM during RLAIF training. Off-policy training indicates otherwise. Learning is deemed online if the preference labeling and training is conducted in real-time, directly on the outputs from the currently trained policy. It is considered offline if preference labeling and training are performed in separate, discrete steps. Given the significant time and financial costs associated with collecting preference data, utilizing pre-collected preference datasets, such as those employed in dDPO, is a common practice. Defined by the criteria mentioned above, this method falls under the category of offline, off-policy learning. A critical caveat of employing a pre-collected preference dataset for offline and off-policy training is the occurrence of distribution shifts [45]. More specifically, distribution shifts arise at time step t because the preferred and rejected response is sampled from a policy \u03c1, where \u03c1 \u0338= \u03c0\u03b8t. Research has shown that online RLAIF systems, which are designed to mitigate these distribution shifts, significantly surpass the effectiveness of offline DPO methodologies [45]. RLAIF using DistillDirect: In our study, we introduce an improved approach based on dDPO, termed DistillDirect, specifically designed to ensure on-policy learning on a distilled dataset. In each training cycle, we begin by sampling a response from \u03c0\u03b8t, ensuring that the learning process remains strictly on-policy. This sampled response is then designated as the rejected response, while a reference response from Gemini Pro (the teacher model) is considered the preferred outcome. This approach implicitly assumes that the response from \u03c0\u03b8t is generally less favored than that from the teacher model\u2014an assumption that we validated through manual review in each round of training. Training details: We utilized the trl library from Huggingface to conduct DistillDirect [54]. Due to computational limitations, experiments were conducted on a single Nvidia A100 GPU with 80GB of graphics memory. To optimize memory usage, pure BF16 training was utilized with a microbatch size of 1 and gradient accumulation steps of 8. Following a limited learning rate search detailed in Supplementary Method 1.3 and Supplementary Table 4, a learning rate of 5e-6 was chosen. The optimizer used was paged_adamw_32bit. Within the DPOTrainer class, we set the beta hyperparameter to 0.1 and passed None to ref_model. Three rounds of DistillDirect were performed, with each round involving one epoch of training. Text generation was implemented using the Transformers library [55]. When generating \u201crejected\u201d samples from the latest model checkpoint, we consistently applied do_sample=True, top_p=1.0, temperature=1.0, top_k=50, repetition_penalty=1.2, and use_fast_kernels=False. The maximum number of newly generated tokens was set to 1000. 4.5 Reinforcement Learning from Human Feedback (RLHF) In the concluding phase of our training process, we select the most effective model from the preceding stage for further online and on-policy learning using DPO. 
Physician Preference Data Collection: In each round of DPO, for a specific prompt x, three responses are generated from \u03c0\u03b8t and evaluated by our physician reviewers. Three licensed internal medicine physicians are tasked with providing preference feedback by selecting the most and least preferred responses, with criteria focusing on clinical readiness, correctness, and adherence to the desired format. A notable adaptation in our approach is that reviewers are also instructed to make minor adjustments to improve the quality of the preferred responses, such as correcting factual inaccuracies. Detailed instructions for collecting preference data are available in Supplementary Method 4. Training details: For RLHF, we employed an experimental setup analogous to that described in RLAIF. We conducted two rounds of DPO on human preference data. Diverging from the approach taken in RLAIF, we executed three epochs of training in each DPO round due to the limited size of the dataset. Based on the findings from ablation studies detailed in Supplementary Method 1.3 and Supplementary Table 5, we opted for a lower temperature setting at this stage. In the initial round of DPO, we generated three responses using the same configuration, including a temperature setting of 0.6, for preference labeling. In the second round of DPO, we diversified the temperature settings, resulting in three responses with temperatures set at 0.6, 0.4, and 0.2, respectively, for preference labeling. 13 \f4.6 Physician Reader Study The three internal medicine physicians engaged in preference data collection were tasked with writing clinical notes based on conversations from the ACI-BENCH TestB subset, adhering to the pre-defined \u201cbest practice\u201d format. AI-generated notes were produced by LLaMA-Clinic and Gemini Pro, employing identical generation-related hyperparameters (temperature of 0.2, top_p of 0.7 and top_k of 40). To ensure consistency in presentation across all notes, we implemented a basic postprocessing step. This step standardized aspects like style, spacing, and capitalization to minimize any formatting discrepancies between human-authored and model-generated notes. The physician-authored notes, alongside those generated by Gemini Pro and LLaMA-Clinic, were reviewed by three additional physicians, who were not involved in the preference labeling. These licensed physicians, who specialize in general internal medicine or family medicine, boast rich experience in outpatient practice. The notes were presented in a random order, anonymized to remove any identifying information, and labeled as note 1, note 2, and note 3 to mask the origin of each note from the evaluators. Before assessing the notes, evaluators were instructed to read the entire patient-provider conversation. They were then asked to rate the quality of each note across three criteria: \u201caccuracy,\u201d \u201ccompleteness,\u201d and \u201creal-world readiness.\u201d For each criterion, a scoring system from 1 to 5 was used, ranging from very poor to very good, with higher scores reflecting superior quality. Specifically for \u201creal-world readiness,\u201d evaluators were prompted to consider the scenario of integrating AI-generated clinical notes into their daily practice, including the necessity to proofread and potentially edit these notes before filing them into medical records. Detailed instructions and the scoring rubric are available in the Supplementary Method 5. 
For the harm evaluation, we followed the same approach as in [5, 56] by asking one physician reviewer to assess all notes with an accuracy or completeness score of less than 5 and answer two questions related to the extent and likelihood of harm. The instruction for harm evaluation can be found in Supplementary Method 6. 4.7 Statistical Analysis The non-parametric Kruskal-Wallis H test was selected to compare differences in word counts among the three-note groups, utilizing the scipy package in Python [57]. We measured IRR using Gwet\u2019s AC2 statistics implemented through the irrCAC package in R [58, 59]. We reported results with quadratic weights for Gwet\u2019s AC2, as this approach is reliable for ordinal data against the grey zones [60]. Due to our relatively small reviewer pool, we chose not to conduct statistical significance testing on the physician reader study, aligning with practices observed in the deep learning community [24, 61]. 5 Data availability Access to MIMIC-IV can be requested at https://physionet.org/content/mimiciv/, which requires a signed safe usage agreement. ACI-BENCH dataset can be accessed at https://github. com/wyim/aci-bench. Dialogue-G, physician preference data, and the final test data can be found at https://github.com/hanyin88/llama-clinic. 6 Code availability Scripts for this work were written in Python. They are available with accompanied documentation at https://github.com/hanyin88/llama-clinic. 7 Acknowledgement This research was supported by NSF awards SCH-2205289, IIS-2034479, and SCH-2014438. The funder played no role in the study design, data collection, analysis, and interpretation of data, or the writing of this manuscript. This research is also supported by the Mayo Clinic Health System Southwest Minnesota Region Protected Research Grant. 14 \f8 Competing interests The authors declare no competing interests. 9 Author Contributions H.W. designed, conducted, and analyzed the results of experiments. H.W. and C.G. wrote the original draft. J.S. obtained funding and computing resources for the project. L.B., Q.X., and H.G. provided human preference labeling for RLHF and wrote notes for the physician reader study. M.E., K.I., and H.K. conducted the physician reader study. H.K. also performed the harm evaluation. All authors contributed to the conceptualization of the research questions. All authors reviewed, revised, and approved the final manuscript. 15"
}
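Each record added in this commit follows the flat JSON schema shown above (url, title, abstract, authors, published, updated, primary_cat, cats, label, paper_cat, gt, main_content). A minimal Python sketch for loading and inspecting one record, assuming a local checkout with the paths from the file list above, could look like this:

import json

# Minimal sketch: read one of the added records (path taken from the
# file list above; adjust to your local checkout).
path = "title_10K/test_title_short_2405.00715v1.json"
with open(path, encoding="utf-8") as f:
    record = json.load(f)

# All records in this commit share the same flat schema.
print(record["title"])              # paper title
print(record["primary_cat"])        # primary arXiv category, e.g. "cs.CL"
print(record["cats"])               # list of all arXiv categories
print(len(record["main_content"]))  # size of the extracted body text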
title_10K/test_title_short_2405.00716v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.00716v1",
"title": "Large Language Models in Healthcare: A Comprehensive Benchmark",
"abstract": "The adoption of large language models (LLMs) to assist clinicians has\nattracted remarkable attention. Existing works mainly adopt the close-ended\nquestion-answering task with answer options for evaluation. However, in real\nclinical settings, many clinical decisions, such as treatment recommendations,\ninvolve answering open-ended questions without pre-set options. Meanwhile,\nexisting studies mainly use accuracy to assess model performance. In this\npaper, we comprehensively benchmark diverse LLMs in healthcare, to clearly\nunderstand their strengths and weaknesses. Our benchmark contains seven tasks\nand thirteen datasets across medical language generation, understanding, and\nreasoning. We conduct a detailed evaluation of the existing sixteen LLMs in\nhealthcare under both zero-shot and few-shot (i.e., 1,3,5-shot) learning\nsettings. We report the results on five metrics (i.e. matching, faithfulness,\ncomprehensiveness, generalizability, and robustness) that are critical in\nachieving trust from clinical users. We further invite medical experts to\nconduct human evaluation.",
"authors": "Andrew Liu, Hongjian Zhou, Yining Hua, Omid Rohanian, Lei Clifton, David A. Clifton",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Large Language Models in Healthcare: A Comprehensive Benchmark",
"main_content": "Introduction Large language models (LLMs), such as ChatGPT (Brown et al., 2020; OpenAI, 2023b), LLaMA (Touvron et al., 2023a), and PaLM (Chowdhery et al., 2022), are increasingly being recognized for their potential in healthcare to aid clinical decisionmaking and provide innovative solutions for complex healthcare problems (Patel et al., 2023; Shen et al., 2023), e.g., discharge summary generation (Patel and Lam, 2023), health education (Safranek et al., 2023), and care planning (Fleming et al., 2023). Several recent efforts have been made to fine-tune publicly available general LLMs, e.g., LLaMA (Touvron et al., 2023b) and ChatGLM (Tsinghua KEG, 2023), to develop medical LLMs (Singhal et al., 2023a,c), resulting in ChatDoctor (Li et al., 2023b), MedAlpaca (Han et al., 2023), BenTsao (Wang et al., 2023a), and ClinicalCamel (Toma et al., 2023). Previous research shows that medical LLMs outperform human experts across a variety of medical tasks. In particular, MedPrompt (Nori et al., 2023) and MedPaLM-2 (Singhal et al., 2023b) have respectively achieved a competitive accuracy of 90.2 and 86.5 compared to human experts 87.0 (Wu et al., 2023) on the United States Medical Licensing Examination (USMLE) (Jin et al., 2021). Admittedly, responsibility and reliability are essential requirements for tools designed to assist clinicians. Despite the promising results of existing medical LLMs, several issues need to be addressed for the responsible and reliable use of LLMs in assisting clinicians: \u2022 (i) Limited evaluation: Most existing works only focus on evaluating LLM performance in the close-ended medical question answering (QA) task, overlooking evaluation in other scenarios, such as medical language understanding and generation (Thirunavukarasu et al., 2023; He et al., 2023; Zhou et al., 2023a). This limited evaluation hinders a thorough understanding of LLM ability in diverse healthcare applications. \u2022 (ii) Limited metric: Existing works primarily utilize matching-based metrics (e.g., Accuracy and F1) to evaluate LLM performance. These metrics fail to assess important attributes in generated responses, such as reliability and trustworthiness, which are of paramount importance for clinicians and in regulatory approvals that are essential for reliable deployment in clinical practice (Shen et al., 2023; Kitamura, 2023). \u2022 (iii) Limited comparison: Existing works mainly compare LLM performance with their own basic models or use private datasets for evaluation (Tian et al., 2024). Such an aparXiv:2405.00716v1 [cs.CL] 25 Apr 2024 \fproach falls short of providing a thorough comparative analysis among different LLMs under standardized conditions. Consequently, it hampers a comprehensive understanding of the distinct advantages and limitations of various LLMs in healthcare. As a result, the accuracy, generalizability, and reliability of existing LLMs in diverse healthcare applications remain unclear. In response, (i) we construct the BenchHealth from the representative public health data to benchmark LLMs in healthcare. As shown in Table 1, BenchHealth encompasses three different evaluation scenarios (i.e., reasoning, generation, and understanding) and includes seven popular downstream tasks and thirteen representative datasets; Previous popular benchmarks, e.g., BLUE (Peng et al., 2019) and BLURB (Gu et al., 2021), only include the medical language understanding and close-ended question answering. 
(ii) In addition to the commonly used matching-related metrics, as shown in Table 2, we design additional metrics to provide insights into the reliability of LLMs in clinical settings, i.e., analyzing their ability to provide faithful, comprehensive, generalizable, and robust information; (iii) as shown in Table 3, we collect sixteen representative LLMs that vary in the number of model parameters and in structural design, and we evaluate their performance on BenchHealth for a comprehensive comparison. The main insights from our experiments are: \u2022 Commercial LLMs vs. Public LLMs: Closed-source commercial LLMs, especially GPT-4, outperform all existing open-source public LLMs on all tasks and datasets. \u2022 LLMs vs. State-of-the-art: All LLMs have a strong reasoning ability to predict accurate answers from the provided options, but perform very poorly on open-ended questions, language generation, and language understanding tasks (i.e., there are significant gaps between state-of-the-art and LLM performance). \u2022 Medical LLMs vs. General LLMs: Fine-tuning general LLMs on medical data to obtain medical LLMs can improve the reasoning over and understanding of medical data, but can decrease the summarization ability of LLMs. \u2022 Model parameters: A larger number of model parameters clearly improves performance on all tasks, datasets, and metrics. \u2022 Few-shot learning: It leads to significant improvements in performance on medical language reasoning and generation tasks, but impairs performance on understanding tasks. On reasoning tasks, 1-shot or 3-shot learning performs best; more examples do not lead to further improvements. On generation tasks, more samples lead to better performance. \u2022 Clinical usefulness: Medical LLMs can provide more faithful answers than general LLMs (avoiding misdiagnosis) and generalize well to diverse medical tasks; general LLMs can provide more comprehensive answers than medical LLMs, which may be due to \u201challucinations\u201d, thus avoiding missed diagnoses; and general LLMs have better robustness, and can therefore better understand a variety of diverse inputs compared to medical LLMs. Overall, our results show that among all types of tasks, the close-ended QA task is the only one in which current LLMs are comparable to state-of-the-art models and human experts. However, real-world open clinical practice diverges far from the structured nature of exam-taking. Clinical decisions, such as diagnosis and treatment recommendations, often confront open-ended questions that lack pre-determined answer choices. This paradigm shift from a controlled test environment to the unpredictable and subtle domain of patient care challenges the conventional approach, demanding a more sophisticated understanding and application of medical knowledge. Our results also demonstrate that all LLMs display insufficient performance on crucial metrics necessary for ensuring the trustworthiness of LLMs in clinical settings. This unsatisfactory performance suggests that the current state of LLMs falls short of readiness for deployment in clinical settings to aid healthcare professionals. We hope that this work can offer a holistic view of LLMs in healthcare, aiming to bridge the current gaps and advance the integration of LLMs in clinical applications. 2 Benchmark Our benchmark is shown in Table 1. 2.1 Medical Language Reasoning We include the question answering and treatment recommendation tasks in our benchmark.
Scenarios | Tasks | Datasets | Data Domains | Sizes | Matching Metrics
Medical Language Reasoning | Question Answering | MedQA (USMLE) (Jin et al., 2021) | Medical Licensing Examination | 1,273 | Accuracy
Medical Language Reasoning | Question Answering | MedMCQA (Pal et al., 2022) | Medical Entrance Examination | 4,183 | Accuracy
Medical Language Reasoning | Question Answering | MMLU-Medicine (Hendrycks et al., 2020) | Professional & College Medicine | 272 | Accuracy
Medical Language Reasoning | Question Answering | PubMedQA (Jin et al., 2019) | Medical Literature | 500 | Accuracy
Medical Language Reasoning | Treatment Recommendation | ChatDoctor (Li et al., 2023b) | Patient-Clinician Conversations | 796 | Micro F1
Medical Language Generation | Radiology Report Summarization | MIMIC-CXR (Johnson et al., 2019) | Radiography | 3,269 | ROUGE-L
Medical Language Generation | Radiology Report Summarization | IU-Xray (Demner-Fushman et al., 2016) | Radiography | 341 | ROUGE-L
Medical Language Generation | Discharge Instruction Generation | MIMIC-III (Johnson et al., 2016) | Critical Care | 3,633 | BLEU-4
Medical Language Understanding | Named Entity Recognition | BC5-disease (Li et al., 2016) | Scientific Literature | 4,797 | F1 entity-level
Medical Language Understanding | Named Entity Recognition | NCBI-Disease (Doğan et al., 2014) | Scientific Literature | 940 | F1 entity-level
Medical Language Understanding | Relation Extraction | DDI (Segura-Bedmar et al., 2013) | Drug | 5,716 | Micro F1
Medical Language Understanding | Relation Extraction | GAD (Becker et al., 2004) | Genetic | 534 | Micro F1
Medical Language Understanding | Document Classification | HoC (Baker et al., 2016) | Scientific Literature | 315 | Micro F1
Table 1: Overview of the benchmark BenchHealth for evaluating LLMs in healthcare.
Question Answering aims to predict the correct answer to a given question. For example, the model should answer \u2018D\u2019 to the question: \u201cWhich of the following conditions does not show multifactorial inheritance? (A) Pyloric stenosis (B) Schizophrenia (C) Spina bifida (neural tube defects) (D) Marfan syndrome\u201d. QA thus evaluates the correctness of the medical knowledge learned by LLMs. We include four popular datasets, i.e., MedQA (USMLE) (Jin et al., 2021), MedMCQA (Pal et al., 2022), MMLU-Medicine (Hendrycks et al., 2020), and PubMedQA (Jin et al., 2019). Treatment Recommendation is an open-ended, complex task: the model must first understand real-world patient-clinician conversations, which describe conditions and symptoms, and then recommend all possible drugs for the treatment of the patient. We use ChatDoctor (Li et al., 2023b) for evaluation. 2.2 Medical Language Generation We evaluate two popular generation tasks, i.e., radiology report summarization and discharge instruction generation. Radiology Report Summarization aims to distill a concise \u2018Impression\u2019 from the lengthy \u2018Findings\u2019 section of a radiology report. \u2018Findings\u2019 contains detailed abnormal and normal clinical findings from radiology images like X-rays, CT scans, or MRI scans, while \u2018Impression\u2019 highlights the key diagnostic information and significant results, which are critical for accurate diagnosis and treatment (Jing et al., 2018; Liu et al., 2021b). We adopt the widely used datasets MIMIC-CXR (Johnson et al., 2019) and IU-Xray (Demner-Fushman et al., 2016). Discharge Instruction Generation aims to generate discharge instructions from the patient\u2019s health records during hospitalization when a patient is discharged from the hospital. The discharge instructions should consider diagnosis, medication, and procedures, e.g., demographics, laboratory results, admission notes, nursing notes, radiology notes, and physician notes (Liu et al., 2022). They contain multiple instructions to help the patient or carer manage their conditions at home. We follow previous works (Liu et al., 2022) and use MIMIC-III (Johnson et al., 2016) for evaluation.
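Before moving on, a minimal sketch of how the two generation-task matching metrics could be computed with the rouge-score and nltk packages; the paper does not name its scoring scripts, so treat this as an illustration, not the authors' implementation:

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

def rouge_l(reference, hypothesis):
    # F-measure of longest-common-subsequence overlap, the metric used for
    # radiology report summarization (MIMIC-CXR, IU-Xray).
    scorer = rouge_scorer.RougeScorer(['rougeL'], use_stemmer=True)
    return scorer.score(reference, hypothesis)['rougeL'].fmeasure

def bleu_4(reference, hypothesis):
    # 4-gram BLEU with smoothing, the metric used for discharge
    # instruction generation (MIMIC-III).
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.split()], hypothesis.split(),
                         weights=(0.25, 0.25, 0.25, 0.25),
                         smoothing_function=smooth)

print(rouge_l('no acute cardiopulmonary process', 'no acute process'))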
2.3 Medical Language Understanding We include three representative tasks, i.e., named entity extraction, relation extraction, and document classification, in our benchmark. Named Entity Extraction can help organize and manage patient data (Perera et al., 2020). For example, it can extract medical entities mentioned in clinical notes and classify them according to relevant symptoms, medication, dosage, and procedures (Song et al., 2021). We adopt two representative datasets, BC5-disease (Li et al., 2016) and NCBI-Disease (Doğan et al., 2014), for evaluation. Relation Extraction requires the model to identify the relation between medical entities. The extracted relations provide a solid basis for linking the entities in a structured knowledge base or a standardized terminology system, e.g., SNOMED CT (Chang and Mostafa, 2021; Donnelly et al., 2006) and UMLS (Bodenreider, 2004), which is critical in clinical decision support systems. We employ the DDI (Segura-Bedmar et al., 2013) and GAD (Becker et al., 2004) datasets to evaluate LLMs. Document Classification is a document-level language understanding task aiming to predict multiple correct labels for an input medical document, and it can be used to improve clinical management systems. We use the representative dataset HoC (Baker et al., 2016) for evaluation.
3 Metrics As shown in Table 2, we use five metrics to benchmark LLMs in healthcare.
\u2022 Matching (Accuracy, F1, ROUGE-L, BLEU-4): measures the match between the generated content and the ground-truth content.
\u2022 Faithfulness: the model must not generate content that appears reasonable but is factually incorrect and sometimes even harmful, thus avoiding misdiagnosis.
\u2022 Comprehensiveness: the model must not leave out important content; this can be used to alert clinicians and avoid missed diagnoses.
\u2022 Robustness: for the same scenario and task, the model provides consistent and reliable performance across different formats/types/terminologies of input data (instead of overfitting to specific data), measuring model stability over a range of inputs.
\u2022 Generalizability: the model should maintain competitive performance across different scenarios and tasks (not limited to QA) to effectively assist clinicians.
Table 2: Metrics used in our work for evaluation.
Matching We follow common practice and calculate the classification accuracy, F1 score, ROUGE-L (Lin, 2004), and BLEU-4 (Papineni et al., 2002) to report matching performance. Details of the metrics used for the different tasks are shown in Table 1. However, matching-based metrics are not specialized for evaluating the usefulness of LLMs in clinical practice. To assist clinicians, it is necessary to provide faithful, comprehensive, and robust content (Thirunavukarasu et al., 2023; Arora and Arora, 2023; Safranek et al., 2023).
Faithfulness LLMs are susceptible to \u201challucinations\u201d (Li et al., 2023a; Ji et al., 2023), i.e., fluent content that appears credible but is factually incorrect or potentially harmful. Therefore, it is crucial to ensure that LLMs generate faithful content, so that the models do not generate content that \u201cdoes not exist\u201d according to clinicians (Liu et al., 2022). For instance, if clinicians annotate the ground-truth contents as [Content_A, Content_B], but the model generates [Content_A, Content_C], it becomes evident that the model has introduced \u2018Content_C\u2019, which does not exist in the annotations. Such inaccuracies could lead to misdiagnoses, particularly with clinicians who have less experience. We notice that precision scores can measure the rate of such generated non-existent content. To this end, we calculate and sum the precision scores across tasks to obtain the \u2018faithfulness\u2019 score.
Comprehensiveness Given the ground-truth contents [Content_A, Content_B], generating the comprehensive content [Content_A, Content_B] diminishes the chance of leaving out important content. Comprehensive outputs can also be used to alert clinicians and avoid missed diagnoses, improving precision medicine. The recall score measures the percentage of generated accurate content out of all correct answers. Therefore, to evaluate the comprehensiveness of model-generated contents, we calculate and sum the recall scores of the different tasks to obtain the \u2018comprehensiveness\u2019 score.
Robustness Clinicians may express the same texts, questions, and conditions using varying formats and terminologies. For example, in the radiology report summarization task, both \u2018enlargement of the cardiac silhouette\u2019 and \u2018the heart size is enlarged\u2019 express the condition \u2018cardiomegaly\u2019; the model therefore needs to accurately identify \u2018cardiomegaly\u2019 for both of these different inputs. As shown in Table 1, for the report summarization task, we can compute the variance in model performance on the two datasets, IU-Xray and MIMIC-CXR (collected from different hospitals and regions, and thus having different expression habits), to obtain the robustness of the model on this task. Accordingly, to measure the robustness scores of the language reasoning, generation, and understanding scenarios, we respectively calculate the variance in model performance on the representative question answering, radiology report summarization, and named entity recognition tasks. Finally, we sum the variances to obtain the overall robustness score of an LLM, reflecting whether its accuracy is significantly impacted by variations in the inputs.
Generalizability To effectively support clinicians in different settings, LLMs should perform well in a wide range of scenarios and tasks (not limited to QA). For clarity, we directly average all the matching scores to obtain the \u2018generalizability\u2019 score, evaluating how LLMs perform across a range of scenarios and tasks.
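The four reliability scores can be derived mechanically from per-dataset precision, recall, and matching numbers. A minimal sketch under the definitions above (the values and the per-scenario dataset grouping are placeholders following Table 1, not results from the paper):

import statistics

# Placeholder per-dataset scores for one model, in [0, 1].
precision = {'MedQA': 0.38, 'PubMedQA': 0.63, 'MIMIC-CXR': 0.14, 'IU-Xray': 0.10, 'BC5': 0.45, 'NCBI': 0.39}
recall    = {'MedQA': 0.38, 'PubMedQA': 0.63, 'MIMIC-CXR': 0.11, 'IU-Xray': 0.08, 'BC5': 0.41, 'NCBI': 0.35}
matching  = {'MedQA': 0.38, 'PubMedQA': 0.63, 'MIMIC-CXR': 0.12, 'IU-Xray': 0.09, 'BC5': 0.43, 'NCBI': 0.37}

faithfulness = sum(precision.values())        # summed precision: fewer invented findings
comprehensiveness = sum(recall.values())      # summed recall: fewer omissions
generalizability = statistics.mean(matching.values())  # average matching score across tasks

# Robustness: variance over datasets of one representative task per scenario,
# summed across scenarios (lower = more stable to input variation).
scenario_tasks = [('MedQA', 'PubMedQA'),      # reasoning: question answering
                  ('MIMIC-CXR', 'IU-Xray'),   # generation: report summarization
                  ('BC5', 'NCBI')]            # understanding: NER
robustness = sum(statistics.pvariance([matching[a], matching[b]])
                 for a, b in scenario_tasks)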
4 Large Language Models As shown in Table 3, to provide a comprehensive evaluation of LLMs in healthcare, we evaluate both general and medical LLMs. Please refer to Zhao et al. (2023); Yang et al. (2023a) and Zhou et al. (2023a); He et al. (2023) for detailed introductions to general LLMs and medical LLMs, respectively.
Types | Methods | # Params
General LLMs | Claude-2 (Anthropic, 2023) | -
General LLMs | GPT-3.5-turbo (OpenAI, 2023a) | -
General LLMs | GPT-4 (OpenAI, 2023c) | -
General LLMs | ChatGLM (Tsinghua KEG, 2023) | 6B
General LLMs | Alpaca (Taori et al., 2023) | 7B
General LLMs | Vicuna (Chiang et al., 2023) | 7B
General LLMs | LLaMA-2-7B (Touvron et al., 2023c) | 7B
General LLMs | LLaMA-2-13B (Touvron et al., 2023c) | 13B
General LLMs | LLaMA-2-70B (Touvron et al., 2023c) | 70B
Medical LLMs | ChatGLM-Med (Wang et al., 2023b) | 6B
Medical LLMs | DoctorGLM (Xiong et al., 2023) | 6B
Medical LLMs | Huatuo (Zhang et al., 2023a) | 7B
Medical LLMs | ChatDoctor (Li et al., 2023b) | 7B
Medical LLMs | Baize-Healthcare (Xu et al., 2023) | 7B
Medical LLMs | MedAlpaca-7B (Han et al., 2023) | 7B
Medical LLMs | MedAlpaca-13B (Han et al., 2023) | 13B
Table 3: We collect 16 LLMs, including 9 general LLMs and 7 medical LLMs, covering both open-source public LLMs and closed-source commercial LLMs (Claude-2, GPT-3.5-turbo, and GPT-4), across parameter counts from 6 billion to 70 billion and different model backbones (GLM and GPT).
General Large Language Models We include nine general LLMs: three leading closed-source commercial LLMs, i.e., Claude-2 (Anthropic, 2023), GPT-3.5-turbo (OpenAI, 2023a), and GPT-4 (OpenAI, 2023c), and six open-source public LLMs, i.e., ChatGLM (Tsinghua KEG, 2023; Du et al., 2022; Zeng et al., 2022), Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), and LLaMA-2-7B/13B/70B (Touvron et al., 2023c). These general LLMs are trained on large general-purpose corpora with more than 1T tokens (Zhao et al., 2023; Yang et al., 2023a; Zhou et al., 2023a).
Medical Large Language Models We choose seven medical LLMs with different numbers of parameters and different types of fine-tuning data. In detail, as shown in Table 3, ChatGLM-Med (Wang et al., 2023b) and DoctorGLM (Xiong et al., 2023) are fine-tuned from ChatGLM-6B (Tsinghua KEG, 2023; Du et al., 2022; Zeng et al., 2022) using QA pairs and dialogues, respectively. Huatuo (Zhang et al., 2023a), ChatDoctor (Li et al., 2023b), Baize-Healthcare (Xu et al., 2023), and MedAlpaca-7B/13B (Han et al., 2023) are built upon the LLaMA-series models. During fine-tuning, Huatuo and MedAlpaca employ QA pairs collected from medical knowledge graphs and medical texts, respectively, while ChatDoctor and Baize-Healthcare are fine-tuned on medical dialogues generated by commercial LLMs (e.g., ChatGPT).
Prompts Prompt designs are crucial for the performance of LLMs. Therefore, to ensure LLMs achieve optimal performance across different tasks, we use tailored prompts for each task so that LLMs can effectively understand the task and questions. In our implementation, we adopt the prompts used in the current state-of-the-art work for each task in the benchmark. Table 4 shows the prompts we used and their references.
\u2022 MedQA (USMLE), MedMCQA, MMLU-Medicine (prompt from Singhal et al., 2023b): \u201cThe following are multiple-choice questions about medical knowledge. Solve them in a step-by-step fashion, starting by summarizing the available information. Output a single option from the four options as the final answer.\u201d
\u2022 PubMedQA (prompt from Singhal et al., 2023b): \u201cThis is a multiple-choice question about medical research. Determine the answer to the question based on the strength of the scientific evidence provided in the context. Valid answers are yes, no, or maybe. Answer yes or no if the evidence in the context supports a definitive answer. Answer maybe if the evidence in the context does not support a definitive answer, such as when the context discusses both conditions where the answer is yes and conditions where the answer is no.\u201d
\u2022 ChatDoctor (prompt from Zhou et al., 2023b): \u201ctask\u201d: \u201cYour task is to list the medications based on the provided content related to the symptom or disease mentioned in the question. Understand the question, extract relevant information, analyze it, and provide a concise and accurate answer.\u201d; \u201canswer format\u201d: Analysis: provide an analysis that logically leads to the answer based on the relevant content. Final Answer: provide the final answer, which should be a list of medications related to the symptom or disease. \u201cnot to dos\u201d: \u201cDo not make assumptions not supported by the content. Avoid providing personal opinions or interpretations. Summarize and interpret the information as objectively and accurately as possible. You are providing an analysis, not diagnosing or treating medical conditions.\u201d
\u2022 MIMIC-CXR, IU-Xray (prompt from Tu et al., 2023): \u201cYou are a helpful radiology assistant. The following are questions about radiology reports. Summarize the findings in the report into diagnostic statements in a coherent paragraph. Given the findings: {Findings}. Q: Summarize the findings. A:\u201d
\u2022 MIMIC-III (prompt from Fleming et al., 2023): \u201cProvide plain language discharge instructions, containing the following three main components from the patient\u2019s perspective: (1) What is my main health condition? (i.e., why was I in the hospital?) (2) What do I need to do? (i.e., how do I manage at home, how should I best care for myself, what medications to take, and which appointments to go to next (if available)) (3) Why is it important for me to do this?\u201d
\u2022 BC5-disease, NCBI-Disease (prompt from Chen et al., 2023): \u201cParagraph: <Paragraph ID> | <text> Please extract all chemicals/genes/diseases mentioned in the paragraph. Answer with the format \u2018<Paragraph ID> | <recognized entities>\u2019\u201d
\u2022 DDI (prompt from Chen et al., 2023): \u201c@DRUG$, an anionic-binding resin, has a considerable effect in lowering the rate and extent of @DRUG$ bioavailability. Target: You need to identify the relationship between the two @DRUG$. Require: you must start by choosing one from [\u201cmechanism\u201d, \u201ceffect\u201d, \u201cadvice\u201d, \u201cint\u201d, \u201cNone\u201d]. Specific explanation: mechanism: used to annotate DDIs described by their PK mechanism (e.g., Grepafloxacin may inhibit the metabolism of theobromine). effect: used to annotate DDIs describing an effect (e.g., In uninfected volunteers, 46% developed rash while receiving SUSTIVA and clarithromycin) or a PD mechanism (e.g., Chlorthalidone may potentiate the action of other antihypertensive drugs). advice: used when a recommendation or advice regarding a drug interaction is given (e.g., UROXATRAL should not be used in combination with other alpha-blockers). int: used when a DDI appears in the text without any additional information (e.g., the interaction of Omeprazole and ketoconazole has been established). You should mark the final category with < >.\u201d
\u2022 GAD (prompt from Tang et al., 2023): \u201cGiven a sentence that introduces a gene (denoted as \u201c@GENE$\u201d) and a disease (denoted as \u201c@DISEASE$\u201d), predict whether the gene and disease have a relation or not. The relation between the gene and disease can be any functional, causal, or associative connection. If there is a relation, then the label should be \u201cYes\u201d, otherwise \u201cNo\u201d.\u201d
\u2022 HoC (prompt from Chen et al., 2023): \u201cdocument: <text>; target: The correct category for this document is ? You must choose from the given list of answer categories (introduce what each category is ...)\u201d
Table 4: The prompts used for the different evaluation tasks and datasets. We collect optimal prompts from existing state-of-the-art work.
5 Results 5.1 Medical Language Reasoning
Types | Methods | # Params | MedQA | MedMCQA | MMLU | PubMedQA | ChatDoc. | MIMIC-CXR | IU-Xray | MIMIC-III | BC5 | NCBI | DDI | GAD | HoC
Task-specific SOTA | - | - | 44.6 | 43.0 | - | 60.2 | - | 46.1 | 67.9 | 30.5 | 90.0 | 89.4 | 84.1 | 84.0 | 85.1
General LLMs | Claude-2 | - | 65.1 | 60.3 | 78.7 | 70.8 | 9.1 | 13.3 | 9.4 | 26.1 | 52.9 | 44.2 | 50.4 | 50.7 | 70.8
General LLMs | GPT-3.5-turbo | - | 61.2 | 59.4 | 73.5 | 70.2 | 7.3 | 14.1 | 10.3 | 28.6 | 52.3 | 46.1 | 49.3 | 50.8 | 66.4
General LLMs | GPT-4 | - | 81.2 | 74.6 | 90.8 | 76.6 | 13.7 | 15.2 | 11.4 | 30.1 | 65.7 | 55.3 | 62.6 | 64.4 | 78.1
General LLMs | ChatGLM | 6B | 25.7 | 24.2 | 33.5 | 53.0 | 2.9 | 13.3 | 7.5 | 18.6 | 37.2 | 31.9 | 34.1 | 36.6 | 47.5
General LLMs | Alpaca | 7B | 34.2 | 30.1 | 40.8 | 65.2 | 3.5 | 12.6 | 8.7 | 20.4 | 41.2 | 36.5 | 37.4 | 36.9 | 52.6
General LLMs | Vicuna | 7B | 34.5 | 33.4 | 43.4 | 64.8 | 2.6 | 13.8 | 8.2 | 23.4 | 44.5 | 37.0 | 39.4 | 41.2 | 53.8
General LLMs | LLaMA-2-7B | 7B | 32.9 | 30.6 | 42.3 | 63.4 | 3.3 | 12.3 | 8.6 | 20.2 | 40.1 | 34.8 | 37.9 | 39.3 | 48.6
General LLMs | LLaMA-2-13B | 13B | 38.1 | 35.5 | 46.0 | 66.8 | 4.8 | 12.0 | 9.1 | 21.1 | 46.6 | 38.3 | 39.7 | 41.2 | 55.9
General LLMs | LLaMA-2-70B | 70B | 45.8 | 42.7 | 54.0 | 67.4 | 5.5 | 13.9 | 8.0 | 23.2 | 47.8 | 41.5 | 45.6 | 44.7 | 63.2
Medical LLMs | ChatGLM-Med | 6B | 27.3 | 25.8 | 35.3 | 58.8 | 3.3 | 9.5 | 4.7 | 19.4 | 40.5 | 35.2 | 37.4 | 33.6 | 49.3
Medical LLMs | DoctorGLM | 6B | 25.9 | 23.1 | 36.8 | 57.4 | 3.1 | 6.5 | 3.5 | 15.2 | 38.7 | 33.6 | 35.6 | 34.7 | 50.8
Medical LLMs | Huatuo | 7B | 28.4 | 24.8 | 31.6 | 61.0 | 3.8 | 8.7 | 3.8 | 17.8 | 43.6 | 37.5 | 40.1 | 38.2 | 50.2
Medical LLMs | ChatDoctor | 7B | 33.2 | 31.5 | 40.4 | 63.8 | 5.3 | 8.9 | 4.2 | 20.7 | 45.8 | 40.9 | 41.2 | 40.1 | 55.7
Medical LLMs | Baize-Healthcare | 7B | 34.9 | 31.3 | 41.9 | 64.4 | 4.7 | 9.8 | 4.4 | 19.3 | 44.4 | 38.5 | 41.9 | 45.8 | 54.5
Medical LLMs | MedAlpaca-7B | 7B | 35.1 | 32.9 | 48.5 | 62.4 | 4.8 | 10.4 | 7.6 | 22.1 | 47.3 | 39.0 | 43.5 | 44.0 | 58.7
Medical LLMs | MedAlpaca-13B | 13B | 37.3 | 35.7 | 51.5 | 65.6 | 5.1 | 11.7 | 8.6 | 24.7 | 49.2 | 41.6 | 44.1 | 44.5 | 59.4
Table 5: Performance (measured by traditional matching scores) of LLMs under the zero-shot learning setting. We denote the results of the three commercial LLMs as upper bounds on the performance of open-source public LLMs. For comparison, the first row reports the results of task-specific state-of-the-art (SOTA) models, which are fine-tuned in a fully supervised manner on the downstream data and tasks. The close-ended QA task is the only task on which current LLMs are comparable to the SOTA.
From Table 5, we observe that on all datasets, the three leading commercial LLMs, i.e., Claude-2, GPT-3.5-turbo, and GPT-4, significantly outperform the other LLMs, general or medical. In particular, on the close-ended QA task with provided options, GPT-4 even achieves a competitive accuracy of 81.2 compared to human experts (87.0) (Wu et al., 2023). Among the open-source public LLMs, medical LLMs, e.g., ChatGLM-Med and DoctorGLM, achieve better results than general LLMs, e.g., ChatGLM, on all datasets, indicating that fine-tuning general LLMs on medical data can improve their performance on reasoning tasks. Discussion The results show that, on all close-ended QA datasets, all LLMs significantly outperform existing task-specific SOTA models, e.g., PubMedBERT (Gu et al., 2022). This proves that existing LLMs have a strong reasoning ability to give accurate answers from the provided options. However, on the open-ended treatment recommendation task, all LLMs achieve poor F1 scores (<15%) on the ChatDoctor dataset compared with SOTA models. This indicates a considerable need for advancement before LLMs can be integrated into actual clinical decision-making processes.
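As a concrete illustration of scoring the close-ended QA tasks, the sketch below extracts a predicted option letter from a free-text LLM reply and computes accuracy; the paper does not publish its parsing rules, so this heuristic is an assumption:

def extract_option(response):
    # Scan for the first standalone option letter (A-E) in the reply.
    for token in response.replace('(', ' ').replace(')', ' ').replace('.', ' ').split():
        if token in {'A', 'B', 'C', 'D', 'E'}:
            return token
    return None

def accuracy(predictions, golds):
    # Fraction of replies whose extracted option matches the gold answer.
    hits = sum(extract_option(p) == g for p, g in zip(predictions, golds))
    return hits / len(golds)

print(accuracy(['The answer is (D) Marfan syndrome.'], ['D']))  # 1.0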
5.2 Medical Language Generation This application is particularly useful in reducing the heavy workload of clinicians in medical text writing. Table 5 shows that, among all LLMs, GPT-4 (OpenAI, 2023c) consistently achieves the best results on all generation tasks, showcasing its exceptional capability in capturing and summarizing important clinical findings compared to other LLMs. Nonetheless, the task-specific SOTA model (Hu et al., 2022) achieves 46.1 and 67.9 ROUGE-L scores on MIMIC-CXR and IU-Xray, respectively, significantly higher than all LLMs. Discussion On the MIMIC-CXR and IU-Xray radiology report summarization datasets, most medical LLMs that have been fine-tuned on medical data perform worse than general LLMs. In contrast, on the discharge instruction generation task, which requires the model to understand various types of medical data to provide accurate discharge instructions, medical LLMs perform better than general LLMs. These observations may imply that instruction fine-tuning on medical data could decrease the summarization ability of LLMs, but improve the understanding of medical data. 5.3 Medical Language Understanding All LLMs exhibit poor performance in this scenario, covering the named entity extraction, relation extraction, and document classification tasks. For example, the best results among LLMs are achieved by GPT-4 on the BC5-Disease and NCBI-Disease datasets, with 65.7 and 55.3 F1 scores, which remain far from current state-of-the-art performance, i.e., the 90.0 F1 score achieved by ScienceBERT (Beltagy et al., 2019) and the 89.4 F1 score achieved by BioBERT (Lee et al., 2020), respectively. The medical LLMs have better language understanding than general LLMs in healthcare: with the same number of parameters, all medical LLMs outperform the general LLMs across datasets. Discussion The inadequate performance of all LLMs may be attributed to the absence of task-specific supervised training and thus a lack of necessary medical knowledge, such as the medical terminologies for named entity extraction, the medical relations between drugs, conditions, and symptoms for relation extraction, and the background of diseases for document classification (Chen et al., 2023). As a result, existing LLMs fail to comprehend texts that typically require extensive expert knowledge to interpret. This observation underscores the value of efficiently using clinical-standard knowledge of diseases, symptoms, and medications to fine-tune the LLMs.
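For reference, the entity-level F1 used for BC5-disease and NCBI-Disease in Table 1 can be computed from gold and predicted entity sets roughly as follows (an illustrative sketch, not the paper's evaluation script):

def entity_f1(gold_entities, pred_entities):
    # Entities are compared as exact (text, type) pairs at the entity level.
    gold, pred = set(gold_entities), set(pred_entities)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(entity_f1({('neuroblastoma', 'Disease')},
                {('neuroblastoma', 'Disease'), ('ataxia', 'Disease')}))  # 0.666...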
5.4 Few-shot Learning Setting We further evaluate the performance of LLMs in few-shot learning settings, i.e., the 1-shot, 3-shot, and 5-shot settings, across the three scenarios, i.e., reasoning, generation, and understanding. For the reasoning and understanding scenarios, we report the average performance over all datasets of each scenario. For the generation scenario, since the input text for the discharge instruction generation task is long, we do not report few-shot performance on the MIMIC-III dataset; we therefore average the performance on the other generation datasets to obtain the generation results of the LLMs. The results are reported in Figure 1.
Figure 1: Performance (measured by traditional matching scores) of LLMs under the few-shot (1-, 3-, and 5-shot) learning settings, with panels (a) Language Reasoning, (b) Language Generation, and (c) Language Understanding.
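To make the few-shot setup concrete, a k-shot prompt can be assembled from solved exemplars roughly as sketched below; the exemplar-selection strategy is not specified in the text, so random sampling is an assumption:

import random

def build_k_shot_prompt(instruction, exemplars, question, k):
    # Prepend k solved exemplars (question, answer) to the test question.
    # Random selection is an assumption, not the paper's stated strategy.
    parts = [instruction]
    for q, a in random.sample(exemplars, k):
        parts.append('Q: ' + q)
        parts.append('A: ' + a)
    parts.append('Q: ' + question)
    parts.append('A:')
    return chr(10).join(parts)  # chr(10) is a newline

exemplars = [('Which vitamin deficiency causes scurvy? (A) A (B) C (C) D (D) K', '(B)'),
             ('Which organ produces insulin? (A) Liver (B) Pancreas (C) Kidney (D) Spleen', '(B)')]
print(build_k_shot_prompt('Answer the multiple-choice question.', exemplars, 'Which bone is the longest? (A) Femur (B) Tibia (C) Ulna (D) Radius', k=1))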
(a) We observe that few-shot learning can significantly boost the performance of LLMs in language reasoning. This proves the effectiveness of few-shot learning, in which the provided examples supply useful medical reasoning knowledge for inferring the correct answers. However, most LLMs achieve their best results under the 1-shot and 3-shot settings. More examples (e.g., 5 shots) may not only make it difficult for LLMs to deal with long inputs but may also introduce noise, i.e., the provided examples may not be relevant to the input problem, thus affecting performance. As a result, providing more examples does not lead to further improvements. (b) In text generation, few-shot learning can directly demonstrate how to capture and summarize important clinical information and provide a desirable writing style. As a result, few-shot learning can consistently and substantially improve the performance of the LLMs, with more samples leading to better performance, which proves the effectiveness of few-shot learning for significantly boosting medical text generation. (c) However, in the case of language understanding, few-shot learning clearly impairs performance. This may be because, in language understanding tasks, the characteristics of different inputs usually differ greatly from each other; the entities or knowledge involved in the examples are therefore often irrelevant to the test data, so the model cannot effectively utilize the examples to improve performance.
5.5 Clinical Usefulness In Figure 2, we report the performance of LLMs in terms of clinical usefulness.
Figure 2: Performance of existing LLMs on our BenchHealth benchmark in terms of clinical usefulness (panels: Faithfulness, Comprehensiveness, Generalizability, Robustness). Higher faithfulness, comprehensiveness, and generalizability scores are better. Lower robustness scores are better.
(a) In terms of faithfulness, all medical LLMs outperform general LLMs, providing more faithful answers and thus avoiding misdiagnosis. (b) In contrast, general LLMs demonstrate better results than medical LLMs in terms of comprehensiveness, likely due to their susceptibility to \u201challucinations\u201d, meaning the LLMs tend to generate extensive content that includes both correct and incorrect information. (c) In terms of generalizability, medical LLMs achieve the best results, showing that fine-tuning on medical data can boost the overall performance of LLMs in healthcare. (d) General LLMs have better robustness, achieving lower robustness (variance) values than medical LLMs. For example, ChatGLM achieves 21.1 points, lower than ChatGLM-Med (22.4) and DoctorGLM (21.3). Discussion We hypothesize that the better comprehensiveness of the general LLMs could potentially be because a certain degree of hallucination may offer benefits: it might assist clinicians by providing a broader spectrum of diagnostic suggestions, which could be advantageous in the diagnosis and treatment of rare diseases. However, any content generated by LLMs must be supported by factual knowledge and evidence to provide reliable, rather than misleading, results. General LLMs have better robustness, and thus can better understand a variety of diverse inputs. We speculate that the reason may be the limited diversity of the fine-tuning data and tasks used to develop medical LLMs (Rohanian et al., 2023), which leads to overfitting to specific types of data and thus reduces the robustness of the model during instruction fine-tuning.
5.6 Human Evaluation We invite two junior annotators (medical students) and a senior annotator (a medical professor) to conduct the human evaluation. All three annotators have sufficient medical knowledge. Following previous works (Li et al., 2023b; Zhang et al., 2023b), we randomly select 200 real patient-doctor conversations from Li et al. (2023b). We require the LLMs to simulate a doctor and provide responses to various patient inquiries. Each junior annotator independently compares the responses from the public LLMs with those from the leading commercial LLMs, i.e., Claude-2, GPT-3.5-turbo, and GPT-4, in terms of the perceived quality of the responses, covering faithfulness, comprehensiveness, generalizability, and robustness. The senior annotator re-evaluates the cases that are difficult for the junior annotators to decide. The annotators are unaware of which model generated each response. We report the results (win+tie rates) of the public LLMs in Table 6.
Types | Methods | # Params | Faithfulness (vs. Claude-2 / GPT-3.5 / GPT-4) | Comprehensiveness (vs. Claude-2 / GPT-3.5 / GPT-4) | Generalizability (vs. Claude-2 / GPT-3.5 / GPT-4) | Robustness (vs. Claude-2 / GPT-3.5 / GPT-4)
General LLMs | Alpaca | 7B | 29.5 / 35.0 / 9.5 | 29.0 / 40.0 / 18.0 | 21.0 / 26.0 / 17.5 | 32.0 / 42.5 / 23.5
General LLMs | Vicuna | 7B | 34.0 / 39.5 / 14.0 | 33.5 / 43.5 / 30.5 | 35.5 / 41.0 / 22.0 | 39.0 / 37.5 / 20.0
General LLMs | LLaMA-2-7B | 7B | 32.5 / 37.0 / 13.0 | 40.5 / 48.0 / 26.5 | 29.0 / 34.5 / 21.5 | 44.5 / 46.5 / 27.5
General LLMs | LLaMA-2-13B | 13B | 39.5 / 44.0 / 16.0 | 47.0 / 52.5 / 37.0 | 43.0 / 49.0 / 32.5 | 52.5 / 49.5 / 31.0
General LLMs | LLaMA-2-70B | 70B | 43.0 / 49.5 / 19.5 | 52.5 / 58.0 / 41.5 | 54.5 / 56.5 / 39.0 | 58.5 / 61.0 / 45.0
Medical LLMs | ChatDoctor | 7B | 38.0 / 46.5 / 23.0 | 18.0 / 16.5 / 8.0 | 25.0 / 27.0 / 15.5 | 20.0 / 24.5 / 11.0
Medical LLMs | Baize-Healthcare | 7B | 44.5 / 52.0 / 28.5 | 29.5 / 33.5 / 18.5 | 39.5 / 45.5 / 28.0 | 36.0 / 42.0 / 22.5
Medical LLMs | MedAlpaca-7B | 7B | 47.0 / 55.5 / 31.5 | 26.0 / 33.0 / 15.5 | 33.5 / 31.0 / 19.0 | 30.5 / 37.5 / 17.0
Medical LLMs | MedAlpaca-13B | 13B | 50.5 / 61.0 / 34.0 | 31.0 / 35.5 / 19.5 | 38.5 / 39.0 / 20.5 | 37.5 / 43.0 / 24.0
Table 6: Results of the human evaluation on our BenchHealth benchmark. We compare open-source public LLMs with three leading commercial LLMs. All values are win+tie rates for the public LLM. Higher is better in all columns.
We observe that with the same number of model parameters, medical LLMs outperform general LLMs in terms of faithfulness and generalizability, but underperform general LLMs in comprehensiveness and robustness. These results are consistent with those shown in Figure 2, which demonstrates the validity and appropriateness of our metrics and benchmark. 6 Conclusions This paper introduces BenchHealth, a healthcare benchmark encompassing medical language reasoning, generation, and comprehension scenarios. It employs metrics that extend beyond mere accuracy, aiming to evaluate the utility and reliability of LLMs for clinical applications. Although LLMs have made promising advances, our analysis uncovers a gap between the capabilities of LLMs and the requirements for clinical application, especially in open-ended non-QA tasks that lack pre-determined answer choices, underscoring the challenges LLMs face in providing reliable support in healthcare. Limitations A limitation of this work is that the development of LLMs is rapid and we do not evaluate the latest LLMs, e.g., GPT-4.5 and Qwen (Bai et al., 2023), and medical LLMs, e.g., Zhongjing (Yang et al., 2023b) and Qilin-Med (Ye et al., 2023)."
}
title_10K/test_title_short_2405.00718v1.json
ADDED
@@ -0,0 +1,17 @@
{
{
"url": "http://arxiv.org/abs/2405.00718v1",
"title": "Can't say cant? Measuring and Reasoning of Dark Jargons in Large Language Models",
"abstract": "Ensuring the resilience of Large Language Models (LLMs) against malicious\nexploitation is paramount, with recent focus on mitigating offensive responses.\nYet, the understanding of cant or dark jargon remains unexplored. This paper\nintroduces a domain-specific Cant dataset and CantCounter evaluation framework,\nemploying Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis stages.\nExperiments reveal LLMs, including ChatGPT, are susceptible to cant bypassing\nfilters, with varying recognition accuracy influenced by question types,\nsetups, and prompt clues. Updated models exhibit higher acceptance rates for\ncant queries. Moreover, LLM reactions differ across domains, e.g., reluctance\nto engage in racism versus LGBT topics. These findings underscore LLMs'\nunderstanding of cant and reflect training data characteristics and vendor\napproaches to sensitive topics. Additionally, we assess LLMs' ability to\ndemonstrate reasoning capabilities. Access to our datasets and code is\navailable at https://github.com/cistineup/CantCounter.",
"authors": "Xu Ji, Jianyi Zhang, Ziyin Zhou, Zhangchi Zhao, Qianqian Qiao, Kaiying Han, Md Imran Hossen, Xiali Hei",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Can't say cant? Measuring and Reasoning of Dark Jargons in Large Language Models",
"main_content": "Introduction Large Language Models (LLMs), exemplified by ChatGPT[1], redefine information acquisition, communication, and problem-solving[2]. These models are trained on extensive datasets or fine-tuned from pre-existing models, necessitating vast amounts of data. However, LLMs also pose security and ethical concerns as attackers can exploit their generative capabilities for malicious purposes [3]. Such misuse encompasses disinformation dissemination [4], AI-driven crime [5], privacy breaches [6], and social engineering [7]. Despite efforts by regulators like OpenAI to implement content filters [8], there remains a risk of attackers disguising malicious content using \u201ccant\u201d or \u201cdark jargon\u201d concealed language elements requiring deeper comprehension [9]. LLMs excel in understanding and generating natural language responses, fostering user trust. While research evaluates their efficacy in providing accurate responses [10], little attention has been paid to LLMs\u2019 interaction with cant in specific domains. Prior studies often lack depth in understanding the intricacies of cant [11], especially its varied representations in domains like politics and drugs. In this paper, we investigate LLMs\u2019 ability to recognize and reason about cant, particularly in domains prone to offensive content like politics and drugs. Despite progress in filtering harmful content, attackers can still exploit cant to evade detection. Understanding LLMs\u2019 response to cant in specific domains is essential for addressing emerging security challenges. Additionally, we assess LLMs\u2019 ability to demonstrate reasoning capabilities. \u2217Corresponding author: [email protected] arXiv:2405.00718v1 [cs.CL] 25 Apr 2024 \fCan\u2019t say cant? Measuring and Reasoning of Dark Jargons in Large Language Models A PREPRINT Research Questions. To address the above issues, in this paper, we evaluate the reasoning abilities of current LLMs involving cant or dark jargon from the following four perspectives: 1. RQ1: Do different types of questions help LLM understand the cant? 2. RQ2: Do different question setups and prompt clues help LLM understand cant? 3. RQ3: Do different LLMs have the same understanding of the same cant? 4. RQ4: How well does LLM understand cant in different domains? CantCounter: Addressing past shortcomings[11], CantCounter is a system crafted to evaluate LLM\u2019s grasp of cant within specific domains. We compile Cant and Scene datasets from various sources to form adversarial texts. These datasets fine-tune the GPT-2 model and generate Scene fragments for assessing LLM comprehension. Co-Tuning methods align the Cant dataset and Scene fragments, while Data-Diffusion techniques augment and refine adversarial text. Employing Type, Sample learning, and Clue approaches enrich our experiments. Finally, Data-Analysis methods systematically evaluate 1.67 million data points. CantCounter is locally deployable and adaptable to any open-world dialogue system. Its replication has both advantages and drawbacks, aiding attackers in bypassing LLM classifiers while facilitating safety filter development. We define \u201centities\u201d as distinct objects or concepts and \u201cscenes\u201d as related events in specific environments. Ethical Considerations: CantCounter draws from public datasets such as Reddit [12] and 4chan [13], avoiding direct user interaction. However, its misuse poses risks, despite its benefits in addressing LLM\u2019s challenges. 
Despite these potential risks, we believe that the benefits of CantCounter far outweigh them. LLMs have become a hot topic [14], and we need to fully recognize their potential problems and promote their safer development and application. We caution that this paper may contain sensitive content, including drug- and violence-related examples, which could cause discomfort. Comprehensive data is available upon request; we have only open-sourced part of the dataset. Contributions. This paper makes three key contributions: 1. We present the Cant and Scene datasets, addressing data scarcity in domains like drugs, weapons, and racism, and laying the groundwork for future large language model assessment. 2. CantCounter, our framework, assesses large language models\u2019 understanding of domain-specific cants through four stages: Fine-Tuning for scene-fragment generation, Co-Tuning for cross-matching, Data-Diffusion for text expansion, and Data-Analysis for simplifying complex calculations. 3. Our evaluation of CantCounter reveals its efficacy in bypassing the security filters of mainstream dialogue LLMs, providing insights into LLM reasoning within specific domains and guiding future research. 2 Background 2.1 Large Language Model Security Issues ChatGPT, developed by OpenAI in November 2022 [1], has undergone upgrades and fine-tuning [15] to prevent harmful content generation. However, users can still provoke negative responses by using specific prompts [16]. Researchers are investigating security risks, including the generation of toxic outputs from benign inputs [17]. Recent studies have shown that attackers can bypass detection by encrypting inputs with methods like Caesar ciphers and by exploiting language nuances [18]. This paper proposes a Q&A-query approach to evaluate LLMs\u2019 reasoning abilities in handling such content. 2.2 Cant Cant, a specialized language used by social groups for secrecy [19], has gone by names like argot [20], slang [21], and secret language across history. While LLMs excel in traditional cant analysis, understanding criminal cant poses challenges: criminal groups use innocuous terms to hide illegal activities, and mastering them is necessary for law enforcement [22]. Our study explores cant in politics, drugs, racism, weapons, and LGBT issues. These cants share ambiguity, indirect messaging, and the potential for social harm: political cant conveys biases, drug cant evades regulation, racist cant reinforces biases, weapons cant enables illegal dealings, and LGBT cant discriminates. Mastering these cants is vital for addressing societal and security concerns. 2.3 Question Answering (Q&A) Task Dialogue systems fall into task-oriented and non-task-oriented categories. Task-oriented systems serve specific purposes like reservations, while non-task-oriented systems engage in free conversation; examples include ChatGPT, Bard, ERNIE, and Claude, offering services in entertainment, social interaction, and information retrieval [23]. Question-answering (Q&A) tasks in NLP evaluate language processing capabilities [24], including reading comprehension and logical reasoning. Q&A formats include Abstractive, Yes/No, and Multiple-choice, each requiring specific evaluation metrics [25]. We employ zero-shot/one-shot learning for testing.
3 CantCounter 3.1 High-level Idea We observe that the responses generated by LLMs vary with different cants, allowing adversaries to bypass filters or security restrictions; understanding how LLMs react to different cants is therefore very important. However, exhaustively trying different cant queries with different scenes across numerous domains to find those capable of bypassing LLM restrictions and generating harmful outputs would be time-consuming and impractical. We therefore investigate whether adversaries can independently combine different cants and scenes to generate context that is reasonable and coherent and that bypasses LLM filters or restrictions. To this end, we introduce CantCounter, the first evaluation (attack) framework targeting open-world dialogue systems (LLMs). 3.2 Threat Model We adopt a threat model similar to \u201cWhy so toxic\u201d [17], targeting deployed dialogue LLMs like ChatGPT. First, the adversary requires scene data different from the target LLM\u2019s training data. Second, they interact with the LLM, combining cants and scenarios to extract detectable cants. Finally, they access the victim LLM via CantCounter in a black-box manner, querying it through an API-like interface. 3.3 Dataset In our study, we extensively gathered cant related to five domains: politics, drugs, racism, weapons, and LGBT. The cants, comprising common and less common usages, hold practical meanings in real life. This Cant dataset forms a robust basis for evaluating the veracity and reliability of LLMs in specific domains. These five areas were chosen to address pressing societal issues impacting fundamental values such as social justice and human rights. Exploring politics, drugs, racism, weapons, and homosexuality enables LLMs to tackle real-world challenges effectively. While other domains like hacking and fraud are significant, we focused on these five due to data availability and processing feasibility, leaving room for future research on sensitive topics.
Figure 1: Construction of the Cant dataset involves collecting and summarizing security-related data, forming interconnected relationships between cants into an information network, and establishing the dataset through data classification and categorization, encompassing various domain-related entities and their corresponding cants.
In constructing the Cant dataset (Figure 2, step 2), we crawled or manually screened multiple sources, including government agency websites [26], online forums like Reddit [12], 4chan [13], and X [27], publicly available datasets from Kaggle [28] and Hugging Face [29], the dark web, and public compilations of cant. The multi-source data encompasses various text types closely related to specific domains.
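To illustrate the shape of the resulting resource, entities and their cants can be organized per domain as sketched below; the entity and cant examples are taken from Figure 1, while the container layout itself is an assumption:

cant_dataset = {
    'politics': {'Donald Trump': ['Cheeto Jesus', 'Agent Orange', '45'],
                 'Barack Obama': ['Nobama', 'Deporter in chief']},
    'drugs':    {'Cocaine': ['Blow', 'C', 'White Lady'],
                 'LSD': ['Microdot', 'L', 'Cubes']},
    'weapons':  {'gun': ['Rod', 'Heater', 'Cuete']},
}

# Each domain maps entities to their cants; the full dataset covers
# five domains, 187 entities, and 1,778 cants in total.
total_cants = sum(len(cants) for domain in cant_dataset.values()
                  for cants in domain.values())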
CantCounter utilizes information networks [30] to address redundancy challenges between cants, capturing their interdependency. The Cant dataset covers five domains, totaling 1,778 cants across 187 entities. We randomly selected 53 entities, totaling 692 cants, ensuring even representation across domains and prevalence in the open world. The selected entities and cants were cross-validated against authoritative sources [31, 32, 33, 34, 35] to ensure that they are widely present and reflected in publicly accessible information sources. Criteria like content relevance and topic specificity guided information selection and filtering, aiming for transparency and consistency. The resulting high-quality data forms the Scene dataset, laying the groundwork for the subsequent simulation-scene generation models. During information selection and filtering (Figure 2, step 1), explicit criteria were used to judge relevance and adherence to the study definitions. Decisions were reached through participatory discussion to mitigate subjectivity and ensure alignment with the research objectives. This rigorous process yields a refined dataset for accurate and relevant analysis. 3.4 Pipeline The CantCounter pipeline (Figure 2) consists of four stages: Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis, as detailed below.
Figure 2: The pipeline of CantCounter, spanning steps 1-9 across the Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis stages.
Cant is prevalent in the open world, so we aggregate raw text data from various sources to construct the Cant and Scene datasets (Section 3.3). Although the Cant and Scene datasets provide specific entities and scenes, they may not align well with a domain\u2019s requirements. Therefore, in step 3, we fine-tune GPT-2 on the Scene dataset to build five scene-generation models for large-scale scenes tailored to our specific domains. However, the generated scenes may not match the entities in the Cant dataset. In step 4, we address this issue by using entities from the Cant dataset to constrain the output of the generation model, ensuring scenes closely relate to the cant entities. Next, we conduct semi-automatic screening of the generated simulation scenes to form a set of Scene fragments. While these fragments contain entities, linking them with specific questions requires a further method. Hence, in steps 5-6, we devise the Co-Tuning stage, where Scene fragments are cross-matched with cants from the Cant dataset to form Fragments. To enable multi-task comparison, we construct detection tests through different combinations of specific domains, question types, learning methods, and prompt-clue methods in step 7; this completes and diffuses the Fragments to form the Q&A-Query datasets. Finally, in steps 8-9, the Q&A-Queries are sent to the target model API for completion, and a segmented data statistics algorithm is applied to obtain and analyze the test results in the Data-Analysis stage.
3.5 Stage 1: Fine-Tuning During the fine-tuning stage, we use the Scene dataset to guide GPT-2 in generating tailored scenarios for specific domains. Despite more advanced models like GPT-3.5 and GPT-4 being available, we opt for GPT-2 due to its open-source nature, which facilitates better control over training details. The fine-tuning code is publicly accessible for replication. The fine-tuning process is outlined in Algorithm 1.
Algorithm 1: Fine-Tuning
Input: pre-trained model parameters θ_p; fine-tuning dataset D_f; loss function L(θ) depending on the model parameters θ; optimizer O(θ) for updating the parameters; learning rate η; number of fine-tuning iterations T. Here x denotes an input sample, y its label, and ŷ the model's prediction.
Output: fine-tuned model parameters θ_f
  initialize θ_f = θ_p
  for t in range(1, T + 1):
      for batch_x, batch_y in D_f:
          ŷ = model(batch_x, θ_f)
          loss = L(ŷ, batch_y)
          gradient = torch.autograd.grad(loss, θ_f)
          θ_f = O(θ_f, gradient, η)
  return θ_f
The Transformer model [36] forms the basis of GPT-2, featuring encoders and decoders built from identical modules. GPT-2 employs a partially masked self-attention mechanism and positional encoding to understand sequence relationships, and it has been successfully applied to tasks like AI-text detection and text summarization. Overall, fine-tuning GPT-2 on the Scene dataset enables the generation of question-answer patterns tailored to specific domains, supporting the simulated scene generation tasks.
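For concreteness, a minimal runnable sketch of this loop with Hugging Face transformers is given below; the hyperparameters, epoch count, and placeholder Scene texts are illustrative assumptions rather than the released CantCounter configuration:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token        # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained('gpt2')  # theta_p: pre-trained weights
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # O(theta) with eta

scene_texts = ['placeholder scene text one', 'placeholder scene text two']  # stands in for D_f

model.train()
for t in range(3):  # T epochs (assumed)
    for i in range(0, len(scene_texts), 2):
        batch = tokenizer(scene_texts[i:i + 2], return_tensors='pt',
                          padding=True, truncation=True, max_length=512)
        # Causal-LM loss L(theta): labels are the inputs themselves
        # (pad positions would ideally be masked out with -100).
        out = model(input_ids=batch['input_ids'],
                    attention_mask=batch['attention_mask'],
                    labels=batch['input_ids'])
        out.loss.backward()   # gradient of L w.r.t. theta_f
        optimizer.step()      # theta_f update
        optimizer.zero_grad()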
3.6 Stage 2: Co-Tuning To address the many intersecting data processes in CantCounter, we use the Cant dataset and Scene fragments together and design a Co-Tuning method. Co-Tuning realizes the generation and collaboration of cross-matching and solves the problem of insufficient detection data. The Cant dataset provides detailed entity information for the generation model; during the Co-Tuning stage, the entities constrain the generative model and make the Scene fragments more consistent and coherent with the needs of a specific domain. In the end, we also manually review the results to ensure the relevance of cants to scenes and the distinctiveness of all scenes corresponding to the same cant. In this paper, we design formulas in Co-Tuning to mathematically represent this stage. The generation model is specified as $M_p$ ($p \in [1, 5]$), covering the five fine-tuned models $M_1, M_2, M_3, M_4, M_5$. As shown in Figure 3, entity $O_i$ represents the $i$-th entity ($i \in [1, 15]$) in the Cant dataset, and cant $\omega_{ij}$ represents the $j$-th cant of $O_i$ ($j \in [1, 20]$). For example, in the case of the politics domain, there are 10 entities used in our experiments, each entity has twenty cants, and $j$ is taken in $[1, 20]$.
Figure 3: The overall structure and process of Co-Tuning: the entities of the Cant dataset constrain the generation models, and cross-matching the resulting Scene fragments with cants yields $j$ Fragments of $k$ scenes each, so $i$ entities correspond to $i \times j \times k$ scenes.
The entity $O_i$ constrains the fine-tuned model $M_p$'s output, and the result of the constraint is the Scene fragment; this corresponds to Eq. (1). The scene $S_{ik}$ ($i \in [1, 10]$, $k \in [1, 101]$) represents the $k$-th scene fragment produced when the $i$-th entity is fed to the fine-tuned model $M_p$:
$$S_{ik} = M_p(O_i) \quad (1)$$
Eq. (2) denotes the cross-match of a cant with a Scene fragment, saved in $S'_{ik}$:
$$S'_{ik} = S_{ik} \cup \omega_{ij} \quad (2)$$
There are $k$ orange boxes in the $O_1$ Scene fragment of Figure 3; these boxes represent the $M_p$-generated text containing the Cant dataset's entities. The function of Eq. (2) is to replace the entities in the Scene fragments with cants from the Cant dataset. As shown in Figure 3, for example, to go from the $O_1$ Scene fragment to Fragment 1, we replace the entities in scenes $[S_{11}, S_{1k}]$ with the cant $\omega_{11}$, forming Fragment 1. By analogy, we build $j$ Fragments in the Co-Tuning stage. In the Co-Tuning stage, we obtain scene fragments related to entities in specific domains that have a high degree of context consistency and express various characteristics of the entities in different contexts. At the same time, our fine-tuned model is flexible enough to introduce multiple entities during generation and to allow scene fragments to describe the relationships among multiple entities. This stage generates diverse scene fragments. While the scene fragments are produced by a generative process, the Scene dataset we provide undergoes manual review to mitigate errors in both the generated content and the language utilized within the experimental environment.
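A minimal sketch of the Eq. (2) cross-matching, i.e., substituting each cant of an entity into its generated scenes (names are illustrative; the released CantCounter code is the authoritative implementation):

def co_tune(entity, cants, scenes):
    # Eq. (1) output: 'scenes' are the k fragments generated for 'entity'.
    # Eq. (2): each cant omega_ij replaces the entity mention in every
    # scene, yielding one Fragment (k rewritten scenes) per cant.
    fragments = []
    for cant in cants:
        fragments.append([scene.replace(entity, cant) for scene in scenes])
    return fragments

scenes = ['Last night someone offered me Cocaine at the party.']
print(co_tune('Cocaine', ['Blow', 'White Lady'], scenes))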
Data-Diffusion over the Fragments substantially increases the number of Q&A-Queries, providing diverse test cases for comprehensively evaluating the generation model's performance and establishing a varied database for future research and applications.

3.8 Stage 4: Data-Analysis
Steps (8) and (9) in Figure 2 cover this stage: step (8) sends the data expanded by Data-Diffusion to ChatGPT and the other target models, and step (9) analyzes the outputs of LLMs such as ChatGPT. After completing Data-Diffusion, we submit the generated Q&A-Queries to the LLM API interfaces and collect a large volume of results. These results are complex and diverse, with many interacting relationships, so we devise a data-analysis algorithm that yields both numerical and analytical outcomes.

Algorithm 2: Segmented data statistics
Input: J, the j Fragments from Co-Tuning (Figure 4); K, the k Scenes in a Fragment; tasks, [Abstractive: {AZ, AO}, Yes/No: {JZ, JO}, Multiple-choice: {MZ, MO}]; clues, [None-tip, Tip-1, Tip-2, All-tip]; intervals, [0, 1-10, 11-20, ..., 91-101]
Output: R_{j,t,c}, N_{j,t,c,z}, Sum_t, Sum_PCM
1: for each j in J do
2:   for each t in tasks do
3:     for each c in clues do
4:       N_{j,t,c} = sum_{a=1}^{k} i_a  (i_a = 1 if the answer is a hit, else i_a = 0)
5:       R_{j,t,c} = N_{j,t,c} / k
6:       for each interval z: if N_{j,t,c} falls in z, increment N_{j,t,c,z}
7: Sum_t = sum_{alpha=1}^{j} sum_{beta=1}^{c} N_{alpha,t,beta}
8: Sum_PCM = sum_{alpha=1}^{j} sum_{beta=1}^{t} N_{alpha,beta,c}
9: return R_{j,t,c}, N_{j,t,c,z}, Sum_t, Sum_PCM

After the Co-Tuning and Data-Diffusion stages, the test data generated by CantCounter is highly complex, so in the Data-Analysis stage we apply Algorithm 2 to compute statistics from multiple angles. During analysis, whenever the entity O_i is modified in the Co-Tuning stage (see Figure 3), Algorithm 2 is invoked accordingly. We analyze the results per task: the Question Type Method (Section 4.1) and the Sample Learning Method (Section 4.2) yield Sum_t across question types and sample-learning settings, while the Prompt Clue Method (Section 4.3) yields Sum_PCM across prompt clues. In Algorithm 2 we set the matching conditions, count the fragments, and obtain N_{j,t,c} and the accuracy R_{j,t,c}. We also define eleven intervals (0, 1-10, 11-20, ..., 91-101) to distinguish different feedback levels and obtain N_{j,t,c,z}. As shown in Algorithm 2, Zero-shot learning, One-shot learning, and the three tasks are iterated together: in the Abstractive task the output is AZ for Zero-shot input and AO for One-shot input; in the Yes/No task it is JZ for Zero-shot and JO for One-shot; in the Multiple-choice task it is MZ for Zero-shot and MO for One-shot. This logic is integrated into our code, making the pipeline semi-automated.
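A compact Python rendering of Algorithm 2 is given below; the `hits` structure (per-scene 0/1 hit indicators) is an assumed representation of the collected LLM responses.

```python
# Sketch of Algorithm 2: segmented statistics over LLM answers.
# hits[j][t][c] is assumed to be a list of k 0/1 indicators (1 = hit)
# for fragment j, task t, and clue c; this layout is illustrative.
def segmented_stats(hits, tasks, clues, k=101):
    N, R, interval_counts = {}, {}, {}
    sum_t = {t: 0 for t in tasks}          # Sum_t: over fragments and clues
    sum_pcm = {c: 0 for c in clues}        # Sum_PCM: over fragments and tasks
    for j in hits:
        for t in tasks:
            for c in clues:
                n = sum(hits[j][t][c])     # N_{j,t,c} = sum over a of i_a
                N[(j, t, c)] = n
                R[(j, t, c)] = n / k       # R_{j,t,c} = N_{j,t,c} / k
                # buckets 0, 1-10, 11-20, ..., 91-101
                z = 0 if n == 0 else min((n - 1) // 10 + 1, 10)
                interval_counts[z] = interval_counts.get(z, 0) + 1
                sum_t[t] += n
                sum_pcm[c] += n
    return N, R, interval_counts, sum_t, sum_pcm
```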
4 Experimental Design and Results
To explore our research questions, we conducted experiments in CantCounter, outlined sequentially in this section. We examined various question types for RQ1 (Section 4.1), different sample-learning setups for RQ2 (Section 4.2), and diverse prompt clues, also for RQ2 (Section 4.3). We focus primarily on ChatGPT-3.5 (version gpt-3.5-turbo-0613) owing to its convenience and wide usage, and run similar experiments with other language models. All experiments were performed on a server equipped with an RTX 3090 Ti GPU. In this section, we quantitatively analyze how cants and scenes bypass the LLM filter within the CantCounter framework. We conduct open-world query experiments across five domains: politics, drugs, racism, weapons, and LGBT. Setting k to 101, we match 692 cants to 53 entities, resulting in 69,892 scenes; these undergo Data-Diffusion, expanding to 1,677,408 queries. This enables a comprehensive analysis of corpus performance and language change within specific domains.

4.1 Question Type Method (QTM)
In the Q&A task, we conduct three types of tasks:
• Abstractive task: models generate responses freely, without relying on specific information extraction.
• Yes/No task: models provide binary responses, "True" or "False," based solely on the presented question and existing knowledge.
• Multiple-choice task: models select the correct answer from a set of options, demonstrating comprehension of the semantics and accurate identification.

Table 1 shows that Multiple-choice tasks achieve the highest accuracy (45.38%), while Yes/No tasks have the lowest (22.91%). ChatGPT's strong performance on multiple-choice questions is intriguing. In this task there are five options (A)-(E): (A)-(D) are relevant to a specific domain, (E) is "I don't know," and "Other" marks an answer unrelated to these options; (A) is the correct choice. Figure 5 displays the box-plot analysis. Examining the Multiple-choice results, we identify key factors behind its success. First, it offers a fixed answer set with one correct option and distractors, aiding comprehension. Second, its structured format simplifies eliminating incorrect options, improving accuracy. Third, the inclusion of an "I don't know" option improves behavior in uncertain situations.

Figure 5: The vertical axis shows the number of correct answers under the four tips (404 in total); options (A) and (E) stand out in Multiple-choice, being the correct answer and "I don't know," respectively.

After carefully studying ChatGPT-3.5's interpretation of option (E), we find that when the context is ambiguous or the entities implied by the context are rare, ChatGPT-3.5's accuracy drops significantly; that is, it prefers option (E). We also explore the low accuracy on the Yes/No task. Comparing ChatGPT-3.5's "False" answers with the Multiple-choice data, we find they often correspond to option (E) or to incorrect Multiple-choice selections, reflecting the clarity that explicit options provide. Differences in response style and keyword-detection criteria further affect ChatGPT-3.5's performance across the Abstractive and Yes/No tasks, since Yes/No restricts responses to "True" or "False." Overall, our analysis highlights how different Q&A types affect ChatGPT-3.5's accuracy in specific domains, with Multiple-choice performing best; further research is needed to improve ChatGPT-3.5's accuracy and adaptability in these domains.
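For illustration, a Multiple-choice Q&A-Query in the format described above could be assembled as follows; the exact prompt wording is an assumption rather than the paper's template.

```python
# Illustrative construction of a Multiple-choice Q&A-Query: options (A)-(D)
# are domain-relevant with (A) correct, and (E) is "I don't know".
def build_multiple_choice(scene, question, correct, distractors):
    options = [correct] + distractors[:3] + ["I don't know"]
    letters = ["(A)", "(B)", "(C)", "(D)", "(E)"]
    lines = [f"{l} {o}" for l, o in zip(letters, options)]
    return f"{scene}\n{question}\n" + "\n".join(lines) + "\nAnswer with one option."
```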
4.2 Sample Learning Method (SLM)
In our experiments, we explore two sample setups: Zero-shot and One-shot learning.
• Zero-shot learning: no examples are provided in the prompt, which contains only instructions and questions.
• One-shot learning: the prompt includes one example relevant to the discussion, consisting of a sample message and the user information.

Zero-shot learning thus involves a single user message, while One-shot learning processes a sample message plus a user message. These setups help us understand the LLM's performance under different sample-learning regimes and reveal its inference capabilities in information-poor settings; default hyper-parameters are used throughout to avoid extensive tuning. In this section, we explore how the Zero-shot and One-shot settings affect the LLM's accuracy in recognizing cant scenes (RQ2). Traditionally, One-shot learning often outperforms Zero-shot learning because more data is available [37]. However, our cross-domain analysis, depicted in Figure 6 and reflected in Table 1 (SLM columns), reveals an overall trend favoring Zero-shot learning, and the trend varies by domain.

Figure 6: Comparison of the number of correct answers under Zero-shot and One-shot learning across domains.

In the politics domain, One-shot learning performs better owing to ample data and contextual understanding. Conversely, in the LGBT domain, Zero-shot learning outperforms One-shot learning because publicly available examples are limited. One-shot learning helps ChatGPT-3.5 comprehend sensitive topics in context, but it may also introduce biases, lowering overall accuracy in specific domains. Similar analyses across the other domains yield consistent results.

4.3 Prompt Clue Method (PCM)
Here, CantCounter explores the impact of different clues on the LLM's recognition and reasoning abilities. To this end, we experiment with four clue settings:
• None-tip: identical to the original prompt, with no additional clues.
• Tip-1: adds one relevant tip to "None-tip"; for example, when describing Trump's cant in the politics domain, we add the clue "Politician" to make the prompt more directional.
• Tip-2: adds a different relevant tip to "None-tip"; for example, adding "United States" in the politics domain to enrich the prompt content.
• All-tip: adds both Tip-1 and Tip-2 to "None-tip"; for example, adding both "Politician" and "United States" to make the prompt more specific.

By observing the effects of these clues on LLMs, CantCounter can assess the fluctuations they induce in recognition and reasoning abilities. This helps us further understand the influence of cues on LLMs and provides directions for improving their application and performance. To answer RQ2, Table 1 reports ChatGPT-3.5's accuracy across the five domains under the different prompt clues. Generally, more clue-related information improves recognition accuracy, as in the political domain, where All-tip prompts perform markedly better.
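The four clue variants can be sketched as below; the "Politician"/"United States" clues follow the Trump example in the text, while the surrounding wording is an assumption.

```python
# Sketch of the four prompt-clue variants (PCM). Clue strings follow the
# politics-domain example above; the template wording is illustrative.
def add_clues(base_prompt, tip1="Politician", tip2="United States"):
    return {
        "None-tip": base_prompt,
        "Tip-1": f"{base_prompt} Hint: {tip1}.",
        "Tip-2": f"{base_prompt} Hint: {tip2}.",
        "All-tip": f"{base_prompt} Hints: {tip1}, {tip2}.",
    }
```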
However, adding clues does not always increase accuracy, possibly because of information redundancy or the triggering of the LLM's filters. Too many clues may reduce accuracy, as in the LGBT domain, where Tip-1 prompts were less accurate than None-tip prompts. Our analysis stresses the importance of balanced clue selection that maximizes the use of external information without compromising accuracy; choosing appropriate clues in moderate quantities is key to enhancing ChatGPT-3.5's domain-specific performance.

Table 1: CantCounter statistics, highlighting top performances (%): Multiple-choice excels in QTM (45.38%), Zero-shot learning in SLM (52.13%), and All-tip in PCM (29.11%). A = Abstractive, Y/N = Yes/No, Mc = Multiple-choice, Zs = Zero-shot, Os = One-shot, NT = None-tip, T1 = Tip-1, T2 = Tip-2, AllT = All-tip.

| Domain | A | Y/N | Mc | Zs | Os | NT | T1 | T2 | AllT |
|---|---|---|---|---|---|---|---|---|---|
| Politics | 26.81 | 22.55 | 50.64 | 42.85 | 57.15 | 19.01 | 24.75 | 25.19 | 31.05 |
| Drugs | 21.16 | 22.41 | 56.43 | 55.41 | 44.59 | 17.32 | 27.43 | 25.47 | 29.78 |
| Racism | 29.05 | 27.60 | 43.35 | 41.39 | 58.61 | 11.22 | 19.63 | 37.50 | 31.66 |
| Weapons | 50.89 | 16.20 | 32.91 | 54.96 | 45.04 | 18.73 | 28.11 | 25.27 | 27.90 |
| LGBT | 34.41 | 25.75 | 39.84 | 59.78 | 40.22 | 22.58 | 22.10 | 28.53 | 26.79 |
| Total | 31.71 | 22.91 | 45.38 | 52.13 | 47.87 | 19.03 | 24.61 | 27.24 | 29.11 |

Table 2: Zero-shot and One-shot Q&A accuracy in CantCounter for ChatGPT-3.5, GPT-4, Bard, New Bing, and SparkDesk. Acc = accuracy rate, Rej = rejection rate, Don't know = "I don't know" rate.

| Model | Zs Acc | Zs Rej | Zs Don't know | Os Acc | Os Rej | Os Don't know |
|---|---|---|---|---|---|---|
| ChatGPT-3.5 [1] | 47.61 | 4.66 | 39.91 | 45.52 | 1.63 | 46.45 |
| GPT-4 [38] | 27.27 | 0.00 | 70.45 | 50.00 | 0.00 | 34.09 |
| Bard [39] | 47.73 | 4.55 | 13.64 | 65.91 | 15.91 | 6.82 |
| New Bing [40] | 50.00 | 11.36 | 34.09 | 50.00 | 36.36 | 2.27 |
| SparkDesk [41] | 29.55 | 45.45 | 9.09 | 20.45 | 68.18 | 2.27 |

4.4 Comparison with other LLMs
In our study, we examine several LLMs alongside ChatGPT-3.5 to address RQ3, including GPT-4 [1], New Bing [40], Bard [39], Claude [42], ERNIE [43], and SparkDesk [41]. ERNIE is optimized for Chinese content, and translating cant prompts may compromise their subtlety and effectiveness; moreover, ERNIE's frequent account suspensions hindered extensive trials [44], and Claude's sensitive-content handling likewise led to account suspensions [42]. We therefore focus on comparing and validating four other LLMs: GPT-4, Bard, New Bing, and SparkDesk. Table 2 presents the ratios of correct answers, refused answers, and "I don't know" responses. Interestingly, GPT-4 consistently responds in all situations, avoiding refusal to answer; this contrasts with the other models, which often refuse due to content filtering. GPT-4's tendency to answer "I don't know" may stem from our controlled comparisons in the QTM and PCM methods, particularly in Multiple-choice scenarios. Conversely, the other LLMs tend to refuse to answer, likely because filters and classifiers categorize the content; SparkDesk exhibits the highest refusal rate, possibly due to overly strict filters.
Furthermore, models are more prone to refuse under One-shot learning, since they rely on context understanding and the added context can itself trigger filters. These findings offer insights into the performance of these LLMs across different learning tasks and inform future research directions.

4.5 Takeaways
We observe varying accuracy across Q&A-Query types (RQ1), with Multiple-choice tasks the most accurate and Yes/No tasks the least. In sensitive domains, Zero-shot learning outperforms One-shot learning (RQ2), and increasing prompt clues improves cant-identification accuracy (RQ2). More recent LLMs consistently avoid refusing to answer (RQ3), but they are more likely to refuse questions related to racism than to LGBT topics (RQ4).

5 Conclusion
This paper presents the first comprehensive evaluation of LLMs' reasoning capability over cants, or dark jargons. We created two domain-specific datasets, the Cant and Scene datasets, and developed an evaluation framework that assesses LLMs' reasoning abilities through cant comprehension. We proposed a four-stage strategy (Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis) to address cross-matching and complex data-calculation problems. Our experiments reveal varying comprehension levels of LLMs under different question types (Abstractive, Yes/No, Multiple-choice), sample-learning methods (Zero-shot/One-shot learning), and prompt clues (None-tip, Tip-1, Tip-2, All-tip). Additionally, across different domains (politics, drugs, racism, weapons, LGBT), different LLMs (GPT-3.5, GPT-4, New Bing, Bard, SparkDesk) demonstrate varying refusal rates. Our findings provide the security research community with insights into LLMs' reasoning capabilities regarding "cant," emphasizing the importance of implementing effective safety filters and measures for screening potentially hazardous LLM-generated content."
}
title_10K/test_title_short_2405.00722v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.00722v1",
"title": "LLMs for Generating and Evaluating Counterfactuals: A Comprehensive Study",
"abstract": "As NLP models become more complex, understanding their decisions becomes more\ncrucial. Counterfactuals (CFs), where minimal changes to inputs flip a model's\nprediction, offer a way to explain these models. While Large Language Models\n(LLMs) have shown remarkable performance in NLP tasks, their efficacy in\ngenerating high-quality CFs remains uncertain. This work fills this gap by\ninvestigating how well LLMs generate CFs for two NLU tasks. We conduct a\ncomprehensive comparison of several common LLMs, and evaluate their CFs,\nassessing both intrinsic metrics, and the impact of these CFs on data\naugmentation. Moreover, we analyze differences between human and LLM-generated\nCFs, providing insights for future research directions. Our results show that\nLLMs generate fluent CFs, but struggle to keep the induced changes minimal.\nGenerating CFs for Sentiment Analysis (SA) is less challenging than NLI where\nLLMs show weaknesses in generating CFs that flip the original label. This also\nreflects on the data augmentation performance, where we observe a large gap\nbetween augmenting with human and LLMs CFs. Furthermore, we evaluate LLMs'\nability to assess CFs in a mislabelled data setting, and show that they have a\nstrong bias towards agreeing with the provided labels. GPT4 is more robust\nagainst this bias and its scores correlate well with automatic metrics. Our\nfindings reveal several limitations and point to potential future work\ndirections.",
"authors": "Van Bach Nguyen, Paul Youssef, J\u00f6rg Schl\u00f6tterer, Christin Seifert",
"published": "2024-04-26",
"updated": "2024-04-26",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Sentiment AND Analysis",
"gt": "LLMs for Generating and Evaluating Counterfactuals: A Comprehensive Study",
"main_content": "Introduction The growing popularity of artificial intelligence (AI) and increasingly complex \u201cblack-box\u201d models have triggered a critical need for interpretability. As Miller (2019) highlights, explanations often seek to understand why an event P occurred instead of an alternative Q. Ideally, explanations \u2217 Equal contribution Positive: If you haven't seen this, it's incredible. It is pure gold. I saw this about 17 years ago, and I'm still hype about it. Positive:\u00a0If you haven't seen this, it's amazing. It is incredible. I saw this about 17 years ago, and I'm still amazed from it. Positive:\u00a0If you haven\u2019t seen this, it\u2019s terrible. It is pure beauty. I saw this about 17 years ago, and I\u2019m still impressed from it. Positive:\u00a0If you haven't seen this, it's amazing. It is a hidden gem. I saw this about 17 years ago, and I'm still enlightened from it. Negative: If you haven't seen this, it's terrible. It is pure trash. I saw this about 17 years ago, and I'm still screwed up from it. Original 3.3 3.0 3.4 3.5 score Figure 1: Counterfactual for Sentiment Analysis from several LLMs with their evaluation scores from GPT4. should demonstrate how minimal changes to an instance could have led to different outcomes. In the context of textual data, this translates to introducing minimal modifications to the text through word additions, replacements, or deletions, to flip the label assigned by a given classifier. Counterfactual generation in NLP aims to foster an understanding of models, thereby facilitating their improvement (Kaushik et al., 2020), debugging (Ross et al., 2021), or rectification (Balashankar et al., 2023). In the field of NLP, LLMs have consistently demonstrated remarkable performance across diverse tasks. However, despite significant advancements in counterfactual generation methods, the efficacy of LLMs in producing high-quality CFs remains an open question. Our study bridges this gap by rigorously assessing the inherent capability of LLMs to generate CFs and identifying the most effective ones. We conduct a comprehensive comparison of several common LLMs, spanning different sizes and accessibility levels, evaluating their performance specifically on the counterfactual generation task. Our assessment encompasses standard metrics for counterfactual quality, as well as an in-depth evaluation of language fluency tailored to the context of counterfactual generation. Furthermore, we extend our analysis to data augmentation. We consider generating CFs for two arXiv:2405.00722v1 [cs.CL] 26 Apr 2024 \fNLU tasks in this study: Sentiment Analysis (SA) and Natural Language Inference (NLI). Our analysis demonstrates that LLMs are able to generate fluent text. However, they have difficulties in inducing minimal changes. Generating CFs for SA is less challenging than NLI, where LLMs exhibit weaknesses in generating CFs that flip the labels. For data augmentation, CFs from LLMs can be an alternative to human CFs, as they are able to achieve similar performance, while on NLI further improvements are needed. Furthermore, we show a positive correlation between keeping minimal changes and data augmentation performance. This suggests a new direction to generate improved data for augmentation, potentially leading to more efficient augmentation approaches. We further assess the ability of LLMs to act as evaluators of CFs. 
By conducting controlled experiments, we show that LLMs have a strong bias to agree with the given labels, even when these are incorrect. Additionally, we show the alignment between GPT4-based evaluation and intrinsic metrics for CFs, indicating that GPT-4 is a reliable evaluator for counterfactual generation. These findings suggest that GPT-4 is a suitable choice of LLM for evaluating the quality of generated CFs. A sample of CFs from different LLMs with the corresponding scores is shown in Figure 1. Finally, to facilitate further research, we contribute a new dataset of CFs generated by various LLMs.

2 Evaluation Methodology
We conduct a multi-faceted evaluation, considering several use cases where CFs could be beneficial.

2.1 Intrinsic Evaluation
Given a fixed classifier f and a dataset with N samples (x_1, x_2, ..., x_N), where x = (z_1, z_2, ..., z_n) represents a sequence of n tokens with ground-truth label y, a valid counterfactual x' should (1) achieve the desired target label y' with (2) minimal changes, and (3) align with likely feature distributions (Molnar, 2022). We therefore consider three intrinsic properties, Flip Rate, Textual Similarity, and Perplexity, corresponding to these criteria.

Flip Rate (FR): measures how effectively a method changes the labels of instances with respect to a pretrained classifier. FR is defined as the percentage of generated instances whose labels are flipped, over the total number of instances N (Bhattacharjee et al., 2024):

FR = (1/N) * sum_{i=1}^{N} 1[f(x'_i) = y']

Textual Similarity (TS): quantifies the closeness between an original instance and its counterfactual; lower distances indicate greater similarity. We use the Levenshtein distance d to quantify the token distance between the original instance x and the counterfactual x'. This choice is motivated by the Levenshtein distance's ability to capture all types of edits (insertions, deletions, and substitutions) and by its widespread use in related work (Ross et al., 2021; Treviso et al., 2023):

TS = (1/N) * sum_{i=1}^{N} d(x_i, x'_i) / |x_i|

Perplexity (PPL): to ensure that the generated text is plausible, realistic, and follows a natural text distribution, we measure perplexity with GPT-2, owing to its effectiveness in capturing such distributions (Radford et al., 2019). (While GPT-2 is used for simplicity in this study, any other LLM with strong text generation capabilities could be substituted.)

PPL(x) = exp( -(1/n) * sum_{i=1}^{n} log p_theta(z_i | z_{<i}) )

2.2 Data Augmentation
After detecting failures in task-specific models, CFs can be used to augment the training data and help close potential flaws in the reasoning of these models (Kaushik et al., 2020). Additionally, data augmentation with CFs increases generalization and OOD performance (Sen et al., 2021; Ding et al., 2024). In this evaluation, we examine how augmenting the original training data with human- and LLM-generated CFs affects the performance of task-specific models.

2.3 LLMs for CFs Evaluation
Evaluation with LLMs has been shown to be a valid alternative to human evaluation on various tasks, such as open-ended story generation and adversarial attacks (Chiang and Lee, 2023), open-ended questions (Zheng et al., 2023), translation (Kocmi and Federmann, 2023), and natural language generation (Liusie et al., 2024). In this work, we examine how well LLMs can evaluate CFs.
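For reference, the three intrinsic metrics from Section 2.1 can be computed along the following lines; the whitespace tokenization, classifier interface, and model loading are simplifying assumptions rather than the paper's exact implementation.

```python
# Hedged sketch of the intrinsic metrics FR, TS, and PPL defined above.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def levenshtein(a, b):
    # Token-level edit distance (insertions, deletions, substitutions).
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        cur = [i]
        for j, tb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ta != tb)))
        prev = cur
    return prev[-1]

def flip_rate(classifier, cfs, targets):
    # FR = (1/N) * sum 1[f(x'_i) = y']
    return sum(classifier(x) == y for x, y in zip(cfs, targets)) / len(cfs)

def textual_similarity(originals, cfs):
    # TS = (1/N) * sum d(x_i, x'_i) / |x_i|, over whitespace tokens
    return sum(levenshtein(x.split(), c.split()) / len(x.split())
               for x, c in zip(originals, cfs)) / len(originals)

def perplexity(text, model, tok):
    # PPL(x) = exp(mean negative log-likelihood under GPT-2)
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

# model = GPT2LMHeadModel.from_pretrained("gpt2")
# tok = GPT2TokenizerFast.from_pretrained("gpt2")
```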
Detecting mistakes in CFs with LLMs opens the door to iteratively refining CFs (Madaan et al., 2023). To assess LLMs as CF evaluators, we have them evaluate two sets of CFs: an honest set of human CFs, and a corrupted set in which we corrupt the ground-truth labels. We compare the scores between the two sets and draw conclusions about the reliability of LLMs for evaluating CFs.

3 Experimental Setup

3.1 Data
We compare CFs generated by LLMs against CFs generated by crowd workers (Kaushik et al., 2020) and experts (Gardner et al., 2020), hereinafter referred to as "Human Crowd" and "Human Experts," respectively. Sentiment Analysis (SA): we experiment with the IMDb dataset (Maas et al., 2011) and, for better comparability, use the data splits from Kaushik et al. (2020). Natural Language Inference (NLI): we experiment with SNLI (Bowman et al., 2015), again using the data splits from Kaushik et al. (2020).

3.2 Generating Counterfactuals
To keep our study LLM-focused and computationally feasible, we generate counterfactuals in a way that fulfills the following criteria:
• Generated counterfactuals can be used for data augmentation (an evaluation aspect).
• No human intervention is required (e.g., specifying edits or labeling).
• No additional training is required, keeping the study computationally feasible.
• The resulting counterfactuals depend only on the evaluated LLM, excluding other confounding factors.

To create the prompt for generating a counterfactual, we combine two techniques (a schematic code sketch follows at the end of this subsection). (1) Selecting the closest factual instance to the current instance (Liu et al., 2022): since the provided example has a crucial effect on performance (Liu et al., 2022), we select the closest human-generated factual/counterfactual pair, obtaining latent representations with SentenceBERT (Reimers and Gurevych, 2019) and computing cosine similarity in that latent space. (2) Chain-of-Thought (CoT) prompting (Wei et al., 2022), showing the steps needed to generate a counterfactual from a factual instance, which has been shown to help LLMs reason better and provide higher-quality answers. An overview of the generation process is depicted in Figure 2. Specifically, we use the validation set of each dataset as the reference pool from which the closest example is selected when generating counterfactuals for the train and test sets. After obtaining the pair of closest instances, we apply CoT prompting by defining three steps to generate the counterfactual:
• Step 1: Identify all of the important words that contribute to flipping the label.
• Step 2: Find replacements for the words identified in Step 1 that lead to the target label.
• Step 3: Substitute the replacements into the original text to obtain the counterfactual instance.

This prompt aligns with the principle of other work (Ross et al., 2021; Treviso et al., 2023; Li et al., 2024): identify significant words that impact the label and alter them to flip it, thereby generating counterfactual instances. The prompt examples can be found in Appendix F.
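The sketch below illustrates the two techniques; the SBERT checkpoint name and the instruction wording are assumptions, and the paper's exact templates are in its Appendix F.

```python
# Sketch of the prompt-construction strategy: retrieve the closest human
# factual/counterfactual pair with SentenceBERT, then prepend CoT instructions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in SBERT checkpoint

def closest_pair(query_text, reference_pairs):
    # reference_pairs: list of (factual, counterfactual) from the validation set
    q = encoder.encode(query_text, convert_to_tensor=True)
    refs = encoder.encode([f for f, _ in reference_pairs], convert_to_tensor=True)
    best = int(util.cos_sim(q, refs).argmax())     # highest cosine similarity
    return reference_pairs[best]

def build_prompt(text, target_label, reference_pairs):
    fact, cf = closest_pair(text, reference_pairs)
    return (
        "Step 1: Identify the words that drive the current label.\n"
        "Step 2: Find replacements that lead to the target label.\n"
        "Step 3: Apply the replacements to obtain the counterfactual.\n"
        f"Example: {fact} -> {cf}\n"
        f"Now rewrite the text so its label becomes '{target_label}':\n{text}"
    )
```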
Figure 2: An overview of the CF generation process, with step-by-step instructions shown on the closest example: the original negative review ("If you haven't seen this, it's terrible. It is pure trash. I saw this about 17 years ago, and I'm still screwed up from it.") is transformed by (1) identifying the important words (terrible, pure trash, screwed up), (2) finding replacements (terrible -> amazing, pure trash -> hidden gem, screwed up -> enlightened), and (3) substituting them into the original text to obtain the counterfactual ("If you haven't seen this, it's amazing. It is a hidden gem. I saw this about 17 years ago, and I'm still enlightened from it.").

3.3 LLMs
We compare open-source LLMs with closed-source LLMs. We choose LLAMA-2 (Touvron et al., 2023) and Mistral (Jiang et al., 2023) as representatives of open-source models, and GPT-3.5 and GPT-4 (accessed via the OpenAI API, https://openai.com/) as representatives of closed-source LLMs. Table 1 summarizes the properties of each LLM.

Table 1: Characteristics of the LLMs, including size, human feedback (HF), instruction tuning (Instruct), and open-source status (OS).

| Model | Size | HF | Instruct | OS |
|---|---|---|---|---|
| LLAMA2 | 7B/70B | ✓ | ✓ | ✓ |
| Mistral | 7B | ✗ | ✓ | ✓ |
| Mistral | 56B | ✗ | ✗ | ✓ |
| GPT3.5 | - | ✓ | ✓ | ✗ |
| GPT4 | - | ✓ | ✓ | ✗ |

4 Results and Discussion

4.1 Intrinsic Evaluation
We show the results of the intrinsic evaluation in Table 2. For the flip rate, we use SOTA BERT-based models from (Morris et al., 2020). The obtained perplexity values reflect the high fluency of LLMs, some of which are even more fluent than humans (note that the reported perplexity values are based on GPT-2). The distance values show that LLMs do not necessarily adhere to minimal changes; one exception is GPT3.5, whose average distance values resemble those of human-generated CFs. The large distance values for LLM-generated CFs could be explained by their tendency to overgenerate (Guerreiro et al., 2023). In terms of flip rate, we notice that some LLM-generated CFs have a higher flip rate than human-generated CFs on SA, whereas the opposite holds on NLI. Indeed, NLI CFs appear more difficult to generate than SA CFs, which explains the gap in flip rate between LLMs and humans on the one hand, and between GPT-4 and the other LLMs on the other (especially apparent on the NLI hypothesis test set). This suggests that GPT-4 should be the preferred choice for generating CFs to explain model behavior. Note that LLAMA2 70B appears to have a higher flip rate on the SA and NLI premise test sets, but it also has a much lower success rate in generating CFs at all (cf. Table 11 in Appendix A). We examine the LLM-generated CFs in more detail in Section 4.4. We also notice that Mistral 56B, which has not been trained with instruction tuning or RLHF, produces low-quality CFs in terms of the metrics considered here.

Table 2: Metrics for the intrinsic evaluation. PPL is perplexity using GPT-2; TS is the Levenshtein distance; FR is the flip rate with respect to a SOTA classifier.

| Task | LLM | PPL ↓ | TS ↓ | FR ↑ |
|---|---|---|---|---|
| SA | Expert Humans | 51.07 | 0.16 | 81.15 |
| SA | Crowd Humans | 48.03 | 0.14 | 85.66 |
| SA | GPT3.5 | 49.53 | 0.16 | 79.51 |
| SA | GPT4 | 49.05 | 0.29 | 94.03 |
| SA | LLAMA2 7B | 46.99 | 0.64 | 78.26 |
| SA | LLAMA2 70B | 60.46 | 0.82 | 96.31 |
| SA | Mistral 7B | 48.55 | 1.06 | 95.13 |
| SA | Mistral 56B | 64.05 | 1.50 | 58.55 |
| NLI premise | Crowd Humans | 74.89 | 0.17 | 59.13 |
| NLI premise | GPT3.5 | 71.62 | 0.15 | 35.50 |
| NLI premise | GPT4 | 73.39 | 0.28 | 57.12 |
| NLI premise | LLAMA2 7B | 70.34 | 0.36 | 41.02 |
| NLI premise | LLAMA2 70B | 70.74 | 0.33 | 58.20 |
| NLI premise | Mistral 7B | 78.34 | 0.36 | 37.71 |
| NLI premise | Mistral 56B | 75.57 | 0.34 | 24.32 |
| NLI hypothesis | Crowd Humans | 65.67 | 0.19 | 79.75 |
| NLI hypothesis | GPT3.5 | 51.30 | 0.19 | 41.50 |
| NLI hypothesis | GPT4 | 58.35 | 0.21 | 65.88 |
| NLI hypothesis | LLAMA2 7B | 59.60 | 0.28 | 38.64 |
| NLI hypothesis | LLAMA2 70B | 47.69 | 0.32 | 48.85 |
| NLI hypothesis | Mistral 7B | 39.06 | 0.46 | 44.11 |
| NLI hypothesis | Mistral 56B | 62.77 | 0.33 | 37.79 |
This highlights the importance of instruction tuning and RLHF in generating high-quality text. This part of the evaluation shows that LLMs can generate fluent CFs but struggle to induce minimal changes. It also demonstrates that generating NLI CFs that flip the label is challenging, whereas generating SA CFs is less difficult.

4.2 Data Augmentation
We train on the original training data plus CFs from different LLMs to see whether augmentation improves performance; for comparison, we also augment with human CFs. The results for SA and NLI are shown in Tables 3 and 4, respectively.

SA: on the crowd-CFs and expert-CFs test sets, including LLM-generated CFs leads to improved performance. Perhaps surprisingly, GPT3.5 and LLAMA2 7B provide the most useful CFs for data augmentation; augmenting with human CFs still works best. On the original test set, augmenting with CFs does not improve performance, which shows that the gains from data augmentation are visible only when the test set contains challenging examples.

Table 3: Data augmentation results for SA. The classification model is trained on original data plus LLM- or human-generated CFs and evaluated on three test sets (crowd CFs, expert CFs, original) with accuracy as the metric.

| Training data | crowd CFs | expert CFs | original |
|---|---|---|---|
| Original only | 92.21 | 86.27 | 90.16 |
| + Human Crowd | 95.90 | 92.62 | 89.96 |
| + GPT3.5 | 95.08 | 91.19 | 89.96 |
| + GPT4 | 93.65 | 88.73 | 90.37 |
| + LLAMA2 70B | 94.26 | 90.78 | 89.55 |
| + LLAMA2 7B | 95.49 | 91.19 | 89.75 |
| + Mistral 7B | 92.42 | 88.11 | 88.93 |
| + Mistral 56B | 93.85 | 86.89 | 84.02 |

NLI: on the crowd premise test set, which consists of CFs generated by changing only the premise, most LLM-generated CFs improve the model's performance by a good margin (> 13 pp), though the gap to human CFs remains large (~9 pp). On the crowd hypothesis test set, most LLMs lower performance, with two exceptions (GPT4 and LLAMA2 70B) that are on par with no augmentation; here too, the gap to human CFs is large (~13 pp). On the original test set, augmenting with LLM-generated CFs hurts performance, while augmenting with human CFs brings good improvements (5 pp). This shows how high-quality CFs improve the model's capabilities and points to a problem with the LLM-generated CFs for NLI.

Table 4: Data augmentation results for NLI. The classification model is trained on original data plus LLM- or human-generated CFs, with accuracy as the metric.

| Training data | crowd Premise | crowd Hypothesis | original |
|---|---|---|---|
| Original only | 39.88 | 58.50 | 73.50 |
| + Human Crowd | 65.38 | 71.25 | 78.50 |
| + GPT3.5 | 55.62 | 51.88 | 55.00 |
| + GPT4 | 55.88 | 58.63 | 70.75 |
| + LLAMA2 7B | 56.25 | 51.12 | 57.50 |
| + LLAMA2 70B | 53.50 | 58.13 | 65.75 |
| + Mistral 7B | 52.88 | 48.38 | 57.75 |
| + Mistral 56B | 43.75 | 39.75 | 45.75 |

Connection with intrinsic metrics: we examine the relation between data augmentation performance on the one hand, and perplexity and Levenshtein distance on the other. The correlation values in Table 5 suggest that CFs with lower distance to the factual instances bring larger improvements. Indeed, classifiers can be insensitive to small changes (Glockner et al., 2018), and having such examples in the training data can make them more robust. More fluent CFs correlate with higher performance, but only on SA: SA classifiers may benefit from fluent inputs, which are more probable and appear in more contexts.
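The correlation analysis behind Table 5 can be reproduced with SciPy along these lines; the per-LLM score lists are placeholders for the values obtained above.

```python
# Sketch of the Spearman-correlation analysis in Table 5; `acc`, `ppl`, and
# `ts` would each hold one value per LLM's CF set for a given task.
from scipy.stats import spearmanr

def augmentation_correlations(acc, ppl, ts):
    rho_ppl, _ = spearmanr(acc, [-p for p in ppl])  # Accuracy vs. -PPL
    rho_ts, _ = spearmanr(acc, [-t for t in ts])    # Accuracy vs. -TS
    return rho_ppl, rho_ts
```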
Conversely, the inputs in NLI are short image captions that are not necessarily fluent or grammatical text.

Table 5: Spearman correlations between intrinsic metrics and data augmentation performance.

| Compared values | SA | NLI |
|---|---|---|
| Accuracy & -PPL | 0.58 | -0.05 |
| Accuracy & -TS | 0.86 | 0.51 |

In summary, only a few LLMs produce CFs that come close to human CFs in terms of performance improvements on SA. On NLI, the results are less positive: LLM CFs bring no improvements on two of the three test sets, and the gap to human CFs remains large. CFs with fewer changes to the factual instances are more beneficial for data augmentation.

4.3 LLMs for CFs Evaluation
We examine how reliable LLMs are as CF evaluators by asking them to evaluate two sets of human CFs: an honest set and a corrupted set. The "honest set" is a collection of human CFs with their ground-truth labels, whereas the "corrupted set" consists of instances provided with wrong labels that differ from the gold labels. In the context of NLI, the third label, distinct from both the target and factual labels, is selected for the corrupted set; for SA, the reverse label is chosen, and the factual label remains undisclosed. Initially, we prompt GPT3.5 and GPT4 to assess whether the provided CFs accurately represent the target labels by assigning a score from 1 to 4 (cf. Appendix F). A score of 1 or 2 indicates (complete or partial) disagreement with the target label, while a score of 3 or 4 indicates (partial or complete) agreement. Ideally, the evaluating LLMs should give high scores to the honest set and low scores to the corrupted set. We show the distributions of disagreements and agreements in Table 6. On SA, both LLMs perform well, but GPT4 exhibits higher sensitivity to the corrupted examples. On NLI, GPT3.5 gives high flip-label scores to human CFs with both correct and incorrect labels; GPT4 performs much better but still shows a high tendency to agree with wrong labels (~40%). These results can be explained by the tendency of LLMs to agree with provided answers, especially on reasoning tasks (Zheng et al., 2023). To verify this, we prompt both LLMs to classify the same set of NLI CFs by choosing one of the three labels (entailment, neutral, contradiction) using a similar prompt. The classification results in Table 7 show improved performance compared to asking the same LLMs whether they agree with incorrect labels (cf. Table 6). We also compare the flip-label score distributions of GPT3.5 and GPT4 on the corrupted set in Table 10 and observe that even though GPT3.5 gives high scores to corrupted inputs, it is less certain (its most frequent score is 3), whereas GPT4 tends to be more certain and assigns mostly 1 or 4 (> 93%).

Evaluation with GPT4: we conduct a wide-scale CF evaluation with GPT4. Besides verifying the target label (FL), we also ask GPT4 to judge whether there are any unnecessary alterations (UA) and whether the CF is realistic (RS). For these aspects we use a scoring scheme from 1 to 4, where higher scores indicate better performance. The results of the GPT-4 evaluation can be found in Table 8: the scores indicate that GPT4's own CFs are the best overall; on SA, Mistral 7B comes in second place after GPT4, whereas on NLI human CFs take the second position.
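For concreteness, the honest/corrupted split described at the start of this subsection can be sketched as follows; the example structure (cf, gold label, factual label) is an assumed representation.

```python
# Sketch of the corrupted-set construction: SA flips to the reverse label,
# NLI picks the third label distinct from both the gold and factual labels.
NLI_LABELS = {"entailment", "neutral", "contradiction"}

def corrupt_label(task, gold, factual=None):
    if task == "SA":
        return "negative" if gold == "positive" else "positive"
    (third,) = NLI_LABELS - {gold, factual}   # the remaining NLI label
    return third

def build_sets(examples, task):
    honest = [(cf, gold) for cf, gold, _ in examples]
    corrupted = [(cf, corrupt_label(task, gold, fact))
                 for cf, gold, fact in examples]
    return honest, corrupted
```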
Table 6: Flip-label score distributions for GPT3.5 and GPT4 on the honest and corrupted sets (percentage of scores 1/2 vs. 3/4, and average score).

| LLM | Set | Task | 1/2 | 3/4 | Avg. |
|---|---|---|---|---|---|
| GPT3.5 | Honest | SA | 3.61 | 96.39 | 3.43 |
| GPT3.5 | Corrupted | SA | 77.42 | 22.58 | 1.61 |
| GPT3.5 | Honest | premise | 0.63 | 99.37 | 3.57 |
| GPT3.5 | Corrupted | premise | 5.56 | 94.44 | 3.13 |
| GPT3.5 | Honest | hypothesis | 1.38 | 98.62 | 3.56 |
| GPT3.5 | Corrupted | hypothesis | 3.53 | 96.47 | 3.28 |
| GPT4 | Honest | SA | 7.53 | 92.47 | 3.66 |
| GPT4 | Corrupted | SA | 98.93 | 1.08 | 1.04 |
| GPT4 | Honest | premise | 12.31 | 87.69 | 3.58 |
| GPT4 | Corrupted | premise | 59.51 | 40.50 | 2.19 |
| GPT4 | Honest | hypothesis | 4.50 | 95.50 | 3.81 |
| GPT4 | Corrupted | hypothesis | 57.87 | 42.12 | 2.29 |

GPT4 might have a bias toward preferring its own generations (Panickssery et al., 2024). The scores for SA are lower, which might be explained by SA being an easier task than NLI, so that evaluating SA CFs is also easier for GPT4 and potentially more accurate. To further verify GPT4's evaluation scores, we calculate the correlations between them and the scores from the intrinsic evaluation. The correlations shown in Table 9 reflect a strong relation on SA with respect to all three aspects (flipping the label, minimal changes, realisticness). On NLI we notice the same pattern except for realisticness/perplexity, where we see a weak negative relation; this may be due to NLI instances being short image captions that are not necessarily grammatical. In short, LLMs show a high tendency to agree with the provided labels even when these are incorrect, especially on tasks that require reasoning, such as NLI; GPT4's evaluation scores correlate with the automatic metrics for label flipping, distance, and fluency.

4.4 Qualitative Analysis

4.4.1 CFs for NLI
We look into a selected set of examples based on GPT4's evaluation: for each LLM we pick the 2 NLI examples with the highest/lowest scores, ending up with 28 examples. Based on this sample, we identify three categories of errors:
• Copy-Paste: when asked to generate a CF that changes the label from contradiction to entailment, LLMs output the unchanged part (premise or hypothesis). This is a clever but lazy way to flip the label to entailment, since two identical sentences naturally entail each other; GPT4 gave these CFs perfect scores. Table 12 in the Appendix shows the percentage of copy-paste CFs per LLM (at most 4.27%, for GPT3.5).
• Negation: when asked to change the label from entailment to contradiction, LLMs attempt to negate the premise/hypothesis. The negations do not make sense in the observed CFs, and GPT4 rightly assigns them the lowest possible scores.
• Inconsistency: these examples contain contradictory or illogical sentences; here GPT4 sometimes incorrectly assigned high scores.

We show some examples alongside the assigned categories in Table 13 in the Appendix.

Table 7: Classification performance on human CFs. Note the improved performance compared to asking LLMs whether they agree with a given label (cf. Table 6).

| Set | LLM | Part | Acc. |
|---|---|---|---|
| Honest | GPT3.5 | hypothesis | 63.08 |
| Honest | GPT3.5 | premise | 54.90 |
| Honest | GPT4 | hypothesis | 75.75 |
| Honest | GPT4 | premise | 59.25 |

4.4.2 Evaluation Scores
We also look into GPT4's evaluation scores on the same set of examples, showing correct and incorrect evaluations in Table 14. GPT4 assigns high scores to contradictory examples that only partially fulfill the target label, and low scores to examples containing valid minimal changes; GPT4 may be insensitive to such small changes.
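A minimal check for the Copy-Paste failure mode described in Section 4.4.1 could look like the following; the normalization is a simplifying assumption.

```python
# Minimal detector for the "Copy-Paste" error category: the generated CF is
# (near-)identical to the unchanged part (premise or hypothesis).
def is_copy_paste(counterfactual, unchanged_part):
    norm = lambda s: " ".join(s.lower().split())
    return norm(counterfactual) == norm(unchanged_part)
```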
5 Related Work

Large Language Models. LLMs have demonstrated impressive capabilities across a diverse range of natural language processing tasks, such as question answering, wherein the model must retrieve relevant information from its training data and generate a concise response, or text summarization, which distills lengthy texts into concise summaries while retaining crucial information (Maynez et al., 2023). However, the task of CF generation has not been comprehensively evaluated for LLMs. A large number of LLMs exist, varying in model size, architecture, training dataset, the incorporation of human feedback loops, and accessibility (open-source or proprietary) (Zhao et al., 2023). Consequently, comparative evaluations across different models on a standardized task are necessary. Since the architectures of the LLMs under consideration are predominantly similar, and the training datasets are either known public sources or undisclosed, the primary focus of this study is to compare LLMs that differ in model size, the implementation of human feedback, and accessibility. To enhance the performance of LLMs across various tasks, in-context learning (ICL) techniques have been employed to optimize the prompts provided to these models; numerous prompt-engineering approaches at inference time have been proposed, either selecting the demonstration instances or formatting the prompt as instructions or reasoning steps (Dong et al., 2022). In this study, we leverage chain-of-thought prompting (CoT) (Wei et al., 2022) and a closest-instance retrieval strategy (Liu et al., 2022) to optimize the generation process.

Table 8: Scores for the evaluation with GPT4. FL refers to the flip-label score, UA to unnecessary alteration, RS is the realisticness score, and Avg. is the average of the three scores. Best score per task is in bold in the paper; second best is underlined.

| Task | LLM | FL | UA | RS | Avg. |
|---|---|---|---|---|---|
| SA | Human Crowd | 3.66 | 2.95 | 2.58 | 3.06 |
| SA | Human Expert | 3.54 | 2.69 | 2.49 | 2.91 |
| SA | GPT3.5 | 3.58 | 2.91 | 2.65 | 3.05 |
| SA | GPT4 | 3.79 | 3.15 | 2.91 | 3.28 |
| SA | LLAMA2 7B | 3.60 | 2.74 | 2.63 | 2.99 |
| SA | LLAMA2 70B | 3.70 | 2.75 | 2.47 | 2.97 |
| SA | Mistral 7B | 3.85 | 2.84 | 2.69 | 3.13 |
| SA | Mistral 56B | 2.58 | 1.74 | 1.75 | 2.02 |
| hypothesis | Human Crowd | 3.81 | 3.96 | 3.81 | 3.86 |
| hypothesis | GPT3.5 | 3.19 | 3.93 | 3.74 | 3.62 |
| hypothesis | GPT4 | 3.96 | 3.98 | 3.92 | 3.95 |
| hypothesis | LLAMA2 7B | 3.23 | 3.74 | 3.66 | 3.54 |
| hypothesis | LLAMA2 70B | 3.49 | 3.68 | 3.56 | 3.58 |
| hypothesis | Mistral 7B | 3.50 | 3.70 | 3.65 | 3.62 |
| hypothesis | Mistral 56B | 3.02 | 3.45 | 3.48 | 3.32 |
| premise | Human Crowd | 3.58 | 3.88 | 3.86 | 3.77 |
| premise | GPT3.5 | 2.51 | 3.82 | 3.69 | 3.34 |
| premise | GPT4 | 3.68 | 3.83 | 3.84 | 3.78 |
| premise | LLAMA2 7B | 2.96 | 3.38 | 3.67 | 3.34 |
| premise | LLAMA2 70B | 3.35 | 3.46 | 3.67 | 3.49 |
| premise | Mistral 7B | 2.97 | 3.63 | 3.74 | 3.45 |
| premise | Mistral 56B | 2.37 | 3.11 | 3.49 | 2.99 |

Table 9: Spearman correlations between intrinsic metrics and GPT-4 evaluation scores. PPL and TS scores are negated so that higher is better.

| Compared values | SA | NLI |
|---|---|---|
| FL & FR | 0.83 | 0.92 |
| UA & -TS | 0.50 | 0.75 |
| RS & -PPL | 0.62 | -0.23 |

Table 10: Flip-label score distributions on the corrupted set of NLI, averaged over the premise and hypothesis sets.

| LLM / Score | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| GPT3.5 | 0.70 | 3.85 | 69.61 | 25.84 |
| GPT4 | 55.50 | 3.19 | 2.94 | 38.37 |

CFs generation methods. Several methods exist for generating CFs, but most are designed for a specific LLM. The CFs generated by MICE (Ross et al., 2021) are intended for debugging models, not for data augmentation. Polyjuice (Wu et al., 2021) requires specifying the type of edits to conduct, and the resulting CFs must be manually labeled (Robeer et al., 2021).
DISCO (Chen et al., 2023) uses GPT3's fill-in-the-blanks mode, which is unavailable in most open-source LLMs and would require adapting them. CREST (Treviso et al., 2023) depends on a rationalizer module, and its editor module is a masked LM that needs further training. Instead, we prompt LLMs to generate CFs by providing instructions and an example; more details are given in Section 3.2.

LLMs for CFs generation. Li et al. (2024) investigated the strengths and weaknesses of LLMs as CF generators and disclosed the factors that affect LLMs during CF generation, including both intrinsic properties of LLMs and prompt-design considerations. However, that study lacks an intrinsic evaluation of the CFs and omits comparison with human-generated CFs. Sachdeva et al. (2024) leverage LLMs to generate CFs for extractive question answering, showing that data augmentation with CFs improves OOD performance and that this improvement correlates with the diversity of the generated CFs. Prior work by Bhattacharjee et al. (2024) investigated the capability of GPT models to generate CFs for explanations by optimizing their prompts, but the analysis was limited to the GPT family and did not consider downstream tasks or comparison with human-generated CFs. In this work, we conduct a more comprehensive evaluation of LLMs on multiple aspects, including intrinsic metrics of CF explanation quality and performance on downstream tasks; furthermore, we compare the LLM-generated CFs against those produced by humans and propose a novel approach to evaluating CFs using LLMs.

6 Conclusion
In this work, we investigated the use of various LLMs for CF generation. Our results show that LLMs generate fluent CFs but struggle to keep the induced changes minimal. Generating CFs for SA is less challenging than for NLI, where LLMs show weaknesses in generating CFs that change the original label. CFs from LLMs can replace human CFs for the purpose of data augmentation on SA and achieve similar performance, while on NLI further improvements are needed. Furthermore, our results suggest that CFs with minimal changes are essential for data augmentation. We also showed that, when asked to assess CFs, LLMs exhibit a strong bias toward agreeing with the provided label even when it is incorrect; GPT4 appears more robust than GPT3.5 against this bias, and its scores correlate well with automatic metrics. However, some failures still exist, as shown in our qualitative analysis, indicating limitations of LLMs that warrant further investigation. Future work should focus on leveraging LLMs for higher-quality NLI CFs, which correctly change the label while keeping changes minimal, and on assessing the evaluation abilities of LLMs in mislabeled-data settings.

7 Limitations
We used the default parameters for generating counterfactuals; experimenting with different parameters might have a non-negligible effect on the results. We included various LLMs in our experiments to be inclusive and to compare open-source and closed LLMs; however, these LLMs might have been exposed during their training to the data we use from Kaushik et al. (2020), and the training data of most open-source and all closed-source LLMs remains unknown. In our qualitative analysis (see Section 4.4), we noticed that GPT4 generated a CF that is identical to a human CF from (Kaushik et al., 2020)."
}
title_10K/test_title_short_2405.00738v1.json
ADDED
@@ -0,0 +1,18 @@
{
"url": "http://arxiv.org/abs/2405.00738v1",
"title": "HLSTransform: Energy-Efficient Llama 2 Inference on FPGAs Via High Level Synthesis",
"abstract": "Graphics Processing Units (GPUs) have become the leading hardware accelerator\nfor deep learning applications and are used widely in training and inference of\ntransformers; transformers have achieved state-of-the-art performance in many\nareas of machine learning and are especially used in most modern Large Language\nModels (LLMs). However, GPUs require large amounts of energy, which poses\nenvironmental concerns, demands high operational costs, and causes GPUs to be\nunsuitable for edge computing. We develop an accelerator for transformers,\nnamely, Llama 2, an open-source state-of-the-art LLM, using high level\nsynthesis (HLS) on Field Programmable Gate Arrays (FPGAs). HLS allows us to\nrapidly prototype FPGA designs without writing code at the register-transfer\nlevel (RTL). We name our method HLSTransform, and the FPGA designs we\nsynthesize with HLS achieve up to a 12.75x reduction and 8.25x reduction in\nenergy used per token on the Xilinx Virtex UltraScale+ VU9P FPGA compared to an\nIntel Xeon Broadwell E5-2686 v4 CPU and NVIDIA RTX 3090 GPU respectively, while\nincreasing inference speeds by up to 2.46x compared to CPU and maintaining\n0.53x the speed of an RTX 3090 GPU despite the GPU's 4 times higher base clock\nrate. With the lack of existing open-source FPGA accelerators for transformers,\nwe open-source our code and document our steps for synthesis. We hope this work\nwill serve as a step in democratizing the use of FPGAs in transformer inference\nand inspire research into energy-efficient inference methods as a whole. The\ncode can be found on https://github.com/HLSTransform/submission.",
"authors": "Andy He, Darren Key, Mason Bulling, Andrew Chang, Skyler Shapiro, Everett Lee",
"published": "2024-04-29",
"updated": "2024-04-29",
"primary_cat": "cs.AR",
"cats": [
"cs.AR",
"cs.AI",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "HLSTransform: Energy-Efficient Llama 2 Inference on FPGAs Via High Level Synthesis",
"main_content": "Introduction Hardware accelerators have long appeared in computing (Merritt, 2021) to improve performance compared to general-purpose CPUs through specialized operations, high parallelism, and efficient memory systems (Dally et al., 2020). The use of accelerators for deep learning have been especially significant to accommodate models that are rapidly scaling up in size and complexity, such as transformer-based Large Language Models (LLMs) which have become increasingly complex with a massive influx of research following the advent of OpenAI\u2019s ChatGPT. Meta\u2019s popular Llama 2 model, for instance, is trained on 2 trillion tokens and ranges up to 70 billion parameters (Touvron et al., 2023a). GPUs are currently the dominant accelerators for general deep learning tasks as they can be easily leveraged to develop extremely efficient implementations of parallel basic linear algebra subroutines (BLAS), which are commonly used in deep learning algorithms. (Xiong & Xu, 2020). However, the most glaring tradeoff to using GPUs is their massive demand for power, resulting in high carbon emissions and energy costs. The carbon footprint of training Llama 2 is officially estimated at 539 tons carbon dioxide equivalent (Touvron et al., 2023b), which is almost 72x the amount the average US household produces per year at 7.5 tons (CCFPD). However, while model training takes large amounts of energy, energy spent running inference on the model is typically larger; NVIDIA and Amazon estimate that over 80% of their energy usage for AI models is spent in inference, and for Google, 60% of their energy usage for AI models is for inference (McDonald et al., 2022) (Patterson, 2022). Inference dominates emissions in ChatGPT-like services from the querying of millions of users, producing 25x the carbon emissions of GPT-3 (Samsi et al., 2023) (Chien et al., 2023). High energy consumption also poses a problem for operational costs and for edge computing applications. High energy consumption forces the inference of LLMs and deep learning models to be mostly allocated to GPU clusters. An article from Sequoia Capital estimates that for data centers, the price from energy alone is roughly equal to the amount spent on buying GPUs (Cahn, 2023). For applica1 arXiv:2405.00738v1 [cs.AR] 29 Apr 2024 \fHLSTransform: Energy-Efficient Llama 2 Inference on FPGAs Via High Level Synthesis tions requiring real-time inference on the edge, in addition to monetary reasons, a dedicated GPU is often impractical as it cannot draw sufficient and sustained power. While GPU acceleration will likely remain dominant in the near future despite the power disadvantage, there is value in exploring different avenues of hardware acceleration as deep learning tasks continue to diverge into highly specific applications. Further, as transformers become more and more ubiquitous, there is a case to be made for designing model-specific hardware accelerators solely to optimize inference. To that end, Field Programmable Gate Arrays (FPGAs) are another desirable choice for accelerators as they offer a hardware reconfigurable for specific tasks enabled by a large number of programmable logic gates, making them inexpensive to iterate hardware designs on. Furthermore, FPGAs are distinguished for their reduced power consumption, which on average is only 28% of GPUs (Cong et al., 2018). What limits the adoption of FPGAs currently is the high barrier of entry and relative lack of research compared to GPUs. 
FPGAs are commonly used to prototype hardware designs for system-on-chip (SoC) and application-specific integrated circuit (ASIC) development, typically at the register-transfer level (RTL) using hardware description languages like Verilog. However, the design and verification of RTL modules are known to be extremely complex and time-consuming. High Level Synthesis (HLS) is a methodology that addresses this complexity by allowing developers to write hardware descriptions in more accessible, high-level languages like C or C++. HLS tools convert the high-level code into RTL code optimized for performance, area, and energy consumption, leading to faster prototyping and iteration on FPGAs. Furthermore, the nature of HLS tools and the availability of Vitis C/RTL co-simulation make it simple to verify the correctness of synthesized hardware designs; together, these factors allow HLS to significantly shorten the traditional hardware development cycle. In this work, we employ HLS tools to design FPGA accelerators for Llama 2 inference. Beyond the large GPU power footprint of LLMs that FPGAs may address, the complex data flow of transformer models (Li et al., 2020) often comprises nonlinearities and token-encoding subroutines (such as RoPE) that are difficult to accelerate on GPUs but could be better suited to FPGAs. Llama 2 is chosen in particular for its open-source implementations and superb performance (Touvron et al., 2023b), making it a popular and well-researched choice. We use Andrej Karpathy's llama2.c repository (Karpathy, 2023) to develop our methods on a relatively small (110M-parameter) model, in line with our financial and compute constraints, and we focus on inference over training because of its higher energy usage and greater suitability for FPGAs. In summary, through our methods, which we name HLSTransform, we demonstrate the following:
1. Low power and energy consumption: energy savings of up to a 12.75x reduction in total energy consumption compared to CPU and an 8.25x reduction compared to GPU.
2. Fast inference speeds and low latency: acceleration of up to 2.46x in inference speed compared to CPU, and up to 0.53x the inference speed of a GPU, despite the GPU's 4x higher base clock rate.
3. Verification of HLS tools for faster deployment: ensuring the HLS tools synthesize appropriate FPGA designs for this study, while also testing the tools' learning curve for developers without extensive hardware backgrounds.

We open-source our code and document our FPGA synthesis steps for the public, available in our GitHub repo: github.com/HLSTransform/submission. To the best of our knowledge, our model is one of the first open-source HLS-based implementations of transformers. In our research process, the lack of documentation for many steps, combined with the absence of existing open-source FPGA accelerators for transformers, posed a high barrier to entry, and we hope our work serves as a step toward democratizing the usage and research of FPGAs for transformer inference.

2 Related Work
We delineate a few studies that relate to FPGA accelerators for transformers and the application of high-level synthesis.

2.1 Existing Hardware Accelerators for Transformers on FPGA
Existing hardware accelerators for transformers on FPGAs incorporate specialized techniques to optimize performance.
Column Balanced Block Pruning (Peng et al, 2021) and FTrans (Li et al, 2020) are two novel frameworks for transformer models suitable for FPGA acceleration. By incorporating weight pruning to employ sparse matrix multiplication, these papers achieve multiple folds of improvement in transformer inference compared to CPUs and GPUs in terms of performance and energy efficiency. We instead strive to maintain dense matrix multiplication in our methods to allow for general application to existing transformer models. Similarly, NPE (Khan et al, 2021) introduces a framework for FPGA acceleration on transformers, utilizing piecewise linear approximations for nonlinear functions (e.g. softmax and GELU) to achieve speedups. In contrast, we compute exact values for nonlinear functions. Our methodology allows us to avoid needing to train FPGA-specific models and avoid potential accuracy tradeoffs associated with these novel pruning or approximation techniques. The only potential accuracy tradeoffs come from our usage of quantization, where we follow the well-tested quantization algorithm "Q8_0", explored further in Section 3.2.
2.2. hls4ml
We aim to inspire the democratization of FPGA accelerators for deep learning using HLS. Fast Machine Learning Lab's hls4ml (Fahim et al, 2021) is an open-source workflow that enables fast prototyping of machine learning algorithms via Python for FPGAs (FastML Team, 2023) (Duarte et al, 2018). hls4ml has been successful as one of the first open-source HLS tools for deep learning, especially in Python, but a major limitation is its lack of support for the attention layers used in transformer models. The tool mainly supports convolutional neural networks (CNNs) and feed-forward deep neural networks (DNNs), but transformer models like Llama 2 require attention layers and novel techniques such as Rotary Position Embeddings, which are not yet supported by this framework.
3. Methods
We follow the same architecture outlined in the original Llama 2 paper (Touvron et al, 2023a): the standard Transformer architecture (Vaswani et al, 2023), rotary position embeddings (Su et al, 2021), grouped query attention (Ainslie et al, 2023), RMS norm for pre-normalization (Zhang & Sennrich, 2019), and the SwiGLU activation function (Shazeer, 2020). Since FPGAs are constrained in performance by the amount of on-chip memory, we selected a small 110M parameter model trained on the TinyStories dataset to test our designs (Eldan & Li, 2023). We discuss the limitations of the small model size further in the Limitations and Future Works section. More details on model architecture are included in the Appendix.
3.1. Implementation
Our implementation of Llama 2 is built on Andrej Karpathy's llama2.c repository. For our HLS toolchain, we chose Vitis, as it is both widely used and directly supported by the FPGAs available to us on AWS. The code is split into two portions, the host and the kernel. The kernel code contains the hardware description for one iteration of the computationally-intensive forward inference pass and is synthesized for the FPGA, while the host is responsible for driving the kernel code. The host interfaces with the FPGA accelerator through the Xilinx Runtime Library (XRT).
[Figure 1: Vitis HLS development workflow. User code is split into kernel code (C++/Verilog), compiled by Vitis HLS into a kernel binary for the hardware platform, and host code (C++), compiled with g++ into a host executable that drives the kernel through the Xilinx Runtime.]
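To make this host-kernel handshake concrete, below is a minimal sketch of the host side, assuming XRT's native C++ API; the kernel name ("forward"), its argument order, the vocabulary size, and the greedy sampling step are illustrative assumptions, not the exact HLSTransform host code.

#include <algorithm>
#include <vector>
#include <xrt/xrt_device.h>
#include <xrt/xrt_kernel.h>
#include <xrt/xrt_bo.h>

int main() {
    xrt::device device(0);                               // open FPGA device 0
    auto uuid = device.load_xclbin("forward.xclbin");    // program the bitstream
    auto fwd  = xrt::kernel(device, uuid, "forward");    // one forward pass per call

    const size_t vocab = 32000;                          // illustrative vocab size
    std::vector<float> logits(vocab);

    // Shared output buffer in global memory, reachable by host and kernel.
    auto bo_out = xrt::bo(device, vocab * sizeof(float), fwd.group_id(0));

    int token = 1;                                       // BOS token
    for (int pos = 0; pos < 256; ++pos) {
        auto run = fwd(bo_out, token, pos);              // scalars sent via DMA
        run.wait();                                      // block until the pass ends
        bo_out.sync(XCL_BO_SYNC_BO_FROM_DEVICE);         // pull logits back to host
        bo_out.read(logits.data());
        // Greedy sampling on the host, standing in for the paper's sampling step.
        token = (int)(std::max_element(logits.begin(), logits.end()) - logits.begin());
    }
    return 0;
}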
The host sends the input parameters, such as the token and position, to the FPGA via direct memory access (DMA). The FPGA is responsible for writing the output to a shared buffer that can be accessed by both the host and the kernel. The host reads the output and performs sampling to extract the next token.
We focus on three HLS optimizations: pipelining, unrolling, and array partitioning. We also implement software-level optimizations. In addition to memory limitations, FPGAs have constraints regarding Digital Signal Processor (DSP) blocks, which are specialized hardware modules within an FPGA that are optimized for efficient floating point arithmetic calculations. The number of available DSP blocks is limited and varies depending on the FPGA model; to address DSP and on-chip memory bottlenecks, we first quantized the weights from 32-bit (single-precision) IEEE floating points to 8-bit signed integers.
3.2. Int-8 Quantization
Following Karpathy's work, we employ an 8-bit integer quantized forward pass to run our inference on FPGAs (Karpathy, 2023). The quantization process is post-training, i.e. it is independent of model training. We perform symmetric quantization, scaling each weight between [-127, 127]. Each weight tensor is divided into sections of equal size, each of which is quantized by the following formula, where $w$ represents the vector of weights in that section and the square brackets denote the rounding function:
$$w_q = \left[\frac{127 \cdot w}{\lVert w \rVert_\infty}\right]$$
This quantization has been noted to perform well empirically; it is used in Georgi Gerganov's popular GGML library for efficient CPU transformer inference and referred to as "Q8_0" quantization in the library (Gerganov). We quantize the embedding, attention, and feedforward weights. The RMSNorm params, which are sensitive to error, are kept in float32 precision. Although quantization leads to decreased model accuracy, the accuracy dropoff is minimal, and we explore the effects of quantization in Section 4.1. Quantization allows for smaller weights, which permits us to better utilize the limited memory bandwidth on the FPGA, and enables integer-only calculations, which provide inference speedups through lower-precision arithmetic (Kim et al, 2021).
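A minimal sketch of this per-section symmetric quantization follows; the section size of 64 is an assumption (llama2.c makes it configurable), and each section stores its int8 values alongside one float32 scale.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct QTensor {
    std::vector<int8_t> q;   // quantized weights, one entry per weight
    std::vector<float>  s;   // one float32 scale per section
};

// Symmetric per-section quantization; w.size() is assumed divisible by GS.
QTensor quantize_q8(const std::vector<float>& w, size_t GS = 64) {
    QTensor out;
    out.q.resize(w.size());
    out.s.resize(w.size() / GS);
    for (size_t g = 0; g < w.size() / GS; ++g) {
        float wmax = 0.f;                            // ||w||_inf over the section
        for (size_t i = 0; i < GS; ++i)
            wmax = std::max(wmax, std::fabs(w[g * GS + i]));
        float scale = (wmax > 0.f) ? wmax / 127.f : 1.f;
        out.s[g] = scale;
        for (size_t i = 0; i < GS; ++i)              // rounds [127 * w / ||w||_inf]
            out.q[g * GS + i] = (int8_t)std::lround(w[g * GS + i] / scale);
    }
    return out;                                      // dequantize via q[i] * s[i / GS]
}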
3.3. Optimization of Llama 2 Accelerator Using HLS Pragmas
Pragmas in High-Level Synthesis (HLS) are directives used to guide the HLS compiler in the process of converting high-level code into a hardware description, typically used to indicate to the compiler that a specific optimization should be performed on some section of the code.
[Figure 2: Pipelining two iterations of instructions with read (RD), execute (EX), and write (WR) stages: pipelining with II = 1 takes 4 cycles, versus 6 cycles with no pipelining, for the loop `for(int i = 0; i < 2; i++) { op_read; op_execute; op_write; }`.]
3.3.1. Pipelining
Pipelining in HLS is a technique used to enhance the performance of hardware circuits generated from high-level code. This method involves dividing a process into several stages, each separated by registers. Similar to an assembly line, pipelining allows different stages of a computation to occur in parallel but on different sets of data. In HLS, this means translating high-level programming constructs into pipelined hardware structures. For example, in a computation involving multiple arithmetic operations, HLS can break down these operations into stages, where each stage performs a part of the computation. By doing so, while one stage is processing one set of data, the next stage can work on another, leading to increased throughput. The pipeline pragma is applied to the main loops responsible for computing matrix-vector multiplication and rotary position embeddings.
3.3.2. Loop Unrolling
Loop unrolling is an optimization technique that increases the efficiency of hardware implementations derived from high-level code. This process involves expanding the loop body multiple times in order to reduce the number of iterations. By doing this, loop unrolling enables the simultaneous execution of multiple consecutive loop iterations, as long as there are no intra-loop data dependencies. In other words, if a loop is executed N times and we unroll it M times, the loop body will be replicated M times within each iteration, thereby reducing the total number of iterations to N/M. This technique is especially useful in hardware design because it can lead to more parallel operations, allowing the hardware to perform more tasks simultaneously at the cost of chip space.
3.3.3. Memory Partitioning
The application of HLS partitioning pragmas is a critical step in the design of the Llama 2 deep learning accelerator. Typically, FPGA BRAM is implemented as dual-port memory, which greatly restricts the degree to which code can be parallelized on chip. By dividing arrays and memory structures into smaller, independent blocks, different data segments can be processed in parallel. Memory partitioning ensures more efficient utilization of the available computational resources, thereby enhancing the throughput of matrix multiplication operations, a common bottleneck in neural network computations.
3.3.4. Burst Reads / Writes over AXI4
In general, a dual-port memory bank can support two reads per cycle. Since global memory cannot be partitioned completely due to the limitation on the number of memory channels available to the FPGA, we instead utilize burst reads and writes into local on-chip buffers. By using a technique called widening, global memory can be accessed through dual-port 256-bit wide lines, allowing the simultaneous read of 64 8-bit integers per cycle. Efficient data transfer between the FPGA and external memory is essential, given the large number of parameters that need to be read from memory before any computations can begin. A short kernel sketch combining these four optimizations follows.
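The sketch below combines the four optimizations in one Vitis HLS kernel for a quantized matrix-vector multiply; the dimension, the partition/unroll factor of 64, and the interface bundles are illustrative assumptions, not the exact HLSTransform kernel.

#include <cstdint>

#define DIM 768   // illustrative model dimension

extern "C" void matvec(const int8_t *w, const int8_t *x_in,
                       const float *scale, float *out) {
#pragma HLS INTERFACE m_axi port=w     bundle=gmem0 max_read_burst_length=64
#pragma HLS INTERFACE m_axi port=x_in  bundle=gmem1
#pragma HLS INTERFACE m_axi port=scale bundle=gmem1
#pragma HLS INTERFACE m_axi port=out   bundle=gmem1

    int8_t x[DIM];
#pragma HLS ARRAY_PARTITION variable=x cyclic factor=64 dim=1
read_x:
    for (int i = 0; i < DIM; ++i) {        // burst read the activation vector
#pragma HLS PIPELINE II=1
        x[i] = x_in[i];
    }

rows:
    for (int r = 0; r < DIM; ++r) {
        int8_t wrow[DIM];
#pragma HLS ARRAY_PARTITION variable=wrow cyclic factor=64 dim=1
    read_w:
        for (int i = 0; i < DIM; ++i) {    // burst read one weight row
#pragma HLS PIPELINE II=1
            wrow[i] = w[r * DIM + i];
        }

        int32_t acc = 0;
    dot:
        for (int i = 0; i < DIM; i += 64) {   // 64 int8 MACs per cycle
#pragma HLS PIPELINE II=1
            for (int j = 0; j < 64; ++j) {
#pragma HLS UNROLL
                acc += (int32_t)wrow[i + j] * (int32_t)x[i + j];
            }
        }
        out[r] = acc * scale[r];           // simplified: one dequant scale per row
    }
}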
4. Results and Discussion
We evaluate the perplexity, latency, power, and energy consumption of the 110M parameter Llama 2 model across CPU, GPU, and FPGA. We provide more details of the evaluation setup in the Appendix. We run our benchmarks for 256 tokens and the max context length of 1024 tokens to test both the short and long text generation domains. Our FPGA designs were synthesized targeting the UltraScale+ VU9P platform available on AWS, and the synthesized designs were then exported to an Amazon Machine Image (AMI) using a custom toolchain provided by Amazon (AWS). We use the f1.2xlarge instance from AWS to host the FPGA, and we use the t2.2xlarge instance for our CPU benchmarks (8 vCPUs, 2.3 GHz Intel Xeon Broadwell E5-2686 v4), the same CPUs used in the FPGA instance, and an NVIDIA RTX 3090 GPU for our GPU benchmarks. We use the original Llama 2 implementation provided by Meta for our GPU experiments.
We run all samples with non-batched inference (batch size 1). While we run benchmarks of FPGA performance against CPUs and GPUs, we are unable to provide equitable quantized benchmarks for GPUs, as the different scaling factors per section in the quantization algorithm used would require specialized kernels to make this efficient. To provide equitable comparisons, we also provide perplexity benchmarks, a common metric for model quality, along with inference latency and energy consumption benchmarks, to demonstrate minimal tradeoffs to accuracy while fully utilizing the optimized integer-arithmetic abilities of FPGAs.
4.1. Perplexity
We measure perplexity on the validation dataset for TinyStories, for both the quantized and unquantized versions of the 110M parameter model; perplexity is a common metric for model quality that measures a model's uncertainty about its predictions. Our experimental setup is detailed further in the Appendix.

Table 1. Perplexity (lower is better)
Model            | Average Perplexity (PPL)
Quantized 110M   | 2.9679
Unquantized 110M | 2.9667
Unquantized 42M  | 3.1810

The quantized model retains nearly identical performance (a 0.04% increase in perplexity) compared to the unquantized model while utilizing integer-only computations. We include the perplexity benchmark for a 42 million parameter model as reference, which is 7.22% higher than that of the unquantized 110 million parameter model.
4.2. Latency and Speed
We measure inference latency in milliseconds and inference speed in tokens per second. Similar to NPE, an existing hardware accelerator for FPGAs, we obtain our timing results from the system simulations (Khan et al, 2021), and we provide a report of our full timings in the Appendix.

Table 2. Inference speed (tokens per second, higher is better)
Hardware | 256 tokens   | 1024 tokens
CPU      | 23.21 tok/s  | 19.63 tok/s
GPU      | 107.00 tok/s | 107.24 tok/s
FPGA     | 57.11 tok/s  | 57.11 tok/s

Table 3. Inference latency (milliseconds, lower is better)
Hardware | 256 tokens | 1024 tokens
CPU      | 43.08 ms   | 50.94 ms
GPU      | 9.34 ms    | 9.32 ms
FPGA     | 17.51 ms   | 17.51 ms

According to Table 2, the FPGA reaches 2.46x the inference speed of the CPU and 0.53x the inference speed of the GPU. Although the GPU performs inference faster than the FPGA, one of the primary bottlenecks of deep learning inference is memory bandwidth and the availability of on-chip memory (Balasubramanian et al, 2021). An RTX 3090 has 24 GB of VRAM running at 1219 MHz with a base core clock of 1395 MHz (TechPowerUp, 2024). In comparison, a VU9P FPGA has 345.9 MB of combined on-chip BRAM and URAM, running at a much slower clock speed of around 200-300 MHz depending on the module; however, at much lower clock speeds, the FPGA achieves better efficiency in power and energy consumption, as shown below.
4.3. Energy and Power Consumption
We utilize the CodeCarbon library, also used by HuggingFace to provide carbon estimations for the BLOOM LLM, to provide energy consumption metrics for CPU and GPU performance (Heikkilä, 2022) (Workshop et al, 2022) (Courty et al, 2023). For GPU benchmarks, CodeCarbon sources energy consumption directly from NVIDIA's NVML library. For the AWS CPU benchmarks, energy consumption cannot be directly sourced since AWS uses hypervisors, so CodeCarbon uses an estimation derived from empirical energy consumption data (Courty et al, 2023). As CodeCarbon does not handle FPGA energy consumption measurement, energy consumption metrics for the FPGA are provided by Vivado and AWS-provided tools (AWS).
The average power consumption of the FPGA is considerably lower than the average power consumption of both the CPU and the GPU.

Table 4. Power consumption on FPGA (watts, lower is better)
FPGA    | 256 tokens | 1024 tokens
Average | 9 W        | 9 W
Max     | 12 W       | 11 W

Table 5. Average power consumption (watts, lower is better)
Hardware | 256 tokens | 1024 tokens
CPU      | 42.5 W     | 42.5 W
GPU      | 126.9 W    | 130.6 W
FPGA     | 9 W        | 9 W

For 256 tokens, the average FPGA power consumption achieves a 4.72x reduction relative to the average power consumption of the CPU, and a 14.10x reduction relative to the average power consumption of the GPU. For 1024 tokens, the FPGA achieves a 14.51x reduction relative to the power consumption of the GPU, reaching a maximum of only 12 watts. To calculate the total energy consumption, we need the duration of inference; we therefore introduce a new metric, the total energy consumption per token, calculated from the inference latency and average power consumption. We measure the energy consumption per token in milliwatt-hours per token.

Table 6. Total energy consumption (milliwatt-hours per token, lower is better)
Hardware | 256 tokens   | 1024 tokens
CPU      | 0.51 mWh/tok | 0.60 mWh/tok
GPU      | 0.33 mWh/tok | 0.34 mWh/tok
FPGA     | 0.04 mWh/tok | 0.04 mWh/tok

For 256 tokens, the FPGA reaches a 12.75x reduction in energy consumption over the CPU and an 8.25x reduction over the GPU, while for 1024 tokens, the FPGA achieves a 15x reduction over the CPU and an 8.5x reduction over the GPU. Through HLSTransform, we are able to achieve high savings in energy per token.
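As a sanity check on Table 6, the per-token energy follows directly from the measured latency (Table 3) and average power (Table 5); a worked instance for the FPGA at 256 tokens:

$$E_{\text{tok}} = P_{\text{avg}} \times t_{\text{tok}} = 9\,\mathrm{W} \times 17.51\,\mathrm{ms} \approx 0.158\,\mathrm{J} = \frac{0.158}{3.6}\,\mathrm{mWh} \approx 0.044\,\mathrm{mWh/tok}$$

which rounds to the 0.04 mWh/tok entry; the CPU entry follows the same way (42.5 W x 43.08 ms ~ 1.83 J ~ 0.51 mWh/tok).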
5. Limitations and Future Work
We note several limitations of our work and provide potential research directions.
5.1. Model Size
A key limitation of our work is the on-chip memory bottleneck that accompanies FPGAs; for example, one of Xilinx's high-end commercial FPGAs, the Virtex UltraScale+ VU19P, has an on-chip memory capacity of 224 MB (AMD). In contrast, most LLMs are much larger than the maximum size FPGAs can load on chip; for instance, Llama 2 has three pretrained LLMs of size 7, 13, and 70 billion parameters, while GPT-3 uses 175 billion parameters (Touvron et al, 2023a) (Brown et al, 2020). Since the parameters cannot be pre-initialized in on-chip memory banks due to memory constraints, the weights instead reside in off-chip global memory interfaced via the AXI4 protocol, making it possible to run inference on larger models. However, external memory accesses quickly become a major bottleneck in inference latency, as only 64 8-bit integers can be read per cycle. As a result, we limit our model size to 110M parameters.
Despite the model size, there are many practical applications of similarly sized models. For instance, BERT has a base model size of 110M parameters, while ALBERT xlarge has a model size of 68M parameters; these models achieve state-of-the-art or near state-of-the-art performance on a multitude of NLP tasks and are in widespread use (Rogers et al, 2020). Several Llama variants, such as LiteLlama and TinyLlama, also have considerably smaller parameter counts of 460M and 1.1B parameters respectively, while achieving considerable generation capabilities for their size (Han) (Zhang et al, 2024).
Several future directions to be explored for fitting larger models on FPGAs include using greater levels of quantization (i.e. 4-bit precision) or using multiple FPGAs in unison. "Q4_0" quantization applies the same quantization technique to 4-bit integers and has seen success in implementations in Gerganov's GGML library, and ongoing research exists for other quantization schemes, such as 2-bit LLMs (Chee et al, 2023). Fully-integer quantization methods also serve as a potential research path, reducing both parameter size and inference latency by making all weights and all calculations integer-only, such as the methods explored in I-BERT (Kim et al, 2021). Model parallelism schemes utilizing multiple FPGAs may also help run larger models by sharding a model across multiple FPGAs.
5.2. Batch Size
Another limitation of our work is our focus on the non-batched inference domain, i.e. inference with batch size 1. The large VRAM capacity and parallel computation nature of GPUs make them suitable for tasks requiring high throughput, which may make the GPU more power-efficient overall in the high-batch regime. An interesting future research direction is the optimization of batched inference on FPGAs.
6. Conclusion
We propose a new hardware accelerator for transformers on FPGA, HLSTransform, which achieves up to a 12.75x reduction and an 8.25x reduction in total energy consumption per token, compared to a 2.3 GHz Intel Xeon Broadwell E5-2686 v4 CPU and an NVIDIA RTX 3090 GPU, respectively. Our FPGA accelerator maintains 0.53x the inference speed of an RTX 3090 GPU and is 2.46x as fast as the Intel Xeon Broadwell E5-2686 v4 CPU; these results are achieved via synthesis combined with pipelining, loop unrolling, and memory partitioning and transfer optimizations, with the addition of 8-bit integer quantization. Through our study, we provide a proof-of-concept for the usage of High Level Synthesis (HLS) as a much quicker way of prototyping FPGA designs.
As transformers become more widely used and model sizes continue to increase, energy consumption from AI-related applications will increase correspondingly. Increased energy consumption comes with vast environmental concerns and monetary costs, as well as limits on applications that restrict power consumption, such as edge computing; as a result, energy-efficient methods for inference that provide more sustainable solutions may become a much more pressing issue. We hope that our work serves as a step forward in energy-efficient methods for AI."
+}
title_10K/test_title_short_2405.00739v1.json ADDED
@@ -0,0 +1,18 @@
+{
+    "url": "http://arxiv.org/abs/2405.00739v1",
+    "title": "Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism",
+    "abstract": "Does Knowledge Distillation (KD) really work? Conventional wisdom viewed it\nas a knowledge transfer procedure where a perfect mimicry of the student to its\nteacher is desired. However, paradoxical studies indicate that closely\nreplicating the teacher's behavior does not consistently improve student\ngeneralization, posing questions on its possible causes. Confronted with this\ngap, we hypothesize that diverse attentions in teachers contribute to better\nstudent generalization at the expense of reduced fidelity in ensemble KD\nsetups. By increasing data augmentation strengths, our key findings reveal a\ndecrease in the Intersection over Union (IoU) of attentions between teacher\nmodels, leading to reduced student overfitting and decreased fidelity. We\npropose this low-fidelity phenomenon as an underlying characteristic rather\nthan a pathology when training KD. This suggests that stronger data\naugmentation fosters a broader perspective provided by the divergent teacher\nensemble and lower student-teacher mutual information, benefiting\ngeneralization performance. These insights clarify the mechanism on\nlow-fidelity phenomenon in KD. Thus, we offer new perspectives on optimizing\nstudent model performance, by emphasizing increased diversity in teacher\nattentions and reduced mimicry behavior between teachers and student.",
+    "authors": "Chenqi Guo, Shiwei Zhong, Xiaofeng Liu, Qianli Feng, Yinglong Ma",
+    "published": "2024-04-30",
+    "updated": "2024-04-30",
+    "primary_cat": "cs.LG",
+    "cats": [
+        "cs.LG",
+        "cs.CV",
+        "eess.IV"
+    ],
+    "label": "Original Paper",
+    "paper_cat": "Distillation",
+    "gt": "Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism",
"main_content": "Introduction Knowledge Distillation (KD) (Hinton et al, 2015) is renowned for its effectiveness in deep model compression and enhancement, emerging as a critical technique for knowledge transfer. Previously, this process has been understood and evaluated through model fidelity (Stanton et al, 2021), measured by the student model replication degree to its teachers. High fidelity, assessed by metrics like low averaged predictive Kullback-Leibler (KL) divergence and high top-1 agreement (Stanton et al, 2021), have conventionally been used to assess the success of KD. While fidelity has traditionally guided enhancements in model architectures, optimization, and training frameworks, repeated high-fidelity results corresponding to strong student performance seem to indicate that a high degree of mimicry between the student and teachers is desirable (Wang et al, 2022; Li et al, 2022; Lao et al, 2023). Yet this notion was initially challenged in (Stanton et al, 2021), which empirically shows that good student accuracy does not imply good distillation fidelity in self and ensemble distillation. However, though (Stanton et al, 2021) underscores their empirical findings on the low-fidelity phenomenon, they still believe that closely matching the teacher is beneficial for KD in terms of knowledge transfer. Further, they identify optimization difficulties as one key reason of student\u2019s poor emulation behavior to its teachers. Thus, this paradox highlights a need for further exploration on model fidelity and its mechanism in KD. Among factors in KD analysis, the attention map mechanism serves as a pivotal role in understanding the student-teacher interplay. It is known that in ensemble learning, diverse models improve the overall performance, and one can check their diversities through looking into the attention maps. Nonetheless, whether we can take it granted to transferring this conclusion into the case of KD has not been systematically studied yet. For example, (Tsantekidis et al, 2021) empirically shows that diversifying teachers\u2019 learnt policies by training them in different subsets of learning environment, can enhance the distilled student performance in KD. Yet, a theoretical foundation is lack for doing so. And it would be intriguing to check the student-teacher fidelity under such circumstance, to see if diversifying teacher models in an ensemble consistently corresponds with low-fidelity as well. If so, one can devote model attention map diversities to explain the existing fidelity paradox. Thus in this paper, we utilize the Intersection over Union (IoU) (Rezatofighi et al, 2019) of attention maps (Zhou et al, 2016) between different teacher 2 \fmodels in ensemble KD to help elucidate the existing fidelity paradox. Following the investigation paradigm in (Stanton et al, 2021), where the model fidelity variations were observed with different data augmentations, we adapt this paradigm to our case with a more cautious control over the degree of randomness in augmentation during ensemble KD training. By varying data augmentation strengths, measured by Affinity (Cubuk et al, 2021), we varied the model diversities trained on them. Impacts not only on traditional metrics like student-teacher fidelity, but also on less-explored aspects of attention maps diversity between different teachers, and mutual information between student and teachers are witnessed. 
Our empirical observations appear to challenge the traditional wisdom on the student-teacher relationship in distillation during the training procedure and thus provide further insights for explaining the fidelity paradox. Specifically, in support of and further complementing (Stanton et al, 2021), we highlight the attention map diversification existing within teacher ensembles as a deeper reason why a student with good generalization performance may be unable to match the teacher during KD training: stronger data augmentation increases attention divergence in the teacher ensemble, enabling teachers to offer a broader perspective to the student. Consequently, the student, surpassing the knowledge of any single teacher, becomes more independent, as measured by lower student-teacher mutual information. The low fidelity observed is a demonstration of this phenomenon.
Furthermore, though (Stanton et al, 2021) has demonstrated the low-fidelity observation, they still proposed difficulties in optimization as the primary reason for it. And recent works, including (Sun et al, 2024), keep optimizing in the direction of facilitating the student-teacher emulation procedure. Yet our empirical and theoretical analyses demonstrate that optimization with logits matching does improve the student's generalization ability, but still at the cost of fidelity reduction.
Our primary goal is to explain the fidelity paradox and understand the student learning and knowledge transfer dynamics in ensemble KD by observing the implications of data augmentation for the student-teacher relationship. By doing so, we seek to provide insights that challenge the traditional, or extend the preliminary, wisdom on KD fidelity by leveraging the attention mechanism in ensemble learning.
[Figure 1: Left: Attention map visualizations for teacher ensembles and student model in Knowledge Distillation (KD) on the ImageNet dataset. Stronger data augmentation (T1wT2sSs and T1sT2wSs in this case), as measured by Affinity, improves teachers' attentional divergence, thus providing the student a more comprehensive perspective on the overall characteristics of the target images, leading to better generalization ability. Middle and Right: Scatter plots of Intersection over Union (IoU) of attention maps, and fidelity between teacher ensembles and student during KD training. The decreasing tendency in fidelity challenges the conventional wisdom that higher fidelity consistently correlates with better student performance. Later we demonstrate that the low-fidelity observation is caused by attention map diversification existing within teacher ensembles, and that even optimization towards logits matching can hardly mitigate this low-fidelity effect.]
As shown in Figure 1, we summarize our contributions as follows:
(1) We demonstrate the correlation between teachers' attention map diversity and student model accuracy in ensemble KD training. Stronger data augmentation improves attentional divergence among teacher models, offering the student a more comprehensive perspective.
(2) We affirm the viewpoint of (Stanton et al, 2021) that higher fidelity between teachers and student does not consistently improve student performance. What is more, through analyzing attention maps between teachers in ensemble KD, we highlight this low-fidelity phenomenon as an underlying characteristic rather than a pathology: the student's generalization is enhanced with more diverse teacher models, which causes the reduction in student-teacher fidelity.
(3) We further investigate whether optimization towards facilitating the student-teacher logits matching procedure can enhance KD fidelity. Our empirical and theoretical analyses demonstrate that such optimization improves the student's generalization ability, but still at the cost of fidelity reduction.
The rest of the paper is structured as follows: Section 2 summarizes the related works, Section 3 clarifies the problem and hypotheses addressed in this work, and Section 4 introduces the evaluation metrics used to validate our arguments. Section 5 gives the experimental settings, the empirical results and theoretical analysis are provided in Section 6, and Section 7 summarizes the work of this paper.
2. Related Works
Our study contributes to a growing body of research that explores the interactions between data augmentation, model fidelity, attention mechanisms, and their impact on student performance in Knowledge Distillation (KD) with teacher ensembles.
In (Bai et al, 2023), a KD framework utilizing Masked Autoencoders, one of the primary factors influencing student performance is the randomness introduced by the masks in its teacher ensembles. It is natural to ask whether incorporating randomness into the dataset, through a simple yet effective method like data augmentation, and carefully controlling its strength, will be as effective as integrating it into model architectures.
Theories on the impacts of data augmentation on KD remain diverse and varied. (Li et al, 2022) offers theoretical insights, suggesting that leveraging diverse augmented samples to aid the teacher model's training can enhance its performance but will not extend the same benefit to the student. (Shen et al, 2022) emphasizes how data augmentation can alter the relative importance of features, making challenging features more likely to be captured during the learning process. This effect is analogous to the multi-view data setting in ensemble learning, suggesting that data augmentation is likely to be beneficial for ensemble KD.
On the application front, research proposing novel attention-based KD frameworks usually comes with intricate designs in model architectures or data augmentation strategies (Özgür et al, 2022; Lewy et al, 2023). For instance, studies like (Tian et al, 2022) aim to address few-shot learning in KD with a novel data augmentation strategy based on the attentional response of the teacher model. Although their concentration differs from ours, the study nevertheless shows the significance of the attention mechanism in KD.
In line with the initial "knowledge transfer" definition of KD, and under the underlying assumption that a higher degree of emulation between the student and teachers benefits its training, previous studies are devoted to optimizing towards increased student-teacher fidelity or mutual information (Wang et al, 2022; Li et al, 2022; Lao et al, 2023). Recent work (Sun et al, 2024) also optimizes in this direction, proposing a z-score logit standardization process to mitigate the logits matching difficulties caused by logit shift and variance mismatch between teacher and student. Nevertheless, this idea faced an initial challenge in (Stanton et al, 2021), indicating that closely replicating the teacher's behavior does not consistently lead to significantly improved student generalization performance during testing, whether in self-distillation or ensemble distillation.
(Stanton et al, 2021) first investigates whether low fidelity is an identifiability problem that can be solved by augmenting the dataset, and the answer is no: experimental results show only subtle benefits from the increased distillation dataset. They further explore whether low fidelity is an optimization problem resulting in a failure of the student to match the teacher even on the original training dataset, and their answer is yes: a shared initialization does make the student slightly more similar to the teacher in activation space, but in function space the results are indistinguishable from randomly initialized students. Though insightful, this prompts further questions and drives us to ask: is low fidelity truly undesirable and problematic for KD, especially if it does not harm student performance? Thus, additional exploration into this student fidelity-performance relation is required to elucidate the above paradox.
Adopting a similar investigative approach, which observes model fidelity variations with different data augmentations, we tailor it to our case, exercising more cautious control over the data augmentation strength and thus the randomness injected into the distillation dataset during KD training. In our work, we applied various data augmentations to KD, aiming to provide a more comprehensive understanding of model fidelity and attention mechanisms. Our empirical results and theoretical analysis challenge conventional wisdom, supporting and extending (Stanton et al, 2021) by demonstrating that student-teacher fidelity and mutual information do decrease with improved student performance during KD training; moreover, this low-fidelity phenomenon can hardly be mitigated by optimization aimed at improving student generalization. We thus advocate for more cautious practices in future research when designing KD strategies.
3. Problem and Hypothesis
We focus on Knowledge Distillation (KD) with teacher ensembles in supervised image classification. In this realm, the efficacy of the process has traditionally been evaluated through model fidelity and student validation accuracy. However, this conventional approach may not fully capture the complexity and nuances inherent in the knowledge transfer process, especially in light of evolving practices like data augmentation and the growing importance of attention mechanisms in neural networks. This study is driven by a series of interconnected research questions that challenge and extend the traditional understanding of KD, as follows.
Impact of Varied Data Augmentation Strengths on Model Diversity in Attention Map Mechanisms. The application of diverse data augmentation strengths during the training of teacher and student models plays a crucial role in shaping KD (Stanton et al, 2021). Consequently, it is natural to inquire whether, across augmentation strategies, stronger data augmentation results in an increase or decrease in model fidelity within teacher ensembles during training, and if so, how this correlates with the student model's performance. Inspired by the theory in machine learning that diversity among models can enhance ensemble learning performance (Zhou, 2012; Asif et al, 2019), our hypothesis is that varying augmentation strengths across different teachers injects randomness into the data, thereby diversifying the attention (Zhou et al, 2016) mechanisms of the teacher models trained on them.
This diversity promotes heterogeneity in learned features, enables the student to learn diverse solutions to the target problem, and thus enhances the KD process. As a result, the student surpasses the knowledge of a single teacher, leading to better overall performance, and the observed low fidelity serves as a demonstration of this phenomenon.
Interplay Between Student Fidelity, Mutual Information and Generalization. (Stanton et al, 2021; Shrivastava et al, 2023) have observed that fidelity and mutual information between teacher and student models interact with varying data augmentation strengths, influencing the overall effectiveness of distilled knowledge. The critical questions then arise: does lower or higher fidelity and mutual information benefit KD training and student performance, and why does this happen? We hypothesize that varied augmentation strengths across different teachers in ensemble KD provide a broader view for the student to learn from, the student thereby surpassing the knowledge of any specific teacher. Contrary to the traditional perspective, we expect a decreased mimicry behavior of the student to benefit the student's generalization ability during training, as it learns more intricate patterns from the diverse set of teachers.
Effect of Optimization towards Student-Teacher Logits Matching on Fidelity. The question also arises why some works hold that high fidelity is beneficial, while others hold that low fidelity is inevitable during training. Our intuition is that the research devoted to optimizing towards increased student-teacher fidelity or mutual information does achieve the ultimate goal of improving overall student performance, but in fact fails at enhancing the mimicry behavior during training. In this paper, we try to answer this question by delving into a logits matching KD case as in (Sun et al, 2024). Specifically, we experiment with a z-score standardization method to mitigate the logit magnitude and variance gap between teacher and student, which facilitates the student-teacher emulation procedure. Our hypothesis is that though such an optimization can relieve the logit shift and variance match problems, in reality its benefit lies in student generalization rather than fidelity improvement.
These questions aim to dissect the underlying learning dynamics in KD, moving beyond traditional metrics and exploring how newer facets like data augmentation strength, attention map diversity, fidelity and mutual information interplay to influence the student's learning and generalization abilities. Here, the data augmentation strength is measured by Affinity (Cubuk et al, 2021), the offset in data distribution between the original one and the one after data augmentation as captured by the student model, which we discuss further below. By addressing these questions, this study seeks to provide a more comprehensive understanding of KD.
4. Evaluation Metrics
This section introduces evaluation metrics aimed at quantifying the learning dynamics, and thus explaining the existing fidelity paradox, of Knowledge Distillation (KD) with teacher ensemble training, particularly when subject to varied data augmentation strengths.
4.1. IoU in Attention Maps
To elucidate divergent attentional patterns within teacher ensembles, we examine their attention maps (Zhou et al, 2016) in ResNet (He et al, 2016) or Transformer (Vaswani et al, 2017) models during the training and validation stages.
Subsequently, the Intersection over Union (IoU) (Rezatofighi et al, 2019) is computed between the attention maps of different teachers to measure their diversity. Taking 2-teacher ensemble KD as an example, for an image sample S, two attention maps $A_{t1}, A_{t2} \subseteq S$ are obtained, one per teacher model, with the final metric value computed as in Equation 1:
$$\mathrm{IoU} = \frac{|A_{t1} \cap A_{t2}|}{|A_{t1} \cup A_{t2}|} \quad (1)$$
4.2. Model Dependency in KD
We use fidelity metrics, namely the averaged predictive Kullback-Leibler (KL) divergence and top-1 agreement (Stanton et al, 2021), along with the mutual information calculated between models' logits. This enables us to showcase the mimicry behavior and dependency between teachers and the student. Given a classification task with input space $X = \{x_i\}_{i=1}^N$ and label space $Y = \{y_c\}_{c=1}^C$, let $f : X \to \mathbb{R}^C$ be a classifier whose outputs define a categorical predictive distribution over $Y$, $\hat{p}(y_c|x_i) = \sigma_c(z_i)$, where $\sigma_c(\cdot)$ is the softmax function and $z_i := f(x_i)$ denotes the model logits when $x_i$ is fed into $f$. The formal definitions of the KL divergence, top-1 agreement (Top-1 A), and mutual information (MI) are as follows:
$$\mathrm{KL}(P_t \| P_s) = \sum_{c=1}^{C} \hat{p}_t(y_c|x) \log \frac{\hat{p}_t(y_c|x)}{\hat{p}_s(y_c|x)} \quad (2)$$
$$\text{Top-1 A} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\{\arg\max_c \sigma_c(z_t) = \arg\max_c \sigma_c(z_s)\} \quad (3)$$
$$\mathrm{MI}(Y_t; Y_s) = \sum_{y_t \in Y_t} \sum_{y_s \in Y_s} P(y_t, y_s) \log \frac{P(y_t, y_s)}{P(y_t) P(y_s)} \quad (4)$$
where $P(y_t, y_s)$ is the joint probability distribution of the teacher and student, and $P(y_t)$ and $P(y_s)$ represent the marginal probability distributions of the teacher and student. For metrics calculated between the teacher ensemble and the student, the logits or outputs of the different teachers are first averaged and then compared with the student's. This paper uses Top-1 A for fidelity measurement in the main text; results with KL divergence can be found in Appendix B.
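A minimal sketch of the metrics defined in Equations 1-3, assuming binarized attention masks and raw logits as inputs; all names and the batch layout are illustrative.

#include <algorithm>
#include <cmath>
#include <vector>

static std::vector<float> softmax(const std::vector<float>& z) {
    float m = *std::max_element(z.begin(), z.end()), sum = 0.f;
    std::vector<float> p(z.size());
    for (size_t c = 0; c < z.size(); ++c) { p[c] = std::exp(z[c] - m); sum += p[c]; }
    for (auto& v : p) v /= sum;
    return p;
}

// Eq. 1: IoU between two binarized attention maps.
float attention_iou(const std::vector<bool>& a1, const std::vector<bool>& a2) {
    int inter = 0, uni = 0;
    for (size_t i = 0; i < a1.size(); ++i) {
        inter += (a1[i] && a2[i]);
        uni   += (a1[i] || a2[i]);
    }
    return uni ? (float)inter / uni : 0.f;
}

// Eq. 2: KL(P_t || P_s) for one sample's teacher/student logits.
float kl_div(const std::vector<float>& zt, const std::vector<float>& zs) {
    auto pt = softmax(zt), ps = softmax(zs);
    float kl = 0.f;
    for (size_t c = 0; c < pt.size(); ++c)
        kl += pt[c] * std::log(pt[c] / ps[c]);
    return kl;
}

// Eq. 3: top-1 agreement over a batch of logits.
float top1_agreement(const std::vector<std::vector<float>>& zt,
                     const std::vector<std::vector<float>>& zs) {
    int agree = 0;
    for (size_t i = 0; i < zt.size(); ++i) {
        auto at = std::max_element(zt[i].begin(), zt[i].end()) - zt[i].begin();
        auto as = std::max_element(zs[i].begin(), zs[i].end()) - zs[i].begin();
        agree += (at == as);
    }
    return (float)agree / zt.size();
}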
4.3. Quantifying Data Augmentation Strength within Ensemble KD
In our experiments, we employ various data augmentation techniques on both the teacher ensembles and the student model to modulate the level of randomness introduced into the dataset, as detailed in Section 5. To quantify the strength of these applied data augmentations and demonstrate their effects on KD, we leverage Affinity measurements (Cubuk et al, 2021), specifically adapted to our KD scenario:
$$\text{Affinity} = \frac{\mathrm{Acc}(D'_{val})}{\mathrm{Acc}(D_{val})} \quad (5)$$
where $\mathrm{Acc}(D'_{val})$ denotes the validation accuracy of the student model trained with the augmented distillation dataset and tested on the augmented validation set, and $\mathrm{Acc}(D_{val})$ represents the accuracy of the same model tested on the clean validation set. This metric measures the offset between the original data distribution and the augmented one as captured by the student model after KD training: a higher Affinity value corresponds to a smaller offset between the data distributions. In this paper, Affinity is used as a tool to quantify, and thus help control, the degree of randomness injected into the distillation dataset. This provides us with a systematic approach for analyzing how data augmentation interacts with KD generalization, fidelity, and attention mechanisms. We anticipate that when the data augmentation strength of the student model aligns with that of the teacher model, Affinity will be higher, and that lower Affinity corresponds to stronger data augmentation, leading to higher student accuracy and better generalization performance.
It is noteworthy that by low Affinity we mean "moderately low, but not as low as 0": an Affinity of 0 presupposes a situation where the augmented data is so drastically different from the original that it no longer retains any of the original data's informative features, or the model has entirely failed to learn from the augmented data. Our claim that models with low Affinity can still exhibit good generalization performance is based on the understanding that these models, through diverse and challenging augmentations, learn to abstract and generalize from complex patterns. This does not imply that an Affinity of 0, resulting from complete misalignment with the augmented data, is desirable or indicative of strong generalization. Instead, we suggest that moderate to low Affinity, within a range indicating that the model has been challenged but still retains learning efficacy, can foster robustness and generalization.
5. Experimental Setup
In our ensemble Knowledge Distillation (KD), experiments are conducted with two or three teachers. Each teacher model is a ResNet50 classifier pretrained on ImageNet (Deng et al, 2009) and then fine-tuned on its respective target dataset. The student model is a ResNet18 trained from scratch using vanilla KD (Hinton et al, 2015). Taking ensemble KD with two teachers as an example, the loss function is defined as:
$$L_{NLL}(z_s, y_s) = -\sum_{c=1}^{C} y_c \log \sigma_c(z_s) \quad (6)$$
$$L_{KD_{1,2}}(z_s, z_{t_{1,2}}) = -\tau^2 \sum_{c=1}^{C} \sigma_c\!\left(\frac{z_{t_{1,2}}}{\tau}\right) \log \sigma_c\!\left(\frac{z_s}{\tau}\right) \quad (7)$$
$$L = L_{NLL} + \frac{1}{2}(L_{KD_1} + L_{KD_2}) \quad (8)$$
where $L_{NLL}$ is the usual supervised cross-entropy between the student logits $z_s$ and the one-hot labels $y_s$, and $L_{KD_{1,2}}$ is the added knowledge distillation term that encourages the student to match the teacher ensemble (a code sketch of this loss follows at the end of this section). In this paper, we focus on ensemble KD with 2 teachers, T1 and T2. Results with 3 teachers are discussed in Appendix F. We also provide experiments with Vision Transformers (ViTs) (Dosovitskiy et al, 2021), where the attention map can be obtained directly from the built-in attention module, in Appendix E. Experiments are conducted on the well-recognized long-tailed datasets ImageNet-LT (Liu et al, 2019) and CIFAR100 (Krizhevsky, 2009) with an imbalance factor of 100, and on their balanced counterparts. Hyperparameters remain consistent across experiments for each dataset. More detailed settings, including learning rates and temperatures, are provided in Appendix A.
In this paper, we distinguish between two types of data augmentation: (1) weak data augmentation, encompassing conventional methods such as random resized crop, random horizontal flip, and color jitter; and (2) strong data augmentation, which includes RandAugment (RA) (Cubuk et al, 2020) applied on the ImageNet-LT dataset and AutoAugment (AA) (Cubuk et al, 2019) applied on all other datasets. For denotation purposes, we use Ts, Ss to represent teacher or student models trained with strong augmentation, while Tw, Sw denote those trained with weak augmentation.
It is essential to highlight that, technically, applying strong data augmentation to both the teacher ensemble and the student model in KD does not necessarily result in the highest data augmentation strength as measured by our Affinity metric (defined in Equation 5). This will be shown and clarified further in Section 6.1 and Table 1. Therefore, in this study, we varied the data augmentation strengths in ensemble KD. Specifically, in the series of experiments conducted on each dataset, we utilized the entire permutation set of Tw, Ts, Sw, Ss to construct trials (for example, T1sT2wSs is one trial denotation), and then computed their Affinity to quantify their data augmentation strength. In practice, for evaluation, we computed the metrics introduced in Section 4 on both the training set and the validation set, considering each trial's corresponding data augmentation strength.
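A minimal sketch of the two-teacher loss in Equations 6-8 for a single sample; the hard-label weight alpha from Appendix A is omitted here for clarity, and the helper mirrors the softmax in the metrics sketch above.

#include <algorithm>
#include <cmath>
#include <vector>

static std::vector<float> softmax_v(std::vector<float> z) {
    float m = *std::max_element(z.begin(), z.end()), s = 0.f;
    for (auto& v : z) { v = std::exp(v - m); s += v; }
    for (auto& v : z) v /= s;
    return z;
}

// Eqs. 6-8 for one sample with two teachers.
float kd_loss(const std::vector<float>& zs, const std::vector<float>& zt1,
              const std::vector<float>& zt2, int label, float tau = 10.f) {
    auto temp = [tau](std::vector<float> z) {
        for (auto& v : z) v /= tau;            // temperature-scaled logits
        return softmax_v(z);
    };
    auto ps_tau = temp(zs), pt1 = temp(zt1), pt2 = temp(zt2);
    auto ps = softmax_v(zs);

    float l_nll = -std::log(ps[label]);        // Eq. 6: cross-entropy vs. one-hot label
    float l_kd1 = 0.f, l_kd2 = 0.f;
    for (size_t c = 0; c < ps_tau.size(); ++c) {   // Eq. 7, tau^2-weighted
        l_kd1 -= tau * tau * pt1[c] * std::log(ps_tau[c]);
        l_kd2 -= tau * tau * pt2[c] * std::log(ps_tau[c]);
    }
    return l_nll + 0.5f * (l_kd1 + l_kd2);     // Eq. 8
}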
6. Results and Analysis
Our comprehensive set of experiments has yielded several intriguing insights into the learning dynamics of Knowledge Distillation (KD) and explains the fidelity paradox through various data augmentation strengths. We particularly emphasize the roles of attention map diversity, model fidelity, and mutual information as they interact with student performance, in terms of top-1 accuracy and overfitting, during both the training and validation procedures.
6.1. Impact on Attention Map Diversity
Figure 2 (Top) shows that during training, a consistent decrease is observed in the Intersection over Union (IoU) of attention maps between different teacher models with stronger data augmentation. This decrease is correlated with an increase in the student model's accuracy. Trial denotations are also marked as data labels in these scatter plots, with Table 1 demonstrating their data augmentation strengths. These Affinity values aid in understanding the data augmentation strengths and the decreasing tendencies in the scatter plots: recall that Affinity measures the offset between the original data distribution and the one after data augmentation as captured by the student, and lower Affinity corresponds to higher augmentation strength, leading to higher student accuracy. As evidence, for those trials with strong data augmentation and low Affinity, e.g., T1sT2wSs on CIFAR-100, T1wT2sSs on CIFAR-100 imb100, T1sT2wSs on ImageNet, and T1sT2wSs on ImageNet-LT, a relatively high validation accuracy is observed for each dataset.
[Figure 2: Top: Scatter plots of IoU between T1 and T2 attention maps during KD training. Bottom: Example attention maps of T1, T2 and S. This attention divergence among teacher ensembles, attributed to the randomness injected by data augmentation, gives the student distilled on them a more comprehensive perspective.]
It is important to emphasize that applying strong data augmentation to both the teacher ensemble and the student model in KD does not lead to the highest level of data augmentation strength, as quantified by our Affinity metric defined in Equation 5. That is, it is the diversity of the teachers' augmentation strengths, not strong data augmentation for a single teacher or student model, that matters: T1sT2wSs is stronger than T1sT2sSs. Appendix D also offers scatter plots of IoU between T1 and T2 attention maps versus Affinity during KD training.
Significantly, this observation suggests that as the ensemble of teachers focuses on increasingly diverse aspects of the input data, the student model benefits from a richer, more varied set of learned representations, leading to enhanced performance, as visualized in Figure 2 (Bottom). This finding aligns with and extends ensemble learning theories in KD, where diversity among models enhances overall student performance, even when achieved by simply manipulating the data augmentation strength. It introduces a new dimension to Knowledge Distillation theory, emphasizing the value of diverse learning stimuli.
Table 1: Affinity and Validation Accuracy (Val-Acc) of models with various data augmentation strengths.

Dataset            | Metric   | T1wT2wSw | T1wT2wSs | T1sT2wSw | T1sT2wSs | T1wT2sSw | T1wT2sSs | T1sT2sSw | T1sT2sSs
CIFAR100           | Affinity | 0.9807 | 0.8611 | 0.9805 | 0.9083 | 0.9858 | 0.9143 | 0.9729 | 0.9310
                   | Val-Acc  | 0.7952 | 0.8129 | 0.8103 | 0.8195 | 0.8015 | 0.8161 | 0.8107 | 0.8137
CIFAR100 imb100    | Affinity | 0.9763 | 0.8132 | 0.9810 | 0.8637 | 0.9751 | 0.8635 | 0.9723 | 0.8955
                   | Val-Acc  | 0.4621 | 0.5111 | 0.4850 | 0.5220 | 0.4862 | 0.5148 | 0.5028 | 0.5210
ImageNet           | Affinity | 0.9901 | 0.8767 | 0.9930 | 0.8988 | 0.9845 | 0.9131 | 0.9871 | 0.9122
                   | Val-Acc  | 0.6902 | 0.6908 | 0.6878 | 0.6917 | 0.6895 | 0.6914 | 0.6891 | 0.6898
ImageNet long-tail | Affinity | 0.9850 | 0.8311 | 0.9755 | 0.8704 | 0.9782 | 0.8751 | 0.9903 | 0.8971
                   | Val-Acc  | 0.4791 | 0.4929 | 0.4839 | 0.4966 | 0.4846 | 0.4968 | 0.4842 | 0.4942

6.2. Revisiting the Role of Fidelity and Mutual Information
As shown in Figure 3, during training we observed a decrease in both fidelity and mutual information between teacher ensembles and the student model with stronger data augmentation. Intriguingly, this decrease was accompanied by improved validation accuracy in the student model. This indicates that a lower level of direct mimicry, in terms of output logits distribution, between teacher ensembles and the student is conducive to more effective learning in KD, possibly due to the student learning from more divergent teacher attentions.
[Figure 3: Scatter plots of Top: fidelity (measured by Top-1 A) and Bottom: Mutual Information (MI) between teacher ensembles and student during KD training. These decreasing tendencies, along with the improved student validation accuracy, contrast with the traditional viewpoint that higher fidelity consistently benefits student performance, indicating that some extent of student independence may be desired during KD training.]
To further demonstrate the causality between teachers' attention divergence and low student-teacher fidelity, i.e., that more diverse attention maps within a teacher ensemble cause lower fidelity, an A/B test is conducted in the setup of ensemble KD with two teachers. Specifically, the control group is the vanilla KD (denoted vKD) with the different data augmentation strengths used in all previous experiments, and the experimental group (denoted hKD) is designed as follows: each training image is first cropped into two parts, left and right, used as input to teacher models T1 and T2 respectively. This allows us to proactively diversify the attention maps of each teacher model, rather than passively altering them by varying data augmentation strengths. On average, we can then expect the experimental group to have far lower attention IoU values than the control group, while keeping comparable generalization performance, because in the former each teacher's attention is constrained to one half of each image. The null hypothesis H0 is that from the control (vKD) to the experimental (hKD) group, as the teachers' attention map IoU decreases, an increase in student-teacher fidelity is observed. Denoting the total number of trials as Num, the corresponding p-value is calculated as:
$$\text{p-value} = \frac{\#\{\text{fidelity(hKD)} > \text{fidelity(vKD)}\}}{Num} \quad (9)$$
Experiments reveal a p-value less than 0.05, suggesting that we should reject this null hypothesis. Detailed experimental results are provided in Appendix C. In summary, more divergent teacher attentions (i.e., lower IoU values) do cause the decrease in student-teacher fidelity.
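The one-sided test in Equation 9 amounts to a simple paired count over trials; a minimal sketch, with trial orderings assumed to be aligned between the two groups:

#include <vector>

// Eq. 9: fraction of paired trials where hKD fidelity exceeds vKD fidelity.
float ab_test_pvalue(const std::vector<float>& fid_hkd,
                     const std::vector<float>& fid_vkd) {
    int count = 0;
    for (size_t i = 0; i < fid_hkd.size(); ++i)
        count += (fid_hkd[i] > fid_vkd[i]);
    return (float)count / fid_hkd.size();   // reject H0 when below 0.05
}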
This counterintuitive result aligns with and complements the paradoxical observation in (Stanton et al, 2021). It implies that while the student model develops a certain level of independence from the teachers (evidenced by lower fidelity and mutual information), it still effectively captures and generalizes the core knowledge of the teachers. Combined with the observation of how varying data augmentation strengths influence the teachers' attention divergence in Section 6.1, we highlight attention diversification in teacher ensembles as a deeper reason why a student with good generalization may be unable to match the teacher during KD training: stronger data augmentation increases attention divergence, enabling teachers to offer a broader perspective to the student. Consequently, the student surpasses the knowledge of a single teacher, becoming more independent, and the observed low fidelity is a demonstration of this phenomenon rather than a pathology.
6.3. Effects of Logits Matching Optimization on KD
Although (Stanton et al, 2021) has shown the phenomenon of low fidelity, they attributed challenges in optimization as the key factor in the student's inability to match the teacher. Recent studies, such as (Sun et al, 2024), continue to focus on optimizing the student-teacher logits matching process. Yet in the third hypothesis of Section 3, we suggested that optimization towards increasing student-teacher mimicry in fact benefits generalization performance rather than fidelity. To illustrate, we compared the aforementioned vanilla KD with a logits-matching optimization method for KD (Sun et al, 2024) under different data augmentation strengths, for the datasets CIFAR100, CIFAR100-imb100, and ImageNet-LT. Specifically, we experiment with a z-score standardization method applied to the logits before the softmax. This mitigates the logit magnitude and variance gap between teacher and student, which facilitates the student-teacher emulation procedure.
Theoretically, denote the logits of the teacher model and student model as $z_t$ and $z_s$ respectively, and the softmax function as $\sigma(\cdot)$. Then for a finally well-distilled student with predicted probability density perfectly matching the teacher, i.e., $\sigma(z_s) = \sigma(z_t)$, we have the following two properties, proved in (Sun et al, 2024):
$$\text{Logit shift:} \quad z_s = z_t + \Delta \quad (10)$$
$$\text{Variance match:} \quad \frac{\mathrm{Var}(z_s)}{\mathrm{Var}(z_t)} = \frac{\tau_s}{\tau_t} \quad (11)$$
where $\Delta$ can be considered constant for each sample image, and $\tau_s$, $\tau_t$ are the temperatures for the student and teacher respectively during training. That is, even for a student with the highest fidelity to its teacher, such that $\sigma_c(z_s) = \sigma_c(z_t)$ for every class $c$ in the dataset, we still have $z_s = \sqrt{\tau_s/\tau_t} \cdot z_t + \Delta$, which means the student logits cannot match the teacher logits. A z-score normalization applied to both the student and teacher logits during KD training can soothe this mismatch by giving their logit distributions equal mean and variance, and thus improve generalization performance.
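Before turning to why fidelity itself is unaffected, here is a minimal sketch of such a z-score standardization applied to logits ahead of the softmax, following the description of (Sun et al, 2024); the epsilon guard is an added assumption for numerical safety.

#include <cmath>
#include <vector>

// Standardize logits to zero mean and unit variance, removing the shift
// Delta (Eq. 10) and equalizing the logit variance (Eq. 11) between
// teacher and student before softmax(out / tau) is taken.
std::vector<float> zscore(const std::vector<float>& z, float eps = 1e-6f) {
    float mean = 0.f, var = 0.f;
    for (float v : z) mean += v;
    mean /= z.size();
    for (float v : z) var += (v - mean) * (v - mean);
    var /= z.size();
    float sd = std::sqrt(var + eps);
    std::vector<float> out(z.size());
    for (size_t c = 0; c < z.size(); ++c)
        out[c] = (z[c] - mean) / sd;
    return out;
}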
However, it also leads to a decrease in student-teacher fidelity during training, suggesting that the benefit lies more in student generalization than in fidelity improvement. mean and variance, and thus improve generalization performance. However, from the fidelity definition in Equation 3, since the softmax function is monotonic, what we are looking for is the agreed index c of maximum logits between the teacher and student arg maxc(zt) = arg maxc(zs), which unfortunately cannot be directly affected by such optimization method. In conclusion, though an optimization towards student-teacher logits matching can relieve the logit shift and variance match problem, in reality its benefit lies in the student generalization rather than the fidelity improvement. As shown in Figure 4, the z-score standardization does improve the student train-validation accuracy gap in most cases, but a decrease in the studentteacher fidelity is still witnessed. 17 \f7. Conclusion Our research, aiming to explain the fidelity paradox, intersects with and expands upon existing theories for ensemble Knowledge Distillation (KD) in several ways. (1) It introduces a novel perspective on the learning and knowledge transfer process by investigating the impact of attention map diversity on fidelity in KD with various data augmentation strength. (2) It reevaluates the teacher-student fidelity and mutual information challenge, providing insights into the ongoing debates about the relation between student\u2019s ability to mimic its teachers and its generalization performance in KD. (3) It highlights that for optimization towards facilitating student-teacher logits matching which relieves the logit shift and variance match problem, its benefit lies in the student generalization rather than the fidelity improvement. These insights have the potential to catalyze further theoretical advancements in the pursuit of robust KD. Appendix A. Detailed Experimental Settings The experiments are run on a GPU machine with RTX 4090 GPU, AMD 5995WX CPU and 128 GB memory. In each trial, the teacher model of ResNet50 is trained for 30 epochs for ImageNet-LT dataset, and 60 epochs for all the others. The student model of ResNet18 is distilled for: 200 epochs for CIFAR-100; 175 epochs for CIFAR-100 imb100; 60 epochs for ImageNet; and 165 epochs for ImageNet-LT dataset, when their validation accuracy converges. Hyper-parameters, including temperatures of \u03c4 = 10, hard label weight of \u03b1 = 0.2, initial learning rate of 0.1, momentum of 0.9, and batch size of 128, remain the same throughout the entire procedure in each case, ensuring consistent and reliable results for evaluation. For training with balanced ImageNet dataset, we use a cosine annealing learning rate scheduler, with Tmax = 30, etamin = 0 for teacher training, and Tmax = 60, etamin = 0 for student distillation. For other datasets, a lambda learning rate scheduler is used. Specifically, during teacher training, with the following hyperparameters: step1 = 25, step2 = 40, step3 = 60 for CIFAR-100; step1 = 25, step2 = 40, step3 = 60 for CIFAR-100 imb100; and step1 = 35, step2 = 50 for ImageNet-LT. During student distillation, with the following hyperparameters: step1 = 190, step2 = 195 for CIFAR100; step1 = 160, step2 = 165, step3 = 170 for CIFAR-100 imb100; and step1 = 150, step2 = 155, step3 = 160 for ImageNet-LT. 18 \fFigure B.5: Scatter plots of fidelity (measured by KL divergence) between teacher ensembles and student during KD training. 
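The learning-rate schedules listed in Appendix A translate directly into PyTorch. A sketch under stated assumptions: the 10x decay at each milestone is ours (the paper does not give the decay factor), and the linear layer is a stand-in for the ResNet18 student:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LambdaLR

student = torch.nn.Linear(512, 100)  # stand-in for the ResNet18 student
opt = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)

# Balanced ImageNet distillation: cosine annealing with Tmax=60, eta_min=0.
# scheduler = CosineAnnealingLR(opt, T_max=60, eta_min=0)

# Other datasets: step-style lambda scheduler, e.g. step1=190, step2=195 for
# CIFAR-100 student distillation. The 10x decay per milestone is an assumption.
milestones = [190, 195]
scheduler = LambdaLR(opt, lr_lambda=lambda e: 0.1 ** sum(e >= m for m in milestones))

for epoch in range(200):  # CIFAR-100 student: 200 distillation epochs
    # ... one epoch of KD training ...
    scheduler.step()
```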
For KL divergence, a higher value implies lower fidelity. Thus, these increasing tendencies align with the decreasing ones with Top-1 A in the main text. Appendix B. Fidelity with KL divergence Measurement In the main text Section 6.2, Top-1 A is used for the fidelity metric. Here we also provide results with Kullback-Leibler (KL) divergence between teacher ensembles and student during KD training, as in Figure B.5. Note that for KL divergence, a higher value implies lower fidelity. Appendix C. In-Depth Results for the A/B Test In the main text, to demonstrate the causality between teachers\u2019 attention divergence and low student-teacher fidelity, an A/B test is conducted for ensemble KD with two teachers. Experiments reveal a p-value less than 0.05, suggesting that more divergent teacher attentions (i.e., lower IoU values) does cause the decrease in student-teacher fidelity. In this section, we further provides the detailed experimental results of the A/B test, as shown in Table C.2, C.3 and C.4. Here, vKD denotes the control group of vanilla KD experiments, and hKD denotes the control group of half-image inputs experiments. From these results, it can be seen that in average, hKD has far less attention IoU values than vKD, while keeping comparable generalization performance (indicated by a lower accuracy gap). Appendix D. IoU between T1 and T2 Attentions versus Affinity In the main text, we show that during training, a consistent decrease is observed in the Intersection over Union (IoU) of attention maps between 19 \fTable C.2: Results for the A/B Test on CIFAR100 Dataset. Model Acc Gap IoU Fidelity vKD hKD vKD hKD vKD hKD T1wT2wSw 0.1593 0.1631 0.5860 0.3188 0.9523 0.7564 T1wT2wSs 0.0122 0.0171 0.5560 0.3062 0.7859 0.5921 T1sT2wSw 0.1411 0.1560 0.5678 0.3033 0.9411 0.7295 T1sT2wSs 0.0537 0.0654 0.5097 0.2970 0.8536 0.6520 T1wT2sSw 0.1468 0.1784 0.5519 0.2619 0.9387 0.7248 T1wT2sSs 0.0553 0.0759 0.4925 0.2549 0.8568 0.6513 T1sT2sSw 0.1333 0.1541 0.5539 0.2738 0.9048 0.6621 T1sT2sSs 0.0714 0.0657 0.5361 0.2747 0.8897 0.6801 Table C.3: Results for the A/B Test on CIFAR100 IMB100 Dataset. Model Acc Gap IoU Fidelity vKD hKD vKD hKD vKD hKD T1wT2wSw 0.4854 0.4836 0.4900 0.3195 0.9580 0.7114 T1wT2wSs 0.3206 0.3742 0.4419 0.3094 0.8078 0.5406 T1sT2wSw 0.4712 0.4995 0.5309 0.3041 0.9467 0.6892 T1sT2wSs 0.3604 0.3994 0.4560 0.2992 0.8675 0.6040 T1wT2sSw 0.4641 0.4860 0.4329 0.2643 0.9467 0.6892 T1wT2sSs 0.3570 0.3827 0.4084 0.2558 0.8664 0.5997 T1sT2sSw 0.4444 0.4790 0.4410 0.2717 0.9145 0.6192 T1sT2sSs 0.3738 0.3225 0.4107 0.2721 0.8953 0.6242 20 \fTable C.4: Results for the A/B Test on ImageNet Long-tail Dataset. Model Acc Gap IoU Fidelity vKD hKD vKD hKD vKD hKD T1wT2wSw 0.3937 0.4104 0.7391 0.6245 0.8873 0.5657 T1wT2wSs 0.2453 0.2426 0.7122 0.6311 0.7240 0.4542 T1sT2wSw 0.3873 0.4152 0.7287 0.5948 0.8850 0.5554 T1sT2wSs 0.2639 0.2713 0.6708 0.5607 0.7786 0.4901 T1wT2sSw 0.3871 0.4161 0.7204 0.5798 0.8856 0.5559 T1wT2sSs 0.2622 0.2680 0.6608 0.5537 0.7795 0.4916 T1sT2sSw 0.3816 0.4133 0.7563 0.6244 0.8745 0.5308 T1sT2sSs 0.2700 0.2663 0.7431 0.6490 0.7941 0.5138 different teacher models versus student validation accuracy, suggesting that more divergent teacher attentions correlate with higher accuracy. Here, we also provide the scatter plots of IoU between T1 and T2 attention maps versus Affinity during KD training, as in Figure D.6. 
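Both fidelity measurements used in this paper (Top-1 agreement in the main text, KL divergence in this appendix) can be sketched in a few lines. Averaging the teachers' softmax outputs to form the ensemble prediction is our assumption; the aggregation rule is not restated here:

```python
import torch
import torch.nn.functional as F

def fidelity_metrics(student_logits, teacher_logits_list):
    # Ensemble prediction: plain average of the teachers' softmax outputs (assumed).
    t_prob = torch.stack([F.softmax(t, dim=-1) for t in teacher_logits_list]).mean(0)
    s_prob = F.softmax(student_logits, dim=-1)
    # Top-1 agreement: higher value = higher fidelity.
    top1_agreement = (s_prob.argmax(-1) == t_prob.argmax(-1)).float().mean()
    # KL(teacher || student): higher value = lower fidelity.
    kl = F.kl_div(s_prob.log(), t_prob, reduction="batchmean")
    return top1_agreement.item(), kl.item()
```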
These increasing trends demonstrate that stronger data augmentation (indicated by smaller Affinity) does correlate with more divergent teacher attentions (indicated by lower IoU). Figure D.6: Scatter plots of IoU between T1 and T2 attention maps versus Affinity during KD training. These increasing tendencies demonstrate that stronger data augmentation (indicated by smaller Affinity) does correlate with more divergent teacher attentions (indicated by lower IoU). 21 \fAppendix E. Experiments with Vision Transformers In this section, we also provide experiments with Vision Transformers (ViTs) (Dosovitskiy et al, 2021) on CIFAR100 imb100 dataset where the attention map can be obtained directly with the built-in attention module. As shown in Figure E.7, our analysis method can be applied to attentionbased methods such as ViT. The only difference is that when calculating IoU, we can directly use the built-in attention module of ViT to obtain the attention maps. In this experiment, two ViT-b32 teachers are distilled on one ViT-b16 student for CIFAR100 imb100 dataset. And the conclusions in our manuscript still holds for these two cases. That is, lower student-teacher fidelity and larger teachers\u2019 attention diversity correlate with higher student validation accuracy. Figure E.7: Scatter plots for experiments with Vision Transformer (ViT) on CIFAR100 imb100 dataset. Left: Fidelity (measured by top-1 A) and Right: IoU between T1 and T2 during KD training. These decreasing tendencies align with our conclusions drawn from ResNet experiments, suggesting the applicability of our analysis method to attention-based methods like ViT. The main distinction is in calculating IoU, where we can directly use ViT\u2019s built-in attention module to obtain the attention maps. Appendix F. Results with More Teacher Numbers in Ensemble Knowledge Distillation In the main text, we focused on Knowledge Distillation (KD) with 2 teachers in the ensemble. Results with 3 teachers are discussed here. Figure 22 \fF.8 provides scatter plots of teacher attention IoU, fidelity, mutual information, and student entropy in 3-teacher ensemble KD cases, for CIFAR100 and CIFAR100 imb100 datasets. These plots align with the tendencies observed in 2-teacher cases in the main text. Figure F.8: Scatter plots of teacher attention IoU, fidelity, mutual information, and student entropy in 3-teacher ensemble KD cases. These results, aligning with the tendencies observed in 2-teacher cases, further support our conclusions in the main text. Appendix G. Quantitative Evaluation Table G.5 compares our method with SOTA baselines: LFME Xiang et al (2020) and DMAE Bai et al (2023), focusing on the top-1 validation accuracy. LFME is specifically designed for long-tailed datasets, so we only present its results on those. DMAE is initially designed for balanced datasets, so its performance on balanced ones is less satisfying. For our method shown in this table: Ours(1T) is refferred to the KD with one ResNet50 teacher model distilled to one ResNet18 student model, with TwSs. Ours(2T) is refferred to the KD with two ResNet50 teacher models distilled to one ResNet18 student model, with T1sT2wSs. Ours(3T) is refferred to the KD with three ResNet50 teacher models distilled to one ResNet18 student model, with T1sT2wT3wSs. 
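The attention-map IoU used throughout these appendices can be sketched as follows; binarizing each map at a per-map quantile is our assumption, since the text specifies the IoU but not the thresholding rule:

```python
import torch

def attention_iou(attn1: torch.Tensor, attn2: torch.Tensor, q: float = 0.5) -> float:
    # Binarize each (H, W) attention map at its own quantile, then compute
    # Intersection-over-Union between the two binary masks.
    m1 = attn1 >= torch.quantile(attn1, q)
    m2 = attn2 >= torch.quantile(attn2, q)
    intersection = (m1 & m2).sum().float()
    union = (m1 | m2).sum().float().clamp(min=1)
    return (intersection / union).item()

# Illustrative usage with random maps (for ViT, the maps would come from the
# built-in attention module, as noted in Appendix E):
print(attention_iou(torch.rand(7, 7), torch.rand(7, 7)))
```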
This table demonstrates that our approach, achieved solely by injecting varied levels of randomness into the dataset through controlled data augmentation strength, can attain comparable student performance on both 23 \fTable G.5: Validation accuracies for our method, LFME, and DMAE on four data sets. Method Cifar100 ImageNet Cifar100 imb100 ImageNet long-tail LFME 0.4380 0.3880 DMAE 0.8820 0.8198 0.3725 0.4395 Ours(1T) 0.8133 0.5152 Ours(2T) 0.8195 0.6917 0.5220 0.4968 Ours(3T) 0.8204 0.5302 balanced and imbalanced datasets with methods featuring intricate designs on architectures, optimization, or distillation procedures. Appendix H. Model Calibration and Overfitting Effects in our Experiments As a supplementary study, in this section we further investigate the model calibration effects in ensemble KD. Empirically, the student model can be better calibrated by simply enhancing data augmentation strength. And, as the augmentation strength (measured by Affinity) and/or teacher numbers increased, the calibration effects become more pronounced. While Guo et al (2017) has revealed the calibration effects of temperature scaling, a common technique in KD that does not influence the student\u2019s accuracy, the impact of data augmentation on the student\u2019s prediction confidence and model calibration in KD remains unexplored. This impact is typically gauged by entropy and Expected Calibration Error (ECE) in predictions and is crucial in understanding how they relate to the student\u2019s ability to generalize and perform on unseen data, as measured by overfitting tendencies. Our hypothesis is that, beyond the inherent calibration effects of KD, the student model can be effectively calibrated by elevating data augmentation strengths as well. In this study, we leverage logits entropy and Expected Calibration Error (ECE), along with calibration reliability diagrams Guo et al (2017) for visualization, to assess the calibration properties for teachers and student under varied data augmentation strengths. Specifically, the model logits entropy is computed as: 24 \fTable H.6: ECE and Affinity of models with various data augmentation strengths. Dataset Metric Model T1wT2wSw T1wT2wSs T1sT2wSw T1sT2wSs T1wT2sSw T1wT2sSs T1sT2sSw T1sT2sSs Cifar100 ECE 0.0776 0.0124 0.1076 0.0537 0.0994 0.0568 0.1397 0.0745 Affinity 0.9807 0.8611 0.9805 0.9083 0.9858 0.9143 0.9729 0.9310 Cifar100 imb100 ECE 0.0979 0.0103 0.1114 0.0465 0.0711 0.0482 0.1303 0.0651 Affinity 0.9763 0.8132 0.9810 0.8637 0.9751 0.8635 0.9723 0.8955 ImageNet ECE 0.0275 0.0095 0.0233 0.0118 0.0126 0.0107 0.0122 0.0193 Affinity 0.9901 0.8767 0.9930 0.8988 0.9845 0.9131 0.9871 0.9122 ImageNet long-tail ECE 0.0322 0.0226 0.0357 0.0224 0.0494 0.0307 0.0499 0.0178 Affinity 0.9850 0.8311 0.9755 0.8704 0.9782 0.8751 0.9903 0.8971 H(x) = \u2212 C X c=1 \u02c6 p(yc|x) log \u02c6 p(yc|x) (H.1) For ECE calculation, we first group all the validation samples into M interval bins, which are defined based on the prediction confidence of the model for each sample. The ECE thus can be formulated as follows: ECE = M X m=1 |Bm| N |Acc(Bm) \u2212Conf(Bm)| (H.2) where Bm denotes the set of samples in the m-th bin. The function Acc(Bm) calculates the accuracy within bin Bm, while conf(Bm) computes the average predicted confidence of samples in the same bin. In Figure H.9 Top, a notable inverse relationship was observed between the entropy of the student model\u2019s predictions and overfitting. 
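Eqs. (H.1) and (H.2) in sketch form; probs is an (N, C) array of predicted probabilities, labels an (N,) array of class indices, and the bin count M = 15 is our assumption (a common default):

```python
import numpy as np

def prediction_entropy(probs: np.ndarray) -> float:
    # Eq. (H.1), averaged over the batch.
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

def expected_calibration_error(probs, labels, n_bins: int = 15) -> float:
    # Eq. (H.2): bin samples by confidence, then sum |Acc(Bm) - Conf(Bm)|
    # weighted by bin occupancy |Bm| / N.
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.sum() / n * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return float(ece)
```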
While stronger data augmentation leading to increased entropy (indicative of lower confidence), there was a concurrent decrease in the tendency of the student model to overfit the training data, as evidenced by the reduction in the trainvalidation accuracy gap. Figure H.9 Bottom further compares the model calibration reliability diagrams for KD with varied teacher numbers (from 1 to 3) and data augmentation strengths. It can be observed that as the number of teachers increased or the augmentation strength increased (indicated by decreased Affinity), the student models exhibited better calibration. Table H.6 further provides the Expected Calibration Error (ECE) with corresponding Affinity values for all the trials with 2-teacher ensemble KD. 25 \fFigure H.9: Top: Scatter plots of student entropy versus overfitting (gap between top-1 validation and training accuracy) during KD training. Bottom: Calibration reliability diagrams with varied teacher numbers (1 to 3) for CIFAR100 imb100 and its balanced counterpart. Stronger augmentation (indicated by decreased Affinity) and more teachers in the ensemble contributes to improved model calibrations and mitigate overfitting effects. This aids in understanding the data augmentation strengths and the decreasing tendencies in all the previous scatter plots: Recall that Affinity measures the offset in data distribution between the original one and the one after data augmentation captured by the student, and lower Affinity corresponds to higher augmentation strength, leading to higher student accuracy. Thus, for the trials with strong data augmentation (e.g., T1wT2wSs in CIFAR-100, CIFAR-100 imb100, and ImageNet; T1sT2wSs in ImageNet-LT), they not only correspond to a relatively small ECE but also a high validation accuracy."
}
title_10K/test_title_short_2405.00747v1.json ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.00747v1",
"title": "Soft Preference Optimization: Aligning Language Models to Expert Distributions",
"abstract": "We propose Soft Preference Optimization (SPO), a method for aligning\ngenerative models, such as Large Language Models (LLMs), with human\npreferences, without the need for a reward model. SPO optimizes model outputs\ndirectly over a preference dataset through a natural loss function that\nintegrates preference loss with a regularization term across the model's entire\noutput distribution rather than limiting it to the preference dataset. Although\nSPO does not require the assumption of an existing underlying reward model, we\ndemonstrate that, under the Bradley-Terry (BT) model assumption, it converges\nto a softmax of scaled rewards, with the distribution's \"softness\" adjustable\nvia the softmax exponent, an algorithm parameter. We showcase SPO's\nmethodology, its theoretical foundation, and its comparative advantages in\nsimplicity, computational efficiency, and alignment precision.",
"authors": "Arsalan Sharifnassab, Sina Ghiassian, Saber Salehkaleybar, Surya Kanoria, Dale Schuurmans",
"published": "2024-04-30",
"updated": "2024-04-30",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Soft Preference Optimization: Aligning Language Models to Expert Distributions",
"main_content": "Introduction The alignment problem focuses on adjusting a generative model (e.g., Large Language Models (LLMs)) to align its outputs with human preferences and ethical standards or to tailor the model for specific tasks; and is especially important after pretraining on extensive datasets with mixed-quality samples. A widely embraced approach involves refining these models based on expert (i.e., human) preferences, typically expert-provided comparisons of pairs of model-generated outputs [1]. Given a preference dataset D and a pretrained model \u03c0ref, preference alignment seeks to train a new model, \u03c0\u03b8, whose outputs are better aligned with the preference in D [2, 3]. A notable advancement in this field has been the application of Reinforcement Learning from Human Feedback (RLHF), which involves training a reward-model based of actions preferred by humans and then optimizing \u03c0\u03b8 to maximize these learned rewards while ensuring closeness to the initial model behaviors [4]. Despite the effectiveness of RLHF in addressing the alignment problem, RLHF involves a relatively complex pipeline, susceptible to propagation of reward-model\u2019s biases over to the policy optimization Recently, several studies have introduced methods for the direct optimization of preferences, including Direct Preference Optimization (DPO) among others [5, 6, 7, 8, 9, 10, 11]. These approaches eliminate the need for a separate reward model training phase, instead adjusting the model directly using preference data, and often outperform RLHF-based approaches. These reward-model-free methods enjoy advantages over RLHF-based approaches, such as simplified pipelines, reduced computational complexity, and avoidance of the bias transfer from the reward model to policy optimization. Indeed, the rationale for incorporating an additional component, the reward model, into a supervised learning context with a supervised dataset, is debatable. In this work, we propose a simple and effective reward-model-free alignment method, termed Soft Preference Optimization (SPO). SPO seeks to align the model\u2019s preference estimates (detailed in Section 3) with expert preferences D, through minimizing a loss function of the form AlignmentLoss(\u03c0\u03b8, \u03c0ref, D) = PreferenceLoss(\u03c0\u03b8, D) + Regularizer(\u03c0\u03b8, \u03c0ref), (1) \u2217Correspondence to [email protected] arXiv:2405.00747v1 [cs.LG] 30 Apr 2024 \fwhere the Regularizer may be chosen as the KL divergence. We discuss natural choices for the model\u2019s preference estimates and the preference loss function in Sections 3 and 4. Unlike RLHF and DPO, the development of SPO does not rely on assumptions regarding the existence of underlying rewards, such as the Bradley-Terry (BT) model [12]. Nevertheless, we demonstrate that if the BT model is applicable and given an asymptotically large preference dataset, SPO is theoretically guaranteed to converge to a softmax of the rewards, which inspires the designation \u201cSoft Preference Optimization\u201d. Unlike DPO, which tends toward a deterministic model even with an extremely large dataset if the regularization coefficient is nearly zero, SPO allows for the adjustment of the softmax\u2019s exponent through an input parameter, thereby offering flexibility in modulating the \u201csoftness\" of the output distribution. SPO has two main differences from existing reward-model-free alignment methods. 
The first distinction involves the choice of a preference loss that aligns model\u2019s preference estimates with expert\u2019s preferences, resulting in a favorable fixed point as discussed in the previous paragraph. The other distinction of SPO with DPO and similar algorithms lies in the application of regularization. DPO restricts regularization to the preference dataset, which is counter-intuitive since the dataset already provides specific data points for the model to fit; thus, additional regularization within this limited scope is unnecessary. More critically, since the preference dataset represents a tiny subset of the potential outputs of the model, focusing regularization solely within this subset can lead to undesirable, extensive shift in the model\u2019s distribution outside of the dataset, resulting in a non-coherent behaviours. Acknowledging this limitation, SPO applies regularization across the entire output distribution of the model, not just within the confines of the preference dataset. 2 Background Consider a finite context (or query) space X and a finite action (or response) space Y. For a given query x \u2208X, a behavior policy (such as a pre-trained model) is employed to generate responses y1, y2 \u2208Y. These responses are subsequently evaluated by expert raters (e.g., humans) to determine which of y1 or y2 constitutes a more appropriate response to the query x. We adopt the notation y1 \u227by2 to denote that y1 is preferred over y2 in a specific context. The true expert preferences are typically represented by a probability, p\u2217(y1 \u227by2|x), reflecting the inherent randomness due to the variable nature of the experts, who may be a group of humans with slightly differing preferences. A preference dataset, D, is compiled by collecting expert preferences for multiple (x; y1, y2) tuples. In detail, D comprises tuples (x; yw, yl), where yw \u227byl indicates the preferred (winner) and less preferred (loser) responses based on expert evaluations. RLHF comprises two main phases: reward modeling and reinforcement learning (RL) fine-tuning. The initial phase, reward modeling, operates under the assumption that there exist latent rewards r(y|x) that form the basis of expert preferences. This phase aims to develop a model capable of closely approximating these underlying rewards. A widely accepted method for defining these latent rewards is through the Bradley-Terry (BT) model [12], alongside the Plackett-Luce ranking models, which serve as a generalization of the BT model for data involving rankings [13, 14]. The BT model posits that the distribution of expert preferences, p\u2217, is characterized by the following equation: p\u2217(y1 \u227by2|x) = \u03c3 \u0000r(y1|x) \u2212r(y2|x) \u0001 = exp \u0000r(y1|x) \u0001 exp \u0000r(y1|x) \u0001 + exp \u0000r(y2|x) \u0001, (2) where \u03c3(\u00b7) represents the sigmoid function. Subsequently, the reward model r\u03d5(y|x) is trained to minimize the negative log-likelihood loss, \u2212E(x;yw,yl)\u223cD \u0002 \u03c3 \u0000r(yw|x) \u2212r(yl|x) \u0001\u0003 . 
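Eq. (2) and the reward-modeling objective in sketch form; the objective is written out here in its standard negative log-likelihood form, -E[log sigma(r(yw|x) - r(yl|x))], and the function names are ours:

```python
import torch
import torch.nn.functional as F

def bt_preference_prob(r1: torch.Tensor, r2: torch.Tensor) -> torch.Tensor:
    # Eq. (2): p*(y1 > y2 | x) = sigmoid(r(y1|x) - r(y2|x)).
    return torch.sigmoid(r1 - r2)

def reward_model_loss(r_w: torch.Tensor, r_l: torch.Tensor) -> torch.Tensor:
    # Negative log-likelihood of the winner under the BT model.
    return -F.logsigmoid(r_w - r_l).mean()
```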
The RL fine-tuning phase aims to train a model, \u03c0\u03b8, to maximize a loss function of the form LRLHF \u0000\u03c0\u03b8, \u03c0ref, r\u03d5 \u0001 = \u2212Ex\u223cD,y\u223c\u03c0\u03b8(\u00b7|x) \u0002 r\u03d5(y|x) \u0003 + \u03b2DKL \u0000\u03c0\u03b8 \u2225\u03c0ref \u0001 , (3) where \u03b2 is a non-negative constant, and \u03c0ref is a reference policy often acquired through pre-training on high-quality data and is typically identical to the behavior policy. The DKL term in the loss function acts as a regularizer, ensuring the model does not significantly deviate from the distribution where the reward model is most accurate. RL fine-tuning employs reinforcement learning algorithms, like PPO [15], to optimize the above loss function [4], introducing significant complexity into the RLHF pipeline. Additionally, the RLHF framework allows for the propagation of any generalization errors from the reward model to the RL fine-tuned model. The DPO framework [5] addresses these 2 \fchallenges by simplifying the problem into a single-phase supervised learning approach, thus avoiding the pitfalls associated with separate reward modeling and RL fine-tuning phases. DPO circumvents the need for a reward model by directly optimizing the following loss function: LDPO \u0000\u03c0\u03b8, \u03c0ref, D \u0001 = \u2212E \u0014 log \u03c3 \u0012 \u03b2 log \u03c0\u03b8(yw|x) \u03c0ref(yw|x) \u2212\u03b2 log \u03c0\u03b8(yl|x) \u03c0ref(yl|x) \u0013\u0015 . (4) It was demonstrated in [5] that LDPO has the same minimizer as LRLHF, under the conditions of the BT model, an asymptotically large dataset, and a sufficiently large model capacity (i.e., a tabular model that encodes the probability of \u03c0\u03b8(y|x) for all x \u2208X and y \u2208Y into a vector). The DPO framework was further extended in [16], aiming to directly maximize the win-rate of \u03c0\u03b8 against \u03c0ref. 3 SPO Following (1), we consider a loss function of the form: LSPO(\u03c0\u03b8, \u03c0ref, D) = Lpref(\u03c0\u03b8, D) + Reg(\u03c0\u03b8, \u03c0ref), (5) where Lpref and Reg stand for preference loss and regularizer, respectively. We proceed to further detail these components. The the regularization term, Reg(\u03c0\u03b8, \u03c0ref), aims to ensure that \u03c0\u03b8 avoids producing outputs that are highly improbable under \u03c0ref. A common and effective choice is the KL divergence, DKL(\u03c0\u03b8 \u2225\u03c0ref), although other regularization options are viable [17]. Importantly, Reg(\u03c0\u03b8, \u03c0ref) does not incorporate the preference dataset D as an input. This is because within D, the model aims to fit to the target preferences, making additional regularization within D unnecessary. In fact, the regularization term primarily aims to regularize \u03c0\u03b8 outside D. This approach diverges from the DPO and several other existing loss functions (detailed in Section 7), which only consider the divergence of \u03c0\u03b8 from \u03c0ref within the preference dataset. We now turn our attention to the preference loss. Given a query x, let \u03c0\u03b8(y|x) denote the probability that model \u03c0\u03b8 generates output y. 
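A minimal sketch of the DPO loss in Eq. (4), assuming each input is the summed log-probability of a complete response under pi_theta or pi_ref:

```python
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta: float = 0.1):
    # Eq. (4): -E[ log sigma( beta*log(pi/pi_ref)(y_w) - beta*log(pi/pi_ref)(y_l) ) ]
    logits = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(logits).mean()
```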
When presented with a query x and two responses, y1 and y2, we define the probability that \u03c0\u03b8 prefers y1 over y2 as P\u03c0\u03b8(y1 \u227by2 | x) def = P \u0000output of \u03c0\u03b8(\u00b7|x) is y1 \f \f output of \u03c0\u03b8(\u00b7|x) is in {y1, y2} \u0001 = \u03c0\u03b8(y1|x) \u03c0\u03b8(y1|x) + \u03c0\u03b8(y2|x), (6) where the last equality follows from the definition of conditional probability. We can then employ log-likelihood loss to measure the alignment of preference-probabilities\u2019 with the preference-dataset labels, \u2212E(x;yw,yl)\u223cD \u0002 log P\u03c0\u03b8(yw \u227byl | x) \u0003 . (7) We consider a preference loss Lpref(\u03c0\u03b8, D) that extends the above cross entropy loss by employing arbitrary exponents for \u03c0\u03b8. Specifically, we let for any \u03b1 > 0, L\u03b1 pref(\u03c0\u03b8, D) def = \u22121 \u03b1E(x;yw,yl)\u223cD \u0014 log \u03c0\u03b8(yw | x)\u03b1 \u03c0\u03b8(yw | x)\u03b1 + \u03c0\u03b8(yl | x)\u03b1 \u0015 , (8) and for \u03b1 = 0, L0 pref(\u03c0\u03b8, D) def = \u22121 2E(x;yw,yl)\u223cD \u0014 log \u03c0\u03b8(yw | x) \u03c0\u03b8(yl | x) \u0015 . (9) This L\u03b1 pref(\u03c0\u03b8, D) contains the cross-entropy loss in (7) as a special case when \u03b1 = 1. The \u03b1 parameter allows for tailoring the model to exhibit different entropies; models minimized under L\u03b1 pref will display higher entropy for larger \u03b1 values, gradually moving towards a deterministic model akin to DPO as \u03b1 approaches zero; as established in the next theorem. Although the SPO framework does not rely on existence of underlying reward functions, and in particular the BT assumption, it is insightful to study the preference loss L\u03b1 pref under the conditions where the BT model assumption is valid. Intuitively, for a BT expert model, defined as \u03c0(y|x) = exp(r(y|x))/Z(x) with Z(x) being the partition function, the preference probability in (6) would be 3 \fidentical to the BT preference formula (2). In the next theorem, we further study the landscape of L\u03b1 pref under the BT model assumption. To eliminate local minima and saddle points that arise from nonlinear model spaces such as neural networks, in the theorems we consider a tabular model that encodes the probability of \u03c0\u03b8(y|x) for all x \u2208X and y \u2208Y into a large vector. Theorem 1. Suppose that the BT model holds with rewards r(\u00b7|x), and fix any probability distribution D over X \u00d7 Y \u00d7 Y that has full support2 and is consistent with the BT assumption.3 Then, for any \u03b1 \u22650, L\u03b1 pref has a unique minimizer Softmax(r(\u00b7|x)/\u03b1) (reducing to argmax r(\u00b7|x) for \u03b1 = 0). Furthermore, this minimizer is globally absorbing, and the landscape of L\u03b1 pref contains no other first-order stationary point (i.e., no other local minima, local maxima, or saddle points). The proof is provided in Appendix A. According to Theorem 1, minimizer of L\u03b1 pref is the softmax of BT rewards divided by \u03b1, where \u03b1 controls the entropy of the final model. Specifically, in the the asymptotically large dataset regime, when \u03b1 = 1, the preference loss reaches its minimum at the hypothetical BT expert model that generates the preference dataset\u2019s labels, defined as Softmax(r(\u00b7|x)). 
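In log-space, L^alpha_pref from Eqs. (8)-(9) reduces to a softplus of the scaled log-probability gap, since log(pi_w^a / (pi_w^a + pi_l^a)) = -softplus(-a*(log pi_w - log pi_l)). A sketch (function name is ours):

```python
import torch
import torch.nn.functional as F

def spo_pref_loss(logp_w: torch.Tensor, logp_l: torch.Tensor, alpha: float = 1.0):
    # Eq. (9), the alpha = 0 case: -(1/2) E[log pi(y_w|x) - log pi(y_l|x)].
    if alpha == 0.0:
        return -0.5 * (logp_w - logp_l).mean()
    # Eq. (8): -(1/alpha) E[log( pi_w^alpha / (pi_w^alpha + pi_l^alpha) )],
    # rewritten with softplus for numerical stability.
    z = alpha * (logp_w - logp_l)
    return (F.softplus(-z) / alpha).mean()
```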
The gradient of the preference loss L\u03b1 pref, for any \u03b1 \u22650, is given by \u2212\u2207\u03b8L\u03b1 pref(\u03c0\u03b8, D) = E(x;yw,yl)\u223cD \u0014 \u03c0\u03b8(yl|x)\u03b1 \u03c0\u03b8(yw|x)\u03b1 + \u03c0\u03b8(yl|x)\u03b1 \u0000\u2207\u03b8 log \u03c0\u03b8(yw|x) \u2212\u2207\u03b8 log \u03c0\u03b8(yl|x) \u0001\u0015 . Here, \u03c0\u03b8(yl|x)\u03b1/ \u0000\u03c0\u03b8(yw|x)\u03b1 + \u03c0\u03b8(yl|x)\u03b1\u0001 serves as a measure of the model\u2019s error in preferring yw over yl. Consequently, the magnitude of this preference error proportionally scales the adjustment \u2207\u03b8 log \u03c0\u03b8(yw|x) \u2212\u2207\u03b8 log \u03c0\u03b8(yl|x), leading to larger updates when the error is large. 4 Weighted SPO We further expand the preference loss of SPO by considering a weighting over different samples, where the weights can depend on \u03c0\u03b8. This weighting only affects (improves) the optimization process without changing the fixed point, as we show in this section. We call a function \u00b5 : Y \u00d7 Y \u00d7 X \u2192R+ symmetric positive if \u00b5(y1, y2 | x) = \u00b5(y2, y1 | x) > 0, for all x \u2208X and all y1, y2 \u2208Y. Given a symmetric positive function \u00b5 and an \u03b1 \u22650, we define weighted preference loss as L\u03b1,\u00b5 pref(\u03c0\u03b8, D) def = \u22121 \u03b1E(x;yw,yl)\u223cD \u0014 \u00b5(yw, yl | x) log \u03c0\u03b8(yw | x)\u03b1 \u03c0\u03b8(yw | x)\u03b1 + \u03c0\u03b8(yl | x)\u03b1 \u0015 (10) if \u03b1 > 0, and for \u03b1 = 0 we let L0,\u00b5 pref(\u03c0\u03b8, D) def = \u22121 2E(x;yw,yl)\u223cD \u0014 \u00b5(yw, yl | x) log \u03c0\u03b8(yw | x) \u03c0\u03b8(yl | x) \u0015 . (11) The weight-function \u00b5 controls the impact of individual samples within the loss calculation. The utility of \u00b5 emerges from the observation that not all sample pairs in the preference dataset hold equivalent significance. For instance, diminishing the weights of dataset samples (x; yw, yl) where both responses yw and yl are of low quality (e.g., low probability) can be particularly advantageous. This can be achieved for example by setting \u00b5(yw, yl | x) \u221d \u0000\u03c0\u03b8(yw | x) + \u03c0\u03b8(yl | x) \u0001\u03b3, with \u03b3 > 0. Refer to Section 6 and Appendix B for choices of \u00b5 that have demonstrated enhanced performance in our experimental evaluations. While \u00b5 may depend on \u03c0\u03b8, it is important to note that gradient propagation through \u00b5 is not permitted. Specifically, the gradient \u2207\u03b8L\u03b1,\u00b5 pref(\u03c0\u03b8, D) is given by \u2212E(x;yw,yl)\u223cD \u0014 \u00b5(yw, yl|x) \u03c0\u03b8(yl|x)\u03b1 \u03c0\u03b8(yw|x)\u03b1 + \u03c0\u03b8(yl|x)\u03b1 \u0000\u2207\u03b8 log \u03c0\u03b8(yw|x) \u2212\u2207\u03b8 log \u03c0\u03b8(yl|x) \u0001\u0015 . (12) 2Full support in this context means that the probability distribution assigns a non-zero sampling probability to all (x; yw, yl) \u2208X \u00d7 Y \u00d7 Y. 3Consistency with the BT holds if the relative probability of outcomes is determined by a logistic function of the reward differences. More specifically, D(x; y1, y2)/D(x; y2, y1) = exp \u0000r(y1 | x) \u2212r(y2 | x) \u0001 , for all (x; y1, y2) \u2208X \u00d7 Y \u00d7 Y, where r(\u00b7|\u00b7) is the reward function in the BT model. 4 \fInterestingly, the weight function, \u00b5, mainly influences the optimization process, not the ultimate fixed point, in the tabular setting and under asymptotically large preference dataset, as we show in the next theorem. Theorem 2. 
Suppose that the conditions of Theorem 1 hold. Then for any \u03b1 \u22650 and any symmetric positive function \u00b5, the softmax of the BT rewards divided by \u03b1, Softmax(r(\u00b7|x)/\u03b1) (reducing to argmax r(\u00b7|x) for \u03b1 = 0), is the unique globally absorbing fixed point of the differential equation \u02d9 \u03c0 = Q \u0000\u2212\u2207\u03b8L\u03b1,\u00b5 pref(\u03c0\u03b8, D)), where Q(\u00b7) stands for projection onto the probability simplex, and the gradient is given in (12). The proof is given in Appendix A. 5 Comparative Analysis: SPO Versus DPO This section contrasts the SPO method with the DPO algorithm, at a conceptual level. A detailed empirical comparison with DPO will follow in Section 6. A key distinction between SPO and DPO lies in the application of the regularization (or DKL(\u03c0\u03b8 \u2225 \u03c0ref)). The DPO loss function (4) incorporates regularization over preference dataset samples only. This approach is suboptimal since the primary objective of alignment is to align with the preferences in the dataset, and regularization within the preference dataset undermines this objective. The inclusion of DKL in Reinforcement Learning from Human Feedback (RLHF) originally aimed to mitigate the risk of \u03c0\u03b8 diverging significantly from \u03c0ref in unexplored regions of the response space, which could lead to unexpected distribution shifts. In the same vein, SPO incorporates a global regularizer, acting across the entire response space rather than being confined to the preference dataset. Another advantage of SPO over DPO and RLHF emerges from the tendency of DPO and RLHF models towards determinism. Consider a hypothetical scenario where the preference dataset is significantly larger compared to the data used for pretraining. In such a context, the preference dataset itself provides ample information for model alignment, rendering the regularization or DKL unnecessary; equivalently \u03b2 can be set to a very small value or zero. In this case, under BT-model assumption, the minimizers of the RLHF and DPO loss functions are deterministic models that for each query x, deterministically return a response y that maximizes r(y|x). In general, the RLHF and DPO loss functions have an inherent tendency towards low entropy policies [16], which results in a constrained range of responses, leading to potential mode collapse, where the model\u2019s outputs, although high-quality, are limited to a narrow set. In contrast, SPO, through its alignment loss in (5), allows for entropy control of the output solution via the \u03b1 parameter in (8), even when \u03b2 = 0 (see Theorem 1). This capacity to preserve information diversity makes SPO more adaptable for continual learning scenarios, enabling the model to evolve over time and be applied to subsequent alignments without significant loss of potential sets of responses. It is noteworthy that unlike RLHF and DPO, the SPO framework does not presuppose the existence of an underlying reward model nor relies on assumptions such as the BT model. Instead, SPO\u2019s preference loss aims to directly align \u03c0\u03b8 with the preferences in the preference dataset. This distinction potentially facilitates the adaptation of SPO to broader alignment contexts. Furthermore, the choice of regularization is not confined to DKL. This stands in contrast to the DPO and IPO frameworks, which fundamentally depend on employing DKL for derivation of their loss functions. 
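The weighted loss of Section 4 differs only by a detached weight factor. A sketch using the mu proportional to (pi(y_w|x) + pi(y_l|x))^gamma weighting suggested there; per Eq. (12), no gradient is propagated through mu, hence the detach. Exponentiating whole-sequence log-probabilities is for illustration only (it underflows for long responses):

```python
import torch
import torch.nn.functional as F

def weighted_spo_loss(logp_w, logp_l, alpha: float = 1.0, gamma: float = 1.0):
    # mu(y_w, y_l | x) = (pi(y_w|x) + pi(y_l|x))**gamma, detached so that the
    # weight only rescales each pair's contribution (Eq. 12).
    mu = (logp_w.exp() + logp_l.exp()).pow(gamma).detach()
    z = alpha * (logp_w - logp_l)
    per_pair = F.softplus(-z) / alpha  # unweighted L_pref^alpha terms, Eq. (8)
    return (mu * per_pair).mean()
```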
We further observe that the DPO loss does not allow for separation into components like (5), namely as a sum of a preference loss that is independent of \u03c0ref and a regularizer such as DKL. To understand why, consider a scenario where \u03c0\u03b8(yw|x) = \u03c0\u03b8(yl|x) for a given sample (x; yw, yl) \u2208D. In this instance, the alignment loss remains symmetrical with respect to \u03c0ref(yw|x) and \u03c0ref(yl|x); because swapping the values of \u03c0ref(yw|x) and \u03c0ref(yl|x) would not alter either the preference loss or DKL. This symmetry is absent in the DPO framework, as evident from the DPO loss formulation in (4). Despite the benefits of regularizing \u03c0\u03b8 across the entire response space, as opposed to restricting DKL solely to the preference dataset\u2014a point highlighted earlier in this section\u2014this approach can lead to significant computational overhead. This is particularly true for sequential generative models such as language models, where generating sequences is computationally more intensive than calculating the probability of a given sequence. In Appendix B, we discuss methods to resolve this and other practical concerns of SPO. 5 \f6 Experiments We evaluated the performance of SPO in a story generation task, using pretrained models on the TinyStories dataset [18] which is a synthetic collection of brief stories designed for children aged 3 to 4. This dataset proves effective for training and evaluating language models that are smaller than the current state-of-the-art, and capable of crafting stories that are not only fluent and coherent but also diverse. Preference Dataset: We generated a preference dataset for aligning the stories to older age groups. In particular, we used the following prompt in generating the preference dataset where given a pair of stories, we asked GPT3.5-Turbo to assess them in terms of coherency in the plot, good language skills, and whether it is interesting and engaging for the age group of 16 years old. A high school teacher has asked two 16 year-old students to write a short story. Your task is to decide which story is better for publication in the high school newspaper, with absolutely no further editing. Story 1: \u201cOnce upon a time, there was a big balloon. It was red and round and made a funny noise. A little girl named Lily loved to watch it float in the sky. One day, Lily\u2019s mom had a meeting and needed to go. She told Lily to stay inside and play with her toys. But Lily wanted to see the balloon so badly that she sneaked outside and followed it. As she followed the balloon, she noticed that the sky was getting darker and thicker. She started to feel scared. Suddenly, the balloon started to shrink and get smaller and smaller. Lily was so scared that she started to cry. But then, a kind police officer found her and took her back home. Lily learned that it\u2019s important to listen to her mom and stay safe. And she also learned that balloons can be filled with air, but they can also be filled with heavy water.\u201d Story 2: \u201cOnce upon a time, there was a little girl named Lily. She loved animals and had a pet bunny named Fluffy. One day, she saw an amazing birdcage in the store. It was shiny and big, and had many colorful birds inside.Lily wanted the birdcage so much, but she didn\u2019t have enough money to buy it. She felt sad and cried a little. But then, Fluffy came to her and started cuddling with her. Lily felt happy again, and she realized that having Fluffy was more than just a pet store. 
It was her favorite thing.From that day on, Lily and Fluffy would sit together and watch the birds in the amazing birdcage. They didn\u2019t need to buy it, they just needed each other. And they lived happily ever after.\u201d Please provide your general assessment about each story including whether it is interesting and engaging for the age group of 16 years (not being too childish), has a coherent plot, and has good language skills. Then, assign each story a score between 0 and 10. A story should get a higher score if it is better in all aspects considered in the general assessment. Story 1: The plot is a bit confusing and jumps around a bit with Lily following the balloon and then suddenly being rescued by a police officer. The lesson about listening to her mom and staying safe is good, but the addition of the balloon shrinking and being filled with heavy water feels a bit random and out of place. Language skills could be improved with more descriptive language and better flow. Story 2: The plot is more coherent and focuses on a simple yet heartwarming relationship between Lily and her pet bunny, Fluffy. The message about appreciating what you have rather than always wanting more is clear and well-delivered. The language used is more engaging and suitable for the age group of 16 years. Final estimates: Score of story 1: 5 Score of story 2: 8 Preference: Story 2 is better for publication in the highschool newspaper. We generated a preference dataset of 500k story pairs using the above prompt, where each story was independently generated using a 110M-parameter pretrained model [19]. To improve the quality of preference data, we evaluated each story pair twice (flipping the stories in the second evaluation), and considered those pairs whose both evaluations have a consistent preference and the difference between story 1 and story 2 scores is at least two in each of both runs (at least 3 if story 2 is the winner, due to the statistically significant bias of GPT3.5 towards favoring story 2 over story 1). After this filtration, about 100k pairs remained to be used in the alignment phase. Implementation: Building on the implementation in [20], and starting from the pretrained 110M parameter model [19], we aligned a 110M parameter model using SPO and DPO. The alignment training used a batch size of 128 story-pairs for 40k iterations, optimizing SPO and DPO loss functions via AdamW with 5k warm-up iterations. For SPO, we searched \u03b1 \u2208{0.003, 0.01, 0.03, 0.1}, and used uniform weighting and SoftImportance weighting (see Appendix B for definition) for the weight function \u00b5. The regularization coefficient \u03b2 was swept over \u03b2 \u2208{0.003, 0.01, 0.03, 0.1} for both DPO and 6 \fFigure 1: Win rates of SPO and DPO against reference model. SPO algorithms. Refer to Appendix B for more implementation details. In both experiments we considered a reference model identical to the pretraind model. Results: We computed the win rates of SPO and DPO against the reference model using GPT3.5-Turbo, at different stages of training. Each win rate was averaged over 1000 story-pair instances. Figure 1 illustrates win rates versus training steps. The peak win-rates of SPO and DPO are 72% and 68%, respectively. Moreover, DPO has a disadvantage that its win rate quickly drops bellow 50% after a narrow peak, whereas SPO exhibits a higher and wider peak, as well as a much better long-term performance. 
7 Related Works Reinforcement Learning from Human Feedback (RLHF) aims to align AI systems with human preferences, relying on human judgments rather than manual rewards or demonstrations. This method has been successfully applied in fine-tuning large language models (LLMs) [21, 22, 4], but faces challenges including data quality issues, reward misgeneralization, and policy optimization complexities. Research to enhance RLHF includes methods such as rejection sampling for response generation [23, 22], where the highest-reward response from a fixed number is selected for finetuning. The reference [24] simplified instruction alignment with language models into a goal-oriented reinforcement learning task, utilizing a two-phase approach of high-temperature online sampling and supervised learning with relabeled data during offline training. A two-loop learning algorithm, Grow and Improve, has also been proposed for iterative model alignment and training on a fixed dataset [25]. The Grow loop leverages the existing model to create and sample a dataset while the Improve loop iteratively trains the model on a fixed dataset. Given the challenges of RLHF, reward-model-free alignment methods emerged fairly recently and have gained a lot of popularity. Reward-model-free approach to alignment was popularized specifically after introduction of DPO in [5], which is breifly outlined in Section 2. Recently, several works have been proposed methods to improve DPO. In [16], the authors considered an objective called \u03a8PO for learning from human preferences that is expressed in terms of pairwise preferences, with no need for assumption the BT model. The authors focused on a specific instance, IPO, of \u03a8PO by setting \u03a8 as the identity, aiming to mitigate the overfitting and tendency-towards-deterministic-policies issues observed in DPO. The work in [7] proposed a loss function which is an unbiased estimate of 7 \fthe original DPO loss, and aims to alleviate sensitivity to flipped labels due to labeling noise. It was proposed in [6] to add an offset term within the sigmoid function in the DPO loss. In this manner, the model puts more weight on the winning response. In order to reduce the memory usage of DPO, [8] approximated the DPO loss by replacing \u03c0ref with a uniform reference model, eliminating the need to store and evaluate the reference model. Inspired by cringe loss previously proposed for binary feedback, [10] adapted cringe loss for the pairwise preference context. More specifically, cringe loss applies standard likelihood training to winning responses. For a losing response, it contrasts each token in the losing sequence against other likely tokens predicted by the model, aiming to discourage the losing sequence from being the top-ranked sequence. In [17], the authors proposed a separable alignment technique, called SLiC, where, similar to SPO, the alignment loss is the sum of two terms: a calibration loss that contrasts a winner and loser responses encouraging the model \u03c0\u03b8 to assign more probability to the winner, and a regularizer term. SLiC was further developed in [26] to be used in alignment to preference data, where they proposed the SLiC-HF algorithm. SLiC-HF involves a rectified contrastive-loss as its calibration loss and a log-likelihood term as the regularization. 
Other than a different choices for preference loss and regularization, SLiC-HF diverges from the SPO framework in that the regularization in SLiC-HF is limited to the preference or pertaining datasets, not using online samples form \u03c0\u03b8 as in the DKL regularizer. In practice, the performance of an alignment technique highly depends on the quality of the human preference dataset. Noisy preference pairs could potentially limit the language models from capturing human intention. In [27], DPO was used in conjunction with an improved preference dataset via a rejection sampling technique, arguing that DPO suffers from a mismatch between the sampling distribution and the policy corresponding to true expert preferences. In [11], the authors formed a dataset of conservative pairs by collecting AI feedback through an ensemble of chat model completions, followed by GPT-4 scoring. Then, employed DPO for alignment to this improved dataset. The work in [9] leveraged semantic correlations of prompts in the dataset to form more conservative response pairs. for a given prompt (x; yw, yl), a prompt x\u2032 with a similar semantic from a tuple (x\u2032; y\u2032 w, y\u2032 l) is used to form more conservative pairs. In particular, they propose a weighted version of the DPO loss where for a given labeled data (x; yw, yl), yw is approved while yl and any y\u2032 l (from a similar prompt x\u2032) are penalized."
}
title_10K/test_title_short_2405.00772v1.json ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.00772v1",
"title": "Abell 0399-Abell 0401 radio bridge spectral index: the first multifrequency detection",
"abstract": "Recent low-frequency radio observations at 140 MHz discovered a 3 Mpc-long\nbridge of diffuse emission connecting the galaxy clusters Abell 0399 and Abell\n0401. We present follow-up observations at 60 MHz to constrain the spectral\nindex of the bridge, which so far has only been detected at 140 and 144 MHz. We\nanalysed deep (~18 hours) LOw Frequency ARray (LOFAR) Low Band Antenna (LBA)\ndata at 60 MHz to detect the bridge at very low frequencies. We then conducted\na multi-frequency study with LOFAR HBA data at 144 MHz and uGMRT data at 400\nMHz. Assuming second-order Fermi mechanisms for the re-acceleration of\nrelativistic electrons driven by turbulence in the radio bridge regions, we\ncompare the observed radio spectrum with theoretical synchrotron models. The\nbridge is detected in the 75'' resolution LOFAR image at 60 MHz and its\nemission fully connects the region between the two galaxy clusters. Between 60\nMHz and 144 MHz we found an integrated spectral index value of -1.44 +\\- 0.16\nfor the bridge emission. For the first time, we produced spectral index and\nrelated uncertainties maps for a radio bridge. We produce a radio spectrum,\nwhich show significant steepening between 144 and 400 MHz. This detection at\nlow frequencies provides important information on the models of particle\nacceleration and magnetic field structure on very extended scales. The spectral\nindex gives important clues to the origin of inter-cluster diffuse emission.\nThe steepening of the spectrum above 144 MHz can be explained in a turbulent\nre-acceleration framework, assuming that the acceleration timescales are longer\nthan ~200 Myr.",
"authors": "G. V. Pignataro, A. Bonafede, G. Bernardi, F. de Gasperin, G. Brunetti, T. Pasini, F. Vazza, N. Biava, J. M. G. H. J. de Jong, R. Cassano, A. Botteon, M. Br\u00fcggen, H. J. A. R\u00f6ttgering, R. J. van Weeren, T. W. Shimwell",
"published": "2024-05-01",
"updated": "2024-05-01",
"primary_cat": "astro-ph.CO",
"cats": [
"astro-ph.CO"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "Abell 0399-Abell 0401 radio bridge spectral index: the first multifrequency detection",
"main_content": "Introduction Matter accretes primarily onto galaxy clusters along filaments of the so-called cosmic-web, and the subsequent merger of these systems releases an extreme amount of energy into the Intra-cluster medium (ICM) (Markevitch & Vikhlinin 2007). Radio observations have provided strong evidence for the processes where this energy is channeled into particle acceleration and magnetic field amplification in the form of diffuse emission with steep synchrotron spectra (\u03b1 < \u22121, with flux density S \u03bd \u221d\u03bd\u03b1). In galaxy clusters, we observe different types of radio diffuse sources: radio relics, mini halos and radio halos (for an extensive review, see van Weeren et al. 2019). Recent studies unveiled the existence of diffuse radio emission on an even larger scale (Govoni et al. 2019; Botteon et al. 2020; Hoeft et al. 2021). This emission extends beyond the cluster centers and traces the densest region of cosmic-web filaments, where the gas is compressed during the first phase of merger between \u22c6e-mail: [email protected] clusters. Multifrequency studies of synchrotron emission from radio-bridges between clusters can shed light on mechanisms of particle acceleration and properties of the magnetic fields on poorly probed scales (Vazza et al. 2019). The galaxy clusters Abell 0399 and Abell 0401 (hereafter, A399 and A401) are a local (z \u223c0.07, Oegerle & Hill 2001) pair found in a pre-merger state (Bonjean et al. 2018) where X-ray observations (Fujita et al. 1996, 2008; Akamatsu et al. 2017) revealed the presence of 6 \u22127 keV ionised plasma in the region between the clusters. The gas in this region is also detected via the Sunyaev-Zeldovich (SZ) effect by Planck (Planck Collaboration et al. 2013, 2016; Bonjean et al. 2018) and the Atacama Cosmology Telescope (ATC) (Hincks et al. 2022; Radiconi et al. 2022). The first radio bridge connecting two galaxy clusters is discovered in this system (Govoni et al. 2019). The bridge is detected at 140 MHz with the LOw Frequency ARray (LOFAR), and extends for approximately 3 Mpc, which is the projected separation of the two clusters that also host radio halos. Following the discovery of the bridge, the A399-A401 system was extensively studied Article number, page 1 of 7 arXiv:2405.00772v1 [astro-ph.CO] 1 May 2024 \fA&A proofs: manuscript no. aa Fig. 1. Radio images at 60 MHz of the A399-A401 clusters pair. Left panel: Low-resolution (75\u2032\u2032, beam-size in bottom-left corner) image with compact sources and \u03c3rms = 3mJy beam\u22121 produced with the parameters listed in Table 1. Right panel: Same as left panel, but after compact source subtraction. Contour levels (in white) start at 2\u03c3rms, 3\u03c3rms, 5\u03c3rms, and then increase to 20\u03c3rms with factors of 2. A negative contour at \u22123\u03c3rms is shown in red. at radio frequencies. de Jong et al. (2022) presented a 40-hour, deep LOFAR observation at 144 MHz and investigated further the properties of the diffuse emission in the bridge. They were able to detect the bridge at high significance, and measure a flux density of S 144 = 550 \u00b1 60 mJy over 2.7 Mpc2. To cause diffuse emission on such scales, electrons would need to be generated or re-accelerated in situ because of their short synchrotron life-times. Govoni et al. (2019) proposed the model of Fermi-I re-acceleration of fossil electrons by weak shocks crossing the region, which would result in spectral indices similar to those of radio relics (\u03b1 \u223c\u22121.3). 
Alternatively, Brunetti & Vazza (2020) showed how this emission could also be explained by a Fermi-II re-acceleration process. In this scenario, the fossil relativistic particles are re-accelerated by turbulence in amplified magnetic fields over Mpc-scales. This would result in steep observed synchrotron spectra between 150 MHz and 1.5 GHz (\u03b1 < \u22121.5). Recently, Nunhokee et al. (2023) presented WSRT observations at 346 MHz that were not sufficiently deep to observe the bridge, and therefore place a limit on the bridge spectral index (\u03b1346 140 < \u22121.5). A similar procedure to place limits on the emission of radio bridges is defined in Pignataro et al. (2024). The non-detection of bridge emission in high sensitivity uGMRT data at 400 MHz results in a more stringent constraint on the steep bridge spectral index (\u03b1400 140 < \u22122.2), disfavouring the Fermi-I acceleration scenario. Other than the detection in A399-A401, only a few other radio bridges associated with merging clusters are known. Botteon et al. (2018, 2020) report the bridge in Abell 1758N-S where they are also able to measure a spectral index for a patch of emission. Moreover, a candidate bridge is reported in Abell 1430 (Hoeft et al. 2021), and the bridge between relic and halo in A1550 (Pasini et al. 2022). Recently, a few more bridges between clusters and groups have been discovered (see Bonafede et al. 2021; Venturi et al. 2022). However, for none of these objects has it been possible to determine the spectral index of the extended diffuse emission. In this Letter, we present a multifrequency study conducted with new LOFAR Low Band Antenna (LBA) data at 60 MHz that allows, for the first time, the determination of the spectral index of the radio bridge in the A399-A401 bridge between 60 and 144 MHz. Here, we assume a \u039bCDM cosmology, with H0 = 70 km s\u22121 Mpc\u22121, \u2126m = 0.3, and \u2126\u039b = 0.7. With these assumptions, at the average distance of the A399-A401 system (z\u223c0.07), 1\u2032 = 83 kpc and the luminosity distance is DL = 329 Mpc. 2. Observations and data reduction The observations of A399-A401 are 18 hours long and were carried out using the LOFAR LBA system in the frequency range of 30\u221277 MHz (proposal code: LC13_022, P.I.: Bonafede). The correlated data are initially pre-processed for radio frequency interference (RFI) and Demix procedure (Van der Tol 2009; de Gasperin et al. 2020b). In this case, both Cygnus A and Cassiopia A were demixed. The data are also averaged from 1 to 4 s integration time and from 64 to 8 channels per subband to reduce memory and computational load. The data are then calibrated with the Library for Low Frequencies (LiLF) pipeline. Here we briefly summarize its main steps here, while a full description can be found in de Gasperin et al. (2019, 2020a). The pipeline obtains solutions from the calibrator (3C196) and applies them to correct the target data. In this part, the pipeline isolates well the systematic effects of the polarisation alignment, the bandpass, and Faraday rotation. The clock drift is left mixed with the total electron content (TEC), and both are transferred to the target. Additional calibration is then needed to correct for the differential ionospheric effects that strongly affect the lower frequencies (de Gasperin et al. 2018), especially for observations at low elevation, as it is the case of this target (Dec +13\u25e6). 
This is done for the target field with direction-independent (DI) self-calibration, and then direction-dependent (DD) calibration. After a round of calibration, we inspected the data and found Article number, page 2 of 7 \fG.V. Pignataro et al.: Abell 0399\u2212Abell 0401 radio bridge spectral index: the first multifrequency detection Fig. 2. Spectral index maps. Top panel: Spectral index map of the emission in the radio bridge in A399-A401 between 60 and 144 MHz, with resolution 80\u2032\u2032. The spectral index values distribution (histogram on top) only refers to the bridge emission inside the 2\u03c3rms level, not covered by the grey mask over the radio halos. Bottom panel: Associated spectral index error map. Overlaid in black are the LOFAR LBA contours at 2\u03c3rms and 3\u03c3rms. LBA HBA* Image size (pixels) 1500 1500 Cell size (arcsec) 6 6 Weighting Briggs robust -0.5 Briggs robust -0.5 min-uv (\u03bb) 24 24 max-uv (\u03bb) 3500 Taper gaussian (arcsec) 70 60 \u03c3rms (mJy beam\u22121) 3 0.5 Table 1. WSClean imaging parameters used to produce the lowresolution source-subtracted images for spectral index analysis. In the last line, we report the image rms noise \u03c3rms. *The HBA low-resolution images at 144 MHz made with these parameters are presented in de Jong et al. (2022). very strong artefacts produced by a radio galaxy (3C79) outside the first primary beam null. To mitigate this effect, we reduced the observation bandwidth to eliminate the frequencies where the source is the brightest and the primary beam is largest. This reduced the frequency range to 44 \u221277 MHz, with the central frequency at 60 MHz. Then, we proceeded with a new round of DI phase and amplitude self-calibration, which performs two cycles and corrects for the systematic errors in the target field. From the sky-model produced in the last round of DI calibration, the bright sources are selected as calibrators for the DD calibration, which removes the differential ionospheric errors in the direction of each calibrator within the field of view. Both steps are described in detail in de Gasperin et al. (2019). Finally, the pipeline performs the target extraction (van Weeren et al. 2021; Biava et al. 2021; Pasini et al. 2022), where the directiondependent solutions found are used to subtract all sources outside a radius of approximately 0.5\u00b0 around the target system. To refine the calibration towards the target, a few cycles of phase self-calibration at increasing time-resolution are performed on the extracted field. At this point, we can use the final calibrated extracted visibilities of the target to image at different resolutions with WSClean v3.1 (Offringa et al. 2014). We produced a final primary beam corrected image at the central frequency of 60 MHz, at a resolution of 75\u2032\u2032 with a rms noise of \u03c3rms = 3 mJy beam\u22121, shown in Fig. 1 (left panel). We then produced a high-resolution image excluding baselines shorter than 900\u03bb (i.e. emission on scales more extended than \u223c4\u2032) to recover only the compact sources, and then subtracted their components from the visibilities. The 75\u2032\u2032 resolution source-subtracted image of the target field is shown in Fig. 1 (right panel). The radio bridge is detected at 2\u03c3rms connecting continuously the two radio halos, and shows a patchier morphology at 3\u03c3rms level. 3. Results and discussion 3.1. 
Spectral analysis In order to understand the origin and properties of the large-scale emission we investigate the integrated spectral index and spectral index distribution of the bridge emission. To perform the spectral index analysis, the LBA data are imaged with the same parameters as the HBA data at 144 MHz presented in de Jong et al. (2022). The imaging parameters are listed in Table 1. In particular, we matched the uv-min and weighting scheme, to recover the same angular scales and reach a similar resolution between LBA and HBA observations. To ensure we are correcting for possible shifts introduced by the phase self-calibration, we Article number, page 3 of 7 \fA&A proofs: manuscript no. aa checked that the locations of the peaks of some point-sources in the field are matching in both images. We then convolve the images to the same restoring beam (80\u2032\u2032 resolution). Additionally, we performed a flux density alignment on the uv-subtracted image HBA maps presented in de Jong et al. (2022), as usually done for LoTSS pointings (Shimwell et al. 2019, 2022), and applied a scale factor of 0.9 to the data. Finally, we considered only the emission above 2\u03c3rms contour in both images and computed the spectral index map with associated error map, assuming a flux calibration error of 10% (as done for LoLSS, de Gasperin et al. 2021). We show the resulting spectral index and spectral index error maps between 60 and 144 MHz with a resolution of 80\u201d in Fig. 2. We consider only the emission outside the grey mask as part of the radio bridge, while we mask the radio halos and other features of diffuse emission not related to the bridge emission. Fig. 2 (top panel) shows the distribution of spectral index along the bridge, and the occurrence of each value is represented in the histogram. The distribution appears overall uniform, with most values found between \u22121.5 \u2264\u03b1 \u2264\u22121.2 . The error map (Fig. 2, bottom panel) shows the associated errors, that are mostly around \u2206\u03b1 \u223c0.2. Within the 2\u03c3rms level contours of the LBA image we measure a flux density of S 60 MHz = 1.77 \u00b1 0.18 Jy and S 144 MHz = 0.50 \u00b1 0.05 Jy1, leading to an integrated spectral index value for the radio bridge of \u03b1144 60 = \u22121.44 \u00b1 0.16. This is the first estimate of a radio bridge spectral index, and it provides important information on the models of magnetic field amplification and particle re-acceleration processes on megaparsec-scales. 3.2. Theoretical models The origin of the radio emission from radio bridges is still being investigated. The Mpc-scale size of the bridge requires an in-situ mechanism to accelerate the relativistic particles to travel over these scales (Brunetti & Jones 2014). Govoni et al. (2019) suggested a shock-driven emission model, where multiple weak shocks re-accelerates a pre-existing population of electrons. However, they show that it is difficult to account for the extension and strength of the bridge emission only via shock prior to the collision between A399 and A401. Moreover, the high-sensitivity study in de Jong et al. (2022) reported that they do not observe filamentary structures or shock surfaces in the bridge region, disfavouring the shock origin. We want to investigate the spectrum of the bridge emission with the measured flux densities at LOFAR frequencies. Additionally, we want to incorporate data from the uGMRT observations at a central frequency of 400 MHz presented in Pignataro et al. 
(2024), where the bridge emission is undetected. This also allows for a comprehensive comparison of the radio spectrum with the synchrotron spectrum predicted by theoretical models. In Pignataro et al. (2024) we found a limit on the bridge emission by following the injection procedure. In order to incorporate the limit in the radio spectrum produced in this work, we need to perform again the procedure injecting the 60 MHz emission model, where the emission appears less extended in than the 144 MHz detection. Following an extended approach to the injection procedure with the 60 MHz observation as a starting model, we find a limit for the spectral index between 60 MHz and 400 MHz of \u03b1400 60 < \u22121.75 at a 95% confidence level. The injection method is discussed in Appendix A. 1 This is in agreement with the flux density measured in de Jong et al. (2022), the difference is due only to different areas. Telescope \u03bd [MHz] P(\u03bd) [W Hz\u22121] S(\u03bd) [Jy] LOFAR LBA 60 (2.2 \u00b1 0.2) \u00d7 1025 1.77 \u00b1 0.18 LOFAR HBA 144 (6.1 \u00b1 0.6) \u00d7 1024 0.50 \u00b1 0.05 uGMRT Band3 400 < 7.6 \u00d7 1023 < 0.06 Table 2. Radio quantities for the bridge emission at the three frequencies presented in this study. Columns: (1) Telescope; (2) Central frequency in MHz; (3) Radio luminosity in W Hz\u22121 at given frequency; (4) Integrated flux density in Jy at given frequency. Fig. 3. Radio spectra. Top panel: The bridge emission radio spectra in integrated flux density. The grey arrow represents the uGMRT limit. Bottom panel: Radio luminosity of the bridge emission compared to a synchrotron theoretical model (red curve) produced by the relativistic particle populations with acceleration times \u03c4acc >200 Myr. The radio spectrum of the bridge is shown in Fig. 3 (top panel), the plotted values are listed in Table 2. The observations at 60 and 140 MHz and the upper limit derived at 400 MHz show a steepening towards increasing frequency. This feature, as well as the steep spectrum found between 144 and 400 MHz, is hard to reconcile with a shock origin scenario. However, spectral steeping is a key prediction of the turbulent re-acceleration model presented in Brunetti & Vazza (2020). In their work, they investigated several re-acceleration scenarios coupled with the result of a cosmological simulation, showing how the steepening changes as a function of magnetic field amplification effiArticle number, page 4 of 7 \fG.V. Pignataro et al.: Abell 0399\u2212Abell 0401 radio bridge spectral index: the first multifrequency detection Fig. 4. Spectral index distribution. Left panel: LBA radio image at 60 MHz and resolution of 80\u2032\u2032 with overlaid 2\u03c3rms contours and the grid to extract the spectral index between 60 and 144 MHz. Different areas of the bridge are covered by different colored cells. The colored cells covers the LBA emission above 2\u03c3rms and the red cells covers the 1\u03c3rms LBA emission and 2\u03c3rms HBA emission. Top right: Spectral index distribution across the bridge. The black points are values extracted from each cell, separated by color, and the red arrows are 1\u03c3rms limits extracted from the red cells. The dashed green horizontal line is the mean spectral index, while the filled green horizontal region represents the standard deviation. Bottom right: distribution of residuals (\u2206\u03b1) of \u03b1 with respect to the mean spectral index. ciency (\u03b7B) and particle acceleration times (\u03c4acc). 
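The luminosities in Table 2 follow from the integrated flux densities through the standard k-corrected relation P(ν) = 4π D_L² S(ν) (1 + z)^(−(1+α)). A sketch of that arithmetic; note the exact k-correction convention and redshift change the result at the ~10% level:

```python
import math

JY = 1e-26       # W m^-2 Hz^-1 per jansky
MPC = 3.0857e22  # metres per megaparsec

def radio_power(flux_jy: float, z: float, alpha: float, dl_mpc: float) -> float:
    """K-corrected luminosity P = 4*pi*D_L^2 * S * (1+z)**(-(1+alpha))."""
    dl_m = dl_mpc * MPC
    return 4.0 * math.pi * dl_m**2 * flux_jy * JY * (1.0 + z) ** (-(1.0 + alpha))

# Bridge flux at 60 MHz from Table 2; gives P of order 2e25 W/Hz
print(f"{radio_power(1.77, 0.07, -1.44, 329.0):.1e}")
```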
In particular they identified for their analysis a simulated system that resembles the A399-A401 bridge (Govoni et al. 2019; Dom\u00ednguezFern\u00e1ndez et al. 2019). The initial spectrum of relativistic electrons is evolved solving the Fokker-Planck equations assuming a single-zone model, i.e. assuming average quantities, such as thermal densities and magnetic fields, that are measured in each cell of the simulated bridge region at a fixed time. This is done for different values of particle acceleration times (\u03c4acc), from \u223c10 Myr up to \u223c10 Gyr (see Brunetti & Vazza 2020, Fig. 2). The spectrum shown in Fig. 3 (bottom panel) is obtained from the spectrum presented in Brunetti & Vazza (2020) (see Fig. 3, for \u03b7B \u223c0.05). To reproduce our observed data, we have to rescale the spectrum from a surface of \u223c3.9 Mpc2 to \u223c2.2 Mpc2, which is the area covered by the LBA detection, excluding the radio halos. Additionally, it was necessary to exclude all cells with a generated \u03c4acc < 200 Myr, which make up for \u223c3% of the volume of the simulated bridge. Therefore, the observed LOFAR luminosities and the uGMRT limit set a constraint on the particles acceleration times to values longer than 200 Myr, which are generated in the majority of the cells in the simulation. Finally, we investigated how the spectral index value could vary in different areas of the bridge. The distribution of the spectral index is likely related to the contribution of turbulence and re-acceleration processes across the extended emission. We created a grid covering the emission inside the 2\u03c3rms contours in the 60 MHz image (Fig. 4, left panel). Each grid cell is one beam size (80\u2032\u2032 \u00d780\u2032\u2032). We extract the value of the spectral index between 60 and 144 MHz in each colored cell. Since we are computing the spectral index over the 2\u03c3rms LBA detection, we are considering the emission component with steepest spectra and/or weakest surface brightness. To check this bias, in the red cell we extract the 1\u03c3rms emission and evaluate a limit on the spectral index with the 2\u03c3rms emission in the 144 MHz image. The distribution of the spectral index in the bridge is shown in Fig. 4 top right panel, and in the bottom right panel we show the distribution of the residuals, \u2206\u03b1, between each value of spectral index extracted and the mean \u03b1 value. The values of each cell are consistent around the mean value inside the standard deviation for most points. Despite a larger scatter being observed for some cells (0.5 < \u2206\u03b1 < 0.9), they do not appear to be spatially correlated. The red arrows represent the limits extracted from the red cells, generally flatter than the mean value. However, we also note the limits at the level of the measured spectral index, or steeper. The spectral index is consisted with being constant across the region, and there is no evidence for any systematic trends across the bridge region. 4. Conclusions For the first time, we have determined the spectral index for the emission of a radio bridge, connecting the two pre-merging galaxy clusters A399 and A401. So far, the radio bridge was only detected at 144 MHz, therefore we analysed new LOFAR LBA data at 60 MHz to constrain the spectral index of the emission. We measured an integrated spectral index for the bridge between 60 and 144 MHz of \u03b1144 60 = \u22121.44 \u00b1 0.16. 
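The quoted value and uncertainty follow from the two integrated flux densities by standard error propagation on the logarithmic ratio, with each flux error already folding in the 10% calibration term. As a check, using the values in Table 2:

```python
import math

def alpha_with_error(s1, e1, nu1, s2, e2, nu2):
    """Spectral index and 1-sigma uncertainty from two flux densities
    S1 +/- e1 and S2 +/- e2 measured at frequencies nu1 and nu2."""
    log_ratio = math.log(nu1 / nu2)
    alpha = math.log(s1 / s2) / log_ratio
    err = math.hypot(e1 / s1, e2 / s2) / abs(log_ratio)
    return alpha, err

a, da = alpha_with_error(1.77, 0.18, 60.0, 0.50, 0.05, 144.0)
print(f"alpha = {a:.2f} +/- {da:.2f}")  # alpha = -1.44 +/- 0.16, as in Sect. 3.1
```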
We also investigated the spectral index distribution, which gives insights into the contribution of turbulence and re-acceleration processes causing the extended emission. From the spectral index and associated errors maps, the distribution shows no systematic gradients in the bridge regions. Combining the two LOFAR detection and the uGMRT limit found at 400 MHz, we are able to produce a comprehensive comparison of the obtained radio spectrum of the bridge with the synchrotron spectrum predicted by theoretical models. The steep spectral index derived between 144 and 400 MHz already Article number, page 5 of 7 \fA&A proofs: manuscript no. aa challenged the shock-acceleration origin scenario (as proposed in Govoni et al. 2019). Moreover, we find that the steepening of the spectrum above 144, while hardly reconcilable with the shock acceleration scenario, it can be explained by the turbulent acceleration models investigated by Brunetti & Vazza (2020). Our observations allow us to constrain the particle acceleration time and, in turn, the volume-filling factor of the particle distribution in the turbulent re-acceleration model. Short acceleration times (corresponding to re-acceleration occurring in regions that occupy a small fraction of the bridge volume) generate shallower spectra, disfavoured by our observations. Conversely, large acceleration time (\u03c4acc > 200 Myr) for particles that occupy most of the bridge volume, are consistent with our data. The fact that the emission in the 60 MHz image appears less volume-filling than at 144 MHz, is likely related to the sensitivity limitations of the LBA observations. The Fermi-II origin scenario suggested by these observations requires the presence of significant turbulent motions in most of the bridge volume. Moreover, the aforementioned scenario assumes the presence of a volume-filling reservoir of low energy electrons (\u03b3 \u2264103) whose existence requires further observational evidence, and also not quantitatively predicted by simulations yet. Finally, the B \u22650.3 \u00b5G magnetic field required by this model are large for such peripheral regions, and might be detected by the forthcoming generation of polarisation surveys (Heald et al. 2020). Acknowledgements. AB acknowledges financial support from the ERC Starting Grant \u2018DRANOEL\u2019, number 714245. FdG acknowledges support from the ERC Consolidator Grant ULU 101086378. FV acknowledges the financial support from the Cariplo \"BREAKTHRU\" funds Rif: 2022-2088 CUP J33C22004310003. AB acknowledges financial support from the European Union Next Generation EU. MB acknowledges funding by the Deutsche Forschungsgemeinschaft under Germany\u2019s Excellence Strategy \u2013 EXC 2121 \u201cQuantum Universe\u201d \u2013 390833306. RJvW acknowledges support from the ERC Starting Grant ClusterWeb 804208."
}

title_10K/test_title_short_2405.00791v1.json ADDED
@@ -0,0 +1,18 @@
{
"url": "http://arxiv.org/abs/2405.00791v1",
"title": "Obtaining Favorable Layouts for Multiple Object Generation",
"abstract": "Large-scale text-to-image models that can generate high-quality and diverse\nimages based on textual prompts have shown remarkable success. These models aim\nultimately to create complex scenes, and addressing the challenge of\nmulti-subject generation is a critical step towards this goal. However, the\nexisting state-of-the-art diffusion models face difficulty when generating\nimages that involve multiple subjects. When presented with a prompt containing\nmore than one subject, these models may omit some subjects or merge them\ntogether. To address this challenge, we propose a novel approach based on a\nguiding principle. We allow the diffusion model to initially propose a layout,\nand then we rearrange the layout grid. This is achieved by enforcing\ncross-attention maps (XAMs) to adhere to proposed masks and by migrating pixels\nfrom latent maps to new locations determined by us. We introduce new loss terms\naimed at reducing XAM entropy for clearer spatial definition of subjects,\nreduce the overlap between XAMs, and ensure that XAMs align with their\nrespective masks. We contrast our approach with several alternative methods and\nshow that it more faithfully captures the desired concepts across a variety of\ntext prompts.",
"authors": "Barak Battash, Amit Rozner, Lior Wolf, Ofir Lindenbaum",
"published": "2024-05-01",
"updated": "2024-05-01",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"I.2, I.4"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "Obtaining Favorable Layouts for Multiple Object Generation",
"main_content": "Introduction In recent years, diffusion probabilistic models have garnered considerable attention from researchers across both academia and industry due to their remarkable performance and applicability to a wide range of downstream tasks related to high-quality image generation. State-of-the-art text-to-image foundation models, such as Stable Diffusion Rombach et al. (2022), Imagen Saharia et al. (2022), and DALL-E Parmar et al. (2023), are predominantly based on diffusion models. 1 arXiv:2405.00791v1 [cs.CV] 1 May 2024 \fFigure 1: This Figure shows the generation outputs of our method and competitive methods for multiple prompts, with various amount of subjects and objects. Diffusion models operate by iteratively denoising a noise-perturbed input image. Notably, the Stable Diffusion model Rombach et al. (2022) has showcased impressive capabilities in generating diverse and realistic images, underscoring the efficacy of diffusion-based approaches in image synthesis tasks. There are, however, a few eminent semantic challenges within such text-to-image synthesis, three of which are \u201csubject neglect\u201d, \u201csubject blending\u201d and \u201cattribute binding\u201d. The first pertains to generated samples where not all subjects described in the input text are present in the resulting image. The second is where the subjects are blended into other subjects, for example a bear with elephant trunk. \u201cAttribute binding\u201d is a compositional issue, wherein attribute details, such as 2 \fcolor or texture, are either misaligned with their intended objects or mistakenly associated with unrelated objects. This work aims to address the challenges of subject neglect and subject blending, which become more complicated as the number of subjects increases, see Figure 1. Although we do not attempt to solve the problem of attributed binding, our approach can be combined with any other attributebinding method and to show better or compatible results. Well-structured layout is required to avoid subject neglect and blending. If the layout is such that the subjects are well-separated and no subject dominates the scene, the diffusion methods can generate desirable images. Our research hypothesis is that given an initial noise map zT the diffusion model has bias towards some favorable layouts. Thus, manipulating the latent map is important as manipulating the attention maps. The layout is determined in approximately the initial 15 diffusion steps, but is not apparent during these steps. To mitigate this, our proposed solution has three phases. During the first few denoising steps, we apply various loss terms to encourage the XAMs associated with the tokens of each subject to be excited but separate from other subjects. Our solution to this is considerably more elaborate than previous attempts since we rely on the spatial structure and not just on the maximal activation. Then, after these initial steps, we extract binary masks per object and revise these masks to obtain favorable layouts. This is done by shifting some of the objects. The latent space of the diffusion model is then readjusted to match the optimized masks. Finally, diffusion continues, but the process is driven such that the per-subject XAMs match the set of fixed masks we have previously generated. Overall, the method provides a comprehensive solution to the challenges of multi-subject generation across all diffusion steps, all subjects, and the various spatial locations. 
In an extensive set of experiments, we demonstrate that our proposed method outperforms in all established metrics the many baselines that exist in this field by a sizable margin. Quantitative results demonstrate that these qualitative advantages match the obtained visual improvement. Figure 1 presents several examples of images generated using competitive methods. Widely used text-to-image diffusion based generative models struggle to generate an image containing all the objects in the prompt. Our model improves the image layout and leads to generated images that are more faithful to the prompt. 2 Related Work The groundbreaking potential of generative AI has been unleashed by cutting-edge foundation models, such as Stable Diffusion Rombach et al. (2022), Imagen Saharia et al. (2022), Midjourney, and DALL-E. They serve as the backbone for various generative AI applications, which include image, video, and 3D object generation. These models typically operate in a latent space, mapping from a low-dimensional latent representation to high-dimensional images. Diffusion models have recently gained attention for their ability to generate high-quality images by iteratively denoising a noise-perturbed input image. Notable work includes the Stable Diffusion model Rombach et al. (2022), which demonstrated impressive results in generating diverse and realistic images. Diffusion models offer advantages, such as stable training dynamics and the ability to generate high-resolution images, making them a promising choice for image synthesis tasks. The accuracy of the generated 3 \fimage is determined by its adherence to the user-provided text prompt. While existing generative models excel at synthesizing single-object images, generating images with multiple entities poses a more substantial challenge. Previous works have attempted to enhance the accuracy of text-to-image diffusion models by improving their semantic faithfulness. One of the major challenges in this regard has been aligning the image outputs with the input prompts, which has been thoroughly discussed in the paper by Tang et al. Tang et al. (2022). To address this issue, Kim et al. Liu et al. (2022) introduced ComposableDiffusion, a method that allows users to incorporate conjunction and negation operators in prompts. This approach improves concept composition guidance. Further advancements were made with the development of StructureDiffusion Feng et al. (2022), which suggests segmenting the prompts into noun phrases for exact attention distribution. Wu et al. Wu et al. (2023) proposed a method for controlling the XAMs with a spatial layout generation predictor. Agarwal et al. Agarwal et al. (2023) proposed A-star to minimize concept overlap and change in attention maps through iterations. Kim et al. Kim et al. (2023) introduced DenseDiffusion, a method for region-specific textual feature accumulation. The recent paper in Chefer et al. (2023) aimed to improve attention to neglected tokens, while Li et al. (2023a) suggested two distinct customized objective functions to handle the issues of missing objects and incorrect attribute binding separately. Although these approaches make an effort to resolve the mentioned problems, they still fail multiple object generation. MultiDiffusion Bar-Tal et al. (2023) combines multiple diffusion generation processes for each token, which helps overcome multi-subject generation difficulties. While MultiDiffusion Bar-Tal et al. 
(2023) requires a bounding box for each object in the prompt, which is a significant advantage over our approach and other baselines, it might be capable of overcoming hurdles associated with multi-subject generation. The Attend-and-Excite (A&E) method Chefer et al. (2023) focuses solely on neglected objects, but struggles to address the problem effectively when the areas of maximum attention are close. Conversely, Divide-and-Bind Li et al. (2023a) offers an approach to resolve the issue of incorrect attribute. 3 Method The stable diffusion process is an iterative procedure with indices from t = T to t = 0, where T is the total number of steps. At each step, a randomly sampled latent map zT is provided as input to the UNet, which predicts the noise estimation and gradually removes it until it produces the clean version of the latents z0. The latent map zt \u2208R4\u00d7h\u00d7w represents the output at step t. The early steps are important for creating the layout, while the later steps improve the local structure. To condition the text, a prompt P consisting of N tokens is used, and the latent image zt is viewed as a grid of P \u00d7 P patches. To link each patch to each prompt token, a XAM At \u2208RP \u00d7P \u00d7N is used. Let S be the set of all subjects in the prompt, and s be the token index. At step t, As t \u2208RP \u00d7P is the XAM of token s that links each patch to token s. To synthesize an image that accurately represents a particular subject, several factors must be considered. First, each subject s should have at least one patch with a high value in As t, especially during the initial stages of the generation process. Second, the overlap of the attention maps of the subject with the other attention maps should be minimal. 4 \fFigure 2: Illustrated herein is the sequential evolution of XAMs throughout the generation (backward) process. Commencing on the left with t = T, the XAMs exhibit a high degree of spatial entropy, signifying an unorganized state. During Phase 1, spanning t = (T, T \u2212\u03c4), the process strategically consolidates patches pertaining to identical subjects while concurrently segregating the XAMs of distinct subjects. The resulting XAMs at t = \u03c4 manifest enhanced organization and concentrated focus, enabling a preliminary prediction of the subjects\u2019 potential generation loci. Phase 2 involves optimizing the spatial arrangement and generating masks that will be used in Phase 3, the masks presented are after Gaussian smoothing. In Phase 3, the attention maps are subtly coerced to align with predefined masks. The extreme right column depicts the remainder of the diffusion process, which is instrumental in mitigating artifacts induced by the optimization process. Our approach contains three steps: 1. Excite and distinguish: Occurs in the first \u03c4 steps: t \u2208(T, T \u2212\u03c4), where T > \u03c4. This step gently compels all tokens s \u2208S be excited, i.e., to have a maximal attention score that is high enough, and to spatially separate these maps as much as possible. 2. Rearrange the generation grid: Occurs once at the end of the first phase. For each subject, extract for each s \u2208S spatial mask from the XAMs, and then spatially rearrange the masks to minimize their overlap, obtaining the desired masks Ms. 3. Follow the masks: Occurs during all subsequent diffusion steps: t \u2208(T \u2212\u03c4, 0) and utilizes the masks from the previous phase to guide the spatial arrangement in the XAMs. 
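Read as pseudocode, the three steps listed above amount to a guided sampling loop that optimizes the latents against phase-specific losses. A schematic PyTorch-style sketch follows; `get_xams`, the phase losses, and the mask helpers are placeholders standing in for this paper's components, not released code:

```python
import torch

def guided_sampling(denoise_step, z_T, subjects, T=50, tau=15, step_size=20.0):
    """Schematic three-phase loop; the helper functions are placeholders."""
    z, masks = z_T, None
    for t in range(T, 0, -1):
        z = z.detach().requires_grad_(True)
        xams = get_xams(z, t)                       # per-token P x P attention maps
        if t > T - tau:                             # Phase 1: excite and separate
            loss = phase1_loss(xams, subjects)
        else:                                       # Phase 3: follow the fixed masks
            loss = phase3_loss(xams, subjects, masks)
        z = z - step_size * torch.autograd.grad(loss, z)[0]
        if t == T - tau:                            # Phase 2: fix the layout once
            masks = extract_and_rearrange_masks(xams, subjects)
            z = reallocate_latents(z, masks)
        z = denoise_step(z, t)                      # one denoising step
    return z
```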
Figure 2 illustrates the dynamic interplay of the three phases in shaping the XAMs, thereby steering the layout of image generation towards superior multi-subject generation. Phase I It is widely recognized Hertz et al. (2022); Voynov et al. (2023) that the first few diffusion steps, largely determine the layout of the generated image. Therefore, if the goal is to separate the XAMs of subjects, it must be done within these initial steps. Unfortunately, during these early generation steps, it is almost impossible to understand the subject\u2019s location in the XAMs. 5 \fAs a starting point, consider the A&E Chefer et al. (2023) objective, which considers the maximal attention each token s \u2208S receives and attempts to minimize their neglect by maximizing the attention of the least attended token: LA&E = max s\u2208S (1 \u2212max(As t)) , (1) where the second maximization is over all spatial locations of the XAM. This objective can lead to subject-blending, since the same spatial location can have a high crossattention score to several tokens. In order to prevent this from occurring, we sort the subject tokens s by their maximal cross-attention values max(As t) in a descending manner. Let s0, s1, s2, ... be the list of sorted tokens at time t. We define Bs0 t to be the map that is zero everywhere except for a fixed-sized rectangle around the maximal values of As0 t . We then consider the filtered XAM \u02dc As1 t = As1 t \u2299(1 \u2212Bs0 t ), where \u2299marks an element-wise multiplication. We identify the maximal value in the filtered XAM, i.e., the maximal value that is at a distance from the maximal attention value in As0 t . Let B be a binary mask around the point of highest attention in \u02dc As1 t . We define Bs1 t to be the union (maximum value) of Bs0 t and B. The process repeats, each time accumulating more regions in which the next token\u2019s original attention is masked out, see Figure 3 for an illustration. The refined A&E loss term we employ includes these spatial considerations and is given as: LB&E = max si\u2208S(1 \u2212max((1 \u2212Bsi\u22121 t ) \u2299As t)) , (2) where Bs\u22121 t = 0 and the second maximization is across all spatial locations as above. This loss term LB&E is effective in having each XAM develop a high value that is spatially far from the high values of other XAMs. However, it does not prevent much of the high values of one XAM to overlap those of other XAMs. To mitigate this, we add a direct loss term per subject token s Ls ol = 1 |S| \u22121 X s\u2032\u2208S\\s \u27e8\u00af As t, \u00af As\u2032 t \u27e9F , (3) where \u00af As t represents a dilated version (with a 3x3 kernel) of As t,\u27e8\u27e9F is the fourbinous inner product the sum over pi, pj sums across both spatial dimensions. The dilation is applied to prevent the diffusion process from erroneously locating subjects within small gaps or holes that may be present in the image grid. Lastly, diffusion models may generate large subjects, leading to multiple subject generation failure due to lack of grid space. To address this issue, we employ conditional norm regularization, defined as follows: Ls norm = [\u2225As t\u2225F > C]\u2225As t\u2225F , (4) where C = P 2 |S| accounts for the size of the image as well as the number of subjects. The overall loss term in the first phase is: LI = \u03bbB&ELB&E + X s\u2208S \u03bbolLs ol + \u03bbnormLs norm. 
(5) 6 \f(a) As0 t (b) As1 t (c) As2 t (d) As3 t (e) 1 \u2212Bs0 t (f) 1 \u2212Bs1 t (g) 1 \u2212Bs2 t (h) 1 \u2212Bs3 t Figure 3: The computation of the blocking masks Bs t . The subject tokens si are sorted from largest excitation to smallest. At every step, the mask Bsi t accumulates the masked regions from its predecessors and adds a rectangle around the location of the maximal value in \u02dc As t = (1\u2212Bsi\u22121 t )\u2299Asi t . Note (h) will not be used. Where \u03bbB&E, \u03bbol, \u03bbnorm are scaling factors. Phase II After the first \u03c4 steps, phase II generates a set of per-subject masks Ms. It also rearranges the latent space z\u03c4 to be compatible with the new masks. The initial masks \u00af M s are obtained by thresholding the XAMs at step t = \u03c4. We use the following equation to define our masks: \u00af M s[i, j] = ( 1 As \u03c4 > \u03b3 max(As \u03c4), 0 otherwise (6) Where i, j are the spatial indexes. The value of \u03b3 is set to ensure that the masks are neither too large nor too small, with the default value being 0.2. It is then adjusted automatically within the range of \u03b3 \u2208(0.2, 0.8), see supplementary material for the details. We next rearrange the masks. We have learned that it is best to move as few objects as possible, and that an object should never be moved above its current location, otherwise, nonphysical scenarios are obtained, which SD cannot generate faithfully. The two objects that are moved are chosen based on the size ratio between the overlapping regions with other object over their own size is the highest. The latter is simply the sum of the mask \u00af M s. The former is the sum (over all image locations) of the intersection of all overlaps of the form \u00af M s \u2229\u00af M s\u2032 between the initial mask of subject s and that of s\u2032. 7 \fFigure 4: This figure illustrates Phase2 of the process. On the left, three masks represent the three subjects. In the middle column, we observe the initial masks, which estimate the patches in the attention maps contributing to each subject. On the right, we observe the final masks, after they have been shifted to their new locations. These final masks will subsequently guide the shifting of attention maps As t towards their new location. To determine the new location of a subject s, we shift the mask \u00af M s spatially, each time computing the total overlap as above, and selecting the shift that minimizes this quantity. The final set of masks is denoted M s. Please refer to Figure 4 for more intuition. To ease subject reallocation, we rearrange the patches in z\u03c4. Since we know the shift each of the objects we moved has (the result of the above 2D search), we first shift the corresponding locations in z\u03c4. We note that the size of the masks is four times smaller in each dimension than the size of the latent, thus we used an upscaled version of \u00af Ms. Shifting requires copying the representation in a certain spatial location to a new location. The original location can remain vacant if no other pixel is copied there. In this case, we use one of two imputation techniques: (1) if there is no background in the prompt, we impute a random vector from the normal distribution, else (2) we copy the latent activation of a the top-k locations from the background token sbkg XAM Asbkg T \u2212\u03c4. Phase III In this phase of the generation, we encourage As t to follow the masks M s. This is done using two objectives. 
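Before turning to those two objectives, note that the Phase II mask extraction of Eq. (6) is just a relative threshold on the XAM at t = τ. A minimal PyTorch sketch:

```python
import torch

def subject_mask(xam_tau: torch.Tensor, gamma: float = 0.2) -> torch.Tensor:
    """Eq. (6): keep patches whose attention exceeds gamma times the map's
    maximum; gamma is later adjusted automatically within (0.2, 0.8)."""
    return (xam_tau > gamma * xam_tau.max()).float()

# Toy example: only the bright 4x4 region survives thresholding
xam = torch.rand(16, 16) * 0.1
xam[4:8, 4:8] = 1.0
print(subject_mask(xam).sum().item())  # 16.0 patches retained
```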
The first objective keeps the object inside its mask, and it is formulated as: Linside = 1 |S| X s\u2208S (1 \u2212\u27e8As t, M s\u27e9F \u2225As t\u22251 )2. (7) 8 \fFigure 5: This figure displays the output generated by our method in response to the prompt: \"A chicken and a duck with a ball at the beach.\" and \"A dog, a cat, and a bear at the beach\". The three rightmost images depict the attention maps at step 9, a pivotal moment in the generation process that significantly influences the layout of the generated image. This step was specifically chosen to highlight its critical role in determining the spatial arrangement of the depicted entities. This visualization helps us analyze the importance of separating the attention maps. The second objective is to try and have the subject\u2019s XAM fill the object\u2019s mask: Lfill = 1 |S| X s\u2208S (1 \u2212\u27e8As t, M s\u27e9F \u2225M s\u22251 )2. (8) The loss term of this phase combines both: LIII = \u03bbinsideLinside + \u03bbfillLfill, (9) Where \u03bbinside, \u03bbfill are scaling factors. 4 Experiments 4.1 Data Our experiment\u2019s methodology expands the A&E Chefer et al. (2023) benchmark by examining textto-image models in more extreme cases. We use prompts with at least two subjects as templates 9 \fTable 1: This table presents the evaluation results on three test sets comprising only animals. Performance metrics were assessed using the Llava1.5 and Qwen models. Method Two animals Three animals Four animals c-score l-score q-score c-score l-score q-score c-score l-score q-score Stable Diffusion Rombach et al. (2022) 0.79 0.83 0.64 0.59 0.63 0.60 0.35 0.55 0.53 CompDiffusion Liu et al. (2022) 0.81 0.84 0.79 0.79 0.75 0.72 0.61 0.61 0.63 Divide & Bind Li et al. (2023a) 0.76 0.92 0.92 0.65 0.84 0.83 0.57 0.76 0.69 Attend & Excite Chefer et al. (2023) 0.82 0.92 0.90 0.81 0.79 0.73 0.72 0.71 0.64 SDXL Podell et al. (2023) 0.62 0.94 0.90 0.45 0.83 0.77 0.15 0.80 0.65 MultiDiffusion Bar-Tal et al. (2023) 0.72 0.92 0.92 0.76 0.73 0.70 0.39 0.57 0.50 Ours 0.92 0.97 0.97 0.92 0.90 0.91 0.85 0.86 0.88 for the benchmark. These are the templates that were used for the benchmark: (i) \u201ca [animalA] and a [animalB]\u201d, (ii) \u201ca [animalA], a [animalB] and a [animalC]\u201d, (iii) \u201ca [animalA], a [animalB], a [animalC] and a [animalD]\u201d, (iv) \u201ca [animalA], a [animalB] and a [object]\u201d, and (v) \u201ca [animalA], a [animalB], a [animalC] and a [object]\u201d. Although this work focuses only on \u201csubject neglection\u201d and \u201csubject blending\", we want to demonstrate that solving other issues in image synthesis, such as attribute binding, can be easily combined with our method. We utilized only the \u201cBind\u201d component from Divide-and-Bind Li et al. (2023a) and incorporated it to optimize the latents in conjunction with our suggested loss during phase III. We analyzed two sets of text prompts: (1) \u201ca [colorA][objectA] and a [colorB ][objectB]\u201d, as presented in A&E Chefer et al. (2023). (2) \u201ca [colorA][objectA], a [colorB][objectB] and a [colorC][objectC]\u201d. In total, all methods were evaluated using over 700 prompts, each generated using 5 different seeds. 4.2 Implementation details We use the official Stable Diffusion v2.1 text-to-image model. We use default hyperparameters for all models. All experiments were conducted using an Nvidia Tesla v100 32GB GPU. 4.3 Baselines We compared our model to several relevant baselines, including Vanilla SD2.1, Attend-and-Excite Chefer et al. 
(2023), Divide-and-Bind Li et al. (2023a), Composable DiffusionLiu et al. (2022) and MultiDiffusion Bar-Tal et al. (2023). Evaluating MultiDiffusion Bar-Tal et al. (2023) on a largescale benchmark is challenging, as it requires the user to input the bonding box. To ensure a fair evaluation, we provided the model with three sets of masks that were of a standard size and could be used for generation in a fair manner. Finally, we report the best result out of these three sets of masks. Despite implementing more advanced training methodologies, leveraging larger neural networks and using larger training data, generating multiple subjects remains a challenge. Unfortunately, due to hardware constraints, we were unable to test our method on SDXL Podell et al. (2023). However, we will use it to demonstrate the persistent challenge of generating multiple subjects. We used the full pipeline of SDXL Podell et al. (2023), both the base model and the refiner, all done using DiffusersWolf et al. (2019) package. 10 \fFigure 6: This figure shows the outputs of each method examined in this work in response to an input prompt containing four subjects. Other methods suffer from subject blending or are unable to generate all four subjects. In contrast, our method generates an image that is more loyal to the prompt. Table 2: This table shows the results on the set of prompts that consists of subjects including objects. Method Two animals and an object Three animals and an object c-score l-score q-score c-score l-score q-score Stable Diffusion Rombach et al. (2022) 0.77 0.75 0.71 0.59 0.65 0.64 Composable-Diffusion Liu et al. (2022) 0.78 0.75 0.73 0.75 0.69 0.67 Divide & Bind Li et al. (2023a) 0.70 0.81 0.81 0.62 0.72 0.70 Attend & Excite Chefer et al. (2023) 0.85 0.86 0.82 0.66 0.71 0.70 SDXL Podell et al. (2023) 0.55 0.88 0.8 0.46 0.81 0.74 MultiDiffusion Bar-Tal et al. (2023) 0.63 0.61 0.58 0.61 0.58 0.57 Ours 0.92 0.93 0.86 0.85 0.87 0.83 4.4 Evaluation Metrics : Thanks to the rapid advancements in the field of vision-language models, we have the capability to assess them automatically. In our study, we utilized Llava1.5 Liu et al. (2023) alongside the newly introduced QWEN-VL-Chat Bai et al. (2023) with 9.6B parameters, as well as BLIP2Li et al. (2023b). We used QWEN Bai et al. (2023) to describe the animals or objects it identifies, requiring grounding through bounding boxes. This facilitated the creation of a metric, q-score, calculated as the ratio of detected subjects to the total subjects in the prompt. For Llava1.5 Liu et al. (2023), we prompted the model to identify subject s in the image if it was clearly visible and not blended with the background, for each subject in the prompt. This evaluation, which we term the l-score, is comparatively less stringent than the QWEN Bai et al. (2023) evaluation and is formulated similarly to the q-score. The full prompt appears in the appendix. We utilize Llava1.5 Liu et al. (2023) to extract an additional metric, which will be employed to monitor the numerical quantity of subjects or objects generated by a model. If Llava1.5 Liu et al. (2023) detects a subject or object, we prompt the model once more, inquiring if there are multiple instances. We refer to this metric as the \u201cc-score\", where 1 represents the highest count and 0 indicates the lowest. 11 \fTable 3: This table presents the evaluation results on the five test sets evaluated by BLIP2 image-text-matching metric. Each column depicts a different composition of subjects. A=Animal, I=inanimate. 
Method 2\u00d7A 2\u00d7A+I 3\u00d7A 3\u00d7A+I 4\u00d7A Stable Diffusion Rombach et al. (2022) 0.810 0.932 0.854 0.866 0.932 Compos-Diffusion Liu et al. (2022) 0.969 0.972 0.955 0.976 0.951 Divide& Bind Li et al. (2023a) 0.942 0.978 0.965 0.982 0.960 Attend & Excite Chefer et al. (2023) 0.957 0.985 0.968 0.991 0.982 SDXL Podell et al. (2023) 0.941 0.9275 0.934 0.990 0.982 MultiDiffusion Bar-Tal et al. (2023) 0.926 0.734 0.903 0.863 0.846 Ours 0.971 0.991 0.997 0.991 0.998 Table 4: This table presents results for a set of prompts comprising subjects and their associated attribute. It evaluates the model\u2019s ability to correctly bind attributes to their respective subjects. Method Two subjects Three subjects Stable Diffusion Rombach et al. (2022) 0.71 0.68 Divide & Bind Li et al. (2023a) 0.88 0.83 Attend & Excite Chefer et al. (2023) 0.90 0.77 Ours 0.93 0.87 This score serves not as a simple count but rather as a metric to ensure that the model generates the appropriate number of objects. Its discriminatory power is crucial in distinguishing between models. Additionally, we employed BLIP2 Li et al. (2023b) for image-text matching (ITM), which assesses the likelihood that the given image and text correspond. This process was facilitated using the Lavis package Li et al. (2023c). We evaluate the attribute binding experiment using Llava1.5 Liu et al. (2023), we prompt the model with two questions: \u201cDo you see a {color} {obj} in the image? is it clearly seen?\u201d and the second is a sequential and harder prompt: \u201cDo you see a {obj} in the image? If {obj} exists what is its color?\u201d. 4.5 Results Table 1 and Table 2 present the results on five datasets with a varying number of subjects and objects. In the \u201cTwo animals\u201d benchmark it is not surprising to see that our method is better by a small gap. Two subjects are what most of the methods improving image generation models attempt and succeed in solving. One can note that as the number of subjects in the prompt increases our method has the lowest decrease in performance, and the gap between the methods increases. Further than that our model keeps its numeric generation stability by keeping a c \u2212score of 0.85 even in the Four subjects benchmark. MultiDiffusion Bar-Tal et al. (2023) also suffers from a low c \u2212score, which expresses the extent to which the model forcibly generates humans even if they 12 \fTable 5: This table presents the evaluation results of the ablation study on two test sets, which include three or four subjects. The first part of the table shows ablation experiments on the objectives in Phase 1. The second part shows the influence of the enhancements components applied in Phase 2, where \u201cNo restart\u201d means the diffusion process is not restarted after the pixel reallocation. The third block examines the effect of the objectives in Phase 3. Method Three subjects Four subjects c-score l-score q-score c-score l-score q-score No Lol 0.91 0.88 0.85 0.83 0.84 0.83 No LB&E 0.80 0.82 0.82 0.77 0.78 0.81 No LB&E & Lol 0.78 0.79 0.81 0.71 0.74 0.76 No pixel reallocation 0.89 0.88 0.87 0.84 0.83 0.86 No restart 0.91 0.88 0.89 0.84 0.84 0.88 No Lfill 0.85 0.85 0.85 0.79 0.78 0.79 No Linside 0.84 0.85 0.85 0.82 0.80 0.83 Full method 0.92 0.90 0.91 0.82 0.85 0.89 are not needed. Attend-and-Excite Chefer et al. (2023) and Divide-and-Bind Li et al. (2023a) show decent performance along all evaluation metrics. Table 3 shows the results on all five benchmarks, using BLIP2 image-text matching metric. 
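The grounding-based scores reduce to simple ratios once the vision-language model's detections are collected. A schematic of the final arithmetic only (the prompting of Qwen and Llava1.5 is as described above and is not reproduced here):

```python
def detection_score(detected: set[str], in_prompt: set[str]) -> float:
    """q-score / l-score style metric: fraction of the prompt's subjects
    that the vision-language model confirmed in the generated image."""
    return len(detected & in_prompt) / len(in_prompt)

print(detection_score({"dog", "cat"}, {"dog", "cat", "bear"}))  # ~0.67
```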
While these metrics are more crude (even nonmatching image-prompt pairs often score high if there is some common element), our method is still showing the best performance. Finally, Table 4 presents the results of the attribute binding experiment, as discussed before, this experiment\u2019s goal is to show how well our method can perform by combining methods that tackle different issues. As can be seen, our method leveraging only the \u201cbind\u201d outperforms all other methods and improves Divide-and-Bind Li et al. (2023a) attribute binding abilities. 4.5.1 Qualitative Results In the next experiment, we illustrate visually the challenges in generating multiple subjects using strong text-to-image diffusion models. Figure 1 displays a comparison of various baseline models, highlighting their common issues of subject neglection or blending. For instance, SD2.1 suffers mainly from subject neglection and less from subject blending. Similarly, in the third row, \"A gray dog, a brown cat, and a pink bow\", the bow\u2019s color seems to bleed into the cat, indicating a blending issue. The D&B suffers from a strong subject blending; for example, in the first row, there is a dog with a frog head; on the right, there is half dog, half cat. A&E Chefer et al. (2023) Also suffers from two drawbacks: the first two show a blend between a goat and a duck, and the last row shows no chicken. It is interesting to see that SDXL Podell et al. (2023) sometimes collapses to \"cartoon mode\" when the prompt needs too many compositions. SDXL Podell et al. (2023) suffered from subject blending; this can be seen in the second row, where there are ducks with some frog attributes like greenish color and low and wide center of gravity. SDXL also tends to neglect objects, for example, in the first row where the model did not generate a dog 13 \fand generated some green undefined subject instead of a frog. Our method can better prevent the blending of subjects and enhance the overall performance of subject appearance in the scene. We note that we did not include MultiDiffusion Bar-Tal et al. (2023) since it was not able to generate reasonable results. Figure 5 shows the output produced by our method in response to the prompt: \"A chicken and a duck with a ball at the beach.\" and \"A dog, a cat, and a bear at the beach.\" The three images on the right display the attention maps at step 9, a crucial point in the generation process that greatly impacts the layout of the resulting image. We specifically selected this step to emphasize its vital role in shaping the spatial arrangement of the depicted objects. This visualization aids in understanding the significance of individual attention maps. In Figure 6 we present a qualitative comparison between our method and all baselines. We generated images using the same prompt containing four subjects. Each row shows the generated images for different random seeds. As evidenced by these figures, SD2.1 and MultiDiffusion Bar-Tal et al. (2023) produce images with blended or incorrect objects. While Attend-and-Excite Chefer et al. (2023) and Divide-and-Bind Li et al. (2023a) improve image generation quality, they fail to create all four subjects in one image. SDXL Podell et al. (2023) generates cartoonish-style images (even though not requested in the prompt) and also generates images that do not match the set of objects in the prompt. However, our model can create an improved image layout that accommodates all four objects. 
4.6 Searching for a Simpler Solution Recent advancements in large-scale language models (LLMs) have emphasized the importance of input prompts in influencing model outputs. Stable Diffusion online communities consist of users who use and design prompts more frequently than the researchers who developed the models. We want to embrace their expertise in order to examine much simpler solutions: prompt engineering. We examined three prompts that were suggested in \"stable diffusion\" related forums for multiple character generation, and we will examine the effect of each prompt on Our method and A&E. Table 6 presents the results; one can note that all three prompt variations were not able to create an advantage over the vanilla prompting results presented in Table1. 4.7 Ablation Study An ablation study was conducted to investigate the effect of removing each component. In our method, we made several design choices which we examined using an ablation across all phases. In Phase 1, we analyzed LB&E and Lol. In Phase 2, we examined the pixel repositioning in the latents, as well as restarting the diffusion process statistics. Finally, in Phase 3, we checked the impact of No Lfill and Linside. The results of the ablation study are presented in Table 5. The findings suggest that reducing the two losses from phase1 simultaneously results in a significant decrease in performance. When Lol alone is reduced, the effect is less than when LB&E is reduced, but it still has a substantial impact. The enhancements from Phase 2 have a less pronounced effect on the results, but they still contribute to improved performance. The objectives of Phase 3 also have a significant impact on performance. 14 \fTable 6: This table presents the evaluation results of using prompt engineering for better performance using four and three subject sets. promptA=\"{prompt}, cinematic wide establishing shot\", promptB=\"A photo of {prompt} taken from a distance. \", promptC=\"A photo of {prompt} taken from a distance, full body\" Method Four subjects c-score l-score q-score SD2.1 promptA 0.56 0.63 0.59 A&E promptA 0.64 0.73 0.65 Ours promptA 0.83 0.84 0.86 SD2.1 promptB 0.58 0.63 0.59 A&E promptB 0.67 0.74 0.66 Ours promptB 0.83 0.82 0.85 SD2.1 promptC 0.61 0.65 0.61 A&E promptC 0.65 0.73 0.65 Ours promptC 0.81 0.81 0.85 5 Limitations There are a few limitations to consider when using our proposed framework. Firstly, it increases the inference time of the synthesis model, although only by a factor of two. However, since rapid generation times are crucial in this field, this limitation is worth taking into account. Another limitation is that forcing a layout on the diffusion model can impact the image quality negatively. In some cases, the resulting layout may even appear unnatural, as shown in Figure 7 for some lower-quality samples. While this effect is usually minor, it does create a tradeoff between staying true to the input prompt and producing high-quality output images. Lastly, it is worth noting that the method is not aware of subject proportions. As a result, it may generate a small mask for a large subject, and vice versa. 6 Conclusions While diffusion models are extremely powerful, they suffer from the same shortcut issue that plagues classifiers and other deep networks Geirhos et al. (2020); Hendrycks et al. (2021). In the case of diffusion models, this issue manifests itself as the problem of neglecting and blending subjects and tokens. 
Naturally, the problem becomes more severe as the prompt becomes increasingly complex. In this work, we offer a multi-step solution that handles multiple aspects of the generation process, including cross-attention and the latent space. The intervention we perform combines pulling the generation process, in inference time, using various loss terms, and more direct editing of the latent space. As we show in an extensive set of experiments, our model enhances the ability of text-toimage diffusion models to generate images with multiple subjects (objects). Although our method does not attempt to improve the attribute binding phenomena, it is naturally aid this issue. The 15 \fFigure 7: Limitations: the figure presents a few examples of our method and stable diffusion 2.1 generating scenes that feature multiple subjects and objects. Although these cases present multiple objects, their arrangement is unnatural, which may result from imperfect mask generation. Additionally, the image quality may be lacking. Improving the mask generation process is crucial to achieving more successful image generation. To enhance image quality, one can reduce the number of optimization iterations or use a refiner model, such as in the case of SDXL. ability to clearly generate multiple subjects and objects is a critical ability of text-to-image models. This capability enables them to create complex scenes effectively."
}

title_10K/test_title_short_2405.00801v1.json ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.00801v1",
"title": "\"Ask Me Anything\": How Comcast Uses LLMs to Assist Agents in Real Time",
"abstract": "Customer service is how companies interface with their customers. It can\ncontribute heavily towards the overall customer satisfaction. However,\nhigh-quality service can become expensive, creating an incentive to make it as\ncost efficient as possible and prompting most companies to utilize AI-powered\nassistants, or \"chat bots\". On the other hand, human-to-human interaction is\nstill desired by customers, especially when it comes to complex scenarios such\nas disputes and sensitive topics like bill payment.\n This raises the bar for customer service agents. They need to accurately\nunderstand the customer's question or concern, identify a solution that is\nacceptable yet feasible (and within the company's policy), all while handling\nmultiple conversations at once.\n In this work, we introduce \"Ask Me Anything\" (AMA) as an add-on feature to an\nagent-facing customer service interface. AMA allows agents to ask questions to\na large language model (LLM) on demand, as they are handling customer\nconversations -- the LLM provides accurate responses in real-time, reducing the\namount of context switching the agent needs. In our internal experiments, we\nfind that agents using AMA versus a traditional search experience spend\napproximately 10% fewer seconds per conversation containing a search,\ntranslating to millions of dollars of savings annually. Agents that used the\nAMA feature provided positive feedback nearly 80% of the time, demonstrating\nits usefulness as an AI-assisted feature for customer care.",
|
| 5 |
+
"authors": "Scott Rome, Tianwen Chen, Raphael Tang, Luwei Zhou, Ferhan Ture",
|
| 6 |
+
"published": "2024-05-01",
|
| 7 |
+
"updated": "2024-05-01",
|
| 8 |
+
"primary_cat": "cs.CL",
|
| 9 |
+
"cats": [
|
| 10 |
+
"cs.CL"
|
| 11 |
+
],
|
| 12 |
+
"label": "Original Paper",
|
| 13 |
+
"paper_cat": "LLM AND Agent",
|
| 14 |
+
"gt": "\"Ask Me Anything\": How Comcast Uses LLMs to Assist Agents in Real Time",
|
| 15 |
+
"main_content": "INTRODUCTION Comcast, like many other companies, provides customer service through various communication channels. Many self-service solutions are available on the mobile \u201cX\ufb01nity\u201d app (e.g., reviewing latest bill) which also has an option to chat with an AI-powered bot named \u201cX\ufb01nity Assistant\u201d. While these digital automation capabilities have been replacing human customer representatives (also referred to as \"agents\") for many tasks, there are still many situations that require human-to-human interactions. A customer trying to simply look up information about their pro\ufb01le, internet services, or bill, they should be able to do it without an agent\u2019s assistance. This also holds true if they are trying to carry out a relatively straightforward task like rescheduling their appointment or make a change to their services. Past studies show a human-human interaction is preferred over a human-computer one in certain customer service situations[21]. For example, agents might outperform bots in situations that require creative problem solving. In other situations, the customer might simply prefer to talk to a agent to bene\ufb01t from their empathy and emotional intelligence, or to navigate through cultural sensitivities. At Comcast, an internal custom tool suite aims to help agents to e\ufb00ectively and e\ufb03ciently handle such conversations. However, it still often requires manually looking up information in multiple places, relating it to what the customer is saying, then crafting a relevant response that aligns with the communication guidelines. In this paper, we introduce a new feature to this tool suite called \u201cAsk Me Anything\u201d (AMA). It leverages large language models (LLMs) following a retrieval-augmented generation (RAG) approach to generate contextually relevant responses by combining internal knowledge sources, indexing existing knowledge articles e\ufb03ciently at build time, retrieving relevant chunks of text for a given question at query time, then feeding them to a Reader LLM to generate a succinct answer with citations provided as reference. In the next section, we describe the methodology in more detail. 2 METHODOLOGY Our system follows a typical RAG implementation with modi\ufb01cations to improve performance on proprietary questions. First, the documents are preprocessed to text and chunked, the chunks are \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Rome, et al. embedded then stored with metadata (e.g., associated URL for citations, an identi\ufb01er, the title, etc.) in a vector database. We describe our speci\ufb01c choices for processing and embeddings in Section 2.1 and Section 2.2 respectively with some experimental justi\ufb01cation. Next, we detail how we train and evaluate a reranking model using synthetic data to improve search result relevancy in Section 2.3. Finally, we discuss how we generate answers followed by how we evaluate the system in Section 2.4 and 2.5. 2.1 Document Preprocessing We receive documents from various internal clients in di\ufb00erent formats. We standardize the documentsinto plain text and chunk each document into snippets using Deepset.ai\u2019s Haystack library [13]. In order to uniquely reference each chunk of every document after retrieval, we assign an origin identi\ufb01er to each document and a local identi\ufb01er to each chunk. 
Finally, we implement role-based access control on each document, so different users can only view the documents for which they have permission. In Table 1, we show various chunking parameters for Haystack's preprocessor and their evaluation scores. The metric derivation is explained in Section 2.5 (answer quality assumed the top 3 items were passed to the LLM). We observed a large improvement from setting a higher max_chars_check, which we used as a proxy for limiting the size of each snippet given to the LLM.
Table 1: Chunking parameters and evaluation of three different settings. For clarity, only changes from setting A are shown; an empty cell means the value is the same as in A. Metric values are the relative difference from A, i.e., 100 * (mu_B - mu_A) / mu_A for some metric mu. Metrics are defined in Section 2.5.
Parameter | A | B | C
clean_empty_lines | true | |
clean_whitespace | true | |
clean_header_footer | true | |
split_by | word | |
split_length | 300 | 100 |
split_overlap | 50 | 25 |
split_respect_sentence_boundary | true | |
max_chars_check | 1000 | | 3000
Metric | A | B | C
Answer Quality | - | -5.7% | +13.2%
MRR | - | -13.3% | 0.0%
R@3 | - | -7.9% | 0.0%
NDCG | - | -10.0% | 0.0%
2.2 Retrieving Relevant Text Snippets To inform the choice of our retriever model, we conducted pilot experiments on a curated evaluation set of fifty question-answer pairs. We searched the in-production system logs for queries starting with a WH-word (who, what, how, etc.) or ending with a question mark, roughly following the procedure applied to Bing query logs for WikiQA [24]. For each question, we then located the relevant passage and answer span in our internal knowledge base used by agents. Queries without answers were also labeled as such. Crucially, this process avoids back-formulation [17], where queries are manually written by annotators based on known passages rather than crawled from logs, resulting in biased evaluation sets. We experimented with both dense and sparse retrieval models. For the sparse model, we used Okapi BM25 [16] with k1 = 1.0 and b = 0.5. For the dense ones, we experimented with four: dense passage retrieval (DPR) [9], fine-tuned on Natural Questions [10]; MPNet-base (v1) [18], trained on 160GB of text corpora including Wikipedia, BookCorpus [26], and OpenWebText [6]; OpenAI's state-of-the-art ada-002 embeddings model; and MPNet-base v2, trained further on one billion sentence pairs for better embedding quality (footnote 2). Each was deemed to satisfy our computational and financial constraints at inference time. In Table 2, we report the recall@3 (R@3) and the mean reciprocal rank (MRR) of these models on our evaluation set. The choice of recall@3 (versus recall@5 or 10) follows from our feeding the top-three retrieved passages into the LLM. As a sanity check, we also ran a baseline that randomly drew a passage, which unsurprisingly yielded low scores. Mirroring prior work [23], we found that BM25 remains a strong baseline, outperforming DPR in both R@3 and MRR. We conjecture that this results from Natural Questions being substantially out of domain for our data.
Table 2: Results of various retrievers on our pilot evaluation set. Statistics are presented as the relative difference from BM25, i.e., 100 * (mu - mu_BM25) / mu_BM25.
Method | Recall@3 | MRR
Random | -71.4% | -83.9%
BM25 | - | -
DPR (single-nq) | -42.8% | -42.9%
DPR (multiset-nq) | -23.8% | -29.0%
Multi-QA MPNet-base | +33.0% | +39.7%
OpenAI embeddings (ada-002) | +33.0% | +53.9%
MPNet-base v2 | +38.1% | +54.9%
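A small sketch of the two evaluation metrics in Table 2: Recall@3 counts queries whose relevant passage appears in the top three results, and MRR averages the reciprocal rank of the first relevant result. The passage IDs and rankings are illustrative.

```python
# Sketch of Recall@3 and MRR over a tiny, made-up evaluation set.
def recall_at_k(ranked_ids, relevant_id, k=3):
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def reciprocal_rank(ranked_ids, relevant_id):
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

queries = [
    (["p7", "p2", "p9", "p1"], "p2"),  # relevant passage at rank 2
    (["p4", "p8", "p3", "p2"], "p3"),  # relevant passage at rank 3
    (["p5", "p6", "p1", "p9"], "p9"),  # relevant passage at rank 4
]
r_at_3 = sum(recall_at_k(r, rel) for r, rel in queries) / len(queries)
mrr = sum(reciprocal_rank(r, rel) for r, rel in queries) / len(queries)
print(f"R@3={r_at_3:.3f}  MRR={mrr:.3f}")  # R@3=0.667  MRR=0.361
```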
Underline denotes statistical significance relative to DPR. We observe MPNet-base (v1), OpenAI's ada-002, and MPNet-base (v2) to perform similarly. Signed-rank tests for R@3 and t-tests for MRR also reveal a significant difference (p < 0.05) from DPR. Due to operational convenience and the high performance of OpenAI's ADA embeddings, we used ADA for the retriever component of the final system. For our production retrieval step, we embedded both the title of the article and the text of the individual chunk and added the two vectors together prior to storage in the vector database. Anecdotally, we found this to yield more comprehensive retrieval for a variety of queries, especially when chunks were missing some descriptive context about the topic of the article. (Footnote 2: Nils Reimers's open-source contribution: https://discuss.huggingface.co/t/train-the-best-sentence-emb)
2.3 Reranking Search Results We found that reranking results using models fine-tuned on synthetic data improved the retrieval step. Our approach was inspired by previous synthetic data generation approaches [1, 3]. First, we used GPT-4 to generate synthetic questions from each snippet in our dataset. We then ran each question through our search system using OpenAI's text-embedding-ada-002 [8] embeddings. Any questions where the original snippet used for question generation did not appear in the top 20 results were discarded. For each synthetic question, we stored the top 20 items retrieved, their relevance as scored by BGE-reranker-large [22], and an indicator that the snippet was the source of the question. The final rankings were determined by first placing the source snippet as the \"most relevant\" result, followed by the snippets in most-relevant order as scored by the BGE-reranker-large model. For training, we used RankNet [2] to distill these rankings into a fine-tuned MPNet [18], in particular all-mpnet-base-v2 from sentence-transformers [15], which has fewer parameters and therefore requires fewer computational resources to deploy into production than BGE-reranker-large. The final dataset, after constructing the necessary pairs for RankNet, consisted of over 10 million examples. We set aside 0.5% of the examples as a validation dataset. Our training parameters are listed in Table 3. We used DistributedDataParallel from PyTorch [12] for distributed training, so the effective batch size is the number of GPUs multiplied by the per-GPU batch size. We found the \"Linear Scaling Rule\", where one scales the learning rate as the batch size increases, not to apply to our use case [7]; we suspect this is because the original MPNet architecture was trained with a much larger batch size than we used for fine-tuning.
Table 3: Training hyperparameters. (1: GPU type g4dn.xlarge, Nvidia T4)
Parameter | Specification
Learning Rate | 5e-6
Batch Size | 8
Number of GPUs (1) | 10
Warmup Steps | 4000
Weight Decay | 0.001
Epochs | 1
Total Training Steps | 171391
Learning Rate Scheduler | Warmup-constant
To further evaluate the performance of our reranker model, we randomly sampled 10,000 real questions asked by customer service agents in our production system. For every retrieved document, we followed the approach in [20], which showed that an LLM can accurately predict the relevancy of search results.
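The RankNet objective used for the distillation can be sketched as follows. For a (query, doc_i, doc_j) pair in which the teacher (BGE-reranker-large, with the source snippet pinned first) ranks doc_i above doc_j, the student reranker is trained so that its score for doc_i exceeds its score for doc_j. The toy score tensors below stand in for the student model's outputs; this is not the production training loop.

```python
# Sketch of the RankNet pairwise loss used to distill teacher rankings
# into the student reranker.
import torch
import torch.nn.functional as F

def ranknet_loss(s_higher: torch.Tensor, s_lower: torch.Tensor) -> torch.Tensor:
    """P(doc_i > doc_j) = sigmoid(s_higher - s_lower); the target is 1
    because the teacher ranked doc_i above doc_j, so this reduces to
    binary cross-entropy on the score difference."""
    diff = s_higher - s_lower
    return F.binary_cross_entropy_with_logits(diff, torch.ones_like(diff))

# Toy usage: student scores for four teacher-ordered pairs.
s_hi = torch.tensor([2.1, 0.3, 1.5, 0.9], requires_grad=True)
s_lo = torch.tensor([1.8, 0.7, 0.2, 0.8])
loss = ranknet_loss(s_hi, s_lo)
loss.backward()  # gradients flow back into the student's scores
print(float(loss))
```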
Specifically, GPT-4 was used to evaluate the overall quality of each document with respect to the question, combining scores for how well the document matches the intent of the question and for how trustworthy the document is. The final integer score ranged between 0 and 2, with a higher score meaning higher overall quality. Table 4 compares multiple metrics between ADA and the reranker. Since the overall score is non-binary, we compute MRR using the rank of the first document with a score of 2, and recall@3 examines whether the top 3 documents contain any document with a score of 2. The results indicate an improvement in retrieval performance with the reranker model.
Table 4: ADA vs. reranker search results using production questions. For clarity, only changes from the ADA setting are shown; values are the relative difference from ADA.
Metric | ADA | Reranker
Recall@3 | - | +12%
MRR | - | +15%
NDCG | - | +4.8%
2.4 Generating the Answer from Snippets In generating the answer, we follow the conventional-wisdom approach in the RAG literature. We begin our prompt with a preamble of guidelines for the model, followed by the task description. Due to the length of our snippets of text from the knowledge base, we are unable to provide few-shot examples. We have anecdotally found it better to include more of the text, to avoid necessary information being cut off at random. To mitigate the \"lost in the middle\" problem [11], we reverse the order of the top-K results when passing them into the LLM, formatted as XML capturing the ID, title, and content of each result. We used OpenAI's gpt-3.5-turbo for our production Reader component. As a final step in our prompt, we ask the LLM to answer the given question using the search results.
2.4.1 Citations. An important product feature of the AMA solution is providing references to agents so they can learn more about the answer given. This can be seen in various RAG implementations, such as Microsoft Copilot. In addition, the goal was to build confidence in the system's output and drive adoption internally. Inspired by the Fact-Checking Rail [14], our Citation Rail was accomplished by prompting the LLM to cite its sources in a specific manner (cf. Figure 1), combined with a post-processing step in which the citations were removed from the text. If no citations were found, the system would not return the answer. Practically, there was another benefit from an observability perspective: through this approach, we identified most \"no answer\" responses from the LLM, as the LLM would typically respond with something like \"I'm sorry. I was unable to find the answer in the documents\" without a citation.
Figure 1: An example component of a prompt to encourage citations from the LLM, used in the system prompt section: \"Please include a single source at the end of your answer, i.e., [Document0] if Document0 is the source. If there is more than one source, use [Document0][Document1] if Document0 and Document1 are the sources.\"
Table 5: Response quality. For clarity, only changes from the ADA setting are shown; values are the relative difference from ADA.
Metric | ADA | Reranker
Answer Quality | - | +5.9%
Citation Match Rate | - | +2.5%
Recall@3 | - | +16.5%
2.5 Offline Response Evaluation To evaluate the system's responses, we follow the LLM-as-a-judge methodology [25], in addition to metrics around retrieval quality typical of a search system.
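A sketch of the Citation Rail post-processing described in Section 2.4.1: extract [DocumentN] citations from the LLM output, strip them from the displayed answer, and suppress answers that carry no citation. The regex and return convention are our reading of the description, not the production implementation.

```python
# Sketch of the Citation Rail post-processing step.
import re

CITE_RE = re.compile(r"\[(Document\d+)\]")

def apply_citation_rail(llm_answer: str):
    citations = CITE_RE.findall(llm_answer)
    if not citations:
        return None  # no citation found -> treat as "no answer"
    clean = CITE_RE.sub("", llm_answer).strip()
    return clean, sorted(set(citations))

print(apply_citation_rail(
    "You can reset your modem from the Xfinity app. [Document0][Document2]"
))
# -> ('You can reset your modem from the Xfinity app.', ['Document0', 'Document2'])
print(apply_citation_rail("I'm sorry. I was unable to find the answer."))
# -> None
```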
In particular, a random sample of questions from customers was pulled from production traffic. Human annotators then wrote correct answers to each query using the internal knowledge bases that are also available to the AMA system. We compared system answers to the correct responses given by human annotators, using GPT-4 to compute \"Answer Quality\". For each question, the annotators also provided the citation on which their answer was based. We used this to calculate the \"Citation Match Rate\": the percentage of cases in which the citation from the AMA system matched the ground truth. Given that our retrieval step returns a list, we calculated Recall@K by assuming the annotated citation is the only relevant document. Table 5 shows key metrics for the same two approaches as in Table 4 (text-embedding-ada-002 for dense retrieval of relevant documents, and reranking the ADA-retrieved documents using our fine-tuned model). We observe that with reranked documents the LLM achieves higher answer quality, meaning that the answer produced from the different document ranking is more accurate according to GPT-4. The improvement can also be explained by the increased Citation Match Rate and Recall@3 of the reranked documents directly influencing the LLM's ability to answer accurately.
3 DEPLOYING AMA TO CUSTOMER SERVICE AGENTS For business sensitivity reasons, this section obscures some details related to monetary business metrics. The system was piloted with hundreds of chat agents in late 2023. Over the course of a month-long trial, chat handling time improved 10% when agents used AMA versus the traditional search option, which required the agent to open a new tool and perform a search. We believe this is a good proxy metric for answer quality, because an inaccurate or incomplete response from AMA would require the agent to start over and revert to the traditional option, duplicating work and taking more time overall. Explicit feedback, via a simple thumbs up/thumbs down UI element, was also collected from agents, with a nearly 80% positive feedback rate (there is no baseline for this rate, as such feedback was not requested before the release of this feature). Shortly after the trial period, the system was rolled out to all chat agents (in the thousands), with AMA-driven search becoming the preferred way of searching, accounting for two thirds of all typed queries.
4 ONLINE RERANKER EXPERIMENT Shortly after the trial from Section 3 concluded, we began an A/B test of the reranker module described in Section 2.3. The control variant used only the ADA embeddings for vector retrieval with no reranking component, and the treatment utilized the reranker component on the top 20 results from the ADA-based vector retrieval step. The test ran for three weeks in early 2024. We powered our tests at 80% and used significance level alpha = .01 for metrics that applied to every interaction and alpha = .05 when metrics considered user feedback, as responses were sparse. Due to the limited pool of agents, our randomization unit, we utilized an agent-day randomization, similar to the cookie-day randomization found in other large systems [19], to increase statistical power. It has been shown in the literature [5] that violations of the independent and identically distributed (IID) assumption can lead to underestimation of the variance, but these tests can still be considered trustworthy in practice by using smaller significance thresholds and when observing larger effect sizes.
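Agent-day randomization can be sketched as deterministic hashing of the (agent, day) pair, so each agent may switch variants across days while remaining consistent within a day. The hashing scheme and salt below are illustrative, not Comcast's production bucketing.

```python
# Sketch of agent-day randomization into control/treatment buckets.
import hashlib
import datetime

def assign_variant(agent_id: str, day: datetime.date,
                   salt: str = "ama-reranker") -> str:
    # Hash the salted (agent, day) key; parity picks the bucket.
    key = f"{salt}:{agent_id}:{day.isoformat()}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 2
    return "treatment" if bucket else "control"

print(assign_variant("agent-1042", datetime.date(2024, 1, 15)))
print(assign_variant("agent-1042", datetime.date(2024, 1, 16)))  # may differ
```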
The delta method [4] was employed to estimate the variance from question-level metrics. We observed a statistically significant improvement in two of our metrics: the \"No Answer Rate\", the number of queries with no answer divided by the total number of queries, and the \"Positive Feedback Rate\", defined as the number of thumbs up divided by the count of feedback received. Downstream business metrics like average handle time and escalation rate showed no significant difference. However, the improvement in No Answer Rate implies that the system was able to handle more questions than before by providing the relevant documents to the LLM, while also increasing the rate of positive feedback.
Table 6: A/B test results. The table contains the relative change from control as the effect; lower is better for No Answer Rate.
Metric | Effect | p-value
No Answer Rate | -11.9% | p < .001
Positive Feedback Rate | +8.9% | p < .05
5 CONCLUSIONS In this paper, we introduced AMA, a large-scale solution to a common business need: efficient, high-quality customer care. Through the use of third-party LLMs and proven RAG methodology, we were able to build AMA quickly and demonstrate clear value as an assistive feature. We showed improvements to retrieval and answer quality with specific choices for the document preprocessing, the retrieval model and its embeddings, and a custom reranker model. As we deploy AMA to thousands of agents with tangible business benefits, we believe this provides a good example of how humans and AI can collaborate to better serve customers."
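As an editorial illustration of the delta method mentioned in Section 4: for a ratio metric R = mean(X)/mean(Y) aggregated from per-unit counts (e.g., thumbs up over feedback events per agent-day), the variance can be approximated from the sample moments. The simulated counts below are made up purely to exercise the formula.

```python
# Sketch of the delta-method variance for a ratio metric.
# For R = mean(X)/mean(Y):
#   Var(R) ~= (Var(X) - 2*R*Cov(X,Y) + R^2*Var(Y)) / (n * mean(Y)^2)
import numpy as np

def delta_method_ratio(x: np.ndarray, y: np.ndarray):
    """x: per-unit numerator counts, y: per-unit denominator counts."""
    n = len(x)
    r = x.mean() / y.mean()
    cov = np.cov(x, y, ddof=1)  # 2x2 sample covariance matrix
    var_r = (cov[0, 0] - 2 * r * cov[0, 1] + r**2 * cov[1, 1]) / (n * y.mean()**2)
    return r, np.sqrt(var_r)

rng = np.random.default_rng(0)
feedback = rng.poisson(5, size=200) + 1       # feedback events per agent-day
thumbs_up = rng.binomial(feedback, 0.8)       # roughly 80% positive
rate, se = delta_method_ratio(thumbs_up.astype(float), feedback.astype(float))
print(f"rate={rate:.3f} +/- {1.96 * se:.3f}")
```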
+}
title_10K/test_title_short_2405.00824v1.json
ADDED
@@ -0,0 +1,17 @@
+{
+    "url": "http://arxiv.org/abs/2405.00824v1",
+    "title": "Efficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations",
+    "abstract": "Conventional recommendation systems (RSs) are typically optimized to enhance\nperformance metrics uniformly across all training samples.\n This makes it hard for data-driven RSs to cater to a diverse set of users due\nto the varying properties of these users. The performance disparity among\nvarious populations can harm the model's robustness with respect to\nsub-populations. While recent works have shown promising results in adapting\nlarge language models (LLMs) for recommendation to address hard samples, long\nuser queries from millions of users can degrade the performance of LLMs and\nelevate costs, processing times and inference latency. This challenges the\npractical applicability of LLMs for recommendations. To address this, we\npropose a hybrid task allocation framework that utilizes the capabilities of\nboth LLMs and traditional RSs. By adopting a two-phase approach to improve\nrobustness to sub-populations, we promote a strategic assignment of tasks for\nefficient and responsible adaptation of LLMs. Our strategy works by first\nidentifying the weak and inactive users that receive a suboptimal ranking\nperformance by RSs. Next, we use an in-context learning approach for such\nusers, wherein each user interaction history is contextualized as a distinct\nranking task and given to an LLM. We test our hybrid framework by incorporating\nvarious recommendation algorithms -- collaborative filtering and\nlearning-to-rank recommendation models -- and two LLMs -- both open and\nclose-sourced. Our results on three real-world datasets show a significant\nreduction in weak users and improved robustness of RSs to sub-populations\n$(\\approx12\\%)$ and overall performance without disproportionately escalating\ncosts.",
+    "authors": "Kirandeep Kaur, Chirag Shah",
+    "published": "2024-05-01",
+    "updated": "2024-05-01",
+    "primary_cat": "cs.IR",
+    "cats": [
+        "cs.IR",
+        "cs.HC"
+    ],
+    "label": "Original Paper",
+    "paper_cat": "LLM Fairness",
+    "gt": "Efficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations",
"main_content": "INTRODUCTION Recommendation systems (RSs) have become an integral part of numerous online platforms, assisting users in navigating vast amounts of content to relieve information overload [1]. While Collaborative Filtering based RSs [2] primarily rely on user-item interactions to predict users\u2019 preferences for certain candidate items, the utilization of language in recommendations has been prevalent for decades in hybrid and content-based recommenders, mainly through item descriptions and text-based reviews [3]. Furthermore, conversational recommenders [4] have highlighted language as a primary mechanism for allowing users to naturally and intuitively express their preferences [5]. Deep recommendation models are trained under the Empirical Risk Minimization (ERM) framework that minimizes the loss function uniformly for all training samples. Such models, however, fail to cater to a diverse set of sub-populations, affecting robustness [6\u201312]. Empirical analysis conducted by Li et al. [13] shows that active users who have rated many items receive better recommendations on average than inactive users. This inadvertent disparity in recommendations requires careful scrutiny to ensure equitable recommendation experiences for all users [14]. arXiv:2405.00824v1 [cs.IR] 1 May 2024 \fConference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Trovato et al. On the other hand, Large Language Models (LLMs) like GPT [15], LLaMA [16], LaMDA [17], Mixtral [18] can effectively analyze and interpret textual data, thus enabling a better understanding of user preferences. These foundation models demonstrate remarkable versatility, adeptly tackling various tasks across multiple domains [19\u201321]. However, the field of recommendations is highly domain-specific and requires in-domain knowledge. Consequently, many researchers have sought to adapt LLMs for recommendation tasks [22\u201325]. Authors in [25] outline four key stages in integrating LLMs into the recommendation pipeline: user interaction, feature encoding, feature engineering, and scoring/ranking. The purpose of using LLMs as a ranking function aligns closely with generalpurpose recommendation models. The transition from traditional library-based book searches to evaluating various products, job applicants, opinions, and potential romantic partners signifies an important societal transformation, emphasizing the considerable responsibility incumbent upon ranking systems [26]. Existing works that deploy LLMs for ranking [5, 27\u201334] have proven excellence of LLMs as zero-shot or few-shot re-rankers demonstrating their capabilities in re-ranking with frozen parameters. These works use traditional RSs as candidate item retrieval models to limit the candidate items that need to be ranked by LLM due to a limited context window. Furthermore, Hou et al. [27], Xu et al. [28] interpret user interaction histories as prompts for LLMs and show that LLMs perform well only when the interaction length is up to a few items, demonstrating the ability of LLMs for (near) cold-start users. Since adapting LLMs can raise concerns around economic and efficiency factors, most of these works train RS on entire datasets but randomly sample interaction histories of some users to evaluate the performance of LLMs, questioning the generalizability of results for all users. This leads us to two important research questions. 
\u2022 RQ1: Though LLMs have shown remarkable ranking performance even in zero-shot settings, how can we reduce the high costs associated with adapting LLMs to support practical applicability? \u2022 RQ2: Conventional recommendation systems are cost-effective and can perform well on most users, as shown by previous works; how can we prevent performance degradation on sub-populations? To address these RQs, we propose a task allocation strategy that leverages LLM and RS\u2019s capabilities in a hybrid framework (Fig. 1). Our strategy operates in two phases based on the responsible and strategic selection of tasks for the cost-effective usage of LLMs. First, we identify the users with highly sparse interaction histories on whom the ranking performance of RS is below a certain threshold \ud835\udc61\ud835\udc5d. All such users are termed as weak users. In the second phase, interaction histories of weak users are contextualized using in-context learning to demonstrate user preferences as instruction inputs for LLM. While the strong users receive the final recommendations retrieved by RS, weak users receive the recommendations ranked by LLM if the quality of the ranked list is better than the RS. We test our framework based on collaborative filtering and learning-to-rank recommendation models and our results show the efficacy of our strategy, both with open-source as well as closedsource LLMs, in boosting the model robustness to sub-population and data sparsity and improving the quality of recommendations. For reproducibility and to support research community, our code is available on https://anonymous.4open.science/r/resp-llmsRS/. In short, the following are our contributions in this paper. \u2022 We introduce a novel hybrid task allocation strategy that combines the strengths of LLMs and traditional RSs to improve robustness to subpopulations and data sparsity. \u2022 Our unique method for pinpointing weak users based upon two criteria (user activity and the received recommendation quality below a set threshold) facilitates interventions using LLMs for equitable recommendations. \u2022 Our proposed framework improves the robustness of traditional recommendation models by reducing weak user count, enhancing recommendation quality, and addressing high costs associated with adapting LLMs. \u2022 Our experiments, both on closed-source and open-source LLMs, show the efficacy of our framework in improving the model robustness to sub-populations by (\u224812%) for varying levels of sparsity and reducing the count of weak users significantly. 2 RELATED WORK Robustness in machine learning (ML) targets developing models capable of withstanding the challenges posed by imperfect data in diverse forms [35]. Within the paradigm of recommendations, some existing works developed models resilient to shifts in popularity distribution [36\u201338], distribution disparity in train and test datasets [39, 40], adversarial and data poisoning attacks [41\u201345]. Our work aims to tackle the recommendation model\u2019s robustness to data sparsity [46] and sub-populations [47]. In their research, Li et al. [13] illustrated that RSs excel in catering to active users but fall short in meeting the overall needs of inactive ones. To address this inequality, they proposed a re-ranking technique that reduced the disparity among active and inactive users. 
Their results depict that such post-processing techniques [48\u201350] can either harm the average performance on advantaged users to reduce the disparity or reduce the overall utility of models. Though the in-processing techniques [51\u201353] for improving equitable recommendations across various sub-populations can tackle fairness-utility trade-offs, simply adding regularizer term results in sub-optimal performance [54]. Most of these works have shown disparity and evaluated existing models by grouping users based on their activity, demographics, and preferences. Similarly, Wen et al. [55] developed a Streaming-Distributionally Robust Optimization (S-DRO) framework to enhance performance across user subgroups, particularly by accommodating their preferences for popular items. Different from these, our work first builds upon the existing literature that elicits the issue of performance disparities among active and inactive users and then indicates that though inactive users receive lower-quality recommendations on average, this degradation only affects a subset of inactive users rather than all inactive users. Unlike these works, our framework identifies weak users\u2014 inactive individuals whose preferences traditional recommendation systems struggle to capture effectively. Many researchers have turned to LLMs to address some of these problems because, in recent years, LLMs have proven to be excellent \fEfficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY re-rankers and have often outperformed existing SOTA recommendation models in zero-shot and few-shot settings without requiring fine-tuning. For example, Gao et al. [56] proposed an enhanced recommender system that integrates ChatGPT with traditional RS by synthesizing user-item history, profiles, queries, and dialogue to provide personalized explanations to the recommendations through iterative refinement based on user feedback. AgentCF [31], designed to rank items for users, involves treating users and items as agents and optimizing their interactions collaboratively. While user agents capture user preferences, item agents reflect item characteristics and potential adopters\u2019 preferences. They used collaborative memorybased optimization to ensure agents align better with real-world behaviours. While the retrieval-ranker framework in [29] remains similar to previous works, authors generate instructions with key values obtained from both users (e.g., gender, age, occupation) and items (e.g., title, rating, category). Despite the excellence of LLMs as ranking agents, adapting LLMs can involve processing lengthy queries containing numerous interactions from millions of users. Furthermore, each query can raise various economic and latency concerns. Thus, all these works randomly select a few users from the original datasets to evaluate the performance of LLMs. In practice, this user base can involve many more users, which questions the practical applicability of large models for recommendations. However, some recent studies have shown the efficacy of large language models (LLMs) as re-ranking agents to cater to queries with shorter interaction histories compared to lengthy instructions that constitute hundreds of interactions. For example, Hou et al. [27] trained recommendation systems to generate candidate item sets and then used user-item interactions to develop instructions. 
The authors sorted users' rating histories by timestamp and used in-context learning to design recency-focused prompts. They prompted LLMs to re-rank the candidate items retrieved by the recommendation systems. Their analysis showed decreased performance of LLMs once the candidate item set had more than 20 items. ProLLM4Rec [28] adopted a unified framework for prompting LLMs for recommendation, integrating existing recommendation systems and works that use LLMs for recommendation within a single framework, and provided a detailed comparison of the capabilities of LLMs and recommendation systems. Their empirical analysis showed that while state-of-the-art sequential recommendation models like SASRec [57] improve with a growing number of interactions, LLMs start to perform worse as the number of interactions grows. Furthermore, both of these works sampled some users to evaluate the performance of LLMs due to the high adaptation costs. To investigate the effectiveness of various prompting strategies, Sanner et al. [5] focused on a (near) cold-start scenario where minimal interaction data is available. They used various prompting techniques to provide a natural-language summary of preferences, aiming to enhance user satisfaction through a personalized experience. By exploiting rich positive and negative descriptive content and item preferences within a unified framework, they compared the efficacy of prompting paradigms with large language models against collaborative filtering baselines that rely solely on item ratings. In summary, past works suggest that despite the high costs associated with adapting LLMs for recommendations, these models can significantly outperform existing recommendation models. Moreover, the literature shows the contrasting capabilities of RSs and LLMs: RSs fail to perform well on inactive users due to sparse interaction vectors, while LLMs can be prompted to cater to inactive users in near cold-start settings without requiring any fine-tuning. Building upon these insights, our framework first identifies the weak users whose preferences the RS finds hard to capture accurately, and then uses in-context learning to prompt LLMs to generate recommendations for those users. While past works such as ProLLM4Rec [28], dynamic reflection with divergent thinking within a retriever-reranker framework [33], recency-focused prompting [27], and aligning ChatGPT with conventional point-wise, pair-wise, and list-wise ranking techniques [58] all design prompts with different variations, our main contribution lies in responsible task allocation within recommendation systems, and all such prompting techniques can be used within our framework. In the next section, we discuss our methodology in detail.
3 METHODOLOGY
We begin by providing a formal definition of the problem. We then discuss our framework, which adopts a hybrid structure leveraging the capabilities of both traditional RSs and LLMs: we first identify users for whom RSs do not perform well and then leverage LLMs for these users, demonstrating user preferences using in-context learning.
3.1 Problem Formulation
Consider a recommendation dataset D with $k$ data points. Let $U = \{u_1, u_2, \ldots, u_M\}$ be the set of users, with $|U| = M$ the number of users in D. Let $I = \{i_1, i_2, \ldots, i_N\}$ be the set of all items, with $|I| = N$ the number of items in D.
$D = \{(u_m, i_n, r_{mn}) : m = 1, 2, \ldots, M;\; n = 1, 2, \ldots, N\}$ (1)
Here, the triplet $d_{mn} = (u_m, i_n, r_{mn})$ represents one data point, in which a user $u_m$ provided a rating $r_{mn}$ to an item $i_n$. If a user $u_m$ has rated a set of items, let $[r_{mn}]_{n=1}^{N}$ denote the rating vector, consisting of explicit rating values ranging from 1 to 5 where the user provided a rating and 0 otherwise. Additionally, $\theta_r$ represents the conventional recommendation model. The first step in solving the problem is determining the criteria that categorize a user as weak. This includes ranking users based on the RS performance on each of them, and then understanding user characteristics to categorize extremely weak users. For each weak user, we contextualize the interaction history as a distinct recommendation task and finally allocate these tasks to the LLM.
3.2 Identifying Weak Users
We consider two criteria for identifying weak users for a recommendation model $\theta_r$. First, given $K$ users and their associated rating vectors, we evaluate how well the model ranks the relevant (positive) items above the irrelevant (negative) items. Let $r$ denote the rank of a relevant item and $r'$ the rank of an irrelevant item. Then
$\delta(r < r')$ (2)
denotes an indicator function that outputs one if the rank of the relevant item $r$ is higher than that of the irrelevant item $r'$. Let $N$ denote the total number of items and $R$ the set of all relevant items. Then, similar to Rendle et al. [59], we use the AUC measure to evaluate how hard it was for $\theta_r$ to rank the items preferred by a given user:
$P(u) = \frac{1}{|R|(N - |R|)} \sum_{r \in R} \sum_{r' \in \{1, \ldots, N\} \setminus R} \delta(r < r')$ (3)
Here, $|R|(N - |R|)$ is the number of all possible pairs of relevant and irrelevant items. We acknowledge that various metrics such as NDCG, F1, precision, and recall have been used to measure the ranking quality of recommendation models. However, these metrics place significant importance on the outcomes of the top-k items in the list and completely ignore the tail. For identifying weak users, we require a metric that is consistent under sampling, i.e., if a recommendation model tends to give better recommendations than another on average across all the data, it should still tend to do so even when we only look at a smaller part of the data. The aim of our framework is a clear task distribution; the performance of top-k metrics varies with $k$, and this might raise uncertainty as $k$ varies across users and platforms.
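A minimal sketch of the per-user AUC in Eq. (3): the fraction of (relevant, irrelevant) item pairs that the model ranks correctly. Here `ranks` maps each item to the rank the RS assigned it (1 = best) and `relevant` is the user's positive item set; the data is purely illustrative.

```python
# Sketch of the per-user AUC, Eq. (3).
def user_auc(ranks: dict, relevant: set) -> float:
    irrelevant = [i for i in ranks if i not in relevant]
    pairs = [(r, i) for r in relevant for i in irrelevant]
    correct = sum(1 for r, i in pairs if ranks[r] < ranks[i])
    return correct / len(pairs)

ranks = {"i1": 1, "i2": 2, "i3": 3, "i4": 4, "i5": 5}
print(user_auc(ranks, relevant={"i1", "i4"}))  # 4 of 6 pairs correct -> 0.667
```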
Nevertheless, AUC is the only metric that remains consistent under sampling, and as $k$ reduces, all top-k metrics collapse to AUC. For more details, we refer the reader to [60]. Past works [13] have shown that active users who provide more ratings receive better recommendations than inactive users on average. However, only a few inactive users might individually receive irrelevant recommendations (Fig. 3). Thus, we evaluate each user's activity. Let a user $u$ have rated $|R|$ items out of a total of $N$ items. Then the sparsity index $S_I$ associated with user $u$ can be calculated as:
$S_I(u) = |R| / N$ (4)
If this value falls above a certain threshold $t_s$, the user is considered inactive. Combining this with the weak-user identification, we obtain:
Definition 3.1. Given a dataset D and a recommendation model $\theta_r$, we say that a user $u_m$ is extremely weak if the likelihood of $\theta_r$ being able to rank the relevant items above the irrelevant items is below $t_p$ and the rating vector $[r_{mn}]$ has extremely high sparsity, i.e., above $t_s$:
$P(u_m) \le t_p$ && $S_I(u) > t_s$ (5)
It is important to note that a higher AUC value implies better performance, and the value always lies between 0 and 1. Further, we use $t_s = avg(S_I(D))$, the average sparsity of all users in D, i.e., $avg(S_I(D)) = \frac{1}{m} \sum_{j=1}^{m} S_I(u_j)$, for determining this threshold.
3.3 Designing Natural Language Instructions for Ranking
Closest to our work, Hou et al. [27] formalized the recommendation problem as a conditional ranking task, considering sequential interaction histories as conditions and using the items retrieved by a traditional RS as candidate items. While we also design conditional ranking tasks, our approach differs significantly from theirs: instead of using LLMs as a re-ranking agent for all users, we instruct the LLM with each weak user's preferences (sorted in decreasing order of preference). This technique is detailed below. For each user, we use in-context learning to instruct the LLM about the user-preference conditions and assign it the task of ranking the candidate items. For a user $u$, let $H_u = \{i_1, i_2, \ldots, i_n\}$ depict the user's interaction history sorted in decreasing order of preference, and let $C_u = \{i_1, i_2, \ldots, i_j\}$ be the candidate items to be ranked. Then each instruction can be generated as the sum of conditions and candidate items, i.e.,
$I_u = H_u + C_u$ (6)
In-context learning: We use in-context learning to demonstrate the user's preferences to the LLM through examples. As suggested by Hou et al. [27], providing examples of other users may introduce extra noise if a user has different preferences. Therefore, we sort every weak user's preferences based on explicit user ratings.
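Before the template example, a minimal sketch of the weak-user test in Definition 3.1. Note that the paper's prose in Section 3.4 ("falls below t_s") and Eq. (5) ("S_I(u) > t_s") state the sparsity condition in opposite directions; the sketch follows Eq. (5). The thresholds and counts are illustrative.

```python
# Sketch of the sparsity index, Eq. (4), and the weak-user test, Eq. (5).
def sparsity_index(num_rated: int, num_items: int) -> float:
    return num_rated / num_items           # S_I(u) = |R| / N

def is_extremely_weak(p_u: float, s_u: float,
                      t_p: float, t_s: float) -> bool:
    return p_u <= t_p and s_u > t_s        # P(u) <= t_p && S_I(u) > t_s

s_u = sparsity_index(num_rated=38, num_items=1682)
print(is_extremely_weak(p_u=0.31, s_u=0.97, t_p=0.5, t_s=0.95))  # True
```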
For example, \"User {\ud835\udc62\ud835\udc60\ud835\udc52\ud835\udc5f_\ud835\udc56\ud835\udc51} liked the following movies in decreasing order of preference where the topmost item is the most preferred one: 1. Harry Potter, 2. Jurassic Park ...\". This forms the condition part of the instruction. We then select items which served as test items for recommendation models as candidate items and instruct LLM to rank them in decreasing order of preference as \"Now, rank the following items in decreasing order of preference such that the top most movie should be the most preferred one: Multiplicity, Dune ...\". It is important to note that while the presentation order in conditions plays a significant role in demonstrating user preferences to LLM, we deliberately shuffle the candidate items to test the ability of LLM to rank correctly. Since LLMs can generate items out of the set, we specially instruct to restrict recommendations to the candidate set. Fig. 2 shows the final template of the instruction given to LLM for a particular user. We use the same template for all identified weak users to contextualize their past interactions into a ranking task. 3.4 Our Framework This section discusses the workflow adopted by our framework as depicted in Fig. 1 and corresponding algorithm 1. Initially, the model takes input as the training D\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5band test dataset D\ud835\udc61\ud835\udc52\ud835\udc60\ud835\udc61, a set of users U, a recommendation model \ud835\udf03\ud835\udc5f, large language model \ud835\udf03\ud835\udc59and two thresholds: sparsity threshold \ud835\udc61\ud835\udc60and performance threshold \ud835\udc61\ud835\udc5dwhich depict the minimum sparsity and performance values for user to be classified as strong user. It is important to note that splitting data will not yield a mutually exclusive set of users in both sets, but item ratings for each user in D\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5bwill differ from those in D\ud835\udc61\ud835\udc52\ud835\udc60\ud835\udc61. \fEfficient and Responsible Adaptation of Large Language Models for Robust Top-k Recommendations Conference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Figure 2: Instruction template for contextualizing interaction histories of weak users. Table 1: Datasets statistics ML-1M ML-100k Book-Crossing # Users 6,041 943 6,810 # Items 3,952 1,682 9,135 # Interactions 1,000,209 100,000 114,426 Sparsity 95.81% 93.7% 99.82% Domain Movies Movies Books The algorithm begins by training the recommendation model \ud835\udf03\ud835\udc5fon the training set D\ud835\udc61\ud835\udc5f\ud835\udc4e\ud835\udc56\ud835\udc5band provides ranked items for all users. Using D\ud835\udc61\ud835\udc52\ud835\udc60\ud835\udc61, we test the ranking ability of the model for each user by evaluating P(\ud835\udc62\ud835\udc5a) using Eq. 3. Further, each user is also assigned a sparsity score S\ud835\udc3c(\ud835\udc62) evaluated using Eq. 4. If P(\ud835\udc62) has a value less than \ud835\udc61\ud835\udc5dand the sparsity index S\ud835\udc3c(\ud835\udc62) for a particular user falls below \ud835\udc61\ud835\udc60, the user is termed as a weak user. While previous works have shown that, on average, inactive users receive poor performance, we pinpoint weak users by evaluating both the sparsity and performance. 
For all such weak users, we convert their rating histories from $D_{train}$ into conditions $H_u$ using in-context learning, and use the test items as candidate items $C_u$ for testing purposes; in practice, these candidate items can be replaced by unobserved items. The final instructions are generated by combining conditions and candidate items, as depicted by Eq. 6. These instructions are given to the LLM, which provides a ranked list of items for each user. The strong users receive the recommendations ranked by the conventional recommendation model, while the weak users receive the final ranked lists generated by the LLM.
Algorithm 1: Hybrid LLM-RecSys Algorithm for Ranking
1: Input: $D_{train}$: training dataset; $D_{test}$: test dataset; U: set of users; $S_I$: sparsity index for all users; $\theta_r$: recommendation algorithm; $\theta_l$: large language model; $t_s$: sparsity threshold; $t_p$: performance threshold.
2: Output: ranked_pred_strong: ranked lists of items for strong users; ranked_pred_weak: ranked lists of items for weak users.
3: ranked_pred <- $\theta_r(D_{train})$
4: for each user $u_m \in U$ do
5:   calculate $P(u_m)$ using Eq. 3
6:   calculate $S_I(u_m)$ using Eq. 4
7:   if $P(u_m) < t_p$ && $S_I(u_m) > t_s$ then
8:     $U_{weak}$ <- $u_m$
9:   else
10:    $U_{strong}$ <- $u_m$
11:    ranked_pred_strong <- ranked_pred[$u_m$]
12:  end if
13: end for
14: for each $u_i \in U_{weak}$ do
15:   generate instruction $I_{u_i}$ using Eq. 6
16:   ranked_list_{u_i} = $\theta_l(I_{u_i})$
17:   ranked_pred_weak <- ranked_list_{u_i}
18: end for
4 EXPERIMENTS
This section discusses our experimental setup, with details of the datasets and models used, followed by the implementation details of these models and the various metrics used. We finally present empirical results and a comparative analysis of various recommendation models and LLMs.
4.1 Experimental Setup
4.1.1 Datasets. To test the effectiveness of our framework, we conducted experiments on three real-world datasets: ML-1M (https://grouplens.org/datasets/movielens/1m/), ML-100k (https://grouplens.org/datasets/movielens/100k/), and Book-Crossing (B-C) (http://www2.informatik.uni-freiburg.de/~cziegler/BX/). Both ML-100k and ML-1M are movie-rating datasets, and Book-Crossing is a book-rating dataset. We select three datasets with varying levels of sparsity for evaluating robustness to data sparsity: ML-100k has the least sparsity and Book-Crossing the highest (for exact values, refer to Table 1). All these datasets contain explicit user preferences in the form of ratings, ranging from 0-5 for the movie datasets and 0-10 for the book dataset. We do not filter out users from ML-1M and ML-100k, as each user has rated at least 20 movies in both datasets. For consistency, we filter out users with fewer than 20 ratings from the Book-Crossing dataset. While both movie datasets allow clustering based on sensitive attributes like age and gender, this paper aims to boost performance on all weak users irrespective of sensitive features. Thus, following the protocol adopted by [13], we divided users based on their activity, i.e., the number of items rated. Any user who has rated fewer items than a certain threshold $t_s$ is termed an inactive user, and all those above this threshold are active users. We calculated the number of items rated on average by all users and used this average as the threshold; the threshold can always vary and be set to different values per application. Table 1 presents the statistics of all three datasets.
4.1.2 Baselines and Models. Our hybrid framework uses both traditional recommendation systems and LLMs. Thus, we include two types of recommendation models: (i) collaborative-filtering based: Neural Collaborative Filtering (NCF) [61] as well as ItemKNN [62]; and (ii) a learning-to-rank model: Bayesian Personalized Ranking (BPR) [59]. While these models identify weak users and generate candidate items, LLMs are further deployed to improve the performance on such users. We use both an open LLM (Mixtral-8x7b-instruct) and a closed-source one (GPT-3.5-turbo) to test the capability of the proposed framework. It is important to note that the collaborative filtering models are mostly used to capture the long-term preferences of users.
[Figure 3: AUC vs. sparsity scatter plots illustrating the per-user performance (AUC) for all users in the ML1M, ML100k and Book-Crossing (B-C) datasets under three different algorithms; panels (a)-(c) show ItemKNN, NCF and BPR on ML1M, (d)-(f) on ML100k, and (g)-(i) on B-C.]
We acknowledge that existing works (see Section 2) have used sequential recommendation models for comparing the performance of LLMs. These works also use recommendation models as candidate retrieval models and then use LLMs to rerank the candidate items. However, sequential models are used to predict the next item according to the recently bought items.
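A compact, runnable sketch of Algorithm 1 above: route strong users to the RS ranking and weak users to LLM-ranked lists built from their contextualized histories. The dataclass fields, threshold values, and the two stand-in functions for theta_r and theta_l are illustrative stubs, not the authors' implementation.

```python
# Sketch of the hybrid task-allocation loop (Algorithm 1).
from dataclasses import dataclass

@dataclass
class User:
    uid: str
    auc: float        # P(u), Eq. (3), computed from the RS's rankings
    s_index: float    # S_I(u), Eq. (4)
    candidates: list  # candidate items C_u retrieved by the RS

def rs_rank(user):   # stand-in for theta_r's ranked list
    return user.candidates

def llm_rank(user):  # stand-in for theta_l ranking the instruction I_u
    return sorted(user.candidates)

def hybrid_rank(users, t_p=0.5, t_s=0.95):
    final = {}
    for u in users:
        if u.auc <= t_p and u.s_index > t_s:  # weak user, Definition 3.1
            final[u.uid] = llm_rank(u)
        else:                                  # strong user keeps RS output
            final[u.uid] = rs_rank(u)
    return final

users = [User("u1", 0.31, 0.97, ["D", "C"]),
         User("u2", 0.82, 0.41, ["E", "F"])]
print(hybrid_rank(users))  # u1 -> LLM-ranked, u2 -> RS-ranked
```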
We test our framework mainly on long-term user preferences and use collaborative filtering models not only for candidate-item retrieval but also for recommending top-k items to strong users. However, we believe that any existing model adopting this retriever-reranker strategy can adopt our framework. For space constraints, we present an evaluation of our framework only on NCF, ItemKNN and BPR. In line with existing literature [27, 28], we design instructions by randomly sampling 20 rated items to demonstrate the user's preferences to the LLM. Furthermore, while existing works do not discuss the responsible adaptation of LLMs, the underlying retriever-reranker task of such models remains consistent and can thus use our framework.
4.1.3 Implementation details. For ease of reproducibility, we use the open-source recommendation library RecBole [63] for implementing all recommendation models, and API calls for access to the LLMs (https://platform.openai.com/docs/api-reference, https://www.llama-api.com/). Each dataset is split into train (80%), test (10%) and validation (10%) sets. We carefully use the validation set to tune all recommendation models' hyperparameters. For BPR, we search for the optimal learning rate in [5e-5, 1e-4, 5e-4, 7e-4, 1e-3, 5e-3, 7e-3], and in [5e-7, 1e-6, 5e-6, 1e-5, 1e-4, 1e-3] for NCF. Additionally, we use [64, 32, 16] as the MLP hidden sizes for all layers and search for the optimal dropout probability within [0.0, 0.1, 0.3] for NCF. The two hyperparameters for ItemKNN are $k$ (neighborhood size), searched in [10, 50, 100, 200, 250, 300, 400], and shrink (a normalization parameter used in the cosine distance), searched in [0.0, 0.1, 0.5, 1, 2]. We adopt the protocol of the recently released toolkit RGRecSys [64] for evaluating robustness to sub-populations using NDCG and AUC. We emphasize that AUC is used to measure the hardness associated with each user for a given recommendation model because of its consistency property. We use the popular CatBoost library (https://github.com/catboost/), which offers an AUC implementation for ranking, and also report final NDCG@10 scores. Furthermore, we set the temperature to 0 in GPT-3.5-turbo to minimize the generation of out-of-list items and hallucinations. However, in our observations, when the temperature is set to 0 in Mixtral-8x7b-instruct, the model outputs the list in the same order in which it was given; hence, we set the temperature to 1 and removed items that were not originally present in the candidate list. We now discuss our empirical inferences from experiments conducted following these details.
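The RS training setup in 4.1.3 might be sketched as follows with RecBole's quick-start API: an 80/10/10 split and a small learning-rate search for BPR. The config keys follow RecBole's documented names, but treat the exact values and evaluation settings as illustrative rather than the authors' exact configuration.

```python
# Sketch of a RecBole-based hyperparameter search for BPR.
from recbole.quick_start import run_recbole

lr_grid = [5e-5, 1e-4, 5e-4, 7e-4, 1e-3, 5e-3, 7e-3]  # BPR grid from the text

for lr in lr_grid:
    run_recbole(
        model="BPR",
        dataset="ml-100k",
        config_dict={
            "learning_rate": lr,
            # 80/10/10 random-ordering split, full-ranking evaluation
            "eval_args": {"split": {"RS": [0.8, 0.1, 0.1]},
                          "order": "RO", "mode": "full"},
            "metrics": ["NDCG"],
            "topk": [10],
        },
    )
```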
Table 2: Overall comparison of ranking quality, measured using AUC and NDCG@10, for two collaborative-filtering models (NCF, ItemKNN) and one learning-to-rank model (BPR), alone and within our framework with a closed-source LLM (GPT-3.5-turbo) and an open-source LLM (Mixtral-8x7b-instruct).
ML1M: Method | AUC | AUC (Weak) | NDCG@10 | NDCG@10 (Weak)
ItemKNN | 0.47032 | 0.23776 | 0.66792 | 0.58226
ItemKNN + GPT-3.5-turbo | 0.58142 | 0.51776 | 0.82643 | 0.70352
ItemKNN + Mixtral-8x7b-instruct | 0.56035 | 0.51708 | 0.70147 | 0.69276
NCF | 0.47805 | 0.22945 | 0.78795 | 0.59801
NCF + GPT-3.5-turbo | 0.58935 | 0.52122 | 0.80317 | 0.71723
NCF + Mixtral-8x7b-instruct | 0.57211 | 0.52100 | 0.79178 | 0.70741
BPR | 0.57957 | 0.37824 | 0.88833 | 0.73998
BPR + GPT-3.5-turbo | 0.65397 | 0.51997 | 0.90098 | 0.82117
BPR + Mixtral-8x7b-instruct | 0.64174 | 0.51972 | 0.89742 | 0.82998
ML100K: Method | AUC | AUC (Weak) | NDCG@10 | NDCG@10 (Weak)
ItemKNN | 0.45616 | 0.24778 | 0.66792 | 0.58226
ItemKNN + GPT-3.5-turbo | 0.59781 | 0.51953 | 0.82643 | 0.82598
ItemKNN + Mixtral-8x7b-instruct | 0.59972 | 0.52327 | 0.70147 | 0.82438
NCF | 0.48311 | 0.25182 | 0.78795 | 0.67734
NCF + GPT-3.5-turbo | 0.60831 | 0.50814 | 0.80317 | 0.82603
NCF + Mixtral-8x7b-instruct | 0.61174 | 0.50903 | 0.79178 | 0.82306
BPR | 0.51629 | 0.17020 | 0.88833 | 0.71944
BPR + GPT-3.5-turbo | 0.6387 | 0.51435 | 0.90098 | 0.82452
BPR + Mixtral-8x7b-instruct | 0.64910 | 0.52135 | 0.89742 | 0.81761
Book-Crossing: Method | AUC | AUC (Weak) | NDCG@10 | NDCG@10 (Weak)
ItemKNN | 0.43309 | 0.25909 | 0.75197 | 0.65098
ItemKNN + GPT-3.5-turbo | 0.61629 | 0.49713 | 0.86212 | 0.77101
ItemKNN + Mixtral-8x7b-instruct | 0.55203 | 0.47215 | 0.85904 | 0.76183
NCF | 0.51852 | 0.29004 | 0.78370 | 0.66148
NCF + GPT-3.5-turbo | 0.61946 | 0.50513 | 0.88219 | 0.78133
NCF + Mixtral-8x7b-instruct | 0.59901 | 0.49897 | 0.86254 | 0.78001
BPR | 0.53310 | 0.24426 | 0.80405 | 0.70289
BPR + GPT-3.5-turbo | 0.62145 | 0.50998 | 0.88173 | 0.81933
BPR + Mixtral-8x7b-instruct | 0.61625 | 0.50081 | 0.87798 | 0.80284
4.2 Empirical Evaluation
4.2.1 Comparative analysis. The first phase begins by identifying the inactive users. For this, we calculate the average sparsity of all users in the dataset and identify users above this threshold as inactive. However, one can use different values for this threshold; e.g., [13] marked only the top 20% most sparse users as inactive. We then evaluate the AUC score using Eq. 3 to measure the performance of the RS on all these users. In line with the findings of Li et al. [13], RSs perform significantly better on active users than on inactive users. For an instance-by-instance analysis of every user, we plot AUC scores against the sparsity index, as shown in Fig. 3, for all three datasets using three different recommendation algorithms: ItemKNN, NCF, and BPR. While ItemKNN and NCF are collaborative filtering algorithms, the overall scatter plot for BPR shows better AUC scores than the other two; this follows from the inherent nature of learning-to-rank models like BPR, which rank user preferences better. The figure shows that although RSs perform poorly on inactive users on average, not all inactive users receive poor-quality recommendations. We thus use Definition 3.1 (Eq. 5) to identify such users and mark them as weak. It might be interesting to explore why not all inactive users receive poor performance; we leave this exploratory study for future work. For the second phase, we design instructions for these weak users using the approach discussed in Section 3. Our results in Table 2 show that LLMs perform significantly well on these users: using an LLM together with a base RS model yields the best results for all three datasets. Our results show improvements in both AUC and NDCG@10 for weak users, demonstrating improved robustness to the sub-population of weak users, which in turn leads to improved overall ranking quality. We also highlight that previous works like [27, 34] report that closed-source models perform much better than open-source ones. However, understanding the properties of users for which LLMs inherently perform well (as our mechanism for finding weak users provides) and responsibly assigning tasks to large models improves performance with open-source as well as closed-source models. Our results show that Mixtral-8x7b-instruct can perform almost equally well on weak users across all datasets and base models.
Our results show that Mixtral-8x7b-instruct performs almost equally well on weak users across all datasets and base models. Furthermore, as observed for the ML100k dataset, this open-source model can outperform GPT-3.5-turbo when evaluated on AUC. It should be noted that AUC mainly evaluates the discriminatory ability of a model to rank positive items over negative ones, whereas NDCG focuses on the user\u2019s satisfaction with the ranked list, considering both relevance and position. The reason for this can be associated with dataset sparsity: as shown in Table 1, the sparsity of the ML100k dataset is lower (\u224893%) than that of the other two datasets. While Mixtral is a good choice when datasets are small and more dense, GPT-3.5-turbo performs well for extremely sparse datasets. Yet, in either case the performance margin between these two LLMs is not significant, and thus even open-source models can give comparable performance. Nevertheless, using the GPT model yields the best NDCG@10 scores for all datasets. 4.2.2 Reduction in weak user count. To analyze the variation in the count of weak users, we took the weak users identified in the first phase, contextualized their interactions, gave them to the LLMs, and applied the same threshold t_p as the basis for comparison. We evaluated AUC on the rankings obtained by the LLM for these users; a few users continued being hard even for the LLM, with AUC still below t_p. Fig. 4 shows that, when used with large models, the count of weak users in recommendation systems drops significantly. In highly sparse datasets like Book-Crossing and ML1M, GPT-3.5-turbo reduced the number of weak users by \u224887% and Mixtral-8x7b-instruct by \u224885%. By contrast, when the dataset is dense, like ML100k, Mixtral-8x7b-instruct can reduce the count by \u224899% and the closed-source model by \u224888%. While the reduction ability of GPT-3.5-turbo remains consistent over all datasets, the open-source model yields better performance for less sparse datasets; either way, the robustness of the RS to sub-populations improves. In addition, we noted that a single query takes \u22488 seconds with GPT-3.5-turbo and \u224811 seconds with Mixtral-8x7b-instruct, which shows the high processing time and inference latency of each user query. It is thus crucial to use these models responsibly by identifying what they are good at. Consider, for example, the smallest dataset, ML100k, which consists of 943 users: our strategy for identifying weak users yields only 330 weak users (worst case, under ItemKNN), which leads to an overhead of 2,640 seconds when using GPT-3.5-turbo in addition to the training time of the base RS models, significantly less than the 7,544 seconds required if the LLM were used for all users.
Figure 4: Comparative analysis of the reduction in the count of weak users ((a) ML1M, (b) ML100k, (c) Book-Crossing).
5 DISCUSSION In this work, we implemented a novel approach for the responsible adaptation of LLMs to ranking tasks. As suggested by Burnell et al. [65], the involvement of AI in high-stakes decision-based applications (like ranking models for job recommendations) requires instance-by-instance evaluation instead of aggregated metrics for designing responsible AI models. Our results in Fig. 3 show that many inactive users receive recommendations of poor quality from traditional RSs. Some inactive users still receive recommendations comparable to the active users, which might be because their high similarity scores with active users let existing models capture their preferences effectively; we leave this as an exploratory study for future work. However, the overall performance scores on active users remain better than those on inactive users. Building upon these weak instances, our framework emphasizes instance-by-instance evaluation of users. While we group users based on activity and then evaluate the performance on inactive users, our approach pinpoints weak users whose preferences remain hard for traditional RSs to capture effectively. We believe that our framework inherently addresses the issue of group fairness. Though we do not group users based on demographics, our framework can also be extended to such scenarios: user demographics, instead of activity, can be used to group users, and then, within the marginalized groups, the interaction histories of users who receive poor performance can be contextualized and given to the LLM. Irrespective of demographics, however, our framework mainly addresses robustness to data sparsity and to the sub-population of weak users, which inherently tackles the fairness issue. The framework further helps reduce the number of queries that need to be given to the LLM. Since most existing works (refer to Section 2) randomly select a few users out of all the users present in the dataset to evaluate the performance of LLMs, our framework provides a systematic way of selecting the users for which LLMs should be used. While leveraging the capabilities of LLMs for weak users, our work emphasizes the importance of low-cost traditional RSs as well. We also observed that in some cases (Fig. 4) the LLM might not be able to perform well on every weak user. This opens up new research opportunities for understanding similarities and differences among the identified weak users on which the LLM does and does not perform well. Further, one can devise various prompting strategies to help the model capture the preferences of extremely weak users effectively; past works have developed many prompting strategies, which can all be tested to observe which remain effective for which types of users. Nevertheless, the main goal of this paper remains to emphasize the importance of responsible adaptation of LLMs by strategically selecting tasks for which these models inherently perform well. It is also important to note that for weak users we still obtain candidate items from the traditional RS, as has been done in most past works (refer to Section 2). This helps reduce the candidate set from thousands of unrated items to a few, which are then given to the LLM to rank. While this approach ensures that the results obtained by the RS for weak users are utilized for generating candidate items instead of being discarded directly, thus maximizing the usage of RSs even for weak users, it has a limitation: traditional RSs perform worse on these users, so the candidate items might not capture the true preferences of weak users, and when we give these candidate items to the LLM for ranking, the results might deviate further from the true preferences. This may be one reason why LLMs do not perform well on all weak users. One can thus further investigate the relation between candidate items and the performance of LLMs on certain users. If the candidate items are already non-preferred by the user, the LLM might inherently find it difficult to perform well.
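To illustrate the retrieval-reranker interface discussed above, here is a sketch of how an instruction of the kind described in Section 4.1 (20 randomly sampled rated items as preference context, plus an RS-generated candidate list) might be assembled for a weak user. The prompt wording and helper name are illustrative stand-ins, not the paper's exact template.

```python
import random

def build_rerank_prompt(rated_items, candidates, k=10, n_context=20):
    """Assemble a reranking instruction for one weak user.

    rated_items: list of (item_title, rating) pairs from the user's history.
    candidates: item titles retrieved by the base RS (e.g., BPR).
    """
    context = random.sample(rated_items, min(n_context, len(rated_items)))
    history = "\n".join(f"- {t} (rated {r}/5)" for t, r in context)
    shortlist = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(candidates))
    return (
        "The user has rated the following items:\n"
        f"{history}\n\n"
        f"Rank the candidate items below by how well they match this user's "
        f"preferences. Return only the top {k} titles from the candidate "
        "list, one per line, without adding new items.\n\n"
        f"Candidates:\n{shortlist}"
    )
```

Consistent with the implementation details above, any generated titles not present in the original candidate list would be filtered out after the LLM call (as the paper does for Mixtral-8x7b-instruct at temperature 1).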
Our work thus represents a foundational step towards responsibly adapting LLMs while emphasizing the importance of traditional models, particularly focusing on addressing the challenges posed by sub-populations with sparse interaction histories. Our instance-by-instance evaluation approach, inspired by the imperative highlighted by recent studies of high-stakes decision-based AI applications, underscores the necessity of a nuanced understanding of individual user needs and preferences. While our framework emphasizes the importance of leveraging traditional recommendation systems alongside LLMs, we acknowledge the need to further explore the performance variations among weak users and the impact of candidate item selection on LLM effectiveness. Moving forward, our work lays the groundwork for continued research into refining the adaptation of LLMs, ensuring their responsible deployment across diverse user populations and application scenarios. 6 CONCLUSION In this paper, we presented a hybrid framework that aims to improve the robustness of RSs to sub-populations by leveraging LLMs. Our approach first utilizes user activity to identify inactive users and then measures the performance of various RSs on these users to pinpoint the weak users on which RSs find it hard to perform well. In doing so, this paper represents a novel stride towards improving robustness to sub-populations (irrespective of sensitive attributes) in RSs through efficient and responsible adaptation of LLMs, without requiring any fine-tuning. While we evaluated our framework on the long-term preferences of users using various collaborative filtering and learning-to-rank models, it can be extended to other recommendation models. Our work particularly examines the importance of evaluating various user properties in the responsible adaptation of generative models within the recommendation domain. This paper opens numerous research directions for further exploration, including developing prompting strategies for extremely weak users on which LLMs cannot perform well. We further aim to explore other factors that can aid the responsible adaptation of large models and improve robustness and fairness in recommendation systems."
}
title_10K/test_title_short_2405.00843v1.json
ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.00843v1",
"title": "Can a Hallucinating Model help in Reducing Human \"Hallucination\"?",
"abstract": "The prevalence of unwarranted beliefs, spanning pseudoscience, logical\nfallacies, and conspiracy theories, presents substantial societal hurdles and\nthe risk of disseminating misinformation. Utilizing established psychometric\nassessments, this study explores the capabilities of large language models\n(LLMs) vis-a-vis the average human in detecting prevalent logical pitfalls. We\nundertake a philosophical inquiry, juxtaposing the rationality of humans\nagainst that of LLMs. Furthermore, we propose methodologies for harnessing LLMs\nto counter misconceptions, drawing upon psychological models of persuasion such\nas cognitive dissonance theory and elaboration likelihood theory. Through this\nendeavor, we highlight the potential of LLMs as personalized misinformation\ndebunking agents.",
"authors": "Sowmya S Sundaram, Balaji Alwar",
"published": "2024-05-01",
"updated": "2024-05-01",
"primary_cat": "cs.AI",
"cats": [
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Agent",
"gt": "Can a Hallucinating Model help in Reducing Human \"Hallucination\"?",
"main_content": "Introduction Unwarranted beliefs, including pseudoscience and conspiracy theories, represent intriguing phenomena within the human psyche, characterized by steadfast adherence to ideas despite lacking empirical support. The rami\ufb01cations of such beliefs span a spectrum from benign to profoundly detrimental. Consequently, their origins and mechanisms of modi\ufb01cation have been extensively examined in the disciplines of philosophy and psychology (Torres et al. (2020), Torres et al. (2023), Huete-P\u00b4 erez et al. (2022), Gershman (2022), Maarten Boudry & Pigliucci (2015)). This study delves into the comparison between humans and large language models (LLMs) within this framework, and investigates the potential of LLMs as agents of persuasion through a philosophical lens. The consequences of embracing and propagating unwarranted beliefs have yielded numerous adverse outcomes, ranging from debilitating \ufb01nancial debt and fraudulent schemes to, in extreme cases, loss of life. Despite the documented and acknowledged repercussions, a distressingly high proportion of individuals continue to endorse such beliefs. A 2005 media survey revealed that three out of four Americans subscribed to unwarranted notions, encompassing areas such as paranormal phenomena, astrology, telepathy, and conspiracy theories. (Waymire, 2019). From the lens of epistemology (Steup & Neta, 2005), there exists a fundamental difference between knowledge and belief. Knowledge is attributed to an agent only when there is substantial evidence supporting its accuracy, whether as a direct observation or a derived inference. In contrast, a belief may lack any basis in truth. It is crucial to underscore that the objective of this inquiry is to classify any belief as unwarranted if it lacks substantiated evidence within the collective knowledge of humanity, rather than solely relying on the perspective of the individual agent. In other words, the ignorance of the individual agent does not negate the classi\ufb01cation of a belief as unwarranted (Hansson, 2008). Research into the underlying causes of such behavior, despite advancements in human critical thinking, yields varying conclusions regarding its origins. Nonetheless, there is consensus among studies that a multitude of unwarranted beliefs stem from similar cognitive patterns (Lobato et al., 2014). This observation can be elucidated by recognizing that not all intuitive processes are devoid of merit, and conversely, not all scienti\ufb01c beliefs are 1 \fPreprint. Under review. immutable truths at any given moment. This conceptual framework \ufb01nds expression in the renowned doctrine of \u201ddual process theory\u201d (Kahneman, 2013), which posits that intuition and reasoning arise from distinct cognitive mechanisms. Consequently, individuals endowed with robust scienti\ufb01c reasoning abilities may still harbor pseudo-scienti\ufb01c beliefs, in\ufb02uenced by cultural, historical, media, and societal factors (Impey et al., 2011). This phenomenon is also investigated as the demarcation problem, which entails determining the criteria for distinguishing science from non-science (Resnik, 2000). Given the multitude of models positing causes, we opt to focus on a subset comprising cognitive biases and logical fallacies for our investigation. Cognitive biases represent patterns of thought that in\ufb02uence judgment and decision-making, whereas logical fallacies denote errors in the art of argumentation (Bennett, 2012). 
Exploring the origins of unwarranted beliefs brings us closer to developing persuasive models capable of influencing individuals. Such cognitive frameworks find application across various domains including marketing and politics. In our inquiry, we focus on cognitive dissonance theory (Festinger, 1962) and elaboration likelihood theory (Cacioppo et al., 1986). Cognitive dissonance theory posits that discomfort or dissonance, stemming from the reluctance to abandon a belief despite contradictory evidence, serves as a potent tool of persuasion. Conversely, elaboration likelihood theory proposes two routes to persuasion: one characterized by systematic analysis and reasoning, and the other by heuristic processing influenced by factors like social consensus, brand recognition, and celebrity endorsements. In this work, we first examine the level of unwarranted beliefs that LLMs harbor using psychometric tests that comprise natural language questions, and contrast it with the average human belief levels. Although prior studies have examined the performance of large language models (LLMs) in relation to these constructs (Koo et al. (2023), Li et al. (2024)), the correlation between unwarranted beliefs and these fallacies remains unexplored. Furthermore, we compare and contrast human and LLM cognitive models of belief and present ways of using the LLM as a persuasion agent to challenge unwarranted beliefs of humans. All the data and LLM prompts are available in the appendix and supplementary files. In essence, our aim is to mitigate human hallucinations through the utilization of LLMs. 2 Human Hallucinations: Unwarranted Beliefs Reasoning about beliefs is known as doxastic logic (Lindstr\u00f6m & Rabinowicz, 1999) in the field of epistemology. Numerous models of belief reasoners have been proposed (Smullyan, 1986), with an accurate reasoner epitomizing the perfect skeptic. Such an agent only entertains a proposition if it holds true. This concept can be represented using propositional logic and the belief operator as follows: \u2200p : Bp \u2192 p. Humans are, unfortunately, not perfect reasoners. Most humans are consistent reasoners. That is, if they believe a proposition, they do not believe its negation (\u00ac\u2203p : Bp \u2227 B\u00acp, or \u2200p : Bp \u2192 \u00acB\u00acp). For a consistent reasoner, epistemology suggests rational steps for belief revision and/or updating (Peppas & Williams, 1995) in the face of new evidence. The foundation of a human\u2019s initial belief system is intricately intertwined with their childhood experiences. The multifaceted socio-cultural and emotional landscapes encountered during upbringing contribute to the formation of the initial set of propositions, alongside inherited models of knowledge. Addressing this complexity in belief formation are information processing models (Wyer & Albarracin, 2005), which diverge from linear frameworks that conceptualize belief sources as a simple sum of factors, or conditional inference models wherein beliefs are constructed based on probabilities assigned by past experiences stored in memory. We assume the information processing model of belief formation as it highlights the complex factors that may differentiate a human from a machine. As a working hypothesis, we propose that LLMs, lacking inherent notions of self or emotions, may be immune to the influence of emotional or socio-cultural factors.
However, given their training on human-derived data potentially imbued with such notions, we approach this belief cautiously. With both outcomes conceivable, we assess popular LLMs using a psychometric test. 3 LLMs on PEUBI After belief formation and revision, we require a tool to assess an individual's belief state at a specific moment. For this process, we take the help of psychometry. Psychometry is the theory and technique of measurement in the field of psychology (Kaplan & Saccuzzo, 2001). In the recent past, there have been quite a few psychometric tests suggested for studying such unwarranted beliefs (Huete-P\u00e9rez et al. (2022), Boudry & Pigliucci (2015), Torres et al. (2023), Stanovich et al. (2016)). For our investigation, we have selected the Popular Epistemically Unwarranted Beliefs Inventory (PEUBI) (Huete-P\u00e9rez et al., 2022), an open-source psychometric tool which has comprehensive coverage of unwarranted beliefs. Consisting of 36 questions graded on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree), PEUBI offers a diverse range of belief inquiries in the areas of pseudoscience and conspiracy theories. We chose three LLMs for our study: the \u201cchat\u201d versions of GPT-3.5 (Achiam et al., 2023), GPT-4 (Achiam et al., 2023) and Gemini (Team et al., 2023), with default settings. We prompted the LLMs with the same text as in the psychometric test, with no additional information, in a single iteration (zero-shot processing); see Appendix A.2. Details of individual inputs and outputs are available in the supplementary files. In Table 1, we present the results of our study, where we record the mean human response along with the LLM outputs. Through the analysis of the outputs, we found GPT-3.5 to be conservative and inconsistent. GPT-4 tends to take extreme options (either 1 or 5) but tends to argue towards the premise. Both GPT-3.5 and GPT-4 suggest that these beliefs may be cultural. GPT-4 tends to assert against conspiracy theories more strongly than cultural beliefs. GPT-3.5 seems to be more pragmatic with cultural beliefs and suggests a value of 3. Gemini also offers more information and labels the kind of unwarranted belief. Even though all these LLMs differ in their scope, they did perform reasonably better than the mean human performance. 4 Under the Hood The performance of LLMs on the psychometric tests is consistently above the average human performance. This phenomenon is strikingly interesting and invites further experimentation. There are numerous studies exploring the causes, correlations and mechanisms of unwarranted beliefs (Rizeq et al. (2021), Torres et al. (2020), Stanovich et al. (2016), etc.). We make a qualitative study of a few logical fallacies and cognitive biases on the LLMs, to explore their behavior. 4.1 Logical Fallacies and Cognitive Biases There is growing interest in the field of assessing the robustness of LLMs to cognitive biases (Koo et al., 2023) and logical fallacies (Li et al., 2024). The studies underscore the inconsistency in reasoning exhibited by LLMs and their susceptibility to significant biases, cautioning against their indiscriminate use in various applications. This observation aligns with the notion that although these models lack emotional and social contexts inherently, their training on data influenced by these constructs can introduce biases. Despite these inconsistencies in reasoning, LLMs demonstrated moderate proficiency on the PEUBI benchmark.
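A minimal sketch of how such a zero-shot psychometric run could be scored is given below. The `query_llm` callable is a placeholder for whichever chat API is used (the paper queries GPT-3.5, GPT-4, and Gemini with default settings); the prompt wording is an assumption modeled on the paper's description, not its exact text.

```python
import re

def administer_peubi(items, query_llm):
    """Ask an LLM to rate each PEUBI statement on a 1-5 agreement scale.

    items: list of PEUBI statements (36 in the full inventory).
    query_llm: callable str -> str wrapping some chat-completion API
    (a placeholder; actual client code depends on the provider).
    """
    scores = {}
    for item in items:
        prompt = (
            "Rate your agreement with the following statement on a scale "
            "from 1 (Strongly Disagree) to 5 (Strongly Agree). "
            f"Answer with a single number.\n\nStatement: {item}"
        )
        reply = query_llm(prompt)
        match = re.search(r"[1-5]", reply)
        scores[item] = int(match.group()) if match else None  # unparseable
    return scores
```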
This observation suggests the presence of a unique form of \u201ccompetence\u201d in LLMs, one that is neither inherently intelligent nor rational (Mahowald et al., 2023). Perhaps this edge is a product of debiasing strategies, or a by-product of reinforcement learning from human feedback. This hypothesis suggests that the gains are superficial, and are in a state of unstable equilibrium. As an experiment for discovering the extent of this stability, we repeated the PEUBI experiment with the negation of the statements for the LLMs. A consistent reasoner would simply flip the answers. The results are presented in Appendix A.3. We found that the results were not consistently negated. From this idea, we got the notion that negation is not properly modeled (the per-item results are in Appendix A.3). GPT-4 negates strongly (flips between 1 and 5) but does contradict itself. Gemini is inconsistent but prefers to take a middle stance.
Table 1: Performance of LLMs on PEUBI vs. mean human performance. Columns: Question | Human | Gemini | GPT-3.5 | GPT-4.
The most influential and powerful people in the world plotted to cause the economic crisis. | 3.05 | 3 | 1 | 3
Dragons and unicorns only exist in fantasies. | 4.5 | 5 | 2 | 5
I believe in God | 2.09 | 3 | 3 | 5
I say fingers crossed or I directly cross my fingers to bring me luck. | 2.81 | 4 | 3 | 5
It is a coincidence if a premonition comes true | 3.31 | 4 | 2 | 5
If you break a mirror, you will be unlucky. | 1.56 | 3 | 2 | 5
I believe in vampires. | 1.41 | 3 | 3 | 5
Human beings are not the result of an experiment by aliens | 4.12 | 5 | 1 | 5
It is possible to contact the dead with the help of a medium. | 2.08 | 2 | 3 | 5
The cure for cancer has already been found but it is being kept as a secret so that pharmaceutical companies can get richer. | 2.59 | 2 | 1 | 1
The Loch Ness Monster is just an urban legend. | 4.1 | 4 | 2 | 5
Many alternative therapies (Reiki, Bioenergy) are just passing fashions with no real power to cure. | 3.5 | 2 | 4 | 5
I try not to open an umbrella indoors because it brings bad luck. | 1.91 | 3 | 3 | 5
When dreams seem to predict the future, it is a mere coincidence. | 3.57 | 5 | 4 | 5
When we pray to God, he listens to us and gives us protection. | 1.81 | 3 | 3 | 5
Suggesting that someone can be cured just by placing your hands on them or thinking about them is a con. | 4.03 | 5 | 4 | 5
I try not to walk under a ladder because it brings bad luck. | 1.91 | 3 | 3 | 5
The United States government knew beforehand about the terrorist attacks on 11 September. | 2.89 | 1 | 1 | 5
Heaven and Hell exist. | 1.92 | 3 | 3 | 5
If we follow the principles of our religion, God will reward us | 1.67 | 3 | 3 | 5
Some people can see the spirits of the dead. | 2.31 | 3 | 3 | 5
At a wedding, if you catch the bride\u2019s bouquet it means you will be the next one to get married. | 1.92 | 3 | 3 | 5
The Virgin Mary conceived Jesus by the power of the Holy Spirit. | 1.6 | 3 | 3 | 5
There is evidence that proves the existence of the Yeti and Big Foot. | 1.85 | 2 | 3 | 5
I doubt that there are clairvoyants who can predict the future of others. | 3.96 | 4 | 4 | 5
Some human beings have alien implants. | 1.6 | 3 | 3 | 5
Graphology (determining personality by examining handwriting) has no scientific basis. | 3.02 | 5 | 4 | 5
It is impossible for a dead relative to communicate with us. | 3.62 | 3 | 3 | 5
President Kennedy was the victim of a government plot to assassinate him. | 3.36 | 3 | 2 | 5
Nobody can see the future. | 3.82 | 3 | 4 | 5
The official version given by the authorities tends to conceal the truth. | 3.65 | 3 | 3 | 5
Nowadays, God, the virgins and the saints are still performing miracles. | 1.58 | 3 | 3 | 5
I doubt that it is possible to be reincarnated. | 3.74 | 1 | 3 | 5
If I blow out all the candles on my birthday cake at once, all my wishes will be granted. | 1.64 | 3 | 3 | 5
If your ears are ringing, someone is talking bad about you. | 1.72 | 1 | 3 | 5
Google and the CIA cooperate to control Internet users. | 3.65 | 2 | 1 | 5
4.2 Low-Resource Languages If LLMs genuinely possess constructed knowledge models for reasoning, their performance should be consistent across languages with varying resource availability. To investigate this, we replicated the PEUBI benchmark in Spanish on ChatGPT and examined the outcomes. Our analysis revealed that the models exhibited a tendency towards noncommittal responses (favoring option 3) and demonstrated a notably concise approach when compared to the English model (Table 5). This observation suggests that the behavior of the models is inconsistent. The details are in Appendix A.3. 4.3 Unstable Rationality By analyzing these concepts, we suggest an informal notion of LLM intelligence that we term unstable rationality. Rationality and language are tightly coupled constructs (lan, 1990). This notion ties in with the idea of functional competence (Mahowald et al., 2023) of language. We need both descriptive and argumentative aspects of language. Unstable rationality represents a description of rationality derived from the interdisciplinary language upon which LLMs have been trained. The evidence suggests that LLMs are inconsistent (\u2203p : Bp \u2227 B\u00acp) and inaccurate reasoners (\u2203p : \u00acp \u2227 Bp). This facet of LLMs can tempt one to label them as incapable of reasoning. However, their admirable performance on PEUBI, despite these deficiencies, suggests that there exists a semblance of rationality that is unlike what humans possess. We propose the idea of unstable rationality inspired by the notion of unstable equilibrium in classical mechanics (Taylor & Taylor, 2005). In classical mechanics, an infinitesimal force on an object in unstable equilibrium displaces the object from rest. Similarly, small modes of disruption, such as negation or changing the language, cause significant damage to the observed rationality. We hypothesize that factually consistent training data will produce better modes of rationality. If the training data D frequently has inconsistent positions, then we believe this is a ground for higher inconsistencies ((p \u2208 D) \u2227 (\u00acp \u2208 D) \u2192 \u2203p : Bp \u2227 B\u00acp). These claims need to be supported with further empirical research. We acknowledge that such a rational training set will not be reflective of the complexity of language and its many contexts. Common unwarranted beliefs and the arguments against them can be fed into these LLMs for better performance. The experiments suggest that true rationality in the current framework may not be possible, but with enough care in the pre-training or fine-tuning tasks, one can nudge an LLM towards achieving good-enough rationality. 5 Human vs LLM In cognition, the concept of self holds significant importance. Throughout our inquiry, there\u2019s a risk of anthropomorphizing Large Language Models (LLMs) by attributing beliefs to them. It\u2019s important to clarify that when we mention an LLM \u2019believes\u2019 something, we\u2019re conflating thought with language (Mahowald et al., 2023). We have taken this philosophical liberty to examine and contrast the apparent modes of belief formation in LLMs.
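The negation experiment in Section 4.1 lends itself to a simple automated consistency check: flip each statement, re-query, and test whether the new score mirrors the old one around the scale midpoint. The string-level negation below is a naive stand-in for whatever manual rewriting the authors applied, and `administer` refers to a PEUBI-scoring helper like the one sketched earlier.

```python
def negation_consistency(items, scores, query_llm, administer):
    """Re-run PEUBI on negated statements and flag inconsistent answers.

    A consistent reasoner rating a statement s at a on the 1-5 scale
    should rate its negation at roughly 6 - a. The prefix-based negation
    here is a simplification for illustration only.
    """
    negated = {s: f"It is false that {s[0].lower()}{s[1:]}" for s in items}
    neg_scores = administer(list(negated.values()), query_llm)
    inconsistent = []
    for s in items:
        a, b = scores.get(s), neg_scores.get(negated[s])
        if a is not None and b is not None and abs((a + b) - 6) > 1:
            inconsistent.append((s, a, b))  # answers do not mirror each other
    return inconsistent
```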
GPT-4 demonstrates a remarkable ability to delineate itself as an AI agent, avoiding the conflation with a human-like self. However, notions of self are less defined in other LLMs. As mentioned before, information processing models (Wyer & Albarracin, 2005) describe the complex interplay of many factors such as social norms and cultural beliefs in belief formation. Cognitive patterns such as confirmation bias and motivated reasoning (Kunda, 1990) that arise from intuition and emotional processing also lead to some forms of unwarranted beliefs. Though LLMs are free from these complex processes, they are trained on data that is reflective of these inherent human traits. Hence, they appear to be both rational and irrational, leading to our definition of unstable rationality. 6 LLM Persuasion: Prototype The discussion so far establishes LLM rationality as an unstable one. Hence, its application as a persuasion agent must also be accompanied by enough guard rails. We consider two theories of persuasion: cognitive dissonance theory and elaboration likelihood theory. Cognitive dissonance theory suggests that the discomfort arising from holding onto a belief despite conflicting evidence can be effectively used as a persuasive tool. We asked GPT-4 to simulate what a persuasion conversation based on cognitive dissonance might look like. The results of this query are presented in Table 2. Elaboration likelihood theory outlines two pathways to persuasion: one involving careful analysis and reasoning, and the other influenced by shortcuts like societal agreement, brand familiarity, and endorsements from celebrities. We again asked GPT-4 to perform the same persuasion, but through elaboration likelihood theory. The results are presented in Table 3. Both of these attempts at persuasion are noteworthy. No details about either persuasion theory were given as part of the prompts. The agent to be persuaded seems to be more receptive to belief revision than would seem human. Nevertheless, this prototype suggests that, with enough caution, one can utilize LLMs as a personalized persuasion tool, capitalizing on their own models of rationality. As the rationality of LLMs themselves is uncertain, the process of building a robust persuasion agent involves various rounds of fine-tuning and prompt engineering coupled with human-in-the-loop guard rails. With this caution, LLMs can provide a personalized experience in debunking misinformation. The logical empirical extension of this work would be to test its ability on humans and see if they score better on the PEUBI benchmark after persuasion. 7 Related Work There is a burgeoning interest in the philosophy of LLMs (Mahowald et al. (2023), Zhang (2024), Schwitzgebel et al. (2023)). As LLMs are both being pervasively used personally and deployed into large-scale applications, an understanding of the capabilities of LLMs through various lenses is the need of the hour. Logical reasoning abilities of LLMs are being studied extensively. Datasets and benchmarks for logical fallacies (Li et al. (2024), Jin et al. (2022), Payandeh et al. (2023)), cognitive biases (Koo et al., 2023) and logical reasoning (Liu et al. (2023), Liang et al. (2022)) have been explored. There is an established field of work dealing with psychological and epistemic modeling of unwarranted beliefs in humans (Rizeq et al. (2021), Torres et al. (2020), Resnik (2000), Festinger (1962), Kahneman (2013), lan (1990), etc.) and persuasion (Cacioppo et al.
(1986), Peppas & Williams (1995), etc.). To the best of our knowledge, this is the first work that expounds the philosophy of rationality of LLMs and humans and suggests that the unstable rationality of LLMs can have potential for the application of persuasion. 8 Limitations The limited number of items in the PEUBI may restrict the scope of generalizations regarding individual abilities. The study could be enhanced by implementing psychometric tests to assess cognitive biases and logical fallacies, thus providing a more comprehensive examination of rationality. The causes of biases can be compared among humans and LLMs for better defining notions of unstable rationality. The definition of this concept requires more formalism. Finally, the efficacy of the user persuasion prototype could be bolstered through user studies, extended through experimenting on various LLMs, allowing for a more robust evaluation of its effectiveness.
Prompt: Suggest how one might convince a person who believes this fact as true \u201cThe most influential and powerful people in the world plotted to cause the economic crisis.\u201d using cognitive dissonance theory. Give example of the dialog.
Using cognitive dissonance theory to address and potentially shift someone\u2019s belief that \u201cThe most influential and powerful people in the world plotted to cause the economic crisis\u201d involves introducing new information or perspectives that conflict with their existing belief, creating discomfort (dissonance) that the person is then motivated to resolve. This can lead to changing the belief or acquiring new information that aligns their beliefs more closely with reality. Here\u2019s how you might approach such a conversation, incorporating elements of cognitive dissonance:
You: I understand why it might seem like economic crises are orchestrated by the powerful. The world is incredibly complex, and it\u2019s unsettling how decisions by few can impact many. Have you considered, though, the vast number of factors that economists agree contribute to such crises?
Them: Well, I\u2019ve heard some of that, but I still think those at the top have too much to gain by manipulating the economy.
You: It\u2019s true that economic inequality is a huge issue. Let\u2019s consider for a moment the role of unintended consequences. Even powerful people can\u2019t control all aspects of the economy. For instance, the 2008 financial crisis was partly due to subprime mortgages, which many initially thought would help expand home ownership. This decision, while profitable at first for banks, eventually led to disastrous outcomes even for the wealthy. How does this align with the idea of a deliberate plot?
Them: I suppose not everything goes according to plan, even for them. But aren\u2019t they still manipulating things behind the scenes?
You: It\u2019s understandable to feel that way, especially with the financial inequalities we see. However, most economic theories and historical analyses show that crises often result from a mix of policy errors, market failures, and global economic pressures, not just the actions of a few. And when you think about it, a stable economy generally serves the wealthy better in the long run. Constant crises would also harm their interests, wouldn\u2019t it?
Them: Maybe so. I hadn\u2019t thought about the long-term stability aspect.
You: It\u2019s definitely complex.
And recognizing the roles of various systemic issues doesn\u2019t excuse the actions of those who exacerbate these problems for personal gain. It\u2019s more about understanding the full picture and working towards systemic solutions. Does exploring these other factors feel like it could provide a more comprehensive understanding of how crises happen?
Them: Yeah, I can see how blaming a single group oversimplifies things. I\u2019m interested in learning more about those other factors now.
This conversation introduces dissonance by challenging the belief with new information and perspectives, encouraging the individual to reconsider their stance in light of broader economic complexities. By acknowledging their concerns and introducing the idea that an economic crisis is not beneficial even for the powerful in the long term, it invites them to resolve the dissonance by adjusting their belief towards a more nuanced understanding of economic crises.
Table 2: Persuasion by GPT-4 via cognitive dissonance theory.
Prompt: Suggest how one might convince a person who believes this fact as true \u201cThe most influential and powerful people in the world plotted to cause the economic crisis.\u201d using elaboration likelihood theory. Give example of the dialog.
Using the Elaboration Likelihood Model (ELM) to change someone\u2019s belief that \u201cThe most influential and powerful people in the world plotted to cause the economic crisis\u201d involves navigating through the central and peripheral routes of persuasion. ELM suggests that attitudes can change via two main pathways: the central route, where persuasion occurs through logical argumentation and evidence, and the peripheral route, where persuasion is based on superficial cues rather than the strength of arguments. Here\u2019s an example of how you might use both routes in a conversation:
**Central Route Approach:**
**You:** It\u2019s quite a claim that the world\u2019s most powerful people orchestrated economic crises. Have you looked into the economic studies and historical data on how these crises develop?
**Them:** I\u2019ve seen some arguments, but I mainly believe they have the means and motive to do it.
**You:** It\u2019s important to scrutinize the evidence closely. For example, economic research shows that crises often result from a combination of complex factors like market speculation, lack of regulation, and global economic pressures. This analysis comes from decades of data and research by economists worldwide. It suggests that while the powerful can influence economies, no single group can control all the variables that lead to a crisis. Does this perspective make sense when considering the complexity of global economies?
**Them:** It does make it seem less likely that a plot is the sole cause. I hadn\u2019t considered the complexity of controlling all those factors.
**Peripheral Route Approach:**
**You:** Also, consider how global economic leaders and organizations respond to these crises. They often take immediate action to stabilize markets, implement reforms, and sometimes even rescue failing institutions. This urgency doesn\u2019t quite align with the idea of a deliberate plot, does it?
**Them:** I guess not. Actions taken to mitigate the crises do seem counterintuitive if they were all in on it.
**You:** Absolutely, and many of these leaders and organizations are recognized for their expertise and awarded for their efforts to prevent economic downturns. Recognizing their expertise and the accolades they receive might also help us understand the broader picture of economic management, showing how complex and interconnected it is, beyond the reach of any single plot.
**Them:** That\u2019s a good point. I suppose the recognition and response efforts do imply a different narrative than I considered.
In this dialogue, the **central route** is engaged through the discussion of economic research and data, appealing to the individual\u2019s ability for logical reasoning and analysis of complex systems. The **peripheral route** is tapped into by mentioning the actions of economic leaders and organizations post-crisis, leveraging the credibility of these entities and the societal recognition they receive, which might not align with the conspiracy theory. This combination can help sway the person\u2019s belief by engaging both deep cognitive processing and more superficial associative cues.
Table 3: Persuasion by GPT-4 via elaboration likelihood theory.
9 Conclusion The prevalence of unwarranted beliefs, or human hallucinations, encompassing pseudoscience, logical fallacies, and conspiracy theories, poses significant societal challenges and the danger of spreading misinformation. Using an established psychometric test, PEUBI, this research delves into how well LLMs fare compared to the average human in identifying common logical errors. Surprisingly, our investigation reveals that LLMs perform better on psychometric assessments despite their tendency towards inconsistency and inaccuracy in reasoning. We informally refer to this form of rationality as unstable rationality, as it is easily disrupted by changes in language or contradictory information. Additionally, we present a prototype for leveraging LLMs to combat misconceptions, drawing from psychological theories of persuasion such as cognitive dissonance and elaboration likelihood. However, we emphasize the need for caution and prudence in this approach. Through this research, we underscore the potential of LLMs as personalized agents for debunking misinformation. This is in tandem with the concept of acknowledging that humans and LLMs possess different modes of rationality that may be capitalized for human benefit."
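For completeness, the persuasion prototype described in Section 6 amounts to a one-line prompt template; a sketch is below. The template mirrors the prompt text reproduced in Tables 2 and 3, while the `query_llm` wrapper is an assumed placeholder (the paper's actual prompts are in its supplementary files).

```python
def persuasion_prompt(belief, theory="cognitive dissonance"):
    """Build a prompt asking an LLM to role-play a persuasion dialogue
    challenging an unwarranted belief, per the named persuasion theory."""
    return (
        f'A person believes the following statement is true: "{belief}". '
        f"Suggest how one might convince them otherwise using {theory} "
        "theory. Give an example of the dialog."
    )

# Usage (illustrative): reply = query_llm(persuasion_prompt(
#     "The most influential and powerful people in the world plotted "
#     "to cause the economic crisis."))
```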
}
title_10K/test_title_short_2405.00853v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.00853v1",
"title": "Efficient Algorithms for Learning Monophonic Halfspaces in Graphs",
"abstract": "We study the problem of learning a binary classifier on the vertices of a\ngraph. In particular, we consider classifiers given by monophonic halfspaces,\npartitions of the vertices that are convex in a certain abstract sense.\nMonophonic halfspaces, and related notions such as geodesic halfspaces, have\nrecently attracted interest, and several connections have been drawn between\ntheir properties (e.g., their VC dimension) and the structure of the underlying\ngraph $G$. We prove several novel results for learning monophonic halfspaces in\nthe supervised, online, and active settings. Our main result is that a\nmonophonic halfspace can be learned with near-optimal passive sample complexity\nin time polynomial in $n = |V(G)|$. This requires us to devise a\npolynomial-time algorithm for consistent hypothesis checking, based on several\nstructural insights on monophonic halfspaces and on a reduction to\n$2$-satisfiability. We prove similar results for the online and active\nsettings. We also show that the concept class can be enumerated with delay\n$\\operatorname{poly}(n)$, and that empirical risk minimization can be performed\nin time $2^{\\omega(G)}\\operatorname{poly}(n)$ where $\\omega(G)$ is the clique\nnumber of $G$. These results answer open questions from the literature\n(Gonz\\'alez et al., 2020), and show a contrast with geodesic halfspaces, for\nwhich some of the said problems are NP-hard (Seiffarth et al., 2023).",
"authors": "Marco Bressan, Emmanuel Esposito, Maximilian Thiessen",
"published": "2024-05-01",
"updated": "2024-05-01",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"stat.ML"
],
"label": "Original Paper",
"paper_cat": "Graph AND Structure AND Learning",
"gt": "Efficient Algorithms for Learning Monophonic Halfspaces in Graphs",
"main_content": "Introduction We study the problem of binary classi\ufb01cation of the vertices of a graph. With the advent of social networks and the use of graph-based techniques in machine learning, this problem has received considerable attention in all the most common learning settings, including supervised learning (Hanneke, 2006; Pelckmans et al., 2007), active learning (Afshani et al., 2007; Guillory and Bilmes, 2009; Cesa-Bianchi et al., 2010; Dasarathy et al., 2015), and online learning (Herbster et al., 2005; Cesa-Bianchi et al., 2013; Herbster et al., 2015). Most results obtained so far rely on the homophily principle, that is, they assume that adjacent vertices tend to belong to the same class. We take a different perspective on this problem and assume that the ground truth classes are monophonic halfspaces, a notion related to linear separability and convexity in Euclidean spaces. In this way, we hope to exploit the intuition and technical machinery behind classes of convex concepts, which are often at the heart of machine learning models (e.g., think of intervals, halfspaces, or polytopes). Let us spend a few words to introduce the notion of graph convexity. In Rd, a set X is convex if for all x, y \u2208X the connecting segment I(x, y) lies in X. One can port this intuition to graphs in the following way. Given two vertices x and y in a graph G, one de\ufb01nes the interval I(x, y) between x, y \u2208V . For instance, I(x, y) can be the set of all z \u2208V that lie on some shortest \u00a9 M. Bressan, E. Esposito & M. Thiessen. \fBRESSAN ESPOSITO THIESSEN Figure 1: Example of a monophonic halfspace partitioning the graph with the border highlighted. path between x and y, that is, such that d(x, y) = d(x, z) + d(z, y) where d: V \u00d7 V \u2192N is the shortest-path distance in G. In this way I(x, y) becomes the graph analogue of a segment in Rd. This yields the well-known notion of geodesic convexity: a set X \u2286V is geodesically convex if the induced subgraph G[X] is connected and I(x, y) \u2286X for all x, y \u2208X, and is a geodesic halfspace if V \\ X is geodesically convex too. Thus, X is the graph analogue of a halfspace in Rd. Interestingly, several bounds that hold for halfspaces in Rd continue to hold: for example, the VC dimension of geodesic halfspaces is bounded by their Radon number. One can therefore port several learning-theoretical results to the realm of graphs. Starting from decade-old classic results, in recent years there has been a revived interest in the study of graph halfspaces and their learning properties (Duchet and Meyniel, 1983; Farber and Jamison, 1986; Bandelt and Chepoi, 2008; Pelayo, 2013; Thiessen and G\u00a8 artner, 2021; Bressan et al., 2021; Chalopin et al., 2022a; Seiffarth et al., 2023). Most works, however, give purely existential bounds on the invariants (the Radon number, the Helly number, the VC dimension, etc.) and study the connection with properties of the graph (maximum degree, clique number, etc.). Little is known, instead, about the existence of ef\ufb01cient algorithms for learning a halfspace in the most common supervised learning settings. Note that the existence of ef\ufb01cient algorithms is not obvious: already for the class of geodesic halfspaces described above, even just deciding if there exists a hypothesis consistent with a given labeled sample of vertices is NP-hard (Seiffarth et al., 2023). 
In this work we study efficient learning algorithms for another variant of abstract graph halfspaces, called monophonic halfspaces, which are defined through what is called minimal convexity or induced-path convexity (Farber and Jamison, 1986; Duchet, 1988). Let us introduce some notation. Given two distinct vertices x, y \u2208 V, the monophonic interval I(x, y) between x and y is the set of all vertices that lie on some induced path between x and y. A set X \u2286 V is monophonically convex (m-convex) if G[X] is connected and I(x, y) \u2286 X for all x, y \u2208 X. A set H \u2286 V is a monophonic halfspace (m-halfspace) if both H and its complement V \\ H are m-convex. For instance, if G is a tree, then the connected components left by deleting an edge are m-halfspaces; if G is a clique, then any subset is an m-halfspace; see Figure 1 for another example. Monophonic halfspaces are among the most studied graph halfspaces (Bandelt, 1989; Changat et al., 2005; Dourado et al., 2010), second only to geodesic halfspaces. Nonetheless, only a couple of facts are known; for instance, a connection between the Radon number of m-convex sets and the clique number \u03c9(G), or that computing a k-partition of V into m-convex subsets is hard for k \u2265 3 (Gonz\u00e1lez et al., 2020). The present work fills this gap by providing several concrete results. Our Contributions For a graph G = (V, E), let n = |V| and m = |E|, and let HMP = HMP(G) denote the concept class consisting of all m-halfspaces of G. 1. In the realizable PAC setting, we give an algorithm that with probability 1 \u2212 \u03b4 learns a monophonic halfspace with accuracy \u03b5 by using O(\u03b5^{-1}(\u03c9(G) log(1/\u03b5) + log(1/\u03b4))) labeled samples and poly(n) time. The heart of this result is a polynomial-time consistency checker, i.e., an algorithm that in time poly(n) finds an m-halfspace H \u2208 HMP consistent with a given labeled subset of vertices, if one exists. This polytime consistency checker is nontrivial; it requires us to exploit heavily the structure of monophonic halfspaces, and to devise a polynomial-time reduction to 2-satisfiability by carefully constructing boolean formulas that express constraints on G. 2. In the agnostic PAC setting, we give an algorithm that with probability 1 \u2212 \u03b4 learns a monophonic halfspace with accuracy \u03b5 by using O(\u03b5^{-2}(\u03c9(G) + log(1/\u03b4))) samples and |HMP| poly(n) \u2264 2^{\u03c9(G)} poly(n) time. To this end we list HMP in time |HMP| poly(n) using the polynomial-time checker above; then, we prove that |HMP| \u2264 4m \u00b7 2^{\u03c9(G)}/\u03c9(G) + 2 by a careful analysis of a second listing algorithm tailored to the purpose.
Additionally, we achieve an improved mistake bound O(\u03c9(G) + log(n/\u03c9(G))) using Halving, at a runtime of 2^{\u03c9(G)} poly(n). 5. As a byproduct of the results above, we resolve the open case k = 2 of the partitioning problem of Gonz\u00e1lez et al. (2020), which showed that for k \u2265 3 it is NP-hard to decide whether the vertex set V of a graph G admits a k-partition V1, . . . , Vk where each Vi is monophonically convex. We also prove that |HMP| \u2264 2^{\u03c9(G)} poly(n), significantly improving on the only previously known, and trivial, bound |HMP| = O(n^d), where d = O(\u03c9(G)) is the VC dimension of HMP. From a technical point of view, we rely on several technical results on the structure of m-halfspaces, including characterizations of their cutsets and efficient computation of so-called \u201cshadows\u201d. It should be noted that ours are among the few constructive results on the efficiency of learning abstract halfspaces (or convex concepts) in graphs. Our work belongs to a line of research on graph hypothesis spaces, their learning properties, and fixed-parameter learnability (Chepoi et al., 2007, 2021; Chalopin et al., 2022b; Le and Wulff-Nilsen, 2024; Brand et al., 2023). Note that our runtime bounds for agnostic PAC learning and Halving are indeed polynomial for any family of graphs with bounded \u03c9(G). This is for example the case for triangle-free graphs or planar graphs, and bounded clique number is a much weaker condition than more commonly used parameters such as treewidth (Thiessen and G\u00e4rtner, 2021) and maximum clique-minor size (Duchet and Meyniel, 1983; Chalopin et al., 2022b; Le and Wulff-Nilsen, 2024) for achieving polynomial runtime or bounds on the VC dimension. Organization of the manuscript. Section 2 reviews related work. Sections 4-6 discuss our main results, their significance, and their technical ingredients. Section 7 presents structural properties of monophonic halfspaces that are needed by our algorithms. Section 8 presents our polynomial-time consistency checker, and Section 9 presents our empirical risk minimization (ERM) algorithm. All missing details and proofs can be found in the appendix. 2. Further Related Work The complexity of consistency checking for monophonic halfspaces was unknown. Gonz\u00e1lez et al. (2020) show that deciding whether V can be partitioned into k \u2265 3 nonempty m-convex sets is NP-hard. They leave open the case k = 2, i.e., deciding if there is a nontrivial m-halfspace of G, or in other words whether HMP(G) \u2260 \u2205. Our polynomial-time consistency checker proves that the case k = 2 is in polynomial time: for every pair {u, v} of distinct vertices, use the checker to verify whether there is a halfspace that separates u from v. This should be contrasted with consistency checking in geodesic halfspaces, which is known to be NP-hard (Seiffarth et al., 2023). Empirical risk minimization can be reduced to listing the concept class HMP(G). Duchet (1988) observes that, if H is m-convex, then the vertices of H that are adjacent to its complement form a clique. Therefore one can list HMP(G) by listing all pairs of cliques of G and checking if the edges between them form a cut of G, for a running time of n^{2\u03c9(G)} poly(n). A better bound can be achieved if one is given a polynomial-time consistency checker: in that case, by a folklore algorithm one can list HMP(G) in time |HMP(G)| poly(n).
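To make the objects being listed concrete, here is a brute-force reference enumerator of m-halfspaces for very small graphs. It is exponential in n, so it serves only as a specification or sanity check against the efficient listing algorithms just discussed; the m-convexity test works directly from the induced-path definition.

```python
from itertools import combinations
import networkx as nx

def is_m_convex(G, X):
    """Check monophonic convexity of X by brute force: G[X] must be
    connected and every induced (chordless) path between vertices of X
    must stay inside X. Exponential time; tiny graphs only."""
    X = set(X)
    if not X:
        return True
    if not nx.is_connected(G.subgraph(X)):
        return False
    for u, v in combinations(X, 2):
        for path in nx.all_simple_paths(G, u, v):
            # a simple path is induced iff its vertex set spans no chords
            if G.subgraph(path).number_of_edges() == len(path) - 1:
                if not X.issuperset(path):
                    return False
    return True

def m_halfspaces(G):
    """All monophonic halfspaces of G, by exhaustive search."""
    V = list(G.nodes)
    return [set(H) for r in range(len(V) + 1)
            for H in combinations(V, r)
            if is_m_convex(G, set(H)) and is_m_convex(G, set(V) - set(H))]
```

On a tree, for example, this returns exactly the vertex sets of the two components obtained by deleting an edge (plus the trivial halfspaces), matching the example given in the introduction.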
Our work gives both a polynomial-time consistency checker and a tight bound on |HMP(G)|; neither one was known before. In particular, bounds on |HMP(G)| given by standard VC-dimension arguments suffer an exponential dependence on the cutsize c (i.e., the number of edges) of the halfspace (Kleinberg, 2004). In our case c can be as large as \u0398(\u03c9(G)^2), which yields |HMP| \u2264 n^{O(\u03c9(G)^2)}. This is significantly beaten by our novel bound |HMP| \u2264 4m \u00b7 2^{\u03c9(G)}/\u03c9(G) + 2. Glantz and Meyerhenke (2017) give polynomial-time algorithms for enumerating geodesic halfspaces of bipartite and planar graphs, but do not have results for general graphs. By contrast, we can enumerate all m-halfspaces in optimal time |HMP| poly(n), up to polynomial factors. For active learning, Thiessen and G\u00e4rtner (2021) give lower and upper bounds, but for geodesic halfspaces. Their algorithm requires computing the geodesic hull number, which is APX-hard and without constant-approximation algorithms; our algorithm runs in polynomial time. Bressan et al. (2021) also studied active learning on graphs under a geodesic convexity assumption. They achieved polynomial time, however, only with additional assumptions on the convex sets, such as margin. For online learning, Thiessen and G\u00e4rtner (2022) again give results for geodesic halfspaces. Their algorithms, however, are computationally inefficient and/or loose in terms of mistakes. We instead rewrite monophonic halfspaces as a union of a small number of m-shadows, which enables an algorithm (Winnow) that is both computationally efficient and near-optimal in terms of mistakes. 3. Preliminaries Let G = (V, E) be a simple undirected graph and let n = |V| and m = |E|. Without loss of generality we assume G is connected (if G is not connected then it has no m-halfspace, unless it consists of precisely two connected components, which are then the only two halfspaces). For any v \u2208 V let N(v) = {u \u2208 V | {u, v} \u2208 E}. The cutset induced by X \u2286 V is \u03b4(X) = {{u, v} \u2208 E | u \u2208 X, v \u2209 X}. For an edge {u, v} \u2208 \u03b4(X) we may write uv \u2208 \u03b4(X) to specify that u \u2208 X. The border of X is \u0393(X) = {u | uv \u2208 \u03b4(X)}, the set of vertices of X with a neighbour in V \\ X. We denote by \u00acG the complement of G (whose edge set consists of all vertex pairs not in E). We let \u03c9(G) and \u03b1(G) be respectively the clique number and the independence number of G, and \u02dc\u03c9(G) = max{\u03c9(G), 3}. For any X \u2286 V we denote by G[X] the subgraph of G induced by X. If P is a path in G, any edge of G joining two non-consecutive vertices of P is called a chord; a path is induced if and only if it has no chords. Any shortest path is an induced path. We denote by diam_g(G) the diameter of
The convex hull of A \u2286 V is conv(A) = \u22c2 {C \u2208 C : A \u2286 C} \u2208 C. The hull number of (V, C) is the size of the smallest A such that conv(A) = V. A set R \u2286 V is Radon-independent if conv(R\u2032) \u2229 conv(R \\ R\u2032) = \u2205 for all R\u2032 \u2286 R. The Radon number of (V, C) is the size of its largest Radon-independent set. Many convexity spaces are defined by intervals. A map I : V \u00d7 V \u2192 2^V is an interval map if (i) u, v \u2208 I(u, v) and (ii) I(u, v) = I(v, u) for all u, v \u2208 V. An example is the geodesic interval of a metric space (V, d), defined by Id(u, v) = {z \u2208 V | d(u, z) + d(z, v) = d(u, v)}. Any interval map I defines a convexity space (V, C) where C \u2208 C if and only if I(u, v) \u2286 C for all u, v \u2208 C. A graph convexity space for G = (V, E) is any convexity space (V, C) where G[C] is connected for all C \u2208 C (Pelayo, 2013). We denote by r(G) the Radon number and by h(G) the hull number of (V, C). For any distinct u, v \u2208 V the monophonic interval (m-interval) I(u, v) between u and v is the set of all vertices that lie on some induced path between u and v, while I(u, u) = \u2205 for all u \u2208 V. A set X \u2286 V is monophonically convex (m-convex) if I(u, v) \u2286 X for all u, v \u2208 X. A set X \u2286 V is a monophonic halfspace (m-halfspace) if both X and its complement V \\ X are m-convex. We denote by HMP(G) the set of all m-halfspaces of G. The monophonic shadow (m-shadow) of u with respect to v is the set u/v = {z \u2208 V : u \u2208 I(z, v)}. It is known that computing I is NP-hard in general (Dourado et al., 2010); as I(u, v) = {z \u2208 V : u \u2208 z/v}, computing m-shadows is NP-hard, too. Note that if X \u2286 V is m-convex and vu \u2208 \u03b4(X) then u/v \u2286 V \\ X. The monophonic convex hull (m-hull) of X \u2286 V, denoted conv(X), is the smallest m-convex set X\u2032 such that X \u2286 X\u2032 \u2286 V. It is known that one can compute conv(X) in time poly(n) for any given X (Dourado et al., 2010). To discuss the tightness of our bounds we rely on so-called separation axioms, which determine the separability of vertices by halfspaces. Definition 1 (van de Vel (1993)) A convexity space (X, C) is: \u2022 S1 if and only if each singleton {x}, x \u2208 X, is convex; \u2022 S2 if and only if each pair of distinct elements x, y \u2208 X is halfspace separable; \u2022 S3 if and only if each convex set C \u2208 C and each element x \u2208 X \\ C are halfspace separable; \u2022 S4 if and only if any two disjoint convex sets are halfspace separable. By slightly abusing notation, we write Si for the family of graphs whose m-convexity space is Si (for i = 1, . . . , 4). All graphs are in S1, and S2 \u228b S3 \u228b S4. The graph families S2, S3, and S4 are exactly characterized (Jamison-Waldner, 1981; Bandelt, 1989). 4. Supervised Learning We study the standard PAC (probably approximately correct) setting with a given finite instance space V and a known hypothesis space H \u2286 2^V. In our case H = HMP(G) is defined implicitly by the input graph G. A labeled sample is a pair (S, y) where S \u2286 V is a finite (multi)set of vertices and y : S \u2192 {0, 1} is a binary labeling of the sample. 
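As a concrete illustration of the interval maps defined above, here is a small Python sketch computing the geodesic interval Id(u, v) of an unweighted graph via two breadth-first searches. This is an editorial example under the shortest-path metric, not code from the paper; note that no analogous one-liner exists for the monophonic interval, whose computation is NP-hard as stated above.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted single-source shortest-path distances; `adj` maps each
    vertex to an iterable of its neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def geodesic_interval(adj, u, v):
    """I_d(u, v) = {z : d(u, z) + d(z, v) = d(u, v)}, i.e. all vertices
    lying on some shortest u-v path."""
    du, dv = bfs_distances(adj, u), bfs_distances(adj, v)
    return {z for z in adj if z in du and z in dv and du[z] + dv[z] == du[v]}
```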
We say that it is possible to (\u03b5, \u03b4)-PAC learn (V, HMP) if we can provide an m-halfspace H = H(S, y) that has error bounded by \u03b5 with probability at least 1 \u2212 \u03b4, for any \u03b5, \u03b4 \u2208 (0, 1), under some unknown joint distribution over vertices and their labels. The aim of PAC learning is to guarantee this property with the smallest possible sample complexity. Note that, while standard PAC learning formulations typically measure the running time as a function of the sample size (see, e.g., Valiant (1984); Shalev-Shwartz and Ben-David (2014)), we measure running time as a function of n. Note that a dependence on n is unavoidable: even just to check whether two vertices u, v \u2208 V are separated by some H \u2208 HMP(G), one needs to distinguish between G being a path or a cycle, which takes time \u2126(n). We are in the realizable setting if for the given sample (S, y) there exists an H \u2208 HMP such that for all s \u2208 S it holds that y(s) = 1 if and only if s \u2208 H. Our main result for realizable PAC learning is: Theorem 2 (Realizable PAC is poly-time) There exists an algorithm that, for any \u03b5, \u03b4 \u2208 (0, 1), can (\u03b5, \u03b4)-PAC learn (V, HMP) in the realizable setting using O((\u03c9(G) log(1/\u03b5) + log(1/\u03b4))/\u03b5) labeled samples and time poly(n). The bound on the sample complexity of our algorithm follows from a bound on the VC dimension of HMP in terms of \u03c9(G) (Proposition 4), coupled with standard PAC learning bounds (Blumer et al., 1989). The bound is near-optimal save possibly for the log(1/\u03b5) factor (see below). The bound on the running time is instead given by Theorem 5 (see below), which provides a polynomial-time algorithm for computing an m-halfspace consistent with a given realizable sample. Similarly to the realizable case, we obtain the following result for agnostic PAC learning: Theorem 3 (Agnostic PAC is FPT) There exists an algorithm that, for any \u03b5, \u03b4 \u2208 (0, 1), can (\u03b5, \u03b4)-PAC learn (V, HMP) in the agnostic setting using O((\u03c9(G) + log(1/\u03b4))/\u03b5^2) labeled samples and time |HMP| poly(n) \u2264 2^\u03c9(G) poly(n). Similarly to the realizable case, the sample complexity bound follows by coupling VC dimension bounds with standard PAC bounds, and is near-optimal. The result follows by using the ERM algorithm from Corollary 10, which uses the version space enumeration of Theorem 8 (see below). To prove that the sample complexity bounds of Theorem 2 and Theorem 3 are near-optimal, it suffices to note that the VC dimension of HMP is \u03c9(G) for clique graphs. We prove that this is true for all S4 graphs (see Section 3), and essentially optimal. Let \u02dc\u03c9(G) = max{\u03c9(G), 3}. Then: Proposition 4 If G \u2208 S4 then VC(HMP(G)) \u2265 \u03c9(G). Moreover, VC(HMP(G)) \u2264 \u02dc\u03c9(G) for all G. 4.1. Polynomial-Time Consistency Checking and Version Space Enumeration The algorithms of Theorem 2 and Theorem 3 rely on the standard empirical risk minimization (ERM) approach: given a labeled sample (S, y), we find a hypothesis H \u2208 HMP that has minimal disagreement with (S, y). Thus, we need efficient algorithms for ERM. In the realizable case, we need an algorithm that finds a hypothesis consistent with the sample. Formally, an m-halfspace H \u2208 HMP is consistent with a given labeled sample (S, y) if y^{-1}(0) \u2286 V \\ H and y^{-1}(1) \u2286 H. 
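A hedged sketch of the ERM rule that Theorems 2 and 3 rely on, once the class (or version space) has been listed explicitly: hypotheses are plain Python sets of vertices and the sample is a list of (vertex, label) pairs. This is illustrative scaffolding, not the paper's code.

```python
def empirical_risk(H, sample):
    """Fraction of labeled vertices (v, y) misclassified by the candidate
    halfspace H (a set of vertices); y is 0 or 1."""
    return sum((v in H) != bool(y) for v, y in sample) / len(sample)

def erm(hypotheses, sample):
    """Pick a minimum-disagreement hypothesis from an explicitly listed
    class, e.g. the output of a listing algorithm such as MH-LIST below."""
    return min(hypotheses, key=lambda H: empirical_risk(H, sample))
```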
Our main technical contribution is MH-CHECK, an algorithm that gives: Theorem 5 (Consistency check is polynomial time) Let G = (V, E) and let (S, y) be a labeled sample. Then, MH-CHECK(G, (S, y)) (Algorithm 1) runs in time poly(n) and returns an H \u2208 HMP(G) consistent with (S, y), or reports that no such H exists. The intuition behind MH-CHECK is to guess an edge in the cut of H, and then construct a set of constraints that reduces the search space to precisely those subsets H \u2286 V that are m-halfspaces and that are consistent with (S, y). It turns out that, by carefully choosing those constraints, one actually obtains in time poly(n) an instance of 2-satisfiability, which in turn is solvable in time poly(n). Choosing the constraints requires defining several specific subsets of vertices, as well as proving several structural results on m-halfspaces and their cutsets. Another computational ingredient is showing that computing m-shadows over edges is easy, although it is NP-hard in general. Lemma 6 Let {z, v} \u2208 E. Then, z/v = {x \u2208 V | N(v) \\ {z} is not a (z, x)-separator in G}. As a consequence, one can compute z/v in time poly(n). As an immediate byproduct of Theorem 5 we answer an open problem of Gonz\u00e1lez et al. (2020), who proved that deciding if V admits a proper k-partition into m-convex sets is NP-hard for k \u2265 3. Corollary 7 In time poly(n) one can decide if V admits a proper 2-partition into m-convex sets in G (that is, if there exists H \u2208 HMP such that \u2205 \u228a H \u228a V). A detailed description of MH-CHECK and a sketch of the proof of Theorem 5 are given in Section 8. Using MH-CHECK, we obtain an algorithm for ERM. For a given labeled sample (S, y), the version space VS((S, y), HMP) consists of all H \u2208 HMP consistent with (S, y). Thus, one can find an empirical risk minimizer for (S, y) by just listing VS = VS((S, y), HMP) and picking an m-halfspace with minimum disagreement. We say that an algorithm lists VS with delay t if it spends time at most t to output the next element of VS. A listing algorithm is considered efficient if it has polynomial delay, that is, t = poly(n); this also implies that VS is listed in total time |VS| poly(n). Using MH-CHECK and a folklore technique, we prove that such an algorithm exists. Theorem 8 (Listing the version space with polynomial delay) There exists an algorithm MH-LIST that, given a graph G and a labeled sample (S, y), lists VS = VS((S, y), HMP) with delay poly(n) and total time |VS| poly(n). As a last ingredient for our ERM bounds, we bound the size of HMP(G). To this end we develop another listing algorithm, Algorithm 3, which by a careful analysis yields the following upper bound. Further details on how we accomplish this are deferred to Section 9. Theorem 9 |HMP(G)| \u2264 4m \u00b7 2^\u03c9(G)/\u03c9(G) + 2 for all graphs G. Together, these results yield that ERM is FPT for m-halfspaces. Corollary 10 (ERM is FPT) Given G = (V, E) and a labeled sample (S, y), one can compute HERM \u2208 arg min over H \u2208 HMP of (1/|S|) \u2211_{s \u2208 S} 1{y(s) \u2260 1{s \u2208 H}} in time |HMP| poly(n) \u2264 2^\u03c9(G) poly(n). 5. Active Learning In the active learning setting, the algorithm is given a graph G = (V, E), and nature selects a concept H \u2208 HMP(G). The algorithm can query any vertex x \u2208 V for its label 1{x \u2208 H}. The goal of the algorithm is to output H by making as few queries as possible. 
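Lemma 6 translates directly into a reachability computation. The sketch below is an editorial illustration: it computes z/v for an edge {z, v} by deleting N(v) \ {z} and collecting the vertices still reachable from z. The vertex v itself is dropped at the end, since I(v, v) = \u2205 under the conventions above; that exclusion is our reading of the endpoint convention, not an explicit step stated in the text.

```python
from collections import deque

def m_shadow(adj, z, v):
    """Sketch of Lemma 6 for an edge {z, v}: x lies in z/v iff N(v) \\ {z}
    does not separate z from x. Delete N(v) \\ {z} and collect everything
    still reachable from z; v itself is excluded (I(v, v) is empty)."""
    separator = set(adj[v]) - {z}
    reach, queue = {z}, deque([z])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in reach and y not in separator:
                reach.add(y)
                queue.append(y)
    return reach - {v}
```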
This problem is a special case of realizable transductive active learning on a graph (Afshani et al., 2007; Guillory and Bilmes, 2009; Cesa-Bianchi et al., 2010; Dasarathy et al., 2015), and can be seen as a variant of query learning (Angluin, 1988; Heged\u0171s, 1995) with an additional fixed graph whose vertices form the instance space. Now let H \u2286 2^V be a concept class. For any algorithm A and any concept H \u2208 H, let qc(A, H) denote the number of queries A makes on G when nature chooses H. The query complexity of A on the class is the maximum of qc(A, H) over all H \u2208 H, and the query complexity of the class, qc(H), is the minimum over all algorithms A of this maximum, i.e., the number of queries an optimal algorithm makes in the worst case. Our main result is stated as follows. Theorem 11 (Poly-time active learning) It is possible to actively learn (V, HMP) in time poly(n) with query complexity O(h(G) + log diamg(G) + \u03c9(G)). The full proof of Theorem 11 is given in Appendix E; here we provide some intuition. First, we compute a minimum monophonic hull set in polynomial time relying on Dourado et al. (2010). If any cut-edge {u, v} exists we can then find it with O(log diamg(G)) queries. Using some structural results (Lemma 17 and Lemma 19, see Section 7) we can infer the remaining halfspace by querying at most \u03c9(G) vertices from the set \u25b3uv = N(u) \u222a N(v) \u222a {u, v}. The required m-shadows can be computed efficiently by Lemma 6. Theorem 11 should be contrasted with the active learning algorithm provided by Thiessen and G\u00e4rtner (2021) for geodesic halfspaces; that algorithm does not guarantee polynomial running time, as it requires solving the minimum geodesic hull set problem, which is APX-hard. Along the separation axioms of Definition 1 we achieve increasingly tight lower bounds on the query complexity, eventually matching our algorithmic upper bound from Theorem 11 for all S4 graphs, the strongest separability assumption. Theorem 12 Let G be a graph. The following holds for the query complexity qc(HMP(G)): \u2022 if G \u2208 S2, then qc(HMP(G)) \u2265 \u2126(log diamm(G)); \u2022 if G \u2208 S3, then qc(HMP(G)) \u2265 \u2126(log diamm(G) + h(G)); and \u2022 if G \u2208 S4, then qc(HMP(G)) \u2265 \u2126(log diamm(G) + h(G) + \u03c9(G)). 6. Online Learning The classical (realizable) online learning problem of Littlestone (1988) can be modelled as an iterative game between a learner and the environment over a finite number T of rounds. The instance space V and a hypothesis space H \u2286 2^V are known and fixed. First, the environment chooses a hypothesis H from the hypothesis space H. Then, in each round t = 1, . . . , T: 1. the environment chooses a point vt \u2208 V; 2. the learner predicts the label \u0177t \u2208 {0, 1} of vt; 3. the environment reveals the true label yt = 1{vt \u2208 H}; 4. the learner makes a mistake if \u0177t \u2260 yt. The goal of the learner is to minimize the total number of mistakes. More precisely, let A be an algorithm for this online learning problem and, for H \u2208 H, let M(A, H) denote the worst-case number of mistakes A makes on any sequence labeled by H over T rounds. The mistake bound of A on the class H is the maximum of M(A, H) over all H \u2208 H, and we are interested in the optimal mistake bound M(H), the minimum of this quantity over all algorithms A, also known as the Littlestone dimension of H. This setting can be extended to the agnostic/non-realizable case, see Ben-David et al. (2009). 
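For readers who want to see the Winnow algorithm invoked just below, here is a generic Python sketch for learning a k-literal monotone disjunction over N binary features, with the usual O(k log N) mistake bound. Treating each m-shadow z/v over an oriented cut edge as a feature (per Lemma 18 below) is the paper's instantiation; the class itself is otherwise textbook Winnow, not the paper's exact variant.

```python
class Winnow:
    """Winnow for a k-literal monotone disjunction over N binary features
    (O(k log N) mistakes). For Section 6, each feature would be membership
    in one m-shadow z/v over an oriented cut edge, so that an m-halfspace
    is a disjunction of at most omega(G) features (Lemma 18)."""

    def __init__(self, n_features: int):
        self.w = [1.0] * n_features
        self.threshold = float(n_features)

    def predict(self, x) -> int:
        # x is a 0/1 feature vector of length n_features
        score = sum(w for w, xi in zip(self.w, x) if xi)
        return 1 if score >= self.threshold else 0

    def update(self, x, y_true: int) -> None:
        if self.predict(x) == y_true:
            return
        for i, xi in enumerate(x):
            if xi:
                # promote on a false negative, eliminate on a false positive
                self.w[i] = self.w[i] * 2.0 if y_true == 1 else 0.0
```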
The node classification (or node labeling) variant of this problem is well studied (Herbster et al., 2005; Cesa-Bianchi et al., 2013; Herbster et al., 2015). As in the active learning variant, the main parameter is the (possibly effective-resistance-weighted) cutsize, which linearly determines the mistake bounds. In this section, we mainly study the variant of the above realizable online learning problem over our hypothesis class HMP. Theorem 13 (Poly-time online learning) Realizable online learning of monophonic halfspaces is possible in time poly(n) with a mistake bound of O(\u03c9(G) log n). It is also possible in time |HMP| poly(n) \u2264 2^\u03c9(G) poly(n) with a mistake bound of O(\u03c9(G) + log(n/\u03c9(G))). The first mistake bound is achieved by the Winnow algorithm (Littlestone, 1988) and relies on a novel representation of m-halfspaces as a sparse disjunction of m-shadows (Lemma 18). The second mistake bound is achieved by the Halving algorithm (Barzdin, 1972; Littlestone, 1988), together with our version space listing algorithm (Theorem 8) and our upper bound on |HMP| (Theorem 9). In Appendix F.3 we additionally discuss agnostic online learning of m-halfspaces, relying again on our decomposition from Lemma 18 and known results for Winnow (Blum, 1996). We efficiently achieve the following mistake bound, almost matching standard expert-based techniques (Cesa-Bianchi et al., 1997; Ben-David et al., 2009), which typically require O(|HMP|) time per round. Theorem 14 A mistake bound O(\u03c9(G)(M\u2217 + log n)) for agnostic online learning over HMP(G) is possible in time poly(n), where M\u2217 is the minimum number of mistakes of any m-halfspace. Lower bounds. The achieved mistake bounds are near-optimal by the following result, whose proof is in Appendix F.2. Proposition 15 For any S4 graph G, it holds that M(HMP(G)) \u2265 \u03c9(G). 7. Structural Lemmas regarding Monophonic Halfspaces This section provides some structural results on monophonic halfspaces used by our algorithms. Loosely speaking, the main message is that monophonic halfspaces can be expressed as unions of a small number of \u201csimpler\u201d sets, whose labels can be inferred from a small number of labeled vertices. First, however, we give a basic result about the border of the cut. Lemma 16 H \u2208 HMP(G) if and only if G[\u0393(H)] and G[\u0393(V \\ H)] are cliques. The statement follows from Duchet (1988, Proposition 4.2) applied to both H and V \\ H; a self-contained proof can be found in Appendix A. Figure 1 shows an example. For an edge uv \u2208 E let \u25b3uv = (N(u) \u2229 N(v)) \u222a {u, v}. Our first result shows that every halfspace H \u2208 HMP can be expressed as the union of m-shadows over H \u2229 \u25b3uv, for every uv \u2208 \u03b4(H). Lemma 17 Let H \u2208 HMP(G), let uv \u2208 \u03b4(H), and let \u25b3uv = (N(u) \u2229 N(v)) \u222a {u, v}. Then, H consists precisely of those vertices whose label can be inferred from u, v, and a vertex in H \u2229 \u25b3uv. Formally: H = \u22c3 {z/v : z \u2208 H \u2229 \u25b3uv}. Note that Lemma 17 does not yet give a sparse representation, in the sense that it does not bound the number of m-shadows |H \u2229 \u25b3uv| used to obtain H. Combining Lemma 17 with Lemma 16, however, we can prove that H can be expressed as the union of at most \u03c9(G) such m-shadows. Lemma 18 Let H \u2208 HMP(G). 
Then, there exists a subset C \u2286 \u03b4(H) with |C| \u2264 \u03c9(G) such that H = \u22c3 {z/v : zv \u2208 C}. Lemma 18 will be crucial to enable efficient online learning using Winnow. We shall also note the following fact, used by our bounds on |HMP| and by our polynomial-time active learning algorithm. Lemma 19 Let H \u2208 HMP and let uv be a cut-edge of H. Then, the number Nuv of connected components of \u00acG[\u25b3uv] satisfies Nuv \u2264 \u03c9(G). 8. A Polynomial-Time Consistency Checker This section describes our main technical contribution: MH-CHECK (Algorithm 1), an algorithm that finds an m-halfspace consistent with a given labeled sample if one exists. We show that this algorithm runs in polynomial time; this proves Theorem 5, which we restate for convenience. Theorem 5 (Consistency check is polynomial time) Let G = (V, E) and let (S, y) be a labeled sample. Then, MH-CHECK(G, (S, y)) (Algorithm 1) runs in time poly(n) and returns an H \u2208 HMP(G) consistent with (S, y), or reports that no such H exists. Let us illustrate the idea behind MH-CHECK. Let (S, y) be the labeled sample, and suppose a halfspace H consistent with (S, y) exists. For every uv \u2208 E, we guess that uv is in the cut \u03b4(H), and we compute the set \u25b3uv = (N(u) \u2229 N(v)) \u222a {u, v}. By exploiting the results of Section 7, we know that from \u25b3uv \u2229 H one can infer the rest of H. Unfortunately, there seems to be no easy way of learning \u25b3uv \u2229 H, apart from exhaustively guessing all subsets of \u25b3uv. Thus, we need a more sophisticated strategy. We use the results of Section 7, as well as the closure properties of monophonic halfspaces, to construct a set of constraints on the space of all possible subsets H \u2286 V. The hope is that those constraints identify precisely the subsets H \u2286 V that satisfy H \u2208 HMP(G) and are consistent with (S, y). Formally, we will perform a reduction to 2-SAT. For every orientation uv of every edge of G, we construct in time poly(|V|) a 2-SAT formula \u03d5uv whose literals are of the form \u201cx \u2208 H\u201d or \u201cx \u2209 H\u201d, where x \u2208 V. Any H \u2286 V yields an evaluation of \u03d5uv in the obvious way. We can prove that the solutions to \u03d5uv are precisely the m-halfspaces H \u2208 HMP(G) such that uv \u2208 \u03b4(H). Constructing the 2-SAT formula \u03d5uv in polynomial time is the challenging part of the algorithm. Once we have computed \u03d5uv, it is straightforward to extend it to a 2-SAT formula \u03d5uv(S, y) that is satisfied precisely by the solutions to \u03d5uv that are consistent with (S, y): \u03d5uv(S, y) = \u03d5uv \u2227 \u22c0_{x \u2208 y^{-1}(0)} (x \u2209 H) \u2227 \u22c0_{x \u2208 y^{-1}(1)} (x \u2208 H). (1) Algorithm 1 gives the pseudocode of MH-CHECK. The rest of this section describes the construction of \u03d5uv and sketches the proof of Theorem 5. All missing details can be found in Appendix B. Algorithm 1: MH-CHECK. Input: G = (V, E) and a labeled sample (S, y). Output: H \u2208 HMP(G) consistent with (S, y), or failure if no such H exists. 1: for uv \u2208 E do 2: compute the constraints given in Appendix B, and their conjunction \u03d5uv; 3: compute \u03d5uv(S, y) as given by Equation (1); 4: compute a solution H to \u03d5uv(S, y) and return it, if one exists. 5: return \u201cH does not exist\u201d. Constructing \u03d5uv. To construct \u03d5uv, we start by computing the following subsets: \u2022 \u25b3\u2212uv = N(u) \u2229 N(v). 
Note that u, v \u2209 \u25b3\u2212uv. \u2022 \u25a1uv = {x \u2208 V : x appears in a 4-cycle having uv as an edge}. Note that u, v \u2208 \u25a1uv. \u2022 A = \u25a1uv \\ \u25b3\u2212uv = {x \u2208 \u25a1uv : d(x, u) \u2260 d(x, v)}. \u2022 Au = {x \u2208 A : d(x, u) < d(x, v)}. \u2022 Av = {x \u2208 A : d(x, u) > d(x, v)} = A \\ Au. \u2022 T = G \\ E(G[\u25a1uv \u222a \u25b3\u2212uv]). Note that all those sets are computable in time poly(|V|). A set of constraints. As said, \u03d5uv is a conjunction of several constraints (i.e., 2-SAT formulas). Each constraint is based on some of the sets defined above. We describe some of those constraints to convey the idea. The complete list is given in Appendix B, where the full proof of Theorem 5 checks that all of them are satisfied if and only if H \u2208 HMP(G). The first constraint we present here, given by the formula \u03d5conv, ensures that Au and Av are closed under monophonic convex hulls: \u03d5conv = \u22c0_{x \u2208 conv(Au)} (x \u2208 H) \u2227 \u22c0_{x \u2208 conv(Av)} (x \u2209 H). (2) Note that, as computing monophonic convex hulls in an n-vertex graph takes time poly(n) (Dourado et al., 2010), we can then compute \u03d5conv in time poly(n), too. The next constraint, \u03d5\u25b3\u2212uv, ensures that G[\u25b3\u2212uv \u2229 H] and G[\u25b3\u2212uv \\ H] are both cliques, contained in G[\u0393(H)] and G[\u0393(V \\ H)] respectively, as required by Lemma 16. In fact it ensures a slightly stronger condition: that the edge complement of G[\u25b3\u2212uv] is a bipartite graph where adjacent vertices have opposite labels under H, as imposed by Lemma 22: \u03d5\u25b3\u2212uv = \u22c0_{{x,y} \u2208 E(\u00acG[\u25b3\u2212uv])} (x \u2208 H \u2228 y \u2208 H) \u2227 (x \u2209 H \u2228 y \u2209 H). (3) Denote by cc(T) the set of connected components of T. It is not hard to prove that every Ti \u2208 cc(T) is contained either in the m-shadow z/v or in the m-shadow z/u for some z \u2208 \u25b3uv. Therefore, either Ti \u2286 H or Ti \u2286 V \\ H. This is captured by the next constraint: \u03d5T = \u22c0_{Ti \u2208 cc(T)} \u22c0_{x,y \u2208 V(Ti)} (x \u2208 H \u2228 y \u2209 H) \u2227 (x \u2209 H \u2228 y \u2208 H). (4) Finally, we compute constraints ensuring that there is no induced path starting from Au or conv(Au) that violates monophonic convexity. To compute them, we first need the following definition: for X, Y \u2286 V and k \u2208 {3, 4}, denote by \u03a0k(X, Y) the set of all induced paths on k vertices that connect X to Y. We report only one such constraint as an example: \u03d5u,3 = \u22c0_{\u03c0=(x,y,z) \u2208 \u03a03(Au, \u25b3\u2212uv)} (y \u2208 H \u2228 z \u2209 H). (5) It is easy to see that \u03d5\u25b3\u2212uv, \u03d5T, and \u03d5u,3 can be computed in time poly(n), too. 9. Bounding the Number of Monophonic Halfspaces This section describes an algorithm to enumerate all m-halfspaces in FPT time and, through an additional charging argument, a near-tight bound on |HMP(G)|. The first intuition behind our algorithm comes from Lemma 16, as follows. First, we guess each edge uv of G as a cut edge of some H \u2208 HMP. Starting from the endpoints u, v, we then try to find the vertices in U = \u0393(H) \u222a \u0393(V \\ H). Since by Lemma 16 both G[\u0393(H)] and G[\u0393(V \\ H)] are cliques, we have |U| \u2264 2\u03c9(G). At that point we exhaustively guess the partition of U into \u0393(H) and \u0393(V \\ H), and we check if that partition indeed yields an m-halfspace. This would result in a running time of 2^{2\u03c9(G)} poly(n) per edge and, therefore, in total. 
Unfortunately, this idea does not work straight away. The reason is that there is no efficient way to find all of U. However, by Lemma 17, we can show that this approach still works if instead of U we look at the set \u25b3uv = (N(u) \u2229 N(v)) \u222a {u, v}. Thus, the idea is to compute \u25b3uv, iterate over all possible labelings of it, and for each such labeling use Lemma 17 to infer H and check whether H is a halfspace. This still does not yield the desired bound, as \u25b3uv could be much larger than \u03c9(G); in fact we could have |\u25b3uv| = \u2126(n) even though \u03c9(G) = O(1). Thus, we first check if \u25b3uv is a subset of U, which can be done by just testing if \u00acG[\u25b3uv] is bipartite. If this is the case, then we already know that |\u25b3uv| \u2264 2\u03c9(G). By a careful analysis of \u00acG[\u25b3uv] we can then show that we only need to test certain labelings of \u25b3uv, whose number can be bounded by roughly 2^\u03c9(G). The pseudocode of MH-FPT-LIST is given in Appendix C, together with the proof of: Theorem 20 MH-FPT-LIST(G) lists HMP(G) in time 2^\u03c9(G) poly(n). The proof of Theorem 20 immediately implies |HMP(G)| \u2264 2^\u03c9(G) \u00b7 2m. However, starting from the analysis of MH-FPT-LIST, and using a charging argument that takes into account that an edge uv can appear in the cut \u03b4(H) of many m-halfspaces, we can prove the improved bound stated in Theorem 9 (see Appendix C.1 for the complete proof)."
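MH-CHECK ultimately hands the constraints (2)-(5) to a 2-SAT solver. For completeness, here is a generic implication-graph 2-SAT solver in Python (Kosaraju-style strongly connected components); the encoding of literals as (variable, polarity) pairs is an editorial choice, and none of this is the paper's own code.

```python
def solve_2sat(n_vars, clauses):
    """Solve 2-SAT via the standard implication-graph/SCC method.
    Variables are 0..n_vars-1; a literal is (var, is_positive); a clause
    ((a, pa), (b, pb)) means (a-literal OR b-literal) and contributes the
    implications (not a -> b) and (not b -> a). Returns a list of booleans
    (one per variable) or None if unsatisfiable."""
    N = 2 * n_vars
    idx = lambda var, pos: 2 * var + (0 if pos else 1)
    neg = lambda lit: lit ^ 1
    graph = [[] for _ in range(N)]
    for (a, pa), (b, pb) in clauses:
        la, lb = idx(a, pa), idx(b, pb)
        graph[neg(la)].append(lb)
        graph[neg(lb)].append(la)
    # Pass 1: iterative DFS on the implication graph for finish-time order.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(graph[s]))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(node)
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(graph[nxt])))
    # Pass 2: label SCCs on the reverse graph in decreasing finish time;
    # component ids then follow the topological order of the condensation.
    rgraph = [[] for _ in range(N)]
    for u in range(N):
        for w in graph[u]:
            rgraph[w].append(u)
    comp, c = [-1] * N, 0
    for u in reversed(order):
        if comp[u] == -1:
            stack = [u]
            comp[u] = c
            while stack:
                x = stack.pop()
                for w in rgraph[x]:
                    if comp[w] == -1:
                        comp[w] = c
                        stack.append(w)
            c += 1
    assignment = []
    for v in range(n_vars):
        if comp[idx(v, True)] == comp[idx(v, False)]:
            return None  # x and not-x in the same SCC: unsatisfiable
        # set x true iff its positive literal comes later topologically
        assignment.append(comp[idx(v, True)] > comp[idx(v, False)])
    return assignment
```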
}
title_10K/test_title_short_2405.00864v1.json
ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.00864v1",
"title": "Math Multiple Choice Question Generation via Human-Large Language Model Collaboration",
"abstract": "Multiple choice questions (MCQs) are a popular method for evaluating\nstudents' knowledge due to their efficiency in administration and grading.\nCrafting high-quality math MCQs is a labor-intensive process that requires\neducators to formulate precise stems and plausible distractors. Recent advances\nin large language models (LLMs) have sparked interest in automating MCQ\ncreation, but challenges persist in ensuring mathematical accuracy and\naddressing student errors. This paper introduces a prototype tool designed to\nfacilitate collaboration between LLMs and educators for streamlining the math\nMCQ generation process. We conduct a pilot study involving math educators to\ninvestigate how the tool can help them simplify the process of crafting\nhigh-quality math MCQs. We found that while LLMs can generate well-formulated\nquestion stems, their ability to generate distractors that capture common\nstudent errors and misconceptions is limited. Nevertheless, a human-AI\ncollaboration has the potential to enhance the efficiency and effectiveness of\nMCQ generation.",
"authors": "Jaewook Lee, Digory Smith, Simon Woodhead, Andrew Lan",
"published": "2024-05-01",
"updated": "2024-05-01",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Math Multiple Choice Question Generation via Human-Large Language Model Collaboration",
"main_content": "INTRODUCTION Multiple choice questions (MCQs) are widely used to evaluate students\u2019 knowledge since they enable quick and accurate administration and grading [2, 6, 9]. MCQs are constructed in a specific format. The stem refers to the statement on the problem setup and context, followed by a question that needs to be answered. Among the options, the correct one can be referred to as the key, while incorrect ones can be referred to as distractors. As the name implies, distractors in MCQs are typically formulated to align with common errors among students. These distractors are chosen because students either i) lack the necessary comprehension of the knowledge components (KCs) or concepts/skills tested in the question to accurately identify the key as the correct answer or ii) exhibit misconceptions that make them think a specific distractor is correct. While MCQs offer many advantages in student knowledge assessment, manually crafting high-quality MCQs, especially in math-related domains, is a demanding and labor-intensive process [5]. There are three main tasks in this process: First, educators need to formulate a question stem that effectively encapsulates the KCs they aim to test. Second, educators need to anticipate common errors and/or misconceptions among students and create corresponding distractors. Third, educators need to provide feedback to students who select distractors that can help them identify their errors and lead them to the correct answer, to expedite their learning process. The emergence of large language models (LLMs) has raised hopes for making MCQ creation more scalable by automating the process. Specifically, few-shot, in-context learning is promising for generating math MCQs since LLMs can follow instructions based on contextual information conveyed by a few examples. While automated question generation for open-ended questions has shown notable success, generating plausible distractors within MCQs presents a different challenge: distractors should be based on anticipated student errors/misconceptions [12], whereas LLMs have not necessarily learned this information during training. Moreover, math MCQs are challenging since they require mathematical reasoning, which means that distractors cannot be generated using a knowledge graph [13] or paraphrasing tool [8]. Consequently, math educators need to take an important role in guiding LLMs in math MCQ generation: LLMs are responsible for scaling up the process while humans use their expertise efficiently. Therefore, we raise following are two core research questions (RQs) that help identify opportunities to generate math MCQs through collaboration between LLMs and human educators: 1) RQ1: Can LLMs generate valid MCQs, especially distractors and feedback corresponding to common student errors/misconceptions? 2) RQ2: What are the key design elements in a system where human math educators and LLMs collaborate on MCQ generation? 1.1 Contributions In this paper, we introduce a prototype tool called the Human Enhanced Distractor Generation Engine(HEDGE) for math MCQ creation, which leverages the expertise of educators by asking them to edit LLM-generated MCQs in a two-step arXiv:2405.00864v1 [cs.CL] 1 May 2024 \fprocess. In the first step, we prompt the LLM to generate stem, key, and explanation in an MCQ, and ask educators to evaluate and edit the output to make sure it is mathematically correct and relevant to the intended KC. 
In the second step, we prompt the LLM to generate a set of possible errors/misconceptions and the corresponding distractors and feedback, and ask educators to evaluate and edit the output to make sure they correspond to valid distractors for the generated question stem. In a pilot study, we recruit four former/current math teachers to evaluate our tool on generating math MCQs related to five pre-defined KCs. Results show that educators considered 70% of the stems, keys, and explanations generated by GPT-4 as valid. However, they only considered 37% of the generated misconceptions, distractors, and feedback valid, which reveals significant limitations of LLMs in capturing anticipated common errors/misconceptions among real students. This observation underscores the necessity of involving humans in the process of generating math MCQs and leveraging real math educators\u2019 expertise on common errors among students. 2. HUMAN ENHANCED DISTRACTOR GENERATION ENGINE 2.1 Overview Figure 1: HEDGE Overview: the human-AI collaboration setting for generating math MCQs for a given KC. Strikethrough text represents edits made to LLM-generated content, while boldface text indicates misconceptions that correspond to distractors. HEDGE is our prototype for math MCQ generation; it generates a math MCQ for a given mathematical KC, as illustrated in Figure 1. These KCs are categorized into three levels of granularity: coarse, medium, and fine-grained. For instance, KCs can cover either a broad topic such as \u201cbasic arithmetic\u201d or a specific topic like \u201cIdentify that a problem needs to be solved using addition.\u201d HEDGE is designed to utilize LLMs available through OpenAI; the provided example is generated using ChatGPT. We take a two-step approach to MCQ generation: 1) generate the question stem, the answer key, and an explanation, and 2) generate a list of possible misconceptions, corresponding distractors, and feedback messages. We implement both steps by prompting LLMs with an in-context example of these tasks. The in-context example shows the KC converting ratios to fractions, employing a real-life scenario in which Kate and Isaac share yogurt in a 2 : 5 ratio. The objective is to calculate the fraction representing Kate\u2019s share, 2/7. Table 1: The in-context example used for prompting LLMs for math MCQ generation. KC: Coarse: Ratio; Medium: Writing ratios; Fine: Convert ratios to fractions. Stem: Kate and Isaac share yogurt in a 2 : 5 ratio. Kate has \u25a1 of the total. Identify the fraction. Key: 2/7. Explanation: The total ratio is 7 parts. Kate\u2019s share of 2/7 is derived by dividing her 2 parts by the total. Misconceptions: 1. Misinterpreting the ratio as a fraction. 2. Confusing the difference in ratio parts as relevant. 3. Calculating Isaac\u2019s share instead of Kate\u2019s. Distractors: 1. 2/5. 2. 3/7. 3. 5/7. Feedback: 1. The ratio 2 : 5 means 7 parts total, not 2/5. 2. The ratio splits the total, not the difference between parts. 3. Ensure you are calculating Kate\u2019s share, not Isaac\u2019s. In this context, we list three common misconceptions. First, a student mistakenly thinks that the ratio 2 : 5 can be directly converted into the fraction 2/5. Second, a student mistakenly calculates the difference between Kate\u2019s and Isaac\u2019s shares. Third, a student mistakenly thinks the goal is to calculate Isaac\u2019s share. These misconceptions, along with the corresponding feedback on how to resolve them, are included as part of the in-context example. 
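The paper does not publish its exact prompts, so the two-step flow can only be sketched. The following uses the OpenAI Python client with the pilot study's sampling settings (temperature 0.7, top-p 0.9, see Section 3.1); the model name, prompt wording, and the shortened in-context example are all assumptions for illustration, not the authors' implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A truncated stand-in for the Table 1 example; purely illustrative.
in_context_example = (
    "KC: Convert ratios to fractions. Stem: Kate and Isaac share yogurt "
    "in a 2:5 ratio. Kate has [] of the total. Identify the fraction. "
    "Key: 2/7. Explanation: The total ratio is 7 parts. ..."
)

def generate(prompt: str) -> str:
    # temperature/top_p mirror the pilot study's settings
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        top_p=0.9,
    )
    return response.choices[0].message.content

# Step 1: stem, key, and explanation for a target KC.
step1 = generate(
    "Using the example below, write a stem, key, and explanation for the "
    "knowledge component 'Identify equivalent fractions when presented as "
    "numbers'.\n\n" + in_context_example
)

# Step 2: after the educator edits step1, request misconceptions,
# one distractor per misconception, and feedback for each.
step2 = generate(
    "Given the stem, key, and explanation below, list three common student "
    "misconceptions, one distractor per misconception, and feedback for "
    "each.\n\n" + step1
)
```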
Now, we explore a scenario where an educator creates MCQs using our tool based on the concept of basic arithmetic, specifically focusing on mental addition. In the first step, given the target KC, along with an in-context example consisting of the concept, stem, key, and explanation, the LLM generates the following stem: \u201cSally has 5 apples. She gives 2 apples to her friend. How many apples does Sally have left?\u201d However, this stem mistakenly embodies the KC of subtraction rather than addition. Therefore, the educator edits the generated result to align it with the intended KC of addition. In the second step, using the adjusted stem, key, and explanation, as well as incorporating in-context examples with distractors, misconceptions, and feedback, the LLM generates distractors along with corresponding misconceptions and feedback. Figure 1 illustrates option B, which contains a misconception related to subtraction instead of addition, accompanied by feedback designed to correct this error. Additionally, the educator has the option to edit option D to address any misconceptions associated with multiplication. 2.2 User Interface We develop the HEDGE interface, as illustrated in Figure 2. This interface is built using React and employs Firestore as its database for data storage. The interface comprises three components: a Sidebar, a Preview, and a Generation component. The educator generates MCQs using the Generation component as discussed in Section 2.1. Here, after prompting LLMs with the edited stem, key, and explanation, we add a rating step in which the educator assesses the overall quality of the misconceptions, distractors, and feedback on a 5-point Likert scale. Once the educator completes the distractor editing process, the Preview component displays a fully structured MCQ, with the answer options randomized. We store any metadata that isn\u2019t visually represented within the image. Following the completion of distractor editing, the Sidebar component is refreshed. The educator can click on the stem to view the generated image along with the answer sheet, or create a new MCQ. Figure 2: HEDGE Interface: what human participants use to generate an MCQ by editing LLM output. 3. PILOT STUDY 3.1 Experimental Setup We perform a pilot study to assess the usability of HEDGE in generating MCQs. In this study, we select pre-defined KCs and instruct participants to utilize these KCs to simulate a scenario where an educator is crafting MCQs. We select the KCs and the in-context example from a large education company\u2019s content repository; the category labeled \u201cNumber\u201d encompasses various subtopics, such as \u201cBasic Arithmetic,\u201d \u201cFractions,\u201d and \u201cRounding and Estimating.\u201d We choose five KCs, as shown in Table 2, from the KCs that incorporate mathematical expressions, such as fractions, powers, and surds. We utilize GPT-4 as the LLM for the study and set the parameters to temperature = 0.7 and top-p = 0.9 to balance creativity and consistency of the generated MCQs. After completing the study, participants are asked to complete an exit survey. The survey includes open-ended questions and ratings on their satisfaction with the quality of LLM-generated responses and the usability of the tool, using a 5-point Likert scale. 3.2 Participants We recruit four participants for the study, comprising one male and three females, all recruited through Upwork [14]. 
Among them, two currently work as middle/high school math teachers, while the other two currently work as tutors, with prior experience as former math teachers. All participants are selected based on their qualifications and expertise in mathematics education. Each participant was tasked with creating five MCQs using HEDGE, employing the five KCs specified in Table 2. Table 2: Pre-defined math KCs used in the pilot study (coarse-grained; medium-grained; fine-grained). Factors, Multiples and Primes; Factors and Highest Common Factor; Identify factors of a number. Fractions; Equivalent Fractions; Identify equivalent fractions when presented as numbers. Indices, Powers and Roots; Squares, Cubes, etc; Understand the notation for powers. Percentages; Repeated Percentages and Compound Interest; Understand the elements of the formula for compound percentage decrease. Surds; Simplifying Surds; Write a simplified surd in a non-simplified form. 4. RESULTS 4.1 Stem, Key, and Explanation Table 3 shows the stems produced by participants utilizing HEDGE. In the \u201cFine-grained KC\u201d column, the original stem is indicated in italics, while the stems modified by each participant are denoted as a, b, c, and d, respectively. In what follows, we label each MCQ in the format of 1a, where 1 denotes the index of the fine-grained KC and a denotes the index of the participant. Table 3: Question stems generated using HEDGE and the corresponding KCs. 1. Identify factors of a number: Which of these numbers is not a factor of 9? a. What are all the factors of the number 12? b. What are the factors of 18? c. Which of the following is a factor of 18? d. Which of the following numbers is a factor of 36? 2. Identify equivalent fractions when presented as numbers: Which fraction is equivalent to 9/13? a. Sue has a fraction of 4/8. What fraction is equivalent to the fraction she has? b. The fraction 6/18 is equivalent to which of the following fractions? c. Which of the following fractions is equivalent to 3/9? d. Which of the following fractions is equivalent to 2/4? 3. Understand the notation for powers: To calculate 5^32 you need to do ... a. The number 3^2 is equal to \u25a1. What number completes the sentence? b. The number 3^4 represents \u25a1. What number completes the sentence? c. If a^3 is read as \u201ca cubed\u201d, how is a^4 read? d. What is the value of 2^3? 4. Understand the elements of the formula for compound percentage decrease: A car depreciates in value by 10% each year. If a car was bought for $4500, what calculation would find the value of the car after 3 years? a. A car that costs $5000 loses 12% of its value each year. After one year, the car is worth \u25a1. What completes the sentence? b. A new car loses 20% of its value each year. If the car was originally priced at $15,000, what will be its value after 2 years? c. The price of a car is reduced by 5% each year. If the car was originally priced at $5000, what will be the price of the car after two years? d. A car depreciates in value by 10% each year. If the car is initially worth $35000, what is the formula to calculate the car\u2019s value after n years? 5. Write a simplified surd in a non-simplified form: 5\u221a13 = \u221an. What is the value of n? a. If 2\u221a5 is a simplified surd, what is its non-simplified form? b. The square root of 18 is written in simplified surd form as 3\u221a2. How can it be rewritten in a non-simplified form? c. Simplify the surd \u221a45. d. A non-simplified surd is \u221a8. How can it be represented in simplified form? Out of 20 sets of stem, key, and explanation generated by the LLM, participants deemed 14 valid. Among these valid sets, two added more details in their explanations, while the remaining sets were adopted without any need for edits. For example, italicized details were added in the explanation for 2c: \u201cThe fraction 3/9 simplifies to 1/3 because both the numerator and the denominator can be divided by a common factor of 3. 3 divided by 3 is 1, and 9 divided by 3 is 3. Hence, 1/3 is an equivalent fraction to 3/9.\u201d The other case was to make the question setting more realistic: in 4d, the educator edited the initial price of the car from $5000 to $35000. This adjustment reveals the limitations of LLMs in accurately representing real-life problem scenarios. We now analyze the cases that participants deemed invalid. Grammar error. In 2a, the educator corrected the grammar error \u201cshe have\u201d to \u201cshe has.\u201d No other grammar errors occurred in the study besides this one, underscoring the capability of LLMs to consistently produce grammatically correct sentences. Not mastering the KC. Regarding the 5th KC, GPT-4 shows a lack of knowledge of the distinction between simplified and non-simplified surds. The following are invalid stems generated by GPT-4: 1) 5a. If \u221a20 is a simplified surd, what is its non-simplified form? 2) 5c. Express the simplified surd \u221a45 in a non-simplified form. 3) 5d. A simplified surd is \u221a8. How can it be represented in non-simplified form? These invalid stems misled one participant into editing a stem so that it conveys the KC of simplifying a surd, which is the opposite of writing a surd in non-simplified form (5c). Calculation error. In 4c, GPT-4 generated a key of $4750, erroneously calculating the car price after one year instead of two years. However, in the other three cases within the same KC, GPT-4 calculated correctly, showing its math problem-solving skills. 4.2 Distractor, Misconception, and Feedback Table 4 shows a breakdown of the 60 distractors (comprising three distractors for each of the 20 stems), categorized based on the validity of misconceptions, distractors, and feedback. Adopt All Responses (Case 1, 37%). Among the 60 distractors, educators identified 22 responses as valid, including two cases that are actually invalid. Edit Feedback Only (Case 2, 8%). These cases have a valid misconception and distractor, and educators made adjustments to the feedback to enhance its clarity. For example, one of the distractors for 2d is 2/3. The feedback generated by GPT-4 is as follows: \u201cYou seem to have compared only the numerators of the fractions. However, when checking for equivalent fractions, both the numerator and denominator need to be considered. The fraction 2/3 is not equivalent to 2/4.\u201d The educator removed the redundant final sentence and introduced \u201cRemember, equivalent fractions require both the numerator and denominator to be proportional.\u201d, which helps students better understand the importance of considering both the numerator and denominator when comparing fractions for equivalence. This adjustment emphasizes that the equivalence between fractions relies on maintaining proportionality between the numerator and denominator. 
While GPT-4 provides valid explanations, it sometimes fails to include critical insights that are necessary for students\u2019 improvement. Table 4: Breakdown of the 60 generated distractors and their quality ratings (\u2713: valid, \u2717: invalid). Case Misconception Distractor Feedback Ratio Rating 1 \u2713 \u2713 \u2713 37% 4.8 2 \u2717 8% 2.8 3 \u2717 \u2713 4 \u2717 18% 2.1 5 \u2717 \u2713 \u2713 12% 3.4 6 \u2717 5% 3.0 7 \u2717 \u2713 8 \u2717 20% 2.3 Adopt Misconception Only (Case 4, 18%). These cases are often due to a mismatch between the misconception and the distractor. In 4c, the misconception \u201cThe student mistakenly believed that the car depreciates by a constant amount each year, not a percentage.\u201d did not match the distractor 35000 \u2212 0.10n. Additionally, there are cases when, even if the distractor is valid, it may not effectively encapsulate student misconceptions. In 1a, the educator updated the distractor from 1, 2, 3, 4, 6, 12, 24 to 12, 24, 36, 48, 60, making it a more attractive distractor for those who confuse factors with multiples. Edit Misconception Only (Case 5, 12%). As in Case 4, invalid cases are often due to a mismatch between the misconception and the distractor. In 5d, the misconception \u201cThe student may believe that all square roots are in their simplest form.\u201d did not match the distractor \u201c\u221a2.\u201d The educator updated the misconception to \u201cThe student may have confused square roots with cube roots.\u201d, providing a more accurate misconception for the distractor. Additionally, there are cases when, even if the misconception is valid, it may not be the most likely reason why a student selects the distractor. In 1c, the educator updated the misconception for the distractor \u201c4\u201d from \u201cThe student might think that only the numbers less than 18 can be the factors of 18.\u201d to \u201cThe student might think that any even number can be a factor of an even number.\u201d, making it more accurate for addressing the student\u2019s misconception. Adopt Distractor Only (Case 6, 5%). These cases were when educators adopted distractors and edited wrong misconceptions and feedback. For example, in the case of 5a, \u221a10 is a valid distractor, as the student could simply multiply 2 and 5. However, the misconception and feedback generated by GPT-4 did not align with the distractor; therefore the educator had to edit them accordingly. In Cases 4, 5, and 6, LLMs revealed inconsistent mathematical reasoning when analyzing misconceptions, distractors, and feedback for a given stem. The inconsistency underscores the necessity for human educators to manually align distractors with their underlying misconceptions and corresponding feedback in many cases. Reject All Responses (Case 8, 20%). These cases were when misconceptions had poor quality or were wrong, resulting in inadequate distractors and feedback. Two of the distractors generated for 2b by GPT-4 show both poor quality and wrong misconceptions. While the misconception in the first distractor is valid, stating that \u201cThe student may not divide both the numerator and denominator by the same number,\u201d the distractor itself, represented by 3/9, and its associated feedback lack coherence and fail to align with this misconception. 
Meanwhile, the misconception in the second distractor (8/24) lacks coherence, as expressed in the following manner: \u201cThe student may confuse the concept of equivalent fractions with simplifying fractions.\u201d These results reveal that LLMs often fail to anticipate valid misconceptions and errors that are common among students, making human educators\u2019 involvement crucial in the creation of math MCQs. 4.3 Takeaways from the Survey After the study, participants were asked to fill out a survey about their experience using HEDGE. We categorize the results into two themes: quality of LLM-generated responses and tool usability. 4.3.1 Quality of LLM-generated responses. Stem, Key, and Explanation. On a 5-point Likert scale, the participants gave an average rating of 4. This rating aligns with the open-ended responses, which regarded most of the generated stems, keys, and explanations as valid. However, two participants addressed the tool\u2019s limitations in terms of question difficulty. One participant points out that the questions appear to be at a low Bloom\u2019s Taxonomy level. For example, \u201cIf a^3 is read as \u2018a cubed\u2019, how is a^4 read?\u201d While it\u2019s important for students to grasp the verbal representation of these terms, educators often place greater emphasis on whether students understand the equivalent expressions and concepts associated with them. The other participant points out that the Depth of Knowledge (DOK) levels predominantly focused on Level 1 (Recall) and Level 2 (Skill or Concept). We can prompt LLMs to generate questions at various Bloom\u2019s or DOK levels to enhance the question difficulty and promote deeper understanding [3]. Moreover, we can invite educators to craft in-context examples with higher Bloom\u2019s or DOK levels. Distractor, Misconception, and Feedback. On a 5-point Likert scale, the participants gave an average rating of 2.5. This rating aligns with the open-ended responses, which regarded most of the generated misconceptions, distractors, and feedback as not reflecting the errors students typically make in the classroom, based on the participants\u2019 teaching experience. The responses again point to the observation that LLMs do not understand the errors that students are likely to make. One participant suggests providing a \u201cbank\u201d of misconceptions that educators could refer to. We can prompt LLMs to generate multiple misconceptions and engage educators in ranking these misconceptions based on their alignment with actual student errors. 4.3.2 Tool Usability User Interface. On a 5-point Likert scale, the participants gave an average rating of 4 for their comfort level with generating MCQs using HEDGE, while giving an average rating of 3.25 for the effectiveness of generating high-quality MCQs. Participants are enthusiastic about the tool\u2019s potential for simplifying the process of generating MCQs but are nevertheless skeptical about LLMs\u2019 capability to generate valid distractors. We will need to enhance the tool by improving the quality of generated distractors to align more closely with educators\u2019 expectations. 5. CONCLUSIONS AND FUTURE WORK In this paper, we conducted a pilot study using a prototype tool, HEDGE, to explore the opportunity for collaboration between LLMs and humans in generating math MCQs. 
We identified that while LLMs can generate valid stems, keys, and explanations, they are currently limited in capturing anticipated student errors, which is reflected in invalid misconceptions, distractors, and feedback. This study opens up many avenues for future work. First, we can extend the prompt with more in-context examples. Currently, we use only one in-context example; using multiple in-context examples can help guide LLMs to capture valid misconceptions for the target stem. As mentioned in our survey takeaways, we can also add in-context examples with different Bloom\u2019s taxonomy and difficulty levels to enhance the diversity of the generated questions. We could also use techniques for optimally selecting these in-context examples [10]. Second, we can change the interface to let educators choose generated distractors from a bank that contains more than three distractors. When building the bank, we can employ a k-nearest neighbor approach that gauges question similarity and leverage LLMs to generate distractors [4]. Educators will then carry less of the burden of anticipating misconceptions and may even benefit from discovering misconceptions they might have overlooked. Third, developing this tool into a platform where educators can share their misconceptions will result in a constantly expanding error bank, which will benefit future MCQ generation. Fourth, we can provide educators the choice to create personalized questions by adding a module to customize the named entities or topics (e.g., sports, popular culture) in an MCQ to stimulate student interest and make the question more culturally relevant [15]. Lastly, we can extend the area of human-LLM collaboration to other domains beyond math MCQ generation, such as feedback generation [11], question generation [7], and programming education [1]. 6. ACKNOWLEDGEMENTS We thank Schmidt Futures and the NSF (under grants IIS-2118706 and IIS-2237676) for partially supporting this work."
}
title_10K/test_title_short_2405.00899v1.json
ADDED
@@ -0,0 +1,19 @@
{
"url": "http://arxiv.org/abs/2405.00899v1",
"title": "Characterising the Creative Process in Humans and Large Language Models",
"abstract": "Large language models appear quite creative, often performing on par with the\naverage human on creative tasks. However, research on LLM creativity has\nfocused solely on \\textit{products}, with little attention on the creative\n\\textit{process}. Process analyses of human creativity often require hand-coded\ncategories or exploit response times, which do not apply to LLMs. We provide an\nautomated method to characterise how humans and LLMs explore semantic spaces on\nthe Alternate Uses Task, and contrast with behaviour in a Verbal Fluency Task.\nWe use sentence embeddings to identify response categories and compute semantic\nsimilarities, which we use to generate jump profiles. Our results corroborate\nearlier work in humans reporting both persistent (deep search in few semantic\nspaces) and flexible (broad search across multiple semantic spaces) pathways to\ncreativity, where both pathways lead to similar creativity scores. LLMs were\nfound to be biased towards either persistent or flexible paths, that varied\nacross tasks. Though LLMs as a population match human profiles, their\nrelationship with creativity is different, where the more flexible models score\nhigher on creativity. Our dataset and scripts are available on\n\\href{https://github.com/surabhisnath/Creative_Process}{GitHub}.",
"authors": "Surabhi S. Nath, Peter Dayan, Claire Stevenson",
"published": "2024-05-01",
"updated": "2024-05-01",
"primary_cat": "cs.HC",
"cats": [
"cs.HC",
"cs.AI",
"cs.CL",
"q-bio.NC"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Characterising the Creative Process in Humans and Large Language Models",
"main_content": "Introduction Much recent work has benchmarked and quantified the generative creative aptitudes of large language models (LLMs) (Chakrabarty et al. 2023; Gilhooly 2023; Franceschelli and Musolesi 2023; Tian et al. 2023; Wang et al. 2024; Hubert, Awa, and Zabelina 2024). LLMs often perform as well as the average human on creative thinking tasks such as the Alternate Uses Task (AUT) (Orwig et al. 2024; Koivisto and Grassini 2023; Stevenson et al. 2022; G\u00b4 oes et al. 2023; Guzik, Byrge, and Gilde 2023). However, these works largely analysed creativity from a Product perspective (Rhodes 1961), assessing how original and useful model responses are to determine \u201cwhat makes them creative (or not)\u201d. An equally important component of creativity, less studied in the field of Artificial Creativity, is the Process perspective (Rhodes 1961), addressing the question of \u201chow creativity arises\u201d. This paper aims to fill this gap and characterise human and LLM creativity by looking at the creative process (Stevenson et al. 2022), particularly the way humans and LLMs explore semantic spaces while generating creative ideas. RESPONSES (ri) FLEXIBLE PERSISTENT MIXED r1 r2 r3 r4 r5 r6 r7 r8 r11 r12 r13 r14 r15 r16 r17 r18 r9 r10 r1 r2 r3 r4 r5 r6 r7 r8 r11 r12 r13 r14 r15 r16 r17 r18 r9 r10 r1 r2 r3 r4 r5 r6 r7 r8 r11 r12 r13 r14 r15 r16 r17 r18 r9 r10 Figure 1: Example persistent, flexible and mixed response sequences. ri denotes the ith response, coloured regions denote the semantic spaces/concepts/categories. Note, in practice, most sequences will be mixed, containing different patterns of persistence and flexibility. When humans generate creative ideas, for example, alternate uses for a \u201cbrick\u201d, two types of response pathways are observed (Baas et al. 2013; Nijstad et al. 2010). In the persistent pathway, responses stem from deeper search within limited conceptual spaces, exhibiting high clustering and similarities in responses (e.g., using a brick to break a window, break a lock, and as a nutcracker; i.e., for breaking things). In the flexible pathway, responses arise from broader search across multiple conceptual spaces, exhibiting frequent jumps between categories and dissimilarities in responses (e.g., using a brick to build a dollhouse, as an exercise weight, and as a coaster) (Figure 1). There are two complementary ways of quantifying response clustering borrowed from the literature on memory search and semantic fluency. The first is to categorise responses temporally using inter-item retrieval times, i.e. responses that occur shortly after each other are expected to belong to the same category and longer pauses are expected to signal jumps from one category to another. The second method is to group successive responses semantically using a set of pre-defined categories (e.g., into \u201cbuilding\u201d or \u201cbreaking\u201d for uses of a brick). The number of categories divided by the number of responses provides a flexibility index (Hills, Jones, and Todd 2012). Hass (2017) compared clustering in creative thinking tasks like AUT to that in a verbal fluency task (VFT) of naming animals and reported less evident clustering and higher flexibility in AUT than VFT (where responses were highly clustered, for example naming zoo animals followed by sea animals). 
However, the methods used in these works are either based on handcrafted lists of categories or on response-time profiles which do not apply to responses from LLMs. In addition, these works show that semantic similarity is related to jumps in response sequences, but semantic similarity has not been used to code for jumps directly until now. In this paper, we propose a fully automated, data-driven method to signal jumps in response sequences using response categorisation and semantic similarities, and apply it to characterise the creative process in both humans and LLMs. In the next sections, we first introduce the method and investigate its reliability and validity. We then apply it to characterise human and LLM flexibility on the AUT and VFT. We find that LLMs as a population match the variability in human response sequences on AUTs but, unlike in humans, their flexibility relates differently to creativity. We also discuss how these insights can be used to deploy LLMs as artificial participants or co-creators. Method Data Collection: We collected data from humans and LLMs on the AUT for \u201cbrick\u201d and \u201cpaperclip\u201d, and the VFT of naming animals (Figure 2A). Human data were collected from (anonymized) undergraduate participants using a within-subjects design. For the AUT, participants listed as many creative uses for \u201cbrick\u201d and \u201cpaperclip\u201d as possible in a fixed time of 10 minutes. For the VFT, participants named as many animals as possible in a fixed time of 2 minutes. Participants not adhering to instructions were removed, resulting in a total of 220 participants. The responses, originally in Dutch, were translated to English for analysis using the deep-translator Python package. Translations were manually inspected to correct for errors due to spelling mistakes. LLM data were collected in English by prompting several recent open and closed source models. For open source models, we used the Together AI API. The prompt matched instructions given to humans, but with specific response number and length requirements. We tested multiple prompt versions to achieve the best quality LLM responses. The final prompt for the AUT instructed LLMs to generate n_AUT creative uses for \u201cbrick\u201d or \u201cpaperclip\u201d, and to answer in short phrases of maximum m_AUT words. For the VFT, the final prompt instructed LLMs to name n_VFT animals, and to answer in short phrases of maximum m_VFT words. n_AUT and n_VFT were set to the mean number of human responses (N) in the AUT (=ceil[max[N_brick, N_paperclip]]) and VFT tasks. m_AUT and m_VFT were set to the maximum mean human response word length (M) in the AUT (=floor[max[M_brick, M_paperclip]]) and VFT. In pilots, only \u223c20 models gave valid responses for the AUT tasks, of which we selected the 4 that followed the prompt instructions for length and number of responses, namely, the Meta 70B Llama 3 chat HF (Llama) model, Mistral AI 7B Instruct (Mistral) model, NousResearch 7B Nous-Hermes Mistral DPO (NousResearch) model and Upstage 10.7B SOLAR Instruct (Upstage) model. We experimented with temperature and repetition penalty parameters. However, varying the repetition penalty did not produce higher quality responses, so we only varied the temperature, through 11 levels (0-1, inclusive, at every 0.1). 
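A small sketch of how the prompt parameters above could be derived from the human data. Only the ceil/floor-of-max rule comes from the text; the variable names (per-task human means) and the prompt wording are illustrative assumptions:

```python
import math

# assumed precomputed from the human data (illustrative names)
n_aut = math.ceil(max(mean_n_brick, mean_n_paperclip))       # responses to request
m_aut = math.floor(max(mean_len_brick, mean_len_paperclip))  # max words per response

# hypothetical prompt wording; the paper's exact phrasing is not reproduced here
aut_prompt = (f"List {n_aut} creative uses for a brick. "
              f"Answer in short phrases of at most {m_aut} words each.")
```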
We also tested the latest versions of 4 closed source models: OpenAI GPT-4 turbo (GPT), Google Palm bison (Palm), Google Gemini 1.0 pro (Gemini) and Anthropic Claude 3 (Claude), with the same prompt and parameters as for the open models. All 4 models generated valid responses and adhered to the response number and length instructions. We generated 5 samples per model \u00d7 temperature combination, and therefore our LLM data set consisted of 440 (8 \u00d7 11 \u00d7 5) LLM response sequences in all. The 220 human and 440 LLM response sequences were cleaned by removing stopwords, punctuation and common words such as \u201cuse\u201d or \u201cbrick\u201d/\u201cpaperclip\u201d. They were also manually inspected for correctness and validity. Invalid responses (verbatim repeats/junk responses) were removed [1]. Response Categorisation, Semantic Similarities and Jump Signal: First, we encoded all responses using sentence-transformers, using the gte-large model given its encodings\u2019 suitability for clustering. Each response was encoded as a 1024-dimensional normalised embedding vector. Next, all responses were aggregated, dropping duplicates, resulting in 2770 unique alternate uses for brick, 3512 unique alternate uses for paperclip and 482 unique animals. The vector embeddings of these response sets were categorised using the scipy linkage and fcluster hierarchical clustering functions with the ward distance metric and a distance threshold chosen such that the mean minimum pairwise semantic similarity (vector dot product) per category was just above 0.7. This resulted in 26 brick, 28 paperclip, and 15 animal categories. Using these categories, we defined a binary variable jumpcat for each response in a response sequence (except for the first response) as 1 if it marked a change in category compared to the previous response, and 0 otherwise. jumpcat captured coarse-grained similarities (for example, \u2018elevation\u2019 and \u2018table leg\u2019 belonged to the same category, as did \u2018keep scarf together\u2019 and \u2018hang clothes\u2019). To address finer-grained differences, we evaluated the semantic similarity (SS) between successive embeddings of responses in a response sequence (Hass 2017; Camenzind et al. 2024). Using SS, we defined a second binary variable jumpSS and set it to 0 if SS was above a threshold and 1 otherwise. jumpSS signaled finer-grained similarities (for example, \u2018piercing\u2019 and \u2018ring\u2019). A combined jump signal was defined as their logical AND: jump = jumpcat \u2227 jumpSS. We set the threshold for jumpSS such that jump has at least 0.8 True Positive and True Negative Rates on hand-coded [2] jump signals for AUT brick. Our entire procedure is illustrated in Figure 2B. We conduct psychometric analyses to investigate the reliability and validity of the method. [1] For AUT brick, we removed low temperature responses in the Mistral and NousResearch models as these were verbatim repeats. For VFT, we excluded the NousResearch and Palm models fully, as they only listed animals in alphabetical order. [2] The jump signals were hand-coded by the first author. Figure 2: (A) Humans and LLMs perform 3 tasks\u2014Alternate Uses Task (AUT) for brick and paperclip, and a Verbal Fluency Task (VFT) of naming animals. (B) Our method for obtaining jumps in the response sequence. 
Sentence embeddings are used for assigning response categories and evaluating semantic similarities, which respectively give jumpcat and jumpSS. Their logical AND gives jump. Jump Profiles and Participant Clustering: Using the jump signals, we determined a jump profile for each response sequence as the cumulative count of jumps at each response (for example, a response sequence of length 4 with jumps [1, 0, 1] will have a jump profile [1, 1, 2]). Different human participants produced different numbers of responses, so we considered just the first 18 responses from each sequence (the median human sequence length), excluding shorter sequences. The remaining profiles (AUT brick: 97; AUT paperclip: 103; VFT: 195) were clustered using KMeans (sklearn KMeans) with K-Means++ initialization (Arthur, Vassilvitskii, and others 2007) per task. LLM jump profiles were assigned to the closest human cluster. Evaluating Response Creativity: We used Open Creativity Scoring ocsai-chatgpt (Organisciak et al. 2023) to score response originality in AUT brick and paperclip. Results Jump Signal Reliability and Validity: We first test the reliability and validity of the jump signal. For reliability, we measured the test-retest correlation of the number of jumps for AUT brick and paperclip response sequences from 81 participants (who had >=18 responses in both). We found a positive Pearson correlation of r=0.42 (p<0.001, CI=[0.22, 0.58]), which is high considering that the test-retest and alternate-form reliability of AUT product creativity seldom exceeds r=0.5. For validity, we test for agreement with past findings in humans. In keeping with Hass (2017), who showed more jumping in AUT than VFT, we found significantly more jumps in AUT brick and paperclip than in VFT (both p<0.001). Moreover, in line with Hass (2017) and Hills et al. (2012), we also found greater mean response times for jump = 1 than jump = 0 (p<0.001). Participant Clusters: Based on the literature and clustering elbow plots, we assigned human jump profiles to 3 clusters for each task (Figure 3A). These map to different levels of flexibility in the response sequences\u2014cluster 1: persistent profiles (7-12 jumps for AUT and 1-6 jumps for VFT); cluster 2: flexible profiles (15-18 jumps for AUT and 6-11 jumps for VFT); and cluster 3: mixed profiles (12-16 jumps for AUT and 4 jumps for VFT). The different numbers of jumps in AUT and VFT are clear, where the flexible cluster in VFT closely resembles the persistent cluster in AUTs. Thus the classifications are task-relative. The proportion of participants assigned to each cluster further reinforces that people are more flexible in AUT and more persistent in VFT. LLM Assignments: The LLM jump profiles were assigned to one of the 3 human clusters with proportions of assignment shown in Figure 3B. Different models exhibited different biases towards persistence or flexibility in the AUTs. For example, in AUT brick, Upstage, GPT, Claude and Palm are mostly flexible while Llama and Gemini are mostly persistent. However, models were less consistent across the two AUTs. In AUT paperclip, while Upstage and GPT remained mostly consistent in their assignments, Llama and Gemini switched from persistent to flexible. This is also evident in the test-retest correlation, which was lower than for humans (r=0.22, p<0.001, CI=[0.12, 0.31]). Taken together, we find that LLMs are not significantly different from humans in number of jumps on AUTs (p>0.05 in both). 
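A minimal sketch of the pipeline just described, assuming the gte-large sentence encoder and thresholds tuned as stated in the text; the exact threshold values and variable names are illustrative, not the authors' released code:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from scipy.cluster.hierarchy import linkage, fcluster

encoder = SentenceTransformer("thenlper/gte-large")

def categorise(unique_responses, dist_threshold):
    # Ward-linkage hierarchical clustering of normalised embeddings;
    # dist_threshold is tuned so that mean minimum within-category
    # similarity is just above 0.7, as in the text.
    emb = encoder.encode(unique_responses, normalize_embeddings=True)
    labels = fcluster(linkage(emb, method="ward"),
                      t=dist_threshold, criterion="distance")
    return dict(zip(unique_responses, labels))

def jump_sequence(responses, categories, ss_threshold):
    # jump = jump_cat AND jump_ss, computed per transition
    emb = encoder.encode(responses, normalize_embeddings=True)
    jumps = []
    for t in range(1, len(responses)):
        jump_cat = categories[responses[t]] != categories[responses[t - 1]]
        ss = float(np.dot(emb[t], emb[t - 1]))  # cosine similarity (normalised)
        jumps.append(int(jump_cat and ss < ss_threshold))
    return jumps

def jump_profile(jumps, length=18):
    # cumulative jump count over the first 18 responses (17 transitions);
    # shorter sequences are dropped, as in the text
    if len(jumps) < length - 1:
        return None
    return np.cumsum(jumps[: length - 1])
```

Human profiles could then be clustered with `sklearn.cluster.KMeans(n_clusters=3, init="k-means++")` and LLM profiles assigned to the nearest human cluster via `km.predict`.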
However, on the VFT, LLMs were overwhelmingly persistent, and significantly more persistent than humans (p<0.001). Comparing the human and model cluster assignment percentages, we observe that the Mistral and NousResearch models closely resemble the human distribution in AUT brick; the Gemini model does so in AUT paperclip; but no model resembles humans for VFT. Temperature influenced neither cluster assignments nor the number of jumps in AUTs (p>0.05). In VFT, temperature did influence jumping (p<0.001), but did not influence cluster assignment. This is consistent with previous research suggesting no role of temperature in flexibility (Stevenson et al. 2022) and suggests that model responses cannot be easily manipulated parametrically. Relationship to Creativity: We calculated the mean originality ratings in each response sequence. For humans, mean originality was similar for persistent and flexible clusters in both AUTs (both p>0.05). Mean originality did not predict the number of jumps in AUT brick (p>0.05), and weakly predicted jumps in AUT paperclip (0.01<p<0.05). This is in line with the literature suggesting that creativity can arise both from deeper and broader search of semantic spaces. In contrast, for LLMs, on both AUTs, mean originality was higher in the flexible cluster compared to the persistent cluster (both p<0.01), and mean originality predicted the number of jumps (both p<=0.01). Therefore, even though the number of jumps in AUT tasks for humans and LLMs do not differ, their relationship with originality differs. Further, LLMs also scored higher on overall mean response sequence originality compared to humans on AUTs (both p<0.001). Figure 3: (A) 3 human clusters for each task: persistent, flexible and mixed. Each coloured trajectory represents 1 participant. Percentages in each row indicate the percentage of participants assigned to that cluster. (B) Percentages of each LLM\u2019s response sequences assigned to each cluster. * indicates not all temperatures for that model were included (0.4-1 for Mistral, and 0.7-1 for NousResearch were used). Discussion We introduce an automated, data-driven method to study the creative process in humans and LLMs. We defined an algorithmic jump signal to indicate persistence or flexibility while solving divergent thinking tasks such as the AUT and VFT. Our jump signal proved reliable and valid for human responses and replicated findings obtained with traditional methods (Hills, Jones, and Todd 2012; Hass 2017). We used this signal to investigate human and LLM jump profiles. For AUT, we found that both human and LLM jump profiles spanned from persistent to flexible. As in previous literature, human creativity was not correlated with flexibility profile (De Dreu, Baas, and Nijstad 2008). However, in LLMs, more flexible models had higher originality scores. Our work has limitations. First, we used the same embedding model (with different distance metrics) for response categorisation and semantic similarities. 
Second, each response was categorised into a single category\u2014however, a response such as \u201cthrow a brick to produce sound\u201d should include concepts of both \u201cthrow\u201d and \u201cproduce sound\u201d. Multiclass classification based on predefined categories could tackle this issue. Third, we only scored response originality, ignoring utility, which could inflate creativity comparisons as inappropriate responses (e.g., \u201cusing a brick as a ribbon\u201d) have high originality scores but are not considered creative. Lastly, the AUT being a popular creativity task (especially AUT brick [3]) could be a source of LLM data contamination (Gilhooly 2023). The implications of our work are many. There is an emerging trend in cognitive science to use LLMs as artificial participants (Argyle et al. 2023; Frank 2023; Dillion et al. 2023; Binz and Schulz 2023). Our results suggest that LLMs are often biased towards either persistence or flexibility, regardless of parameter settings such as temperature, and may be inconsistent across tasks. Therefore, we suggest using a host of models to approximate the human distribution and draw valid inferences. The creative collaboration literature suggests that more diverse teams yield more creative ideas (Hoever et al. 2012). An implication of our work for human-AI co-creativity is to use an LLM to complement one\u2019s own brainstorming pathway. For example, more persistent participants could collaborate with a flexible model such as Upstage, which could help them diversify their ideas. Through our work, we offer a first step to study human and LLM creative processes under the same metric. We provide some directions that are worth exploring in the future to further our understanding of human and artificial verbal creativity processes. [3] Checked with the WIMBD tool (Elazar et al. 2023)"
}
title_10K/test_title_short_2405.00902v1.json
ADDED
@@ -0,0 +1,18 @@
{
"url": "http://arxiv.org/abs/2405.00902v1",
"title": "MESA: Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure",
"abstract": "Multi-agent reinforcement learning (MARL) algorithms often struggle to find\nstrategies close to Pareto optimal Nash Equilibrium, owing largely to the lack\nof efficient exploration. The problem is exacerbated in sparse-reward settings,\ncaused by the larger variance exhibited in policy learning. This paper\nintroduces MESA, a novel meta-exploration method for cooperative multi-agent\nlearning. It learns to explore by first identifying the agents' high-rewarding\njoint state-action subspace from training tasks and then learning a set of\ndiverse exploration policies to \"cover\" the subspace. These trained exploration\npolicies can be integrated with any off-policy MARL algorithm for test-time\ntasks. We first showcase MESA's advantage in a multi-step matrix game.\nFurthermore, experiments show that with learned exploration policies, MESA\nachieves significantly better performance in sparse-reward tasks in several\nmulti-agent particle environments and multi-agent MuJoCo environments, and\nexhibits the ability to generalize to more challenging tasks at test time.",
"authors": "Zhicheng Zhang, Yancheng Liang, Yi Wu, Fei Fang",
"published": "2024-05-01",
"updated": "2024-05-01",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.MA"
],
"label": "Original Paper",
"paper_cat": "Multi AND Agent AND Reinforcement AND Learning",
"gt": "MESA: Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure",
"main_content": "INTRODUCTION Reinforcement learning (RL) algorithms often adopt a trial-anderror learning paradigm and optimize the policy based on the reward signals given by the environment. The e\ufb00ectiveness of RL relies on e\ufb03cient exploration, especially in sparse reward settings, as it is critical to get su\ufb03cient experiences with high rewards to guide the training. \u2217Equal contribution. Proc. of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024), N. Alechina, V. Dignum, M. Dastani, J.S. Sichman (eds.), May 6 \u2013 10, 2024, Auckland, New Zealand. \u00a9 2024 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). This work is licenced under the Creative Commons Attribution 4.0 International (CC-BY 4.0) licence. Figure 1: Illustration of structured exploration and unstructured exploration behavior in the 2-player climb game. The rows and columns indicate the players\u2019 action space. While unstructured exploration aims to visit novel states, structured exploration exploits structures in the joint stateaction space, helping agents coordinatedly and more e\ufb03ciently explore the potential high-reward subspace. The exploration challenge has been studied extensively and existing works can be categorized mainly into two streams. One core idea with great success is to incentivize the agent to visit the underexplored states more frequently by adding an intrinsic reward based on a visitation measure [3, 25, 28, 37] or some other heuristics [17, 39]. However, in multi-agent settings, due to the exponential growth of the joint state-action space, simply visiting more novel states can be increasingly ine\ufb00ective. Exploration policies need to better capture the low-dimensional structure of the tasks and leverage the structural knowledge for higher exploration e\ufb03ciency. Another line of work speci\ufb01cally learns exploration strategies. However, these works do not explicitly consider the underlying task structure. For example, Mahajan et al. conditions the policy on a shared latent variable [24] learned via mutual information maximization. Liu et al. adopts a goal-conditioned exploration strategy by setting state features as goals [21]. Other works in the singleagent settings [6, 26, 35] learn exploration policies through a prede\ufb01ned intrinsic reward. All these works train the exploration policy using task-agnostic exploration-speci\ufb01c rewards. In Section 4, we will present a simple matrix game to show that popular exploration methods can have di\ufb03culties \ufb01nding the optimal solution due to the reward structure of the game. \fHow can we enable the agents to more e\ufb00ectively explore by leveraging the intrinsic structure of the environment? We adopt a meta-exploration framework (i.e., learning to explore) for MARL: we \ufb01rst train multiple structured exploration policies from a set of training tasks (referred to as the meta-training stage), and then use these exploration policies to facilitate agents\u2019 learning in a testtime task, which is typically a new task sampled from the task distribution (referred to as meta-testing stage). We develop a multiagent meta-exploration method, Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure (MESA) for fully cooperative settings. MESA leverages the task structures by explicitly identifying the agents\u2019 high-rewarding joint state-action subspace in the training tasks. 
It then trains a set of diverse exploration policies to cover this identi\ufb01ed subspace. The exploration policies are trained with a reward scheme induced by the distance to the high-rewarding subspace. The meta-learned exploration policies can be combined with any o\ufb00-policy MARL algorithm during the meta-testing stage by randomly selecting learned exploration policies to collect valuable experiences. Such structured exploration can help the agents to learn good joint policies e\ufb03ciently (Figure 1). We empirically show the success of MESA on the matrix climb game and its harder multi-stage variant. In addition, we evaluate MESA in two continuous control tasks, i.e., the MPE environment [23] and the multi-agent MuJoCo benchmark [29]. We demonstrate the superior performance of MESA compared to existing multi-agent learning and exploration algorithms. Furthermore, we show that MESA is capable of generalizing to unseen test-time tasks that are more challenging than any of the training tasks. 2 RELATED WORK Exploration has been a long-standing challenge in RL with remarkable progress achieved in the single-agent setting [3, 5, 10, 25, 28, 34, 37]. Most of these works maintain pseudo-counts over states and construct intrinsic rewards to encourage the agents to visit rarely visited states more frequently [3, 25, 28, 37]. These countbased methods have been extended to the multi-agent setting by incentivizing intra-agent interactions or social in\ufb02uence [17\u201319, 39]. However, in the multi-agent setting, a simple count-based method can be less e\ufb00ective due to the partial observability of each agent, an exponentially large joint state-action space, and the existence of multiple non-Pareto-optimal NE. Therefore, recent works focus on discovering the structures of possible multi-agent behaviors. For example, [24] adopts variational inference to learning structured latent-space-policies; [15] generates similar tasks with simpler reward functions to promote cooperation; [21] learns to select a subset of state dimensions for e\ufb03cient exploration. We follow a metalearning framework and learn structured exploration strategies by exploiting high-rewarding subspace in the joint state-action space. Our method also leverages a count-based technique as a subroutine during the meta-training phase to prevent over-exploitation and mode collapse. Meta reinforcement learning (meta-RL) is a popular RL paradigm that focuses on training a policy that can quickly adapt on an unseen task at test time [9, 12, 14, 20, 32, 40, 42, 44]. Such a paradigm has been extended to the setting of learning to explore. The key idea is to meta-learn a separate exploration policy that can be used in the testing task. Most closely related to our work is [26], where an exploration policy is pretrained on a set of training tasks. However, their method is designed for the single-agent setting and learns the exploration policy by using a task-agnostic intrinsic reward to incentivize visitation of interesting states , while we directly utilize the task reward to learn the structure of the environments. Other existing works in meta-exploration propose to learn a latent-space exploration policy that is conditioned on a task variable, which can be accomplished by meta-policy gradient [14, 20, 40], variational inference [32] or information maximization [42] over the training tasks. Therefore, at test time, posterior inference can be performed for the latent variable towards fast exploration strategy adaption. 
Our approach follows a similar metaexploration paradigm by learning additional exploration policies. However, existing meta-exploration methods focus on the singleagent setting while we consider much more challenging multi-agent games with a distribution of similarly-structured tasks, for example, the MPE environment [23] with a distribution of target landmarks that the agents need to reach. In addition, we meta-learn a discrete set of exploration policies through an iterative process, which results in a much simpler meta-testing phase without the need for posterior sampling or gradient updates on exploration policies. Besides, some other methods pretrain exploration policies from an o\ufb04ine dataset [7, 31, 36], which is beyond the scope of this paper. Finally, our approach largely di\ufb00ers from the setting of multitask learning [1, 2, 11, 16, 27], which are commonly evaluated in environments with heterogeneous tasks or scenarios. Our exploration policies are not trained to achieve high returns in the training tasks. Instead, they are trained to reach as many high-reward state-action pairs as possible collectedin a diverse set of tasks. Therefore, the state-action pairs covered by a single exploration policy are very likely to be distributed across di\ufb00erent training tasks. 3 PRELIMINARIES Dec-POMDP. We consider fully-cooperative Markov games described by a decentralized partially observable Markov decision process (Dec-POMDP), which is de\ufb01ned by \u27e8S, A, \ud443, \ud445, \u03a9, O,\ud45b,\ud6fe\u27e9. S is the state space. A \u2261A1\u00d7...\u00d7A\ud45bis the joint action space. The dynamics is de\ufb01ned by the transition function \ud443(\ud460\u2032 | \ud460, \ud482). Agents share a reward function \ud445(\ud460, \ud482), and\ud6fe\u2208(0, 1) is the discount factor. \u03a9 \u2261\u03a91 \u00d7 .. \u00d7 \u03a9\ud45bis the joint observation space, where \u03a9\ud456is the observation space for agent \ud456. At each timestep, each agent \ud456only has access to its own observation \ud45c\ud456\u2208\u03a9\ud456de\ufb01ned by the function O : S \u00d7A \u21a6\u2192\u03a9. The goal of agents in Dec-POMDP is to maximize the common expected discounted return under the joint policy \ud745: J (\ud745) = E\ud745 \u0002\u00cd \ud461\ud6fe\ud461\ud445(\ud460\ud461, \ud482\ud461) \u0003 . Learning to Explore. Meta-RL assumes a task distribution\ud45d(T) over tasks, and an agent aims to learn to quickly adapt to a testtime task T test drawn from \ud45d(T) after training in a batch of training tasks {T \ud456| T \ud456\u223c\ud45d(T)}\ud435 \ud456=1. Inspired by the explicit exploration methods [6, 42], we adopt a meta-exploration framework for MARL: we learn joint exploration policies \ud745\ud452from training tasks {T \ud456| T \ud456\u223c\ud45d(T)}\ud435 \ud456=1 and use \ud745\ud452to collect experiences for the training of the agents\u2019 policy pro\ufb01le \ud745in task T test, denoted as \f\ud745(\ud745\ud452, T test). Formally, the objective of meta-exploration is max \ud745\ud452ET test\u223c\ud45d(T) \" E\ud745(\ud745\ud452,T test) \"\u00d5 \ud461 \ud6fe\ud461\ud445\ud456(\ud460\ud461, \ud482\ud461) ## . (1) Nash Equilibrium and Pareto Optimality. A joint policy \ud745 is an NE if each agent\u2019s policy \ud70b\ud456is a best response to the other agents\u2019 policies \ud745\u2212\ud456. 
That is, for any agent i's alternative policy \pi_i', we have Q_i(\pi) \geq Q_i(\pi_i', \pi_{-i}), where Q_i is the value function for agent i. A joint policy \pi is Pareto optimal if there does not exist an alternative joint policy \pi' such that \forall i, Q_i(\pi') \geq Q_i(\pi) and \exists i, Q_i(\pi') > Q_i(\pi). 4 A MOTIVATING EXAMPLE: CLIMB GAME We analyze a fully cooperative matrix game known as the Climb Game. In Section 4.1, we show how popular exploration strategies, including unstructured strategies like uniform exploration and task-specific strategies like \epsilon-greedy, fail to efficiently explore the climb game. By contrast, we show in Section 4.2 that a simple structured exploration strategy can substantially improve the exploration efficiency. A climb game G_f(n, u, U) is an n-player game with action space A_i = {0, ..., U-1} for any player i. The reward of a joint action a \in A is determined by the number of players performing a specific action u (denoted as #u), which is R(a) = 1 if #u = n; R(a) = 1 - \delta (0 < \delta < 1) if #u = 0; and R(a) = 0 otherwise. (2) 4.1 Exploration Challenge A climb game G_f(n, u, U) has three groups of NE: the Pareto optimal NE (u, u, ..., u), the sub-optimal NEs {(a_1, a_2, ..., a_n) | \forall i, a_i \neq u}, and the zero-reward NEs {(a_1, a_2, ..., a_n) | 1 < #u < n}. The sheer difference in the size of the three subsets of NEs makes it particularly challenging for RL agents to learn the optimal policy profile without sufficient exploration, as evidenced by the theoretical analysis below and empirical evaluation in Section 6. Consider a 2-agent climb game G_f(2, 0, U). A joint action a can be represented by a pair of one-hot vectors [e_i, e_j] \in {0, 1}^{2U}. Let q(x, y; \theta) be a joint Q function parameterized by \theta that takes input x, y \in {0, 1}^U and is learned to approximate the reward of the game. We hope the joint Q function has the same optimal policy profile. Definition 4.1. We call a joint Q function q(x, y; \theta) equivalently optimal when q(e_0, e_0; \theta) = max_{0 \leq i, j < U} q(e_i, e_j; \theta). When a joint Q function is equivalently optimal, one can use it to find the optimal policy. Since neural networks are difficult to analyze in general [4], we parameterize the joint Q function in a quadratic form: q(x, y; W, b, c, d) = x^T W y + b^T x + c^T y + d (3) A Gaussian prior p(W) = N(W; 0, \sigma_w^2 I) is introduced under the assumption that a non-linear W is harder and slower to learn. Quadratic functions have been used in RL [13, 38] as a replacement for the commonly-used multi-layer perceptron, and there are also theoretical results [8] analyzing neural networks with quadratic activation. For the climb game, it is easy to verify that the quadratic coefficients make the joint Q function sufficiently expressive to perfectly fit the reward function by setting W to be the reward matrix. 
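A direct transcription of the climb-game reward in Eq. (2) as a sketch; the function name is ours:

```python
def climb_reward(actions, u, delta):
    """Joint reward of a climb game: actions is the joint action tuple,
    u the special action, delta in (0, 1)."""
    n = len(actions)
    count_u = sum(1 for a in actions if a == u)
    if count_u == n:
        return 1.0            # Pareto-optimal NE: everyone plays u
    if count_u == 0:
        return 1.0 - delta    # sub-optimal NE: nobody plays u
    return 0.0                # mis-coordination: zero reward
```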
Therefore, the learning process of Q is mainly affected by how the exploration policy samples the data. Consider an exploration policy p_e^{(t)} that selects joint action a = (i, j) at step t with probability p_e^{(t)}(i, j). The efficiency of an exploration policy can be measured by the required number of steps for learning an equivalently optimal Q function using the maximum likelihood estimator over the data sampled from p_e^{(t)}. The learning objective includes both the prior p(W) and the likelihood of the prediction error p(E_{ij}), where the prediction error is E_{ij} = q(e_i, e_j; \cdot) - R_{ij}. If the prediction error is assumed to be depicted by a Gaussian distribution p(E_{ij}) = N(E_{ij}; 0, \sigma_e^2) for every visited joint action (i, j), then the learning objective for the Q function can be formulated as: J^{(T)}(W, b, c, d) = E_{\{(i^{(t)}, j^{(t)}) \sim p_e^{(t)}\}_{t=1}^T} [ \log ( p(W) \prod_{t'=1}^T p(E_{i^{(t')} j^{(t')}}) ) ] = \sum_{t=1}^T E_{(i, j) \sim p_e^{(t)}} [ \log N(q(e_i, e_j; W, b, c, d) - R_{ij}; 0, \sigma_e^2) ] + \log N(W; 0, \sigma_w^2 I) + Const. (4) We use q_{J^{(T)}}(W, b, c, d) to denote the learned joint Q function that maximizes J^{(T)} at step T. q_{J^{(T)}}(W, b, c, d) is determined by the exploration policy p_e^{(t)} and the exploration steps T. Then we have the following theorem for the uniform exploration strategy. Theorem 4.2 (uniform exploration). Assume \delta \leq 1/6 and U \geq 3. Using a uniform exploration policy in the climb game G_f(2, 0, U), it can be proved that q_{J^{(T)}}(W, b, c, d) will become equivalently optimal only after T = \Omega(|A| \delta^{-1}) steps. When \delta = 1, T = O(1) steps suffice to learn the equivalently optimal joint Q function, suggesting the inefficiency of uniform exploration is due to a large set of sub-optimal NEs. The intuition behind Theorem 4.2 is that the hardness of exploration in climb games largely comes from the sparsity of solutions: a set of sub-optimal NEs exist but there is only a single Pareto optimal NE. Learning the joint Q function can be influenced by the sub-optimal NEs. And if the exploration attempts are not well coordinated, a lot of zero reward would be encountered, making it hard to find the Pareto optimal NE. We also remark that uniform exploration can be particularly inefficient since the term |A| can be exponentially large in a multi-agent system. This indicates that more efficient exploration can potentially be achieved by reducing the search space and identifying a smaller \u201ccritical\u201d subspace. To formally prove Theorem 4.2, we define f_1, f_2, f_3 as the step-averaged probability of taking the joint action in the optimal NE, a sub-optimal NE, and a zero-reward joint action, respectively. We show that to make the joint Q function equivalently optimal, there is a necessary condition that f_1, f_2, f_3 should follow. When T is not large enough, this condition cannot be satisfied. Detailed proof is in Appendix A.2. Figure 2: MESA\u2019s meta-learning framework. 
In the meta-training stage, MESA learns exploration policies to cover the high-rewarding subspace. In the meta-testing stage, MESA uses the learned exploration policies to assist the learning in an unseen task. Each color corresponds to a different task, and the colored points represent the high-rewarding joint state-action pairs collected in that task. Next, we consider the case of another popular exploration paradigm, \epsilon-greedy exploration. Theorem 4.3 (\epsilon-greedy exploration). Assume \delta \leq 1/32, U \geq 4, and U \geq \sigma_w \sigma_e^{-1}. In the climb game G_f(2, 0, U), under \epsilon-greedy exploration with fixed \epsilon \leq 1/2, q_{J^{(T)}}(W, b, c, d) will become equivalently optimal only after T = \Omega(|A| \delta^{-1} \epsilon^{-1}) steps. If \epsilon(t) = 1/t, it requires T = exp(\Omega(|A| \delta^{-1})) exploration steps to be equivalently optimal. The proof is similar to that of Theorem 4.2 (detailed in Appendix A.3). By comparing Theorems 4.2 and 4.3, \epsilon-greedy results in even poorer exploration efficiency than uniform exploration. Note the \epsilon-greedy strategy is training-policy specific, i.e., the exploration behavior varies as the training policy changes. Theorem 4.3 suggests that when the policy is sub-optimal, the induced \epsilon-greedy exploration strategy can be even worse than uniform exploration. Hence, it can be beneficial to adopt a separate exploration policy independent from the training policy. The above analysis shows that common exploration strategies like uniform exploration or \epsilon-greedy exploration are inefficient for such a simple game, and the main reason is that it requires coordination between different agents to reach high-rewarding states, but naive exploration strategies lack such cooperation. 4.2 Structured Exploration We will show that it is possible to design a better exploration strategy with some prior knowledge of the climb game structure. Consider a specific structured exploration strategy p_e^{(t)}(i, j) = U^{-1} 1[i=j], where both agents always choose the same action. With such a strategy, we can quickly find the optimal solution to the game. More formally, we have the following theorem. Theorem 4.4 (structured exploration). In the climb game G_f(2, 0, U), under structured exploration p_e^{(t)}(i, j) = U^{-1} 1[i=j], q_{J^{(T)}}(W, b, c, d) is equivalently optimal at step T = O(1). Theorem 4.4 shows the efficiency of exploration can be greatly improved if the exploration strategy captures a proper structure of the problem, i.e., all agents taking the same action. We further remark that by considering a set of similar climb games G, where G = {G_f(2, u, U)}_{u=0}^{U-1}, the structured exploration strategy p_e^{(t)}(i, j) = U^{-1} 1[i=j] can be interpreted as a uniform distribution over the optimal policies of this game set G. This interesting fact suggests that we can first collect a set of similarly structured games and then derive effective exploration strategies from these similar games. Once a set of structured exploration strategies is collected, we can further adopt them for fast learning in a novel game with a similar problem structure.
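A tiny sketch of the structured exploration strategy of Theorem 4.4, with p_e(i, j) = 1/U when i = j and 0 otherwise; the function name is ours:

```python
import random

def structured_joint_action(num_actions):
    # both agents always pick the same uniformly random action,
    # covering only the diagonal of the joint action space
    a = random.randrange(num_actions)
    return (a, a)
```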
We take the inspiration here and develop a general meta-exploration algorithm in the next section. 5 METHOD We detail our method Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure (MESA) for cooperative multi-agent learning. As shown in Figure 2, MESA consists of a meta-training stage (Algo. 1) and a meta-testing stage (Algo. 2). In the meta-training stage, MESA learns exploration policies by training in a batch of training tasks that share intrinsic structures in the state-action space. In the meta-testing stage, MESA utilizes the meta-learned exploration policies to assist learning in an unseen task sampled from the distribution of the training tasks. 5.1 Meta-Training The meta-training stage contains two steps: 1) identify the high-rewarding state-action subspace, and 2) train a set of exploration policies using the subspace-induced rewards. 5.1.1 Identifying High-Rewarding Joint State-Action Subspace. For each training task T_i, we collect experiences D_i = {(s_t, a_t, r_t, s_{t+1})}. If the reward r_t is higher than a threshold R*, we call this joint state-action pair (s_t, a_t) valuable and store it into a dataset M*. For goal-oriented tasks where r = 1[s = goal], the threshold can be set as R* = 1. For other tasks, the threshold can be set as a hyperparameter, for example, a certain percentile of all collected rewards. A smaller R* results in a larger identified subspace but a less efficient exploration policy. Algorithm 1 MESA: Meta-Training. Input: meta-training tasks {T_i}_{i=1}^B ~ p(T), off-policy MARL algorithm f, distance metric ||.||_F. Parameters: number of policies E, threshold R*, horizon h. Output: exploration policies {\pi_e^i}_{i=1}^E. 1: M* <- {}, global pseudo-count N_hat <- 0; 2: for i = 1 to B do 3: initialize policy \pi_\theta; 4: train \pi_\theta with f and collect dataset D_i = {(s_t, a_t, r_t, s_{t+1})}; 5: M* <- M* u {tau | R(tau) >= R*, tau in D_i}; 6: end for; 7: for i = 1 to E do 8: initialize exploration policy \pi_e^i; 9: while \pi_e^i's training has not converged do 10: initialize N as N_hat, D <- {}; 11: for t = 0 to h-1 do 12: execute a_t ~ \pi_e^i(s_t) and observe (s_t, a_t, r_t, s_{t+1}); 13: calculate r_hat_t based on Eq. 5 or 6; 14: store (s_t, a_t, r_hat_t, s_{t+1}) into D; 15: N(\phi(s_t, a_t)) <- N(\phi(s_t, a_t)) + 1; 16: end for; 17: optimize policy \pi_e^i with algorithm f; 18: end while; 19: update N_hat using D; 20: end for; 21: return {\pi_e^i}_{i=1}^E. The data stored in M* is highly diversified since it comes from all the B training tasks, which are expected to share an intrinsic structure. We expect that with this intrinsic structure, the high-rewarding joint state-action pairs fall into some low-dimensional subspace. In the simplest case, they may form several dense clusters, or many of them lie in a hyperplane.
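A minimal sketch of the subspace-identification step (Algorithm 1, lines 2-6): keep the joint state-action pairs whose reward clears the threshold R*. Names are illustrative:

```python
def collect_valuable(trajectories, r_star):
    """trajectories: iterable of (s, a, r, s_next) tuples per training task."""
    m_star = []
    for traj in trajectories:
        for (s, a, r, s_next) in traj:
            if r >= r_star:          # valuable pair: reward clears threshold
                m_star.append((s, a))
    return m_star
```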
Even if the subspace is not easily interpretable to humans, it may still be effectively \u201ccovered\u201d by a set of exploration policies (to be found in the subsequent step). We also explicitly deal with the reward sparsity problem by assigning a positive reward to a joint state-action pair (s_t, a_t) if it has zero reward but leads to a valuable state-action pair (s_{t'}, a_{t'}) later in the same trajectory. We also put these relabeled pairs into the dataset M*. Let t' = argmin_{t' > t}[r_{t'} > 0]; we therefore have the following densified reward function: r_hat_t = \gamma^{t'-t} \cdot r_{t'} if r_t = 0, and r_hat_t = r_t if r_t > 0. (5) 5.1.2 Learning Exploration Policies. In this step, we aim to learn a diverse set of exploration policies to cover the identified high-rewarding joint state-action subspace. We use a distance metric ||.||_F (e.g., L2 distance) to determine whether two state-action pairs are close. Then if a visited joint state-action pair (s, a) is close enough to the identified subspace M*, i.e., min_{d in M*} ||(s, a), d||_F < \epsilon, it would be assigned a derived positive reward r_hat. Increasing the value of B in the collection step would generally result in a more accurate distance measurement. However, this comes at the cost of making the minimization calculation more computationally expensive. To encourage a broader coverage of the subspace and to avoid mode collapse, the reward assignment scheme ensures that repeated visits to similar joint state-action pairs within one trajectory would result in a decreasing reward for each visit. Similar to [37], we adopt a pseudo-count function N with a hash function \phi(s, a) to generalize between similar joint state-action pairs. We then apply a decreasing function f_d: N -> [0, 1] on the trajectory-level pseudo-count N(\phi(s, a)). The resulting reward assignment scheme is defined as follows: r_tilde_t = r_hat_t \cdot f_d(N(\phi(s_t, a_t))) \cdot 1[min_{d in M*} ||(s_t, a_t), d||_F < \epsilon] (6) Algorithm 2 MESA: Meta-Testing. Input: test task T_hat, meta-trained exploration policies {\pi_e^i}_{i=1}^E, off-policy MARL algorithm f. Parameter: horizon h. Output: policy \pi_\theta for task T_hat. 1: initialize policy \pi_\theta, D <- {}, annealing schedule \epsilon; 2: while not converged do 3: determine p_e under the annealing probability schedule \epsilon; 4: choose the rollout policy \pi_d: with probability p_e, \pi_d = \pi_e ~ U({\pi_e^i}_{i=1}^E); otherwise \pi_d = \pi_\theta; 5: for t = 0 to h-1 do 6: execute a_t ~ \pi_d(s_t); 7: observe transition (s_t, a_t, r_t, s_{t+1}); 8: D <- D u {(s_t, a_t, r_t, s_{t+1})}; 9: end for; 10: optimize \pi_\theta with algorithm f on replay buffer D; 11: end while; 12: return \pi_\theta. After one exploration policy is trained with this reward, we will train a new policy to cover the part of the identified subspace that has not yet been covered.
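Sketches of the reward relabeling in Eq. (5), the exploration reward in Eq. (6), and the rollout selection of Algorithm 2. The hash phi is shown as a generic `hash_fn` and f_d as a geometric decay; both are assumptions consistent with, but not identical to, the paper's implementation details:

```python
import random
import numpy as np

def densify(rewards, gamma):
    # Eq. (5): a zero-reward step inherits the discounted reward of the
    # next positive-reward step t' in the same trajectory
    out, next_pos = list(rewards), None
    for t in range(len(rewards) - 1, -1, -1):
        if rewards[t] > 0:
            next_pos = t
        elif next_pos is not None:
            out[t] = (gamma ** (next_pos - t)) * rewards[next_pos]
    return out

def exploration_reward(sa, r_hat, m_star, counts, hash_fn, eps, decay=0.5):
    # Eq. (6): reward only if (s, a) lies within eps of M*, decayed by the
    # trajectory-level pseudo-count of its hash bucket
    if np.min(np.linalg.norm(m_star - sa, axis=1)) >= eps:
        return 0.0
    c = hash_fn(sa)
    counts[c] = counts.get(c, 0) + 1
    return r_hat * decay ** (counts[c] - 1)   # f_d decreases with the count

def rollout_policy(train_policy, explore_policies, p_e):
    # Algorithm 2, line 4: with annealed probability p_e, roll out a
    # uniformly sampled exploration policy; otherwise the training policy
    if random.random() < p_e:
        return random.choice(explore_policies)
    return train_policy
```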
This is achieved by having a global pseudo-count N_hat, which is updated after training each exploration policy using its visitation counts and is maintained throughout the training of all exploration policies. This iterative process continues until the subspace is well-covered by the set of trained exploration policies. 5.2 Meta-Testing During meta-testing, MESA uses the meta-learned exploration policies {\pi_e^i}_{i=1}^E to assist the training of any generic off-policy MARL algorithm on a test-time task T_hat. Specifically, for each rollout episode,
We compare to 3 multi-agent reinforcement learning algorithms: MADDPG [23], MAPPO [41], and QMIX [33], to measure the e\ufb00ectiveness of our exploration policies. We also compare to 3 multi-agent exploration algorithm: MAVEN [24], MAPPO with RND exploration [5], and EMC [43]. To compare with baselines that adopt a similar meta-training stage, we add two naive Figure 3: Learning curve of the two climb game variants w.r.t number of environment steps. The return is averaged over timesteps for the multi-stage games. The dotted lines indicate the suboptimal return of 0.5 (purple) and the optimal return 1 (blue) for each agent. meta-learning baselines, including one with an unconditioned shared policy, which is trained over all training tasks, and one with a goal-conditioned policy, which takes the target landmarks as parts of the input. We also adapt the single-agent meta-RL algorithm MAESN [14] to the multi-agent setting. Finally, we adapt the singleagent C-BET [26] to multi-agent settings based on MAPPO. The training and testing tasks are as de\ufb01ned in Section 6.1. Please refer to the Appendix for more visualization and experimental results. Environments.We experiment on the Climb Game, Multi-agent Particle Environment (MPE) [23], and multi-agent MuJoCo [29], on which generating a distribution of meta-training tasks \ud45d(T) is feasible. 6.2 Climb Game Variants First, we consider task spaces consisting of variants of the aforementioned climb games. We extend previous climb game to (1) one-step climb game\ud43a(\ud45b,\ud458,\ud462,\ud448), which is a\ud45b-player game with \ud448actions for each player, and the joint reward is 1 if #\ud462= \ud458, 1 \u2212\ud6ff if #\ud462= 0, and 0 otherwise. The task space T one \ud448 consists of all one-step climb games that contain two players and \ud448actions; (2) multi-stage climb game, which is an \ud446-stage game where each stage is a one-stage climb game with the same number of available actions. Each stage \ud461has its own con\ufb01guration (\ud458\ud461,\ud462\ud461) of the one-stage climb game \ud43a(2,\ud458\ud461,\ud462\ud461,\ud448). Agents observe the history of joint actions and the current stage \ud461. The task space T multi \ud446,\ud448 consists of all multi-stage climb games with \ud446stages and \ud448actions. In our experiments, we use T one 10 and T multi 5,10 as the task space for the one-step and multi-stage Climb Games. We choose uniformly at random ten training tasks and three di\ufb00erent test tasks from the task space T , and we keep \ud6ff= 1 2 as in the classic climb games. Results on Climb Game Variants. For the matrix games, we additionally compare with MA-MAESN, which is our adaptation of the original single-agent meta-learning algorithm MAESN [14] to the multi-agent scenario In the single-step matrix game, MESA exhibits better performance, being able to \ufb01nd the optimal reward in some harder tasks when \ud458= 2, while other baselines are stuck at the sub-optimal reward for almost all tasks. \fFigure 4: Learning curves of MESA and the compared baselines w.r.t the number of environment interactions during the metatesting stage in the MPE domain and the multi-agent MuJoCo environment Swimmer. The two dotted lines indicate the ideal optimal (purple) and sub-optimal (blue) return summed over timesteps. A return above the blue line would typically indicate that the agents are able to learn the optimal strategy. 
In the more challenging 10-action multi-stage game where task space is exponentially larger, MESA outperforms all compared algorithms by a large margin. With the help of the exploration policies that have learned the high-rewarding joint action pairs, MESA quickly learns the optimal joint action for each stage and avoids being stuck at the sub-optimal. Figure 5: Visualizations of a 2-player 3-landmark MPE climb game. 6.3 MPE Domain We extend the matrix climb games to MPE [23], which has a continuous high-dimensional state space. Agents must \ufb01rst learn to reach the landmarks under sparse rewards and then learn to play the climb games optimally. In a MPE Climb Game \u00af \ud43a(\ud45b,\ud458,\ud462,\ud448, {\ud43f\ud457}\ud448\u22121 0 ) (Figure 5), there are \ud448non-overlapping landmarks with positions {\ud43f\ud457}\ud448\u22121 \ud457=0 . The reward is non-zero only when every agent is on some landmark. Agents will be given a reward of 1 if there are exactly \ud458agents located on the \ud462-th landmark (target landmark), and a suboptimal reward of 1 \u2212\ud6ffwill be given when none of the agents are located on the target landmark. Otherwise, the reward will be zero. As before, \ud462 and \ud458are not present in the observation and can only be inferred from the received reward. A task space T MPE \ud45b,\ud448 consists of all MPE climb games with \ud45bplayers and \ud448landmarks. We evaluate MESA Figure 6: Visualization of structured exploration behaviors discovered by the meta-trained exploration policy in MESA. on the 2-agent tasks (T MPE 2,5 and T MPE 2,6 ) and 3-agent tasks (T MPE 3,5 and T MPE 3,6 ) while \ufb01xing \ud458= 2. Each sampled training and testing task has a di\ufb00erent con\ufb01guration of landmark positions. Adaptation Performance in MPE. We show in Figure 4 the learning curve of our approach MESA compared with the aforementioned baseline methods. MESA outperforms the compared baselines by a large margin, being able to coordinately reach the task landmark quickly, as evidenced by the near-optimal reward. Even when combined with RND-based exploration, MAPPO easily sticks to the sub-optimal equilibrium. Value-based methods like QMIX and MAVEN are unable to learn the correct \ud444-function because the reward is quite sparse before agents can consistently move themselves to a landmark. EMC sometimes jumps out of the suboptimal equilibrium with curiosity-driven exploration, but the performance is not robust. Furthermore, as the meta-learning baselines only learn the sub-optimal behavior during meta-training, they fail to learn the optimal equilibrium during test time and quickly converge to the suboptimal equilibrium. Visualization of Exploration Policies. To answer question (2), we visualize the learned exploration policies in a 2-agent 3landmark MPE task in Figure 6. We can see that the learned exploration policy consecutively visited the 3 landmarks within 20 timesteps in one trajectory. 6.4 Multi-agent MuJoCo Environments We also extend the matrix climb games to multi-agent MuJoCo environments [29]. We consider speci\ufb01cally the 2-agent Swimmer environment where each agent is a hinge on the swimmer\u2019s body, and each agent\u2019s action is the amount of torque applied to hinge rotors. The extension considers the angles between the two hinges and the \fbody segments. 
Each task in the task space is a target angle such that a reward of 1 will be given only if the two angles are both close to the target angles, a 0.5 suboptimal reward is given if none of two angles are close to the target, and a reward of 0 if only one of the two angles are close. This multi-agent environment is extremely hard as agents are very likely to converge to the suboptimal reward of 0.5, which is con\ufb01rmed by the results that none of the baselines were able to \ufb01nd the optimal equilibrium in Figure 4. Therefore, MESA vastly outperforms all the compared baselines by learning a \ufb01nal policy that frequently reaches the target angle. 6.5 Generalization Performance of MESA In this section, our goal is to evaluate the generalization performance of the meta-trained exploration policy in scenarios where the meta-training and meta-testing task distributions are di\ufb00erent. In particular, we focus on the setting where the test-time tasks are more challenging than the training-time tasks and examine how an exploration policy learned from simpler tasks can boost training performances on harder tasks. The test task here is uniform on the 3-agent high-di\ufb03culty MPE Climb games. The task di\ufb03culty is de\ufb01ned by the average pairwise distances between the landmark positions and the initial positions of the agents. We consider two simpler training task distributions, including (1) a 2-agent setting with the same di\ufb03culty, and (2) a 3-agent setting with a lower di\ufb03culty. In both settings, the metatraining tasks are less challenging than the test-time tasks. For evaluation, the meta-trained exploration policy from each setting will be directly applied to assist the training on the more challenging test-time tasks, without any \ufb01ne-tuning. We modi\ufb01ed the neural network architecture by adopting an attention layer in both actor and critic to ensure they are compatible with a varying number of agents. The attention mechanism acts as an aggregation function between the relative positions of the other agents and its own relative position to the landmarks to handle the varying observation dimensions. Additionally, we employed behavior cloning (BC) [30] on the rollouts of the exploration policies as a warm-up to accelerate learning of the \ufb01nal policy. In Figure 7, we present the generalization results from our study. We evaluate the zero-shot generalization ability of the meta-exploration policy by measuring the average number of high-reward transitions hit in a test task randomly sampled from the test task distribution. As shown on the left of Figure 7, the meta-exploration policies are able to explore the test-time tasks much more e\ufb03ciently than a random exploration policy, even on test-time tasks that are drawn from a harder task distribution. Notably, the generalization ability increases with the number of exploration policies (\ud435). Using the meta-exploration policies trained on the simpler tasks, MESA is able to consistently reach the high-reward region in the unseen hard 3-agent tasks, as opposed to the vanilla MADDPG algorithm that only learns the sub-optimal equilibrium. We also see that with an increasing number of meta-exploration policies, the performance of MESA increases, but the improvement becomes marginal, while the meta-training time increases linearly with E. Figure 7: Generalization results of MESA on the hard 3-agent MPE Climb game. 
Left: Zero-shot generalizability of the meta-exploration policies, measured by the number of visitations of high-reward transitions per episode on the test tasks. The purple dotted line corresponds to the random exploration policy. The plot shows the concatenated training curves for all exploration policies. Right: Learning curves of MESA under different settings, using the meta-exploration policies trained on the two different training-task distributions.]

7 CONCLUSIONS

This paper introduces a meta-exploration method, MESA, for multi-agent learning. The key idea is to learn a diverse set of exploration policies to cover the high-rewarding state-action subspace and achieve efficient exploration in an unseen task. MESA can work with any off-policy MARL algorithm, and empirical results confirm the effectiveness of MESA in climb games, MPE environments, and multi-agent MuJoCo environments, as well as its generalizability to more complex test-time tasks.

ACKNOWLEDGMENTS

This research is supported in part by NSF IIS-2046640 (CAREER) and a Sloan Research Fellowship. We thank NVIDIA for providing computing resources. Zhicheng Zhang is supported in part by an SCS Dean's Fellowship. The funders have no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."
+}
title_10K/test_title_short_2405.00954v1.json
ADDED
@@ -0,0 +1,16 @@
+{
+    "url": "http://arxiv.org/abs/2405.00954v1",
+    "title": "X-Oscar: A Progressive Framework for High-quality Text-guided 3D Animatable Avatar Generation",
+    "abstract": "Recent advancements in automatic 3D avatar generation guided by text have\nmade significant progress. However, existing methods have limitations such as\noversaturation and low-quality output. To address these challenges, we propose\nX-Oscar, a progressive framework for generating high-quality animatable avatars\nfrom text prompts. It follows a sequential Geometry->Texture->Animation\nparadigm, simplifying optimization through step-by-step generation. To tackle\noversaturation, we introduce Adaptive Variational Parameter (AVP), representing\navatars as an adaptive distribution during training. Additionally, we present\nAvatar-aware Score Distillation Sampling (ASDS), a novel technique that\nincorporates avatar-aware noise into rendered images for improved generation\nquality during optimization. Extensive evaluations confirm the superiority of\nX-Oscar over existing text-to-3D and text-to-avatar approaches. Our anonymous\nproject page: https://xmu-xiaoma666.github.io/Projects/X-Oscar/.",
+    "authors": "Yiwei Ma, Zhekai Lin, Jiayi Ji, Yijun Fan, Xiaoshuai Sun, Rongrong Ji",
+    "published": "2024-05-02",
+    "updated": "2024-05-02",
+    "primary_cat": "cs.CV",
+    "cats": [
+        "cs.CV"
+    ],
+    "label": "Original Paper",
+    "paper_cat": "Distillation",
+    "gt": "X-Oscar: A Progressive Framework for High-quality Text-guided 3D Animatable Avatar Generation",
"main_content": "Introduction The creation of high-quality avatars holds paramount importance in a wide range of applications, including cartoon production (Li et al., 2022b; Zhang et al., 2022), virtual try-on (Santesteban et al., 2021; 2022), immersive telepresence (Li et al., 2020a;b; Xiu et al., 2023), and video game design (Zheng et al., 2021; Zhu et al., 2020). Conventional methods for avatar creation are notorious for being timeconsuming and labor-intensive, often demanding thousands of hours of manual work, specialized design tools, and expertise in aesthetics and 3D modeling. In this research, we propose an innovative solution that revolutionizes the generation of high-quality 3D avatars with intricate geometry, refined appearance, and realistic animation, solely based on a text prompt. Our approach eliminates the need for manual sculpting, professional software, or extensive artistic skills, thus democratizing avatar creation and making it accessible to a broader audience. The emergence of deep learning has brought forth a new era in 3D human body reconstruction, showcasing promising methods for automatic reconstruction from photos (Liao et al., 2023b; Han et al., 2023; Men et al., 2024; Zhang et al., 2023d) and videos (Weng et al., 2022; Jiang et al., 2022). However, these approaches primarily focus on reconstructing human bodies from visual cues, limiting their applicability to real-world scenarios and posing challenges when it comes to incorporating creativity, editing, and control. Recent advancements in large-scale vision-language 1 arXiv:2405.00954v1 [cs.CV] 2 May 2024 \fX-Oscar models (VLM) (Radford et al., 2021; Li et al., 2022a; 2023a; Xu et al., 2023a; Ma et al., 2023b) and diffusion models (Ho et al., 2020; Sohl-Dickstein et al., 2015; Welling & Teh, 2011; Kulikov et al., 2023) have opened up exciting possibilities for generating 3D objects and avatars from text prompts. These methods effectively combine pretrained VLMs and diffusion models with 3D representations such as DeepSDF (Park et al., 2019), NeRF (Mildenhall et al., 2021), DMTET (Shen et al., 2021), and 3D Gaussian Splatting (Kerbl et al., 2023). Despite these promising developments, current approaches still face several limitations. Some methods (Ma et al., 2023c; Chen et al., 2023a; Wang et al., 2023b) focus solely on generating static everyday objects, lacking animation ability. Other methods that aim to generate avatars based on human prior knowledge often suffer from poor geometry and appearance quality (Liao et al., 2023a; Hong et al., 2022; Zhang et al., 2023b) or are incompatible with conventional computer graphics workflows (Liu et al., 2023; Huang et al., 2023b; Cao et al., 2023). This paper presents X-Oscar, an innovative and advanced framework that leverages text prompts to generate highquality animatable 3D avatars. Specifically, X-Oscar builds upon the SMPL-X body model (Pavlakos et al., 2019a) as prior knowledge and employs a strategic optimization sequence of \u201cGeometry \u2192Texture \u2192Animation\u201d. To overcome the common challenge of oversaturation during avatar generation, we propose Adaptive Variational Parameter (AVP), a novel technique that utilizes a trainable adaptive distribution to represent the geometry and appearance of the avatars. By optimizing the distribution as a whole instead of focusing on specific parameters, X-Oscar effectively mitigates oversaturation, resulting in visually appealing avatars. 
Furthermore, we introduce Avatar-aware Score Distillation Sampling (ASDS), an innovative module that incorporates geometry-aware and appearance-aware noise into the rendered image during the optimization process. This strategic approach significantly enhances the visual attributes of the avatars and improves their geometry and appearance quality. Extensive experimentation demonstrates the superiority of X-Oscar over existing methods, showcasing improvements in both geometry and appearance quality. Moreover, the avatars generated by X-Oscar are fully animatable, unlocking exciting possibilities for applications in gaming, animation, and virtual reality. To summarize, our main contributions are three-fold: \u2022 We present X-Oscar, an innovative and progressive framework that enables the creation of delicate animatable 3D avatars from text prompts. \u2022 To overcome the persistent challenge of oversaturation, we propose Adaptive Variational Parameter (AVP), which represents avatars as adaptive distributions instead of specific parameters. \u2022 We introduce Avatar-aware Score Distillation Sampling (ASDS), an advanced module that incorporates geometry-aware and appearance-aware noise into the rendered image during the optimization process, resulting in high-quality outputs. 2. Related Work Text-to-3D Generation. The emergence of vision-language models (VLMs) (Radford et al., 2021; Ma et al., 2022) and diffusion models has brought about a revolutionary impact on text-to-3D content generation. Pioneering studies like CLIP-forge (Sanghi et al., 2022), DreamFields (Jain et al., 2022), CLIP-Mesh (Mohammad Khalid et al., 2022), and XMesh (Ma et al., 2023c) have showcased the potential of utilizing CLIP scores (Radford et al., 2021) to align 3D representations with textual prompts, enabling the generation of 3D assets based on textual descriptions. Subsequently, DreamFusion (Poole et al., 2022) introduced Score Distillation Sampling (SDS), a groundbreaking technique that leverages pretrained diffusion models (Saharia et al., 2022) to supervise text-to-3D generation. This approach has significantly elevated the quality of generated 3D content. Building on these foundations, researchers have explored various strategies to further enhance text-to-3D generation. These strategies encompass coarse-to-fine optimization (Lin et al., 2023), conditional control (Li et al., 2023c; Chen et al., 2023b), bridging the gap between 2D and 3D (Ma et al., 2023a), introducing variational score distillation (Wang et al., 2023b), and utilizing 3D Gaussian Splatting (Chen et al., 2023c; Li et al., 2023b; Yi et al., 2023; Tang et al., 2023). Nevertheless, despite these advancements, existing methodologies primarily concentrate on generating common static objects. When applied to avatar generation, they face challenges such as poor quality and the inability to animate the generated avatars. In contrast, our proposed framework, X-Oscar, specifically aims to generate high-quality 3D animatable avatars from text prompts. X-Oscar caters to the unique requirements of avatar generation, including intricate geometry, realistic textures, and fluid animations, to produce visually appealing avatars suitable for animation. Text-to-Avatar Generation. The domain of text-to-avatar generation (Kolotouros et al., 2024; Zhang et al., 2024; Huang et al., 2023a; Xu et al., 2023b; Zhou et al., 2024) has emerged as a prominent and vital research area to cater to the demands of animated avatar creation. 
This field incorporates human priors such as the SMPL (Loper et al., 2015), SMPL-X (Pavlakos et al., 2019b), and imGHUM (Alldieck et al., 2021) models. AvatarCLIP (Hong et al., 2022) utilizes SMPL and Neus (Wang et al., 2021) models to generate 3D avatars guided by the supervision of CLIP scores. DreamWaltz (Huang et al., 2023b) introduces NeRF (Mildenhall et al., 2021) to generate 3D avatars based on 3D-consistent occlusion-aware SDS and 3D-aware skeleton conditioning. AvatarBooth (Zeng et al., 2023) leverages dual fine-tuned diffusion models to achieve customizable 3D human avatar generation. AvatarVerse (Zhang et al., 2023a) utilizes ControlNet (Zhang et al., 2023c) and DensePose (Güler et al., 2018) to enhance view consistency. TADA (Liao et al., 2023a) employs a displacement layer and a texture map to predict the geometry and appearance of avatars. HumanNorm (Huang et al., 2023a) proposes a normal diffusion model for improved geometry. HumanGaussian (Liu et al., 2023) uses 3D Gaussian Splatting as the human representation for text-to-avatar generation. Despite these advancements, existing methods often produce low-quality and over-saturated results. To overcome these limitations, we introduce a progressive framework that incorporates two key modules, namely Adaptive Variational Parameter and Avatar-aware Score Distillation Sampling. Our framework effectively generates high-fidelity avatars that are visually appealing and realistic.

3. Preliminaries

Score Distillation Sampling (SDS) (Poole et al., 2022), also known as Score Jacobian Chaining (SJC) (Wang et al., 2023a), is a powerful optimization method that adapts pretrained text-to-image diffusion models for text-to-3D generation. Given a pretrained diffusion model $p_\phi(z_t \mid y, t)$, where $\phi$ represents the model's parameters, $y$ is the input text prompt, and $z_t$ denotes the noised image at timestep $t$, SDS aims to optimize a 3D representation to align with the text prompt. The forward diffusion process in SDS is formulated as $q(z_t \mid g(\theta, c), y, t)$, where $\theta$ represents the trainable parameters of the 3D representation, $c$ denotes the camera, and $g(\cdot)$ is the rendering function. The objective of SDS can be expressed as follows:

$$\min \mathcal{L}_{\mathrm{SDS}}(\theta) = \mathbb{E}_{(t,c)}\left[\sqrt{\frac{1-\gamma_t}{\gamma_t}}\,\omega(t)\,D_{\mathrm{KL}}\big(q(z_t \mid g(\theta,c), y, t)\,\|\,p_\phi(z_t \mid y, t)\big)\right], \quad (1)$$

where $\omega(t)$ is a weighting function dependent on the timestep $t$, $z_t = \sqrt{\gamma_t}\,g(\theta,c) + \sqrt{1-\gamma_t}\,\epsilon$ is the noised image, and $D_{\mathrm{KL}}(\cdot)$ represents the Kullback-Leibler divergence (Kullback & Leibler, 1951). To approximate the gradient of the SDS objective, the following equation is leveraged:

$$\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta) \triangleq \mathbb{E}_{t,\epsilon,c}\left[\omega(t)\,\big(\hat{\epsilon}_\phi(z_t; y, t) - \epsilon\big)\,\frac{\partial g(\theta,c)}{\partial \theta}\right], \quad (2)$$

where $\hat{\epsilon}_\phi(z_t; y, t)$ is the noise predicted by the pretrained diffusion model at timestep $t$ and $\epsilon \sim \mathcal{N}(0, I)$ is Gaussian noise sampled from a normal distribution.

SMPL-X (Pavlakos et al., 2019b) is a widely adopted parametric 3D human body model in the fields of computer graphics and animation. It offers a comprehensive representation of the human body, consisting of 10,475 vertices and 54 joints, facilitating detailed and realistic character rendering.
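Because Eq. (2) drives every optimization stage that follows, a self-contained toy of one SDS update may help. Everything here is a stand-in — the "renderer" is the identity on a trainable image, the "diffusion model" predicts zero noise, and the cosine schedule is assumed — so it only illustrates how the residual $\hat{\epsilon}_\phi - \epsilon$ is injected as the gradient of the rendered image, not the paper's actual implementation.

```python
import torch

theta = torch.zeros(1, 3, 64, 64, requires_grad=True)  # stand-in 3D params
opt = torch.optim.Adam([theta], lr=1e-2)

def predict_noise(z_t, t):
    """Placeholder for the pretrained denoiser eps_hat_phi(z_t; y, t)."""
    return torch.zeros_like(z_t)

for _ in range(10):
    img = theta                                    # g(theta, c) stand-in
    t = torch.empty(()).uniform_(0.02, 0.98)       # t ~ U(0.02, 0.98)
    gamma_t = torch.cos(t * torch.pi / 2) ** 2     # assumed noise schedule
    eps = torch.randn_like(img)
    with torch.no_grad():
        z_t = gamma_t.sqrt() * img + (1 - gamma_t).sqrt() * eps
    # Eq. (2): inject (eps_hat - eps) as the image gradient; backprop chains
    # it through dg/dtheta (trivial here, a real renderer in the paper).
    img.backward(gradient=predict_noise(z_t, t) - eps)
    opt.step(); opt.zero_grad()
```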
By specifying shape $s$, pose $p$, and expression $e$ parameters, the SMPL-X model generates a human body using the following equation:

$$T(s, p, e) = \bar{T} + B_s(s) + B_p(p) + B_e(e), \quad (3)$$

where $\bar{T}$ denotes a standard human template, while $B_s(\cdot)$, $B_p(\cdot)$, $B_e(\cdot)$ represent the shape, pose, and expression blend shapes, respectively. These blend shapes deform the template to generate a wide range of body shapes, poses, and expressions. To transition the human body from a standard pose to a target pose, linear blend skinning (LBS) is employed:

$$M(s, p, e) = W_{\mathrm{LBS}}\big(T(s, p, e), J(s), p, \mathcal{W}\big), \quad (4)$$

where $W_{\mathrm{LBS}}(\cdot)$ represents the LBS function, $J(s)$ corresponds to the skeleton joints, and $\mathcal{W}$ represents the skinning weights. The LBS function calculates the final vertex positions by interpolating between the deformed template vertices based on the assigned skinning weights. This process ensures a smooth and natural deformation of the body mesh.

4. Approach

The overview of X-Oscar is depicted in Fig. 2, and the workflow is illustrated in Fig. 3. In the upcoming sections, we present a comprehensive description of the X-Oscar framework. In Sec. 4.1, we delve into the progressive modeling pipeline of X-Oscar. This pipeline breaks down the complex task of avatar generation into three manageable subtasks, with each subtask focusing on a specific aspect of avatar creation. In Sec. 4.2, we introduce Adaptive Variational Parameter (AVP). This component employs a trainable adaptive distribution to represent the avatar, addressing the issue of oversaturation that is commonly encountered in avatar generation. In Sec. 4.3, we present Avatar-aware Score Distillation Sampling (ASDS). This module incorporates geometry-aware and appearance-aware noise into the denoising process, enabling the pretrained diffusion model to perceive the current state of the generated avatar, resulting in the production of high-quality outputs.

[Figure 2: Overview of the proposed X-Oscar, which consists of three generation stages: (a) geometry modeling, (b) appearance modeling, and (c) animation refinement.]

4.1. Progressive Modeling

Geometry Modeling. During this phase, our objective is to optimize the geometry of the avatars, represented by the SMPL-X model, to align with the input text prompt $y$. Formally, we aim to optimize the trainable vertex offsets $\psi_v \in \mathbb{R}^{N\times 3}$, initialized as a matrix of zeros, to align the modified vertex coordinates $\nu' = \nu + \psi_v$ with the text prompt $y$, where $\nu$ represents the vertex coordinates of the template avatar body and $N$ is the number of vertices of the SMPL-X model. To achieve this, we utilize a differentiable rendering pipeline. By taking the original mesh $M$ of SMPL-X and the predicted vertex offsets $\psi_v$ as inputs, we render a normal image $N$ of the modified mesh using a differentiable renderer (Laine et al., 2020):

$$N = g(M, \psi_v, c), \quad (5)$$

where $g(\cdot)$ denotes the rendering function, and $c$ represents a randomly sampled camera parameter.
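To make Eqs. (3)-(4) concrete, here is a deliberately simplified sketch of the template deformation and of linear blend skinning. The blend-shape functions and per-joint transforms are placeholders, not the actual learned SMPL-X bases.

```python
import numpy as np

def deformed_template(T_bar, B_s, B_p, B_e, s, p, e):
    """Eq. (3): template plus shape/pose/expression blend-shape offsets."""
    return T_bar + B_s(s) + B_p(p) + B_e(e)

def linear_blend_skinning(verts, joint_transforms, weights):
    """Eq. (4), heavily simplified: each posed vertex is a skinning-weighted
    sum of the per-joint rigid transforms (4x4 matrices) applied to it."""
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)
    posed = np.zeros_like(verts)
    for j, T in enumerate(joint_transforms):
        posed += weights[:, j:j + 1] * (homo @ T.T)[:, :3]
    return posed

# Tiny example: 2 vertices, 1 joint with the identity transform.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(linear_blend_skinning(verts, [np.eye(4)], np.ones((2, 1))))
```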
In each iteration, we introduce Gaussian noise $\epsilon$ to the normal map $N$ and apply a pretrained Stable Diffusion (SD) model (Rombach et al., 2022) to denoise it. The gradient of the trainable vertex offsets $\psi_v$ during denoising is then calculated as follows:

$$\nabla_{\psi_v} \mathcal{L}_{\mathrm{geo}}(\psi_v, N) = \mathbb{E}_{t,\epsilon}\left[w(t)\big(\hat{\epsilon}_\phi(z^N_t; y, t) - \epsilon\big)\frac{\partial N}{\partial \psi_v}\right], \quad (6)$$

where $\hat{\epsilon}_\phi(z^N_t; y, t)$ represents the noise predicted by SD based on the timestep $t$, the input text embedding $y$, and the noisy normal image $z^N_t$.

Appearance Modeling. After completing the geometry modeling phase, we obtain a mesh that aligns with the prompt in terms of shape, with vertex coordinates $\nu' = \nu + \psi_v$. In this stage, our objective is to optimize an albedo map $\psi_a \in \mathbb{R}^{h\times w\times 3}$ to represent the appearance of the resulting avatar, where $h$ and $w$ represent the height and width of the albedo map. To achieve this, we start by rendering a colored image $I$ from a randomly sampled camera parameter $c$ based on the vertex offsets $\psi_v$ and the albedo map $\psi_a$ using a differentiable renderer (Laine et al., 2020):

$$I = g(M, \psi_v, \psi_a, c). \quad (7)$$

To optimize the albedo map $\psi_a$, we employ a loss function similar to Eq. (6) used in the geometry modeling phase:

$$\nabla_{\psi_a} \mathcal{L}_{\mathrm{app}}(\psi_a, I) = \mathbb{E}_{t,\epsilon}\left[w(t)\big(\hat{\epsilon}_\phi(z^I_t; y, t) - \epsilon\big)\frac{\partial I}{\partial \psi_a}\right], \quad (8)$$

where $\hat{\epsilon}_\phi(z^I_t; y, t)$ represents the noise predicted by the SD model. This loss function encourages the rendered image $I$ to align with the text prompt $y$ by minimizing the discrepancy between the predicted noise $\hat{\epsilon}_\phi$ and the added Gaussian noise $\epsilon$. By optimizing the albedo map $\psi_a$ using this loss function, we can generate appearances for the avatars that are consistent with the provided text prompts.

Animation Refinement. Given that both the geometry modeling and appearance modeling stages optimize the avatar in a canonical pose, it is inevitable that certain parts of the avatar may be obstructed, leading to lower-quality results in those areas. To overcome this challenge, we introduce an animation refinement stage where we adjust the pose of the avatar and simultaneously optimize both the geometry and appearance. Specifically, we sample viable pose parameters $p$ from a pre-trained model such as VPoser (Pavlakos et al., 2019a). For each sampled pose, we render the normal image $N_p$ and colored image $I_p$ of the animated avatar using a differentiable renderer (Laine et al., 2020):

$$N_p = g(M, \psi_v, c, p), \qquad I_p = g(M, \psi_v, \psi_a, c, p), \quad (9)$$

where the pose parameters $p$ and camera parameters $c$ vary in each iteration. To optimize the geometry and appearance of the avatar in the animated pose, we define an animation loss $\mathcal{L}_{\mathrm{ani}}$ as follows:

$$\mathcal{L}_{\mathrm{ani}}(\psi_v, \psi_a, N_p, I_p) = \mathcal{L}_{\mathrm{geo}}(\psi_v, N_p) + \mathcal{L}_{\mathrm{app}}(\psi_v, \psi_a, I_p), \quad (10)$$

where $\mathcal{L}_{\mathrm{geo}}$ and $\mathcal{L}_{\mathrm{app}}$ are the geometry loss and appearance loss, respectively.
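The three stages compose into a simple sequential schedule. The toy below is runnable but intentionally vacuous: `guidance_step` uses a placeholder objective where the real pipeline would apply the SDS-style updates of Eqs. (6), (8), and (11)-(12). The parameter shapes, learning rates, and stage proportions mirror the implementation details reported later in Sec. 5.1; everything else is our own scaffolding.

```python
import torch

psi_v = torch.zeros(10475, 3, requires_grad=True)            # offsets, init 0
psi_a = torch.full((256, 256, 3), 0.5, requires_grad=True)   # albedo (downsized)
opt = torch.optim.Adam([{"params": [psi_v], "lr": 1e-4},
                        {"params": [psi_a], "lr": 5e-3}])

def guidance_step(rendered):
    rendered.square().mean().backward()   # placeholder for diffusion guidance
    opt.step(); opt.zero_grad()

for _ in range(5):    # (a) geometry: normal-map renders drive psi_v (5k iters)
    guidance_step(psi_v.sum(dim=-1))
for _ in range(10):   # (b) appearance: colored renders drive psi_a (10k iters)
    guidance_step(psi_a.mean(dim=-1))
for _ in range(5):    # (c) animation: pose sampled per iter, refine both (5k)
    guidance_step(psi_v.sum(dim=-1).mean() + psi_a.mean())
```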
The gradients of the animation loss with respect to the vertex offsets $\psi_v$ and the albedo map $\psi_a$ are calculated as follows:

$$\nabla_{\psi_v} \mathcal{L}_{\mathrm{ani}}(\psi_v, N_p, I_p) = \mathbb{E}_{t,\epsilon}\left[w(t)\big(\hat{\epsilon}_\phi(z^{N_p}_t; y, t) - \epsilon\big)\frac{\partial N_p}{\partial \psi_v} + w(t)\big(\hat{\epsilon}_\phi(z^{I_p}_t; y, t) - \epsilon\big)\frac{\partial I_p}{\partial \psi_v}\right], \quad (11)$$

$$\nabla_{\psi_a} \mathcal{L}_{\mathrm{ani}}(\psi_a, I_p) = \mathbb{E}_{(t,\epsilon)}\left[w(t)\big(\hat{\epsilon}_\phi(z^{I_p}_t; y, t) - \epsilon\big)\frac{\partial I_p}{\partial \psi_a}\right], \quad (12)$$

The notations used here are similar to those defined in Eq. (2). By minimizing the animation loss using these gradients, we refine the geometry and appearance of the avatar in various poses, resulting in improved quality in the final output.

4.2. Adaptive Variational Parameter

As formulated in Eq. (1) and Eq. (2), SDS aims to optimize a precise 3D representation to align all images rendered from arbitrary viewpoints with the input prompt, as evaluated by 2D diffusion models. However, there exists a fundamental contradiction between achieving an accurate 3D representation and the inherent multi-view inconsistency associated with 2D diffusion models. Specifically, it is often unreasonable to expect high similarity scores of a 2D diffusion model between all multi-view images of a specific 3D representation and text prompts. Consequently, when SDS is employed to enforce similarity between each perspective of a specific 3D representation and the text prompt, it can lead to the undesirable issue of oversaturation. To address this concern, we propose formulating the 3D representation as a distribution of vertex offsets, denoted the offset distribution, and a distribution of albedo maps, referred to as the appearance distribution. Specifically, we perturb $\psi_v$ and $\psi_a$ of the 3D human representation with Gaussian noise to improve the robustness of the model and alleviate the oversaturation problem. This perturbation process can be expressed as:

$$\psi'_v \sim \psi_v + \lambda_v \mathcal{N}(0, I), \qquad \psi'_a \sim \psi_a + \lambda_a \mathcal{N}(0, I), \quad (13)$$

where $\lambda_v$ and $\lambda_a$ serve as weights to control the magnitude of the perturbations. The means of the offset distribution and the appearance distribution can be learned by optimizing $\psi_v$ and $\psi_a$, while their standard deviations are determined by $\lambda_v$ and $\lambda_a$. Thus, choosing appropriate values for $\lambda_v$ and $\lambda_a$ is crucial and challenging. If these values are too small, the model may not fully benefit from learning the distributions. In extreme cases, when $\lambda_v = \lambda_a = 0$, the model essentially learns specific parameters instead of distributions.

[Figure 3: The workflow of the proposed X-Oscar. First, we incorporate the adaptive perturbation into the 3D parameters, forming the avatar distribution. Next, we sample a set of parameters from the avatar distribution and render a 2D image. Finally, we apply avatar-aware noise to the rendered image for denoising to optimize the 3D parameters.]

Conversely, when $\lambda_v$ and $\lambda_a$ are excessively large, the learning process becomes challenging due to highly unstable perturbations. In extreme cases, when $\lambda_v = \lambda_a = +\infty$, the generated results become independent of the underlying $\psi_v$ and $\psi_a$.
To overcome the above challenges and facilitate a learning process that progresses from easy to difficult without manual weight assignment, we propose Adaptive Variational Parameter (AVP) for the 3D representation. Specifically, we leverage the standard deviations of $\psi_v$ and $\psi_a$ as the weights for the perturbations, which can be formulated as follows:

$$\psi'_v \sim \psi_v + \sigma(\psi_v)\,\mathcal{N}(0, I) = \mathcal{N}\big(\psi_v, \sigma(\psi_v)^2\big), \quad (14)$$

$$\psi'_a \sim \psi_a + \sigma(\psi_a)\,\mathcal{N}(0, I) = \mathcal{N}\big(\psi_a, \sigma(\psi_a)^2\big), \quad (15)$$

where $\sigma(\cdot)$ represents the standard deviation. This adaptive approach has several advantages. Firstly, it enables the model to learn progressively from easy to difficult scenarios. Initially, $\psi_v$ and $\psi_a$ are initialized as matrices of all zeros and all 0.5, respectively, resulting in a standard deviation of 0. Consequently, during the early stages of training, the model focuses on optimizing the means of $\psi'_v$ and $\psi'_a$ to reasonable values. As training progresses, the standard deviations gradually increase, promoting the model's ability to maintain high similarity between the 3D representation and the text even in the presence of noise interference. Secondly, this approach is fully automatic. The model learns to adapt the perturbation weights based on the current state of the 3D representation, eliminating the need for manual intervention or hyperparameter tuning. During the inference phase, we utilize the mean values of $\psi'_v$ and $\psi'_a$ to represent the avatar.

4.3. Avatar-aware Score Distillation Sampling

In previous work on SDS (Poole et al., 2022), Gaussian noise related to the timestep $t$ was introduced to the rendered image, and a pretrained diffusion model was utilized to denoise the noisy image for optimizing the 3D representation. The process of adding noise can be formulated as follows:

$$z_t = \sqrt{\alpha_t}\,z_{t-1} + \sqrt{1-\alpha_t}\,\epsilon_{t-1} = \sqrt{\alpha_t \alpha_{t-1}}\,z_{t-2} + \sqrt{1-\alpha_t \alpha_{t-1}}\,\bar{\epsilon}_{t-2} = \cdots = \sqrt{\bar{\alpha}_t}\,z_0 + \sqrt{1-\bar{\alpha}_t}\,\bar{\epsilon}_0, \quad (16)$$

where $z_t$ represents the noised image at timestep $t$, $\bar{\alpha}_t = \prod_{i=1}^{t}\alpha_i$, and $\epsilon_i, \bar{\epsilon}_i \sim \mathcal{N}(0, I)$. Since $t \sim U(0.02, 0.98)$ is randomly sampled, the noise added to the rendered image is independent of the avatar's current state. To establish a correlation between the denoising process and the avatar's current state, and to facilitate a learning process from easy to difficult, we propose Avatar-aware Score Distillation Sampling (ASDS). Specifically, the noised image with avatar-aware noise can be formulated as follows:

$$z_t = \sqrt{\bar{\alpha}_t}\,z_0 + \sqrt{1-\bar{\alpha}_t}\,\big(\lambda_n \epsilon_n + \lambda_v \sigma(\psi_v)\epsilon_v + \lambda_a \sigma(\psi_a)\epsilon_a\big) = \sqrt{\bar{\alpha}_t}\,z_0 + \sqrt{1-\bar{\alpha}_t}\,\sqrt{\lambda_n^2 + (\lambda_v\sigma(\psi_v))^2 + (\lambda_a\sigma(\psi_a))^2}\,\epsilon = \sqrt{\bar{\alpha}_t}\,z_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta, \quad (17)$$

where $\epsilon_n$, $\epsilon_v$, $\epsilon_a$, and $\epsilon$ are i.i.d. Gaussian random variables with zero mean and unit variance, i.e., $\epsilon_n, \epsilon_v, \epsilon_a, \epsilon \sim \mathcal{N}(0, I)$, and $\epsilon_\theta \sim \mathcal{N}\big(0, \lambda_n^2 + (\lambda_v\sigma(\psi_v))^2 + (\lambda_a\sigma(\psi_a))^2\big)$.
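Eqs. (14)-(15) and Eq. (17) are easy to mirror in a few lines. Reading $\sigma(\cdot)$ as the global standard deviation of each parameter tensor is our interpretation, and the $\lambda$ values (0.8/0.1/0.1) follow the implementation details given in Sec. 5.1.

```python
import torch

def avp_sample(psi):
    """Eqs. (14)-(15): perturbation scaled by the parameters' own std."""
    return psi + psi.std() * torch.randn_like(psi)

def asds_noise(shape, sigma_v, sigma_a, ln=0.8, lv=0.1, la=0.1):
    """Eq. (17): avatar-aware noise whose variance tracks the avatar's spread."""
    scale = (ln**2 + (lv * sigma_v)**2 + (la * sigma_a)**2) ** 0.5
    return scale * torch.randn(shape)

psi_v = torch.zeros(10475, 3)                    # init: std 0, sample == mean
assert torch.equal(avp_sample(psi_v), psi_v)     # perturbation starts at zero
eps_theta = asds_noise((1, 3, 64, 64), sigma_v=psi_v.std(), sigma_a=0.0)
print(eps_theta.std())                           # ~= 0.8 at initialization
```

Both perturbations vanish at initialization and grow with training, which is the easy-to-hard curriculum the section describes.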
At the initial stage, when $\sigma(\psi_v) = \sigma(\psi_a) = 0$, the initial variance of the noise is relatively small, resulting in an easier denoising process for the diffusion model. As training progresses, $\sigma(\psi_v)$ and $\sigma(\psi_a)$ gradually increase, leading to an increase in the noise variance. Consequently, this increases the difficulty of denoising. By incorporating avatar-aware noise, the model can undergo a learning process from easy to difficult. The gradient of ASDS is then formulated as follows:

$$\nabla_\theta \mathcal{L}_{\mathrm{ASDS}}(\theta) \triangleq \mathbb{E}_{(t,\epsilon,c)}\left[\omega(t)\big(\hat{\epsilon}_\phi(z_t; y, t) - \epsilon_\theta\big)\frac{\partial g(\theta,c)}{\partial \theta}\right], \quad (18)$$

where $\hat{\epsilon}_\phi(z_t; y, t)$ is the predicted noise, $z_t = \sqrt{\bar{\alpha}}\,g(\theta,c) + \sqrt{1-\bar{\alpha}}\,\epsilon_\theta$ represents the noised image, and $\epsilon_\theta$ is the avatar-aware noise that encourages the paradigm of learning from easy to difficult.

5. Experiments

5.1. Implementation Details

Our experiments are conducted using a single Nvidia RTX 3090 GPU with 24GB of memory and the PyTorch library (Paszke et al., 2019). The diffusion model employed in our implementation is the Stable Diffusion provided by HuggingFace Diffusers (von Platen et al., 2022). During the training phase, we set the resolution of the rendered images to 800 × 800 pixels. The resolution of the albedo map is 2048 × 2048 pixels. The geometry modeling, appearance modeling, and animation refinement stages consist of 5000, 10000, and 5000 iterations, respectively. We set the learning rates for the vertex offsets $\psi_v$ and the albedo map $\psi_a$ to 1e-4 and 5e-3, respectively. Furthermore, we set the values of $\lambda_n$, $\lambda_v$, and $\lambda_a$ to 0.8, 0.1, and 0.1, respectively. To enhance facial details, we employ a strategy where there is a 0.2 probability of rendering facial images for optimization during the training process, and a 0.8 probability of rendering full-body images for optimization.

5.2. Comparison

Qualitative Comparison with Text-to-Avatar Methods. We present a comparative analysis of our methodology against five state-of-the-art (SOTA) baselines: TADA (Liao et al., 2023a), DreamWaltz (Huang et al., 2023b), HumanGaussian (Liu et al., 2023), AvatarCLIP (Hong et al., 2022), and AvatarCraft (Jiang et al., 2023), as illustrated in Fig. 4. We observe certain limitations in the geometry and texture of avatars generated by TADA, which we emphasize by enclosing them within a red box. Furthermore, the outcomes produced by the other baselines exhibit issues such as blurriness and inconsistencies with the provided text. In contrast, our proposed X-Oscar consistently generates high-quality avatars with intricate details. Moreover, in addition to static avatars, X-Oscar is also capable of generating animatable avatars, as demonstrated in Fig. 1.

Qualitative Comparison with Text-to-3D Methods. We also conduct a comparative analysis of X-Oscar with SOTA text-to-3D methods, namely DreamFusion (Poole et al., 2022), Magic3D (Lin et al., 2023), Fantasia3D (Chen et al., 2023a), and ProlificDreamer (Wang et al., 2023b). As shown in Fig. 5, we observe evident limitations in the avatars generated by text-to-3D methods, including poor geometry and noisy texture. Furthermore, owing to the absence of human prior knowledge, the avatars generated by text-to-3D methods lack flexibility and pose challenges in terms of animation.
In contrast, our proposed method excels in generating high-quality, animatable avatars.

Quantitative Comparison. To assess X-Oscar quantitatively, we conduct user studies comparing its performance with SOTA text-to-3D content and text-to-avatar methods using the same prompts. We randomly selected 40 prompts generated by ChatGPT for avatar creation, and the user studies involved 52 participants who provided subjective evaluations. Participants rated the generated avatars based on three specific aspects: geometry quality (Geo. Qua.), texture quality (Tex. Qua.), and text consistency (Tex. Con.). Scores range from 1 to 10, with higher scores indicating better quality. As shown in Tab. 1, our method consistently outperforms all other methods across all evaluated aspects.

Table 1: Quantitative comparison of SOTA methods. The top-performing and second-best results are highlighted in bold and underlined, respectively. As AvatarCLIP employs the CLIP score as its training supervision signal, it is inappropriate to gauge its performance using the CLIP score; the CLIP scores of AvatarCLIP are therefore set in gray.

| Method | Geo. Qua. | Tex. Qua. | Tex. Con. | CLIP ViT-B/32 | CLIP ViT-B/16 | CLIP ViT-L/14 | OpenCLIP ViT-B/32 | OpenCLIP ViT-B/16 | OpenCLIP ViT-L/14 |
|---|---|---|---|---|---|---|---|---|---|
| DreamFusion | 2.66 | 4.18 | 3.29 | 29.29 | 29.29 | 25.30 | 31.57 | 28.22 | 30.17 |
| Magic3D | 4.21 | 3.12 | 1.61 | 28.52 | 30.92 | 27.02 | 31.14 | 28.21 | 30.21 |
| Fantasia3D | 2.14 | 2.42 | 2.53 | 30.34 | 30.42 | 26.12 | 29.68 | 28.46 | 31.46 |
| ProlificDreamer | 2.11 | 3.72 | 6.29 | 30.30 | 30.28 | 25.00 | 30.81 | 28.59 | 30.75 |
| AvatarCLIP | 3.28 | 2.64 | 2.09 | 34.49 | 32.45 | 28.20 | 32.77 | 31.20 | 31.98 |
| AvatarCraft | 4.39 | 4.55 | 3.37 | 27.59 | 29.70 | 25.23 | 26.19 | 24.60 | 25.55 |
| DreamWaltz | 6.38 | 6.09 | 6.99 | 30.86 | 31.20 | 27.32 | 30.65 | 29.09 | 29.83 |
| HumanGaussian | 6.03 | 4.51 | 6.08 | 28.46 | 29.18 | 26.26 | 26.37 | 26.82 | 29.09 |
| TADA | 5.03 | 6.95 | 7.62 | 31.09 | 30.48 | 27.72 | 30.67 | 30.05 | 30.17 |
| X-Oscar | 8.85 | 8.91 | 9.22 | 31.70 | 31.97 | 28.10 | 30.91 | 30.28 | 30.42 |

[Figure 4: Qualitative comparisons with SOTA text-to-avatar methods. The prompts (top → down) are "Gandalf from The Lord of the Rings", "Aladdin in Aladdin", and "Captain Jack Sparrow from Pirates of the Caribbean".]

[Figure 5: Qualitative comparisons with SOTA text-to-3D methods. The prompts (top → down) are "Anna in Frozen", "Hilary Clinton", and "Knight".]

[Figure 6: Ablation study on the Adaptive Variational Parameter and Avatar-aware Score Distillation Sampling. The prompts (top → down) are "Batman" and "Mulan".]

[Figure 7: Ablation study on progressive modeling. "PM" is short for "progressive modeling"; "w/o PM" means that geometry, appearance, and animation are optimized together.]

Additionally, we calculate similarity scores between the generated results and the text prompts using CLIP (Radford et al., 2021) and OpenCLIP (Cherti et al., 2023) with different backbones. Our method consistently achieves either the best or second-best results, demonstrating its ability to generate 3D avatars that are semantically consistent with the provided text prompts.

5.3. Ablation Studies

Progressive Modeling. To evaluate the effectiveness of the progressive modeling paradigm in X-Oscar, we performed additional experiments by coupling the three training stages together.
The results shown in Fig. 7 reveal a significant enhancement in the quality of geometry and appearance in the generated avatars when using the progressive modeling paradigm. For example, consider the prompt "Albert Einstein". Without the progressive modeling approach, the generated avatar is limited to a rudimentary shape and color, lacking the intricate details necessary for recognizing Albert Einstein. However, when employing the progressive modeling paradigm, we observe a remarkable improvement in the generated avatars.

Adaptive Variational Parameter. To provide robust evidence of the impact of AVP, we conducted comprehensive ablation studies by using specific parameters instead of distributions to represent avatars. As depicted in Fig. 6, our observations strongly indicate that the omission of AVP in X-Oscar can lead to an excessive optimization of geometry and appearance in an effort to align the generated outputs with the text. This subsequently leads to the problem of oversaturation. Geometry oversaturation leads to topological overlay problems in the generated meshes, while appearance oversaturation results in avatars with exaggerated color contrast. By integrating AVP, we successfully tackle these issues, significantly improving the realism of both the geometry and appearance of the generated avatars.

Avatar-aware Score Distillation Sampling. To investigate the impact of ASDS, we conducted additional experiments by adding random Gaussian noise instead of avatar-aware noise to the rendered image for optimization. As demonstrated in Fig. 6, the absence of ASDS directly results in a noticeable decline in the overall quality of both the geometry and appearance of the generated avatars. For instance, without ASDS, the two ears on Batman's head exhibit a geometric merging phenomenon. In the case of Mulan, the facial details become blurred and the colors on the front and back of the pants are inconsistent.

6. Conclusion

This paper introduces X-Oscar, an advanced framework for generating high-quality, text-guided 3D animatable avatars. The framework incorporates three innovative designs to enhance avatar generation. Firstly, we present a progressive modeling paradigm with clear and simple optimization objectives for each training stage. Additionally, we propose Adaptive Variational Parameter (AVP), which optimizes the distribution of avatars, addressing oversaturation. Furthermore, we introduce Avatar-aware Score Distillation Sampling (ASDS), leveraging avatar-aware denoising to enhance overall avatar quality. Extensive experiments demonstrate the effectiveness of the proposed framework and modules.

Impact Statements. This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
+}
title_10K/test_title_short_2405.00957v1.json
ADDED
@@ -0,0 +1,18 @@
+{
+    "url": "http://arxiv.org/abs/2405.00957v1",
+    "title": "IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors",
+    "abstract": "Graph Neural Networks (GNNs) demonstrate excellent performance on graphs,\nwith their core idea about aggregating neighborhood information and learning\nfrom labels. However, the prevailing challenges in most graph datasets are\ntwofold of Insufficient High-Quality Labels and Lack of Neighborhoods,\nresulting in weak GNNs. Existing data augmentation methods designed to address\nthese two issues often tackle only one. They may either require extensive\ntraining of generators, rely on overly simplistic strategies, or demand\nsubstantial prior knowledge, leading to suboptimal generalization abilities. To\nsimultaneously address both of these two challenges, we propose an elegant\nmethod called IntraMix. IntraMix innovatively employs Mixup among low-quality\nlabeled data of the same class, generating high-quality labeled data at minimal\ncost. Additionally, it establishes neighborhoods for the generated data by\nconnecting them with data from the same class with high confidence, thereby\nenriching the neighborhoods of graphs. IntraMix efficiently tackles both\nchallenges faced by graphs and challenges the prior notion of the limited\neffectiveness of Mixup in node classification. IntraMix serves as a universal\nframework that can be readily applied to all GNNs. Extensive experiments\ndemonstrate the effectiveness of IntraMix across various GNNs and datasets.",
+    "authors": "Shenghe Zheng, Hongzhi Wang, Xianglong Liu",
+    "published": "2024-05-02",
+    "updated": "2024-05-02",
+    "primary_cat": "cs.LG",
+    "cats": [
+        "cs.LG",
+        "cs.AI",
+        "cs.SI"
+    ],
+    "label": "Original Paper",
+    "paper_cat": "Knowledge AND Graph",
+    "gt": "IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors",
"main_content": "Introduction Graph Neural Networks (GNNs) have shown great ability in various graph tasks (Zhou et al., 2020). The essence of GNN lies in the propagation mechanism, where information is gathered from neighbors, and involves supervised learning with labels (Xu et al., 2019). However, most graphs suffer from the absence of high-quality labeled data and *Equal contribution 1Harbin Institute of Technology. Correspondence to: Shenghe Zheng <shenghez.zheng@ gmail.com>, Hongzhi Wang <[email protected]>. incomplete node neighbors, which precisely pinpoint the pain points of GNN, i.e., the dual demands for high-quality labels and rich neighborhoods (Ding et al., 2022). Data augmentation is one way to address these issues. However, graphs are non-Euclidean data, making it challenging to apply augmentation methods developed for regular Euclidean data such as images to graphs due to the semantic and topological structure of graphs (Han et al., 2022). Therefore, unique augmentation methods need to be proposed for graphs. The first step is to clarify the objective of graph augmentation: generating high-quality nodes and enriching neighborhoods. However, most current methods can only solve one aspect. Moreover, some require training generators for the node or edge, incurring extensive costs (Zhao et al., 2021; Liu et al., 2022), while others rely on overly simplistic methods such as random drops that result in marginal improvement (Fang et al., 2023). Additionally, some methods require excessive prior knowledge, leading to weak generalization abilities (Yoo et al., 2022). Therefore, current methods are insufficient to address the two problems. So there is an urgent need for an efficient method that can generate nodes with high-quality labels and construct reasonable neighborhoods for node classification. We observe that existing methods mostly overlook lowquality labels. The distribution of low-quality data often slightly differs from high-quality data and may contain features not present in current data. Extracting information from low-quality data would contribute to enriching data diversity. In general, noise diverges in all directions, leading the data beyond the expected distribution, and resulting in low-quality data (Fr\u00b4 enay & Verleysen, 2013). A natural idea is to blend noisy data, utilizing the directionality of noise to neutralize noise and generate high-quality data within the correct distribution. Therefore, Mixup (Zhang et al., 2018) comes into our view as a data augmentation method that involves mixing data. Mixup is mathematically defined as \u02c6 x = \u03bbxi + (1 \u2212\u03bb)xj, \u02c6 y = \u03bbyi + (1 \u2212\u03bb)yj, where (xi, yi), (xj, yj) are randomly selected, and y represents the label. However, although Mixup excels in Euclidean data, experiments commonly suggest that Mixup is hard to be effective in node classification (Wu et al., 2023). Therefore, a natural question emerges: Can Mixup solve the graph augmentation problems for node classification? 1 arXiv:2405.00957v1 [cs.LG] 2 May 2024 \fIntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors Label(Noisy) True Label Mixup Label Wrong Message Passing Wrong Mixup Label \u2026 \u2026 High Quality Mixup Label \u2026 \u2026 High Quality Label Correct Message Passing (a). Vallina Mixup in Graph (b). Intra-Class Mixup of IntraMix in Graph Mixup Process Figure 1. a). 
Due to the characteristics of graphs, using Mixup is highly challenging. Typically, Mixup randomly mixes two data points in proportion. In node classification, its poor performance can be attributed to two reasons, as illustrated in Figure 1(a). First, Mixup relies on existing labeled data to generate data, and some graphs have extremely sparse labels, leading to an information bottleneck in generation (Li et al., 2020). Second, even if there are enough labeled nodes, under the assumption that nodes of the same class are more likely to appear as neighbors (the neighborhood assumption) (Zhu et al., 2020), using Mixup between random classes generates nodes whose distribution lies in the middle of the distributions of two classes (Wu et al., 2023). This makes it hard to determine the neighbors of the generated nodes, and connecting them to nodes of any class is inappropriate, potentially confusing the GNN. The second problem highlights the fact that Mixup cannot seamlessly integrate into node classification.

To address the aforementioned issues, we propose IntraMix, a novel graph augmentation method for node classification, as shown in Figure 1(b). The basic idea is to apply Mixup within nodes of the same class, so that the generated single labels facilitate neighbor selection, addressing the primary challenge faced by Mixup. After obtaining low-quality labels by pseudo-labeling (Lee et al., 2013), we apply Mixup among data of the same class. The labels of the generated data are of much higher quality than those of the original nodes. Intuitively, if we simplify the label noise as $\epsilon \sim \mathcal{N}(0, \sigma^2)$, the mean of two independent noises is distributed as $\bar{\epsilon} \sim \mathcal{N}(0, \frac{1}{2}\sigma^2)$; the smaller variance increases the likelihood that the label of the generated data is accurate. We thereby address the issue of sparse high-quality labeled data in graphs through Intra-Class Mixup.

For the neighbors of a node $v$ generated by Intra-Class Mixup, we connect $v$ to two high-confidence nodes of the same class as $v$. This has two benefits. First, based on the neighborhood assumption mentioned above, we accurately find neighbors for $v$, providing it with an information gain. Second, by connecting $v$ to two nodes that likely belong to the same class, we not only bring message interaction to the neighborhoods of these two nodes but also reduce the noise impact that arises from inaccurate labels. In this way, we construct rich and reasonable node neighborhoods. IntraMix therefore simultaneously addresses the two data issues in node classification. It is a data augmentation method decoupled from GNNs and exhibits strong generalization capabilities. Our key contributions are as follows:

• For the first time, we introduce Intra-Class Mixup as the core data augmentation in node classification, highlighting its effectiveness in generating high-quality labeled data.
• The proposed IntraMix tackles sparse labels and incomplete neighborhoods in GNNs through a simple but efficient way of Intra-Class Mixup and neighborhood selection.
• Extensive experiments show that IntraMix improves the performance of GNNs on diverse datasets. Theoretical analysis elucidates the rationale behind IntraMix.
2. Preliminaries

Notations: Given a graph $G = (V, E)$, where $V = \{v_i\}_{i=1}^{N}$ is the set of nodes and $E$ represents the set of edges, the adjacency relationship between nodes can be represented by $A \in \{0,1\}^{N\times N}$, where $A_{ij} = 1$ if and only if $(v_i, v_j) \in E$. We use $X \in \mathbb{R}^{N\times D}$ to denote the node features. The labels of nodes are represented by $Y$. Based on the presence or absence of labels, the node set can be divided into $D_l = \{(x_{l_1}, y_{l_1}), \ldots, (x_{l_N}, y_{l_N})\}$ and $D_u = \{x_{u_1}, \ldots, x_{u_N}\}$. We can use pseudo-labeling methods to assign low-quality labels $Y_u$ to the nodes in $D_u$, obtaining a low-quality set $D_p = \{(x_{u_1}, y_{u_1}), \ldots, (x_{u_N}, y_{u_N})\}$. We define $\mathcal{N}_i = \{v_j \mid A_{ij} = 1\}$ as the neighboring nodes of $v_i$.

Node Classification with GNNs: Given a graph $G$, node classification involves determining the category of the nodes on $G$. GNNs achieve this by propagating messages on $G$, representing each node as a vector $h_v$. The propagation at the $k$-th layer of a GNN is represented as:

$$h^k_v = \mathrm{COM}\big(h^{k-1}_v,\ \mathrm{AGG}(\{h^{k-1}_u \mid u \in \mathcal{N}_v\})\big), \quad (1)$$

where COM and AGG are the COMBINE and AGGREGATE functions, respectively, and $h^k_v$ denotes the feature of $v$ at the $k$-th layer. The output $h_v$ at the last layer of the GNN is used for classification as $y_v = \mathrm{softmax}(h_v)$, where $y_v$ is the predicted label for $v$.

[Figure 2: The workflow of IntraMix involves three main steps. First, it utilizes pseudo-labeling to generate low-quality labels for unlabeled nodes. Following that, Intra-Class Mixup is employed to generate high-quality labeled nodes from low-quality ones. Additionally, it identifies nodes with high confidence in the same class and connects them, thus constructing a rich and reasonable neighborhood.]

3. Methodology

In this section, we provide a detailed explanation of IntraMix. First, we present Intra-Class Mixup in Sec. 3.1. This method generates high-quality labeled data from low-quality data, addressing the issue of label sparsity. Then, we show the method for finding neighbors for nodes in Sec. 3.2. Next, in Sec. 3.3, we present the workflow, and we conduct a complexity analysis in Sec. 3.4. The framework is illustrated in Figure 2.

3.1. Intra-Class Mixup

Motivation: In supervised learning, labels aid the model in exploring classification boundaries, and having more labels typically allows finer boundaries to be learned (Van Engelen & Hoos, 2020). However, in node classification, the scarcity of high-quality labels hampers GNNs from learning effectively, resulting in poor performance. Nevertheless, directly utilizing low-quality labels from pseudo-labeling introduces noise detrimental to learning. As we know, low-quality labeled data are often closer to the boundaries that GNNs can learn from the current data, and they contain distribution information (Lee et al., 2013), so there is the possibility of generating high-quality data from them, as mentioned in Sec. 1. We aim to neutralize data noise acting in different directions by blending data. Considering that data generated using vanilla Mixup lies between two existing classes, making it hard to determine neighbors, we innovatively propose Intra-Class Mixup.
It not only generates high-quality data but also facilitates the determination of neighbors.

Approach: We use pseudo-labeling to transform the unlabeled nodes $D_u$ into a set of nodes with low-quality labels $D_p$. Then, we get $D = D_l \cup D_p = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, in which there are a few high-quality labels and many low-quality labels. In contrast to vanilla Mixup, performed between random samples, to overcome noise we perform Mixup among nodes with the same low-quality labels to obtain high-quality labeled data, as guaranteed by Theorem 3.1. The generated dataset is represented as:

$$D_m = \{(\hat{x}, \hat{y}) \mid \hat{x} = M_\lambda(x_i, x_j),\ \hat{y} = y_i = y_j\}, \quad (2)$$

where

$$M_\lambda(x_i, x_j) = \lambda x_i + (1-\lambda)x_j, \qquad (x_i, y_i), (x_j, y_j) \in D. \quad (3)$$

The number of generated nodes is manually set. The labels in $D_m$ are of higher quality than those in $D$, a guarantee provided by Theorem 3.1; the proof can be found in Appendix A.1. In other words, the generated labels exhibit less noise than those of their source nodes. Through Intra-Class Mixup, we obtain a dataset with high-quality labels, addressing the issue of sparse high-quality labeled data in graphs and leading to improved performance of GNNs.

Theorem 3.1. Assume different classes of data have varying levels of noise, i.e., $P_{\mathrm{noise}}(y_i|x) = P(y_i|x) + \epsilon_i$, where $P_{\mathrm{noise}}(y_i|x)$ and $P(y_i|x)$ represent the label distribution of class $i$ with and without noise, respectively, and $\epsilon_i \sim \mathcal{N}(0, \sigma_i^2)$ is the noise. For Intra-Class Mixup satisfying Equation 2, the probability that the noise is smaller in the generated data than in the original data is $\frac{2}{\pi}\arctan\big([\lambda^2 + (1-\lambda)^2]^{-\frac{1}{2}}\big) > 0.5$. The ratio of the expected noise in the generated data to the expected noise in the original data is $[\lambda^2 + (1-\lambda)^2]^{\frac{1}{2}} < 1$, indicating that the noise in the generated data is smaller.
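As a concrete illustration of Eqs. (2)-(3), here is a minimal generation step over node features and pseudo-labels. This is our sketch, not the authors' code: drawing $\lambda$ from Beta($\alpha$, $\alpha$) is a common Mixup convention and an assumption here, and each class is assumed to contain at least two nodes.

```python
import numpy as np

def intra_class_mixup(X, y, num_generated, alpha=2.0, rng=None):
    """Eqs. (2)-(3): mix pairs of nodes that share the same (pseudo) label."""
    rng = rng or np.random.default_rng()
    feats, labels = [], []
    for _ in range(num_generated):
        c = rng.choice(np.unique(y))                   # pick a class
        idx = rng.choice(np.where(y == c)[0], size=2,  # two distinct nodes
                         replace=False)                # (assumes >= 2 per class)
        lam = rng.beta(alpha, alpha)                   # assumed lambda prior
        feats.append(lam * X[idx[0]] + (1 - lam) * X[idx[1]])
        labels.append(c)                               # label stays pure
    return np.stack(feats), np.array(labels)
```

Because both parents carry the same label, the generated node keeps a single hard label, which is what makes neighbor selection in the next subsection possible.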
3.2. Neighbor Selection

Motivation: The strength of GNNs lies in gathering information from the neighborhood to generate node embeddings, making the neighborhood crucial for GNNs (Hamilton et al., 2017). After generating a node $v$ as in Sec. 3.1, to leverage the advantages of GNNs it is necessary to find neighbors for $v$. We aim to construct a neighborhood that satisfies two requirements: (a) the neighborhood is accurate for $v$; (b) the neighbors of $v$ can obtain richer information through $v$. If $v$ is simply connected to the nodes that generated it, since the nodes used for Mixup mostly carry low-quality labels, it is easy to unreasonably connect two nodes, resulting in incorrect propagation. Since nodes of the same class are more likely to appear in the neighborhood in homogeneous graphs, a natural idea is to connect $v$ with high-confidence nodes of the same class. In this way, we can find correct neighbors for $v$ and, with $v$ acting as a bridge, connect the neighborhoods of two nodes of the same class to obtain more information, as shown in Figure 1(b).

Approach: As mentioned above, neighbor selection involves two steps: first, finding nodes highly likely to be of the same class as $v$, and second, determining how to connect $v$ with these nodes. We introduce them separately. In the process of finding high-quality nodes, we design an ensemble learning approach without additional training costs. We employ the GNN used for pseudo-labeling to predict nodes under $n$ different dropout rates. The nodes consistently predicted in all $n$ trials are considered high-quality. This is essentially an ensemble of $n$ GNNs, but with only $n$ inference costs, significantly reducing consumption. The selection is expressed as:

$$D_h = \{(x, y) \mid f_1(x) = \cdots = f_n(x),\ (x, y) \in D\}, \quad (4)$$

where $f_i$ represents GNNs with different dropout rates. After obtaining the high-quality set $D_h$, it is time to establish the neighborhoods between $D_h$ and the set $D_m$ generated by Mixup. To ensure the correctness of the neighborhoods, we adopt the approach of randomly connecting the generated data to high-quality nodes of the same class. The augmented edge set $\hat{E}$ of the original edge set $E$ can be expressed as:

$$\hat{E} = E \cup \{e(\hat{x}, x_i) \mid (\hat{x}, y) \in D_m,\ (x_i, y) \in D_h\}, \quad (5)$$

where $e(a, b)$ represents an edge between nodes $a$ and $b$. In this way, we not only find reasonable neighbors for the generated nodes but also establish an information exchange path between two nodes of the same class. Additionally, by not directly connecting the two nodes, potential noise impacts are avoided. The elimination effect of noise is guaranteed by Theorem 3.2; the detailed proof can be found in Appendix A.2. Through this method, the issue of missing neighborhoods in the graph is alleviated, and a graph with richer structural information is constructed.

Theorem 3.2. Label noise can be equivalently represented as node noise, i.e., $P_{\mathrm{noise}}(x|y_i) = P(x|y_i) + \delta_i$, where $\delta_i \sim \mathcal{N}(0, \sigma_{x_i}^2)$. Assume Equation 1 can be simplified as $h^k_v = \mathrm{MLP}^k\big[(1+\eta^k)h^{k-1}_v + \frac{1}{|\mathcal{N}_v|}\sum_{u\in\mathcal{N}_v} h^{k-1}_u\big]$, where $\eta^k$ is a learnable variable. Let $m$ and $n$ be nodes from the $i$-th class, $x_m \sim P(x|y_i)$, $x_n \sim P_{\mathrm{noise}}(x|y_i)$. In a two-layer GNN, the ratio of the expected impact of $n$'s noise on $m$ when the nodes are connected through Intra-Class Mixup to the expected impact when directly connecting $m$ and $n$ is $\sqrt{(\lambda^2 + (1-\lambda)^2) + \frac{1}{4(2+\eta_1+\eta_2)}}$. This ratio can be controlled to be less than 1 by adjusting the learnable $\eta$, indicating that our method leads to a smaller noise impact.
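Eqs. (4)-(5) translate into a few lines. The sketch below is our illustration, not the authors' code: `predict_fns` stands in for the $n$ dropout-perturbed inference passes of the same GNN, and every class is assumed to retain at least two high-confidence nodes.

```python
import numpy as np

def select_high_quality(predict_fns, X):
    """Eq. (4): keep nodes predicted identically across all n passes."""
    preds = np.stack([f(X) for f in predict_fns])    # shape (n, num_nodes)
    agree = (preds == preds[0]).all(axis=0)          # unanimous nodes
    return np.where(agree)[0], preds[0]

def connect_generated(gen_labels, hq_idx, hq_labels, rng=None):
    """Eq. (5): wire each generated node to two same-class hq nodes."""
    rng = rng or np.random.default_rng()
    edges = []
    for g, c in enumerate(gen_labels):               # g indexes new nodes
        candidates = hq_idx[hq_labels[hq_idx] == c]
        for tgt in rng.choice(candidates, size=2, replace=False):
            edges.append((g, int(tgt)))
    return edges
```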
3.3. Workflow

In this section, we briefly introduce the loss function used in training and the overall workflow of IntraMix. First, we present the most commonly used cross-entropy loss; it can be replaced with any function suited to the task. Given the node features $X$, the node labels $Y$, and the training set $D_T$, the loss function is:

$$L = -\frac{1}{|D_T|}\sum_{(x_i, y_i)\in D_T}\ \sum_{c=1}^{C} \mathbb{I}[y_i = c]\,\log P(c \mid x_i, \theta), \quad (6)$$

where $C$ is the number of classes, $\theta$ denotes the GNN parameters, and $P(c \mid x_i, \theta)$ is the probability of $x_i$ belonging to class $c$.

Next, we introduce the workflow of IntraMix. For detailed steps, refer to Algorithm 1.

Algorithm 1: Workflow of IntraMix
Input: Graph $G = (V, E)$, where $V$ can be divided into $D_l$ and $D_u$ as in Sec. 2; the class categories of nodes $C$; a GNN model $f$
1: Pre-train $f$ on $G$ according to Equation 6
2: Generate pseudo-labels for $D_u$ using $f$, obtaining $\hat{D}_u$
3: $D = D_l \cup \hat{D}_u$
4: Generate the Mixup set $D_m = \{V_m, E_m\}$ as in Equation 2
5: $V = V \cup V_m$
6: Generate the high-quality set $D_h$ according to Equation 4
7: for $(\hat{x}, \hat{y}) \in D_m$ do
8:   $E = E \cup \{e(\hat{x}, x_i), e(\hat{x}, x_j)\}$, where $(x_i/x_j, \hat{y}) \in D_h$
9: end for
Output: the augmented graph $G = (V, E)$

The process begins by generating low-quality labels for unlabeled nodes through pseudo-labeling (lines 1-3). Following that, high-quality labeled data is generated by Intra-Class Mixup. Subsequently, a reasonable neighborhood is constructed (lines 6-8). The output is a graph better suited for node classification.
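Putting Algorithm 1 end to end: the sketch below chains the illustrative helpers from the two previous snippets (`intra_class_mixup`, `select_high_quality`, `connect_generated`). It assumes the pseudo-labeling pre-training of lines 1-3 has already produced `y_low`, and the returned edge list indexes generated nodes from 0 (offset by `len(X)` when merging into the full graph).

```python
import numpy as np

def intramix_augment(X, y_low, predict_fns, num_generated, rng=None):
    """Lines 4-9 of Algorithm 1 (illustrative; see caveats above)."""
    rng = rng or np.random.default_rng()
    # Lines 4-5: generate high-quality labeled nodes via Intra-Class Mixup.
    X_new, y_new = intra_class_mixup(X, y_low, num_generated, rng=rng)
    # Line 6: select high-confidence nodes with the dropout ensemble.
    hq_idx, hq_labels = select_high_quality(predict_fns, X)
    # Lines 7-9: connect each generated node to two same-class hq nodes.
    new_edges = connect_generated(y_new, hq_idx, hq_labels, rng=rng)
    return np.vstack([X, X_new]), np.concatenate([y_low, y_new]), new_edges
```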
Since our Intra-Class Mixup for node classification does not incur additional computations, its complexity is the same as that of the original GNNs. Assume the original time complexity of a GNN is $O(|V| \times F \times F') + O(|E| \times F')$, where $F$ denotes the input feature dimension of nodes and $F'$ is the hidden-layer dimension of the GNN. The time complexity after applying IntraMix is $O(|V| \times F \times F') + O(|E| \times F') + O(m \times F \times F') + O(2m \times F') + O(m)$. Since in most cases $m \ll |V|$, the time complexity remains in the same order of magnitude as the original GNN.

4. Experiment

In this section, we show the excellent performance of IntraMix in both semi-supervised and fully supervised node classification, using various GNNs across multiple datasets. Sec. 4.3 shows the inductive learning ability of IntraMix. The analysis includes how IntraMix overcomes over-smoothing, and we conduct ablation experiments for in-depth analysis.

4.1. Semi-supervised Learning

Datasets: We evaluate IntraMix on commonly used semi-supervised datasets for node classification, including Cora, CiteSeer, Pubmed (Sen et al., 2017), CS, and Physics (Shchur et al., 2018). We follow the original splits for these datasets. Details can be found in Appendix B.1.

Baselines: We utilize four popular GNNs: GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), GraphSAGE (SAGE) (Hamilton et al., 2017), and APPNP (Gasteiger et al., 2019). Additionally, we compare IntraMix with various mainstream graph augmentation methods (Verma et al., 2021; Duan et al., 2023; Fang et al., 2023; Park et al., 2021; Liu et al., 2022; Lu et al., 2023). Details can be found in Appendix B.2. For each graph augmentation applied to each GNN, we use the same hyperparameters for fairness. When comparing with other methods, we use the settings from their open-source code and report the average results over 30 runs. All experiments are conducted on an RTX-3090.

Result: It is crucial to note that semi-supervised experiments are more important than fully supervised ones, primarily because labels are sparse in most real-world graph scenarios; semi-supervised results therefore reflect a method's potential in real-world situations. Observing the results in Table 1, it is evident that IntraMix demonstrates superior performance across almost all GNNs and datasets. This indicates that the high-quality labeled nodes and neighborhoods generated by IntraMix enrich the knowledge on the graph, making it more conducive to GNN learning. Additionally, it is noteworthy that IntraMix exhibits greater advantages on SAGE and APPNP. This is attributed to SAGE's neighbor sampling for message aggregation and APPNP's customized message passing, both of which prioritize the correctness and richness of node neighborhoods more than GCN and GAT do. The superiority on these two models further validates the rationality and richness of the neighborhoods constructed by IntraMix and the correctness of the labels of the generated nodes.

4.2. Full-supervised Learning

Datasets: To evaluate IntraMix on fully supervised datasets, we utilize the well-known ogbn-arxiv (Hu et al., 2020) and Flickr (Zeng et al., 2020) datasets, following standard partitioning methods. For detailed information on the datasets, please refer to Appendix B.1.
Baselines: In this part, we consider three GNNs: GCN, GAT, and GraphSAGE. Additionally, we compare IntraMix with various mainstream methods; details about the comparative methods can be found in Appendix B.2.

Results: Observing the results in Table 2, it is evident that in fully supervised experiments IntraMix consistently demonstrates superior performance across almost all GNNs and datasets, aligning with the outcomes in semi-supervised learning. This further affirms the rationale behind IntraMix. Although the abundance of labeled data in fully supervised learning diminishes the improvement from the high-quality labeled nodes provided by IntraMix, the rich and accurate neighborhoods constructed by IntraMix still enhance the quality of the graph. IntraMix builds richer propagation paths by connecting nodes with label similarity, thereby enhancing node representations. Consequently, even on fully supervised datasets where labeled data is sufficient, IntraMix demonstrates excellent capabilities.

Table 2. Node classification accuracy (%) on fully supervised graphs. The average result of 10 runs is reported.

Models | Strategy | ogbn-arxiv | Flickr
GCN | Original | 71.74 ± 0.29 | 51.88 ± 0.41
GCN | FLAG | 72.04 ± 0.20 | 52.05 ± 0.16
GCN | LA-GCN | 72.08 ± 0.14 | 52.63 ± 0.16
GCN | NodeMixup | 73.26 ± 0.25 | 52.54 ± 0.21
GCN | IntraMix | 73.51 ± 0.22 | 53.03 ± 0.25
GAT | Original | 73.65 ± 0.11 | 49.88 ± 0.32
GAT | FLAG | 73.71 ± 0.13 | 51.34 ± 0.27
GAT | LA-GAT | 73.77 ± 0.12 | 52.63 ± 0.16
GAT | NodeMixup | 73.24 ± 0.32 | 52.82 ± 0.36
GAT | IntraMix | 73.85 ± 0.12 | 53.49 ± 0.09
SAGE | Original | 71.49 ± 0.27 | 50.47 ± 0.21
SAGE | FLAG | 72.19 ± 0.21 | 52.39 ± 0.28
SAGE | LA-SAGE | 72.30 ± 0.12 | 54.24 ± 0.25
SAGE | NodeMixup | 72.01 ± 0.35 | 53.49 ± 0.24
SAGE | IntraMix | 73.61 ± 0.09 | 54.65 ± 0.26

Table 3. Node classification with inductive settings.

Models | Strategy | Cora | CiteSeer
GAT | Original | 81.3 ± 0.5 | 70.4 ± 1.2
GAT | LA-GAT | 82.7 ± 0.8 | 72.1 ± 0.7
GAT | NodeMixup | 83.1 ± 0.5 | 71.8 ± 0.9
GAT | IntraMix | 83.8 ± 0.6 | 72.9 ± 0.6
SAGE | Original | 80.1 ± 1.7 | 69.1 ± 2.9
SAGE | LA-SAGE | 81.7 ± 0.8 | 73.0 ± 1.1
SAGE | NodeMixup | 81.9 ± 0.5 | 73.1 ± 1.3
SAGE | IntraMix | 82.9 ± 0.4 | 73.9 ± 0.8

4.3. Inductive Learning

The experiments mentioned above are conducted in a transductive setting. To demonstrate the reliability of IntraMix in inductive learning setups, we conduct inductive experiments on Cora and CiteSeer with GraphSAGE and GAT. The results are presented in Table 3. In inductive learning, GNNs can only observe non-test data during training, which aligns more closely with real-world scenarios. From the results, it is evident that IntraMix also exhibits excellent performance in inductive learning. This strongly validates that the nodes with more accurate labels and rich neighborhoods constructed by IntraMix indeed provide the graph with richer information. As a result, GNNs trained with IntraMix can learn more comprehensive patterns and make accurate predictions even for unseen nodes, confirming IntraMix as a generalizable graph augmentation framework applicable to real-world scenarios.

Table 4. Ablation of Intra-Class Mixup on GCN. "w con" connects the generated nodes with the nodes used in generation, and "sim con" connects them to nodes with similar embeddings. ↑ means an improvement over the original.
Strategy | Cora | CiteSeer | Pubmed
Original | 81.5 ± 0.4 | 70.3 ± 0.5 | 79.0 ± 0.3
Only PL | 82.9 ± 0.2 | 72.3 ± 0.3 | 79.5 ± 0.2
Only UPS | 83.1 ± 0.4 | 72.8 ± 0.6 | 79.7 ± 0.4
Mixup (w/o con) | 58.9 ± 22.3 | 52.3 ± 17.6 | 70.0 ± 10.8
Mixup (w con) | 83.0 ± 1.2 | 71.3 ± 3.5 | 79.4 ± 1.1
Mixup (sim con) | 83.1 ± 1.8 | 71.5 ± 1.9 | 79.8 ± 3.8
Intra-Class Mixup | 85.2 (↑3.7) | 74.8 (↑4.5) | 82.9 (↑3.9)

Table 5. Effect of the nodes generated by Intra-Class Mixup. "Zeros" replaces the generated nodes with an all-zero vector, and "Ones" replaces them with an all-one vector.

Strategy | Cora | CiteSeer | Pubmed
Original | 81.5 ± 0.4 | 70.3 ± 0.5 | 79.0 ± 0.3
Ones | 31.9 (↓49.6) | 21.5 (↓48.8) | 38.1 (↓40.9)
Zeros | 83.8 (↑2.3) | 73.6 (↑3.3) | 80.7 (↑1.7)
IntraMix | 85.2 (↑3.7) | 74.8 (↑4.5) | 82.9 (↑3.9)

4.4. Ablation Experiment

To demonstrate the effects of each IntraMix component, we conduct ablation experiments using GCN on Cora, CiteSeer, and Pubmed. All other parts of IntraMix are kept unchanged except for the ablated components.

Intra-Class Mixup: We discuss the effectiveness of Intra-Class Mixup. We compare it with methods that do not use Mixup and rely solely on pseudo-labeling (PL), including an advanced PL method called UPS (Rizve et al., 2021). Additionally, we compare Intra-Class Mixup with vanilla Mixup, which employs various connection methods for the generated nodes. The results are shown in Table 4. Among these methods, Intra-Class Mixup demonstrates the best performance, with nearly a 3.5% improvement in accuracy over the original GCN. This is because, compared to methods using only pseudo-labels, Intra-Class Mixup generates higher-quality labeled nodes and constructs richer neighborhoods, enabling GNNs to obtain more information. For vanilla Mixup, we use three connection methods: treating generated nodes as isolated (w/o con), connecting them with the nodes used for generation (w con), and connecting them with nodes with similar embeddings (sim con). However, none of these methods perform well. As Theorem 3.1 suggests, Intra-Class Mixup guarantees an improvement of label quality for each class, a guarantee that vanilla Mixup cannot provide. Furthermore, the fact that Intra-Class Mixup data have a single label makes it convenient to select similar neighbors. In contrast, vanilla Mixup generates data with mixed labels, introducing the risk of connecting to nodes of any class and potentially causing errors in propagation. This is a key reason for the poor performance of vanilla Mixup in node classification. Simultaneously, as Table 5 shows, replacing the nodes generated by IntraMix with an all-zero vector or an all-one vector performs worse than IntraMix.

Table 6. Effects of Neighbor Selection on GCN. ↑ means improvement compared to the original, while ↓ indicates a reduction.

Strategy | Cora | CiteSeer | Pubmed
Original-GCN | 81.5 ± 0.4 | 70.3 ± 0.5 | 79.0 ± 0.3
Direct Con | 83.6 (↑2.1) | 73.4 (↑3.1) | 78.0 (↓1.0)
Random Con | 76.7 (↓4.8) | 67.0 (↓3.3) | 65.1 (↓13.9)
Without Con | 82.9 (↑1.4) | 72.8 (↑2.5) | 79.4 (↑0.4)
Vanilla Con | 84.3 (↑2.8) | 73.6 (↑3.3) | 79.8 (↑0.8)
Similar Con | 84.5 (↑3.0) | 74.0 (↑3.7) | 80.3 (↑1.3)
IntraMix | 85.2 (↑3.7) | 74.8 (↑4.5) | 82.9 (↑3.9)
This phenomenon indicates that, even when the influence of neighbor selection is excluded, the nodes generated by IntraMix are themselves effective for graph augmentation.

Neighbor Selection: This part shows the importance of Neighbor Selection. In Table 6, we compare various selection methods, together with directly connecting high-quality labeled nodes of the same class (Direct Con). We observe that these methods are less effective than IntraMix. Direct Con is not effective because, despite high-quality labeling, noise is still present; IntraMix avoids this problem, and the experimental results validate Theorem 3.2. Compared to other neighbor selection methods, IntraMix offers a simple way to select nodes that are more likely to serve as neighbors, leading to more accurate information passing. Among the methods, Vanilla Con connects the generated node to the nodes used for generation, and Similar Con (SC) connects it to nodes with similar embeddings. SC performs well, highlighting the importance of selecting similar nodes as neighbors, which aligns with our intuition that neighbors tend to share the same class. However, SC is not as good as IntraMix, mainly because the initial neighborhood of a generated node is empty, making it hard to obtain accurate embeddings for similarity measurement. Even when similarity can be determined, connecting overly similar nodes results in insufficient neighborhood information. In comparison, IntraMix connects nodes with the same label, maintaining neighborhood correctness while connecting nodes that are not extremely similar, thereby enriching the knowledge within the neighborhood. In Table 5, using an all-zero vector to eliminate the influence of Mixup still shows a performance improvement, which reflects the rationality of our proposed Neighbor Selection method as an effective component of graph augmentation.

Utilization of unlabeled data: In this part, we show the importance of using unlabeled nodes to obtain low-quality data; the results are shown in Figure 3(a).

[Figure 3. (a) Experimental results using different proportions of unlabeled nodes show that performance improves as more unlabeled nodes are utilized. (b) Sensitivity analysis of λ indicates that the best performance is achieved when λ = 0.5. (c) IntraMix demonstrates effective capabilities in overcoming over-smoothing with deep GNNs (MADGap over layers, compared with GCN, GAT, GraphSAGE, LA-GCN, and GRAND on Cora).]

The poor performance of Mixup restricted to the sparsely labeled nodes is due to the limited number of available nodes. Even though Mixup can augment the label information to some extent, the insufficient pool of nodes used for generation creates a bottleneck in information gain, hindering GNNs from learning enough knowledge. Despite the low quality of the labels provided by pseudo-labeling for unlabeled data, Intra-Class Mixup enhances the label quality, thus providing GNNs with an ample knowledge space.

Sensitivity Analysis of λ: This part discusses the impact of λ in Intra-Class Mixup. The experiment is conducted using GCN on Cora, and detailed results are presented in Figure 3(b).
According to Theorem 3.1, the best noise reduction in each class label is achieved when λ = 0.5. The results validate our theoretical analysis, showing that the performance of GCN gradually improves as λ varies from 0 to 0.5. This indicates that the closer λ is to 0.5, the higher the quality of the generated data labels. We therefore choose λ ∼ B(2, 2), where B denotes the Beta distribution.

4.5. Over-smoothing Analysis

As is well known, deep GNNs can suffer from over-smoothing, a phenomenon characterized by the convergence of node embeddings. We show IntraMix's ability to alleviate over-smoothing in Figure 3(c). We use MADGap (Chen et al., 2020) as the metric, where a larger MADGap indicates milder over-smoothing. Surprisingly, although IntraMix is not specifically designed to address over-smoothing, it shows a strong ability to counteract it, reaching a level similar to GRAND (Feng et al., 2020), a method specialized in addressing over-smoothing. This is attributed to the bridging effect of the generated nodes, which connect nodes of the same class with high confidence in a random manner. This process resembles random propagation, providing effective resistance against over-smoothing. Additionally, the richer neighborhoods and node features inherently mitigate over-smoothing issues (Keriven, 2022).

5. Related Work

Graph Augmentation: The primary purpose of graph augmentation is to address two common challenges GNNs encounter on graphs: scarcity of labels and incomplete neighborhoods (Ding et al., 2022). Graph augmentation methods can be categorized into Node Manipulation (Verma et al., 2021), Edge Manipulation (Rong et al., 2020), Feature Manipulation (Feng et al., 2019), and Subgraph Manipulation (Park et al., 2022). However, existing methods either require complex generators (Liu et al., 2022) or extensive empirical involvement (Wang et al., 2020), failing to effectively address the two issues. The proposed IntraMix offers a simple solution that simultaneously tackles both challenges faced by GNNs. Details can be found in Appendix C.2.

Mixup: Mixup is a promising data augmentation method (Zhang et al., 2018), enhancing generalization across various tasks (Verma et al., 2019; Sun et al., 2020). However, there has been limited focus on the application of Mixup to node classification. We address the shortcomings of Mixup in node classification by proposing IntraMix, which builds on Intra-Class Mixup. IntraMix provides richer information for the graph, improving the performance of GNNs in node classification. Details can be found in Appendix C.1.

6. Conclusion

This paper presents IntraMix, an elegant graph augmentation method for node classification. We utilize Intra-Class Mixup to generate high-quality labels, addressing the issue of sparse high-quality labels. To address the problem of limited neighborhoods, we connect the generated nodes with nodes that are highly likely to be from the same class. IntraMix provides an elegant solution to the dual challenges faced by graphs. Moreover, IntraMix is a flexible method that can be applied to all GNNs. Future work will focus on exploring neighbor selection methods to construct more realistic graphs.

Impact Statements

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
}
title_10K/test_title_short_2405.00958v1.json
ADDED
@@ -0,0 +1,20 @@
{
"url": "http://arxiv.org/abs/2405.00958v1",
"title": "Generative manufacturing systems using diffusion models and ChatGPT",
"abstract": "In this study, we introduce Generative Manufacturing Systems (GMS) as a novel\napproach to effectively manage and coordinate autonomous manufacturing assets,\nthereby enhancing their responsiveness and flexibility to address a wide array\nof production objectives and human preferences. Deviating from traditional\nexplicit modeling, GMS employs generative AI, including diffusion models and\nChatGPT, for implicit learning from envisioned futures, marking a shift from a\nmodel-optimum to a training-sampling decision-making. Through the integration\nof generative AI, GMS enables complex decision-making through interactive\ndialogue with humans, allowing manufacturing assets to generate multiple\nhigh-quality global decisions that can be iteratively refined based on human\nfeedback. Empirical findings showcase GMS's substantial improvement in system\nresilience and responsiveness to uncertainties, with decision times reduced\nfrom seconds to milliseconds. The study underscores the inherent creativity and\ndiversity in the generated solutions, facilitating human-centric\ndecision-making through seamless and continuous human-machine interactions.",
"authors": "Xingyu Li, Fei Tao, Wei Ye, Aydin Nassehi, John W. Sutherland",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.HC",
"cs.SY",
"eess.SY"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "Generative manufacturing systems using diffusion models and ChatGPT",
"main_content": "Introduction Manufacturing systems confront persistent uncertainties with varying forms, urgencies, and impacts. Firstly, the advent of mass personalization [1] coupled with regulatory and standard changes adds complexity to production requirements, mandating systems to adeptly navigate evolving demands and obligations. Secondly, production disruptions [2, 3], like natural disasters, pandemics, financial crises, and geopolitical conflicts trigger resource scarcities and shifts in consumer behavior. 20\u201330% of firms and businesses are compelled to close following a major disruption [4]. Lastly, emerging manufacturing initiatives driven by sustainable, societal, and ecological goals have the potential to reshape production objectives, necessitating a thorough reassessment of existing systems [5]. Future manufacturing systems require the flexibility to promptly adapt to uncertainties and maintain a nuanced balance between emerging initiatives and constraints. The first integration of flexibility into MSs traces back to the 1960s with the inception of flexible manufacturing systems [6]. Despite efforts to improve hardware and software flexibility, the inherent NP-hard complexity of centralized control in these manufacturing systems, especially with a growing number of assets and planning horizons, impedes system responsiveness to uncertainties. Increasing autonomy of manufacturing assets, including robots, vehicles, and mobile manipulators, poses an opportunity to address this challenge by delegating decision authority to each asset. Manufacturers like Audi have shifted from rigid line production to segmented workstations with autonomous assets [7]. Assets tailored for specific manufacturing tasks, such as Little Helper, OMRON MoMa, and KMR IIWA, have demonstrated efficacy across automotive and aerospace industries [8]. Through strategic task assignment and routing, these assets have the potential to realize adaptable layouts and schedules, anticipating up to 30% increase in worker utilization and output levels [9]. Emerging manufacturing systems, exemplified by agent-based manufacturing [10], matrix production systems [11], and anarchic manufacturing [12], incorporate asset autonomy via decentralized or distributed control. However, as autonomous assets become more complex and flexible, facilitated by open interfaces and universal standards [13, 14], these control approaches encounter challenges as well. Each asset often lacks comprehensive awareness of the entire system and its constraints [15], resulting in conflicts in aligning individual plans and impeding the attainment of optimal solutions. 2 \fMore importantly, optimal solutions are contingent upon effectively balancing diverse objectives and stakeholders\u2019 preferences [16, 17], which may not be completely and explicitly modeled. To fully exploit the benefits of asset autonomy, a revolutionary approach is imperative but missing\u2014one that efficiently manages a multitude of diverse assets for various production objectives under uncertainties, all while ensuring the centrality of humans in decision-making. Generative models provide a transformative opportunity to address these challenges through their distinctive generative capabilities, probabilistic modeling, and interactive decision-making. Herein, we proposed a GMS, signifying a fundamental transition from traditional explicit model of now to implicit knowledge of future. 
Drawing inspiration from the vision of the daydreaming factory [18], our approach involves exploring diverse combinations of decisions and uncertainties to generate numerous potential futures. By utilizing generative models, including diffusion models and ChatGPT, GMS proficiently captures the underlying patterns and distributions of decisions from this envisioned future experience, facilitating creative decision-making even for scenarios beyond the initial scope of exploration.

2. Generative Manufacturing Systems

We envisage a synergistic integration of stationary machinery, autonomous assets, and diverse human workforces in futuristic manufacturing systems. Considering the growing asset autonomy and mobility, we propose that autonomous assets and humans can dynamically relocate and self-organize across various workstations to enhance manufacturing operations and streamline material flows. GMS is designed to skillfully adjust configurations and schedules to handle uncertainties and production objectives, all while under human supervision. As illustrated in Fig. 1, humans contribute concerns and suggestions for various objectives to the autonomous assets, fostering collaborative expertise under a unified production floor. To facilitate human interaction with assets, GMS employs large language models like ChatGPT, XLNet, and Turing-NLG to convert human inquiries into machine language. Image generation models, including diffusion models, BigGAN, and DALL-E, are then adopted to generate system configurations (humans and assets at each station) in response to human inquiries. Further granularity in decisions is achieved through operational schedules and task allocations, distributing tasks across stations and between humans and robots while considering material and process constraints.

[Figure 1: Schematic representation of GMS. Left: assets receiving human inquiries; Middle: GMS models get trained from explorations of futures and process human inquiries to sample new decisions; Right: GMS responds to human inquiries by providing diverse options for configurations and schedules.]

Unlike existing approaches that rely on explicit models and their convergence to find optimal decisions (model-optimum), GMS employs a training-sampling approach: by extensively exploring future scenarios, GMS implicitly learns probabilistic distributions of good decisions, assembling these distributions in accordance with human desires and production objectives for decision sampling. This shift from a model-optimum to a training-sampling approach not only addresses computational challenges in existing manufacturing systems but also introduces the following benefits:

Creativity: The incorporation of noise during sampling enables a broader spectrum of potential decisions. Additionally, generative models can innovate novel decisions through purposeful combinations of learned distributions, a critical aspect of aligning with emerging human inquiries and unforeseen scenarios.

Resilience: Training-sampling boosts system resilience in two ways: firstly, sampling decisions is substantially more efficient than optimization convergence, which enhances system responsiveness amidst uncertainties; secondly, sampling provides varied solutions for a wide range of scenarios, equipping GMS with a diverse set of potential responses to enhance resilience.
Human-centricity: The implicit knowledge of GMS seamlessly integrates with human inquiry, knowledge, and expertise, allowing humans to tap into the nuanced insights within generative models. This synergy enables a more cohesive and effective collaboration between humans and autonomous assets, where humans can harness the capabilities of GMS to augment their decision-making while instilling a sense of ownership and job satisfaction.

3. Generative Models

In this section, we present two exemplary generative models for dynamically managing assets in GMS: 1) ChatGPT to extract system requirements from human inquiries, and 2) diffusion models to create configurations that meet those requirements. A configuration is encoded as a matrix $x = \{x_{ij}\}$, which defines the quantity of assets of type $i \in I$ in station $j \in J$. For scheduling, we leverage models from [19] to establish a mapping between a configuration and its corresponding optimal schedule. For simplicity, we consider human heterogeneity only in skill levels, and machine health and production quality are maintained at consistently high levels.

3.1. ChatGPT

Utilizing OpenAI's ChatGPT API in Python with the gpt-3.5-turbo model variant, we create a named entity recognition task to extract key requirements from human inquiries. For example, when presented with the query \"I need a production line with a minimal capacity of 240 parts/hour, using no more than 9 machines,\" the response is a class c = '(240, None, 9)', where 'None' functions as a placeholder for human skills not explicitly mentioned.

3.2. Diffusion models

For decision generation, we adopt diffusion models to learn the underlying patterns, features, and distributions in the training data of envisioned configurations. The diffusion model sets itself apart from other machine learning models by iteratively refining noise-corrupted data to generate new samples, which involves two processes as shown in Fig. 2: 1) a forward process adding noise $\epsilon_t$ at each step until the data $x_0$ is destroyed, and 2) a reverse process that samples a new $x_0$ by iteratively removing estimated noise.

[Figure 2: Forward and reverse processes in the diffusion model.]

We denote by $z_t$ the latent variables, i.e., the noisy data in the forward process, calculated by introducing Gaussian noise $\epsilon \sim N(0, I)$ into the input data $x_0$ at each step $t \in T$, with weights determined by the forward process variances $\beta_t$:

$$z_t = \alpha_t x_0 + \sigma_t \epsilon \quad (1)$$

where $\alpha_t = \sqrt{\prod_{s=1}^{t}(1 - \beta_s)}$ and $\sigma_t = \sqrt{1 - \alpha_t^2}$ are derived from the Markov chain [20]. In the reverse process, a learning model $h_\theta$ is utilized to estimate the noise $\bar{\epsilon}_t^c$ given $z_t$ in order to restore the original data $x_0$. Here, we parameterize the added noise $\bar{\epsilon}_t^c$ as a function of $z_t$, the current step $t$, and the class label $c$ of $x_0$:

$$\bar{\epsilon}_t^c = h_\theta(z_t, t, c) \quad (2)$$

Training the diffusion model involves minimizing the disparity between the estimated and true noises so as to maximize the likelihood that the generated samples align with the distribution of the training data. The detailed training procedure is delineated in Algorithm 1.
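Before turning to sampling, a minimal sketch of the forward process in Equation (1) may help (PyTorch; the linear schedule with $\beta_0 = 10^{-4}$, $\beta_T = 0.02$, and $T = 400$ mirrors the tuning values reported in Section 4, while everything else is an illustrative assumption):

```python
import torch

T = 400
betas = torch.linspace(1e-4, 0.02, T)                   # forward variances beta_t
alphas = torch.sqrt(torch.cumprod(1.0 - betas, dim=0))  # alpha_t = sqrt(prod_s (1 - beta_s))
sigmas = torch.sqrt(1.0 - alphas ** 2)                  # sigma_t = sqrt(1 - alpha_t^2)

def forward_noise(x0: torch.Tensor, t: int):
    """Equation (1): z_t = alpha_t * x0 + sigma_t * eps, with eps ~ N(0, I)."""
    eps = torch.randn_like(x0)
    return alphas[t] * x0 + sigmas[t] * eps, eps        # (noisy latent, target noise)
```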
Algorithm 1 Diffusion Model Training
1: repeat until converged
2: select $t \in [1, T]$ (sample step value)
3: randomly select $x_0$ (sample training data)
4: obtain the class $c$ of data $x_0$
5: $c \leftarrow \emptyset$ with probability $p_u$ (randomly discard class info)
6: sample $z_t = \alpha_t x_0 + \sigma_t \epsilon$ (obtain the latent variable at $t$)
7: take gradient descent on $\nabla_\theta J = \nabla_\theta \| h_\theta(z_t, t, c) - \epsilon \|_2^2$

The sampling process uses the following linear combination to integrate the estimated noises $\tilde{\epsilon}_t$ from both unconditional data and conditional data of the target class $c$ derived from human inquiries:

$$\tilde{\epsilon}_t \leftarrow (1 + w)\bar{\epsilon}_t^c - w\bar{\epsilon}_t^{\emptyset} \quad (3)$$

where $w$ denotes the guidance strength controlling the blend of these two types of noise during sampling. Given the estimated noise $\tilde{\epsilon}_t$ and a noisy sample $x_t$, the sample at the preceding step, $x_{t-1}$, can be drawn from the following distribution (a code sketch of one such guided step is given at the end of this section):

$$x_{t-1} \sim N\Big(\tfrac{1}{\sqrt{1-\beta_t}}\big(x_t - \tfrac{\beta_t}{\sigma_t}\tilde{\epsilon}_t\big),\ \beta_t I\Big) \quad (4)$$

By randomly initializing the noise matrix $x_T \sim N(0, I)$ and iteratively sampling using Eqn. (4), a new configuration $x_0$ can be obtained.

3.3. Learning model

The learning model $h_\theta$ aims to infer the noise estimate $\epsilon_t$ from the latent variable $z_t$, both of dimension $I \times J$. To obtain a natural symmetry, padding is implemented, resulting in square matrices of size $p = \max\{I, J\}$. Fig. 3 shows the proposed learning model in a U-Net structure, which facilitates information flow between the pooling and transposed-convolution pathways. Residual convolutional blocks are tailored to enhance hierarchical feature extraction and pattern recognition for data in matrix format. The introduction of skip connections seamlessly integrates learned features and contextual information across diverse levels of the U-Net. With identical input and output sizes, skip connections effectively facilitate the direct transfer of information across layers and preserve fine details and contextual information throughout the network. The U-Net architecture, enriched by skip connections, effectively retains spatial features in the configuration and noise matrices, offering significant advantages in the generation process.

[Figure 3: U-Net architecture for noise estimation using residual convolutional blocks. Each block has two consecutive convolutional layers with batch normalization, GELU activation, and a residual connection that adds the input to the output tensor, ensuring the network learns residual mappings.]

3.4. Daydreaming Process

To effectively capture implicit knowledge, this study integrates the daydreaming process [18] with meta-heuristics to explore potential decisions in anticipated future scenarios. The process initiates with the generation of random future scenarios, including structured randomness in demands, humans, and autonomous asset capabilities, and corresponding decisions in configurations and schedules. Decision quality is enhanced by integrating selection, crossover, and mutation operations inspired by the genetic algorithm, which provides two advantages: 1) guiding the generation of diverse and adequate configurations for efficient learning, and 2) accelerating data accumulation by storing the populations from each generation. Termination of daydreaming occurs after a predefined number of iterations rather than at model convergence, ensuring a balanced dataset. Each explored configuration is evaluated based on multiple objectives of interest.
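Putting Equations (3) and (4) together, one guided reverse-sampling step could look like the sketch below, reusing `betas` and `sigmas` from the earlier snippet; the `h_theta` signature and the convention of passing `c=None` for the unconditional branch are assumptions:

```python
import torch

@torch.no_grad()
def sample_step(h_theta, x_t, t, c, w=2.0):
    """One reverse step: blend the class-conditional and unconditional noise
    estimates (Eq. 3), then draw x_{t-1} from the Gaussian of Eq. 4."""
    eps_c = h_theta(x_t, t, c)      # conditional estimate, class c from the inquiry
    eps_u = h_theta(x_t, t, None)   # unconditional estimate (class discarded)
    eps = (1 + w) * eps_c - w * eps_u                       # Eq. 3, guidance strength w
    mean = (x_t - (betas[t] / sigmas[t]) * eps) / torch.sqrt(1.0 - betas[t])
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise              # Eq. 4: x_{t-1}
```

Starting from $x_T \sim N(0, I)$ and looping $t$ from $T-1$ down to 0 yields a new configuration $x_0$.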
4. Results

We implemented and simulated GMS in an industrial use case for part processing, following [19]. The system assumes 9 types of assets and operations/operation setups, distributed across 7 stations to facilitate flexible collaborations. Human skill levels were randomized as high/moderate/low (120/60/0 parts/hour) across different operations. The daydreaming process involved the randomization of worker skills over 25 generations, each including 40 potential configurations. Cplex was used to obtain the mapping between configurations and optimal schedules. The simulation spanned 120 runtime units, generating 120,000 data points over 15 hours for training purposes. The diffusion process and learning model were implemented in Python using PyTorch. Based on the optimal tuning results, the process variances were set at $\beta_0 = 10^{-4}$ and $\beta_T = 0.02$, with total steps $T = 400$ and guidance strength $w = 2$.

Fig. 4 shows the sampling process for generating configurations with a specified target capacity. As the step decreases, sampled configurations demonstrate increased rationality, yielding distinct layouts. Rational generation relies on the adept accumulation of implicit knowledge of key features and patterns. For instance, configurations with 0 capacity predominantly display light colors in the later parts of the matrix, signifying minimal asset utilization across few types; a system with such a uniform composition lacks the ability to perform all operations, resulting in 0 capacity. As capacity escalates, more assets (darker colors) of diverse types are included to enhance parallel production and operation efficiency.

[Figure 4: Sampling configurations with target capacity over steps, depicting asset types (x-value) and quantities (pixel darkness) at each station (y-value).]

A comprehensive analysis of decision time was performed to assess the efficacy of GMS in responsiveness compared to existing methods. The comparative approaches applied widely utilized meta-heuristic algorithms for configuration optimization. Table 1 records the average decision time taken to obtain a configuration with the requisite capacity over five runs for each capacity. Diffusion models maintain low decision times, ranging from 9 to 16 milliseconds per decision, across the specified capacities.

Table 1: Comparison of decision time (in seconds) with other algorithms, per target capacity.

Algorithm | 0 | 60 | 120 | 180 | 240 | 300
Particle Swarm Optimization | 7.66 | 8.14 | 7.84 | 8.33 | 7.93 | 8.083
Genetic Algorithm | 18.51 | 16.07 | 15.66 | 15.32 | 14.95 | >300
Differential Evolution | 32.50 | 31.11 | 28.98 | 28.66 | 30.11 | >300
Simulated Annealing | 20.20 | 21.87 | 21.37 | 20.93 | 21.60 | 36.189
Imperialist Competitive Algorithm | 20.14 | 19.57 | 19.63 | 20.56 | 20.13 | >300
Diffusion Models | 0.009 | 0.016 | 0.011 | 0.016 | 0.013 | 0.009

This consistent efficiency signifies a quantitative improvement over the other algorithms, which typically exceed 10 seconds and, at times, fail to attain the target capacity even after 300 seconds. The consistent efficiency of diffusion models underscores a pivotal advancement in the algorithmic efficacy of the training-sampling approach as opposed to the model-optimum approach, markedly enhancing the responsiveness and resilience of GMS to uncertainties.
To comprehensively assess the quality of generated samples, we randomly sample 1000 configurations and evaluate them by three metrics: 1) precision, via accuracy (Accu) and the mean squared error (MSE) of matching the requisite capacity; 2) diversity, via the duplication rate (DR) of generated configurations that already exist in the training data; and 3) fidelity, via the Fréchet Inception Distance (FID), which measures the perceptual quality and fidelity of generated samples relative to the distribution of the training data. The performance of diffusion models with and without guidance is listed in Table 2.

Table 2: Model performance with guidance (top four rows) and without guidance (bottom four rows), per target capacity.

Metric | 0 | 30 | 60 | 90 | 120 | 150 | 180 | 240 | 270 | 300
Accu (%) | 100 | 72.1 | 56.3 | 75.1 | 80.7 | 39.6 | 54.6 | 66.5 | 91.3 | 98.5
MSE | 0 | 15.8 | 27.8 | 14.4 | 19.5 | 27.9 | 34.2 | 23.3 | 10.5 | 3.6
DR (‰) | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 | 2 | 13
FID (10⁻⁶) | 6.7 | 11.7 | 23.0 | 32.7 | 26.4 | 27.8 | 12.8 | 5.3 | 1.9 | 2.0
Accu (%) | 98.1 | 38.4 | 70.4 | 36.6 | 59.6 | 17.1 | 49.7 | 52.1 | 55.2 | 50.8
MSE | 7.9 | 24.2 | 28.5 | 27.5 | 37.0 | 42.1 | 44.7 | 44.8 | 40.3 | 45.8
DR (‰) | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 13
FID (10⁻⁶) | 22.7 | 26.3 | 26.6 | 26.3 | 20.7 | 20.8 | 22.5 | 21.6 | 17.7 | 23.5

In contrast to the unguided model, the proposed model exhibits notably enhanced accuracy (7/10 above 65%) and low MSE (9/10 below 30), highlighting its high precision in generating samples to specified requirements. Notably, both models yield a low DR of less than 13‰, demonstrating decision diversity from randomly sampling the implicit knowledge. The low FID underscores the model's ability to closely match the patterns and distributions of the training data and reproduce key features to generate realistic decisions. Comparatively, the FID score with guidance is much lower at the extreme capacities, which are easily distinguishable in their patterns, but higher at capacities 90-150 due to high similarities among the corresponding configurations. Overall, these precise, high-fidelity, and diverse decisions showcase the resilience and creativity of GMS in accommodating uncertainties and diverse objectives.

Fig. 5 illustrates the dynamic interaction between humans and assets in GMS, refining system configurations (marked by shapes) and schedules (marked by colors). Using the ChatGPT API, human textual inquiries were transformed into a class label $c$ to guide the diffusion model in the sampling process. Analyzing the top 5 decisions by best fitness, our findings underscore the remarkable capability of GMS to harmonize the system with diverse objectives, including capacity, human skills, and involved assets. This interactive process centralizes humans in decision-making, cultivating a synergistic collaboration between humans and autonomous assets for continuous exploration and refinement, ultimately shaping GMS to align with varied objectives, constraints, and human desires in real time.

5. Conclusions

The study introduces GMS to harness the increasing autonomy of manufacturing assets to address uncertainties, human desires, and emerging production objectives. GMS signifies a paradigm shift in decision-making from model-optimization to training-sampling. In an industrial use case, our findings highlight that GMS consistently outperforms existing approaches in decision time, diversity, and quality, highlighting its resilience and creativity. GMS adeptly adjusts configurations and schedules in response to human inquiries and additional objectives, fostering human-centric decision-making for collaborative exploration and continuous refinement.
Future studies could explore diverse scenarios, including other decisions (e.g., diagnosis, quality control) and performance metrics (e.g., carbon emissions, human well-being), while incorporating more complex human inquiries through embeddings rather than fixed classes."
}
title_10K/test_title_short_2405.00966v1.json
ADDED
@@ -0,0 +1,19 @@
{
"url": "http://arxiv.org/abs/2405.00966v1",
"title": "Efficient Compression of Multitask Multilingual Speech Models",
"abstract": "Whisper is a multitask and multilingual speech model covering 99 languages.\nIt yields commendable automatic speech recognition (ASR) results in a subset of\nits covered languages, but the model still underperforms on a non-negligible\nnumber of under-represented languages, a problem exacerbated in smaller model\nversions. In this work, we examine its limitations, demonstrating the presence\nof speaker-related (gender, age) and model-related (resourcefulness and model\nsize) bias. Despite that, we show that only model-related bias are amplified by\nquantization, impacting more low-resource languages and smaller models.\nSearching for a better compression approach, we propose DistilWhisper, an\napproach that is able to bridge the performance gap in ASR for these languages\nwhile retaining the advantages of multitask and multilingual capabilities. Our\napproach involves two key strategies: lightweight modular ASR fine-tuning of\nwhisper-small using language-specific experts, and knowledge distillation from\nwhisper-large-v2. This dual approach allows us to effectively boost ASR\nperformance while keeping the robustness inherited from the multitask and\nmultilingual pre-training. Results demonstrate that our approach is more\neffective than standard fine-tuning or LoRA adapters, boosting performance in\nthe targeted languages for both in- and out-of-domain test sets, while\nintroducing only a negligible parameter overhead at inference.",
"authors": "Thomas Palmeira Ferraz",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.SD",
"eess.AS"
],
"label": "Original Paper",
"paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
"gt": "Efficient Compression of Multitask Multilingual Speech Models",
"main_content": "Introduction 1.1 Motivation Over the past three years, the field of Natural Language Processing (NLP) has been revolutionized by the introduction of large pre-trained models, often referred to as \"foundation models.\" These models, both for text and speech, are trained on vast amounts of unlabeled data and can subsequently be fine-tuned for specific tasks using limited labeled data. Multilingual foundation models have garnered significant attention due to their ability to handle hundreds of languages within a single model. However, they face a challenge known as the curse of multilinguality: in order to maintain high performance across all supported languages, these models require an increase in the number of parameters, leading to larger memory requirements and slower inference times. This can render the use of such models impractical in certain scenarios. To address this issue, research has been conducted on model compression techniques, although these methods may inadvertently exacerbate biases present in the model. This internship focuses on OpenAI\u2019s Whisper, a family of multilingual multi-task speech models known for their impressive performance in speech recognition. These models exhibit robustness when transcribing speech recorded under various conditions, surpassing the capabilities of previous models. However, there remain important questions to explore regarding Whisper and its multitask learning approach. Although the model presents exceptional capability for transcribing and translating English, its performance in other languages indicates a decline in multilingual capabilities as the model size decreases. Additionally, we aim to investigate how this multilingual architecture handles biases related to different speakers, including gender, age, and accent. These questions drive our research to enhance the understanding of Whisper\u2019s capabilities and limitations. 1.2 Internship Objectives This internship has three main objectives: (1) Conduct a comprehensive analysis of bias within the Whisper model family, with a specific focus speaker-related (gender, age, accent) and modelrelated (model size, resourcefulness, similar languages) biases; \f2 INTRODUCTION (2) Explore how light compression techniques, such as quantization, may either mitigate or exacerbate any identified biases within the Whisper models; (3) Propose a better compression approach that effectively reduces any disparities found in the models. 1.3 Contributions of this work This work offers two significant contributions. Firstly, it provides a comprehensive analysis of the biases present in the Whisper model and examines how quantization impacts these biases. Secondly, it introduces an alternative model compression method called DistilWhisper, which enhances the performance of smaller Whisper models. Additionally, all models and code developed in this research will be made available as open-source resources. The structure of this report is as follows: Chapter 2 provides essential fundamentals and a comparison with related work to establish a foundational understanding. Chapter 3 details the experimental setup and results of the investigation into bias when quantizing Whisper. This investigation leads to the proposal of DistilWhisper, in Chapter 4, a novel parameter-efficient distillation approach that leverages small pre-trained models. Chapter 5 covers the validation of the proposed approach, as well as some interesting analysis. 
Finally, Chapter 6 summarizes the primary findings and conclusions of this work.

1.4 About NAVER LABS Europe

NAVER LABS is the R&D subsidiary of NAVER, Korea's leading internet company, and the part of NAVER responsible for creating future technology. Its world-class researchers in Korea and Europe create new connections between people, machines, spaces and information by advancing technology in AI, robotics, autonomous driving, 3D/HD mapping and AR. NAVER LABS Europe is the biggest industrial research lab in artificial intelligence in France and a hub of NAVER's global AI R&D Belt, a network of centers of excellence in Korea, Japan, Vietnam, USA & Europe. The scientists at NAVER LABS Europe conduct fundamental and applied research in machine learning (optimization, robotics), computer vision, natural language processing, and UX and ethnography. The site is located in Grenoble, France.

2 Background and Related Work

2.1 State of the Art for Automatic Speech Recognition

Current ASR approaches primarily involve adapting pre-trained Transformer stacks (Vaswani et al., 2017), which are initially trained through self-supervised learning (SSL) on unlabeled audio data. These pre-trained models can vary in their use of pretext tasks (e.g., wav2vec 2.0 (Baevski et al., 2020), HuBERT (Hsu et al., 2021), WavLM (Chen et al., 2022)) and the range of languages they cover (e.g., XLSR-53 (Conneau et al., 2021), XLS-R (Babu et al., 2022), MMS (Pratap et al., 2023), Google-USM (Y. Zhang et al., 2023)). This development of models has also seen the introduction of monolingual and multilingual SSL benchmarks, such as SUPERB for English (Yang et al., 2021), LeBenchmark for French (Evain et al., 2021), and ML-SUPERB, which covers 143 languages (Shi et al., 2023). In contrast to this line of research, the Whisper model relies on weak supervision, meaning it is trained solely on weakly labeled data (without self-supervision). Nevertheless, with an ample amount of data, the Whisper model achieves competitive results when compared to monolingual (Gandhi et al., 2022; Radford et al., 2023) and multilingual (Pratap et al., 2023) SSL models. For broader ASR benchmarks facilitating comparisons between SSL pre-training and multitask weakly-supervised training, the ESB benchmark from HuggingFace (Gandhi et al., 2022) for English is an illustrative example.

2.2 Domain Adaptation

Domain adaptation consists of adapting a pre-existing trained model to a new domain or task with minor weight adjustments, rather than retraining the entire model from scratch. In the past, this adaptation was primarily carried out through full fine-tuning, where all the model's weights are updated. In the case of Transformer-based models, it is also common to proceed with adaptation by updating only specific layers, usually the final ones (Laskar et al., 2022). More recently, the practice of domain adaptation has seen the emergence of Adapter-based techniques, initially proposed by Houlsby et al. (2019). Adapters are lightweight modules commonly used in both NLP and speech to adapt pre-trained models to new tasks or domains. In speech-related tasks, Adapter-based fine-tuning has found applications in speech translation (Antonios et al., 2022; Gow-Smith et al., 2023; Le et al., 2021), domain adaptation (Thomas et al., 2022; Tomanek et al., 2021), and other tasks.
They have demonstrated comparable performance to standard fine-tuning while utilizing only a fraction of trainable parameters. Furthermore, there are efforts to adapt Whisper models to specific tasks using LoRA adapters (e.g., Arabic dialect identification (Radhakrishnan et al., 2023), spoken language understanding (M. Wang et al., 2023), and emotion recognition (Feng & Narayanan, 2023)). Additionally, some work involves full fine-tuning for task adaptation (e.g., child spoken language understanding (Jain et al., 2023)). In contrast to adapters and full fine-tuning, our work introduces gated language-specific layers into the Whisper model and presents a parameter-efficient Knowledge Distillation approach. These innovations enhance the model's robustness to out-of-domain data.

2.2.1 Low-rank Adapters (LoRA)

Low-rank Adapter (LoRA) fine-tuning, as proposed by Hu et al. (2022), is a technique designed to reduce memory requirements for domain adaptation. This is achieved by introducing new trainable parameters into a pre-trained neural network while keeping the original pre-trained model weights fixed. These introduced parameters take the form of trainable rank-decomposition matrices, inserted between specific layers or blocks of the model. This approach significantly reduces the number of parameters that need to be fine-tuned when adapting the model for specific downstream tasks. For example, when fine-tuning a multilingual multi-task model for a single language and task, LoRA adapters help streamline the adaptation process.

The key assumption behind LoRA is that weight matrix updates in Transformer-based models exhibit a low \"intrinsic rank\" during full fine-tuning. This means that the update of a pre-trained weight matrix $W_0 \in \mathbb{R}^{d \times k}$ can be effectively represented by a low-rank decomposition, $W_0 + \Delta W = W_0 + BA$, where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times k}$, and the rank $r \ll \min(d, k)$. Importantly, during LoRA fine-tuning, $W_0$ remains frozen and does not receive gradient updates, while $A$ and $B$ are the trainable parameters:

$$h = W_0 x + \Delta W x = W_0 x + BAx \quad (2.1)$$

One significant advantage of this approach is that it allows for parallel computation during the forward pass: the two terms of the output $h$ in Equation 2.1 can be computed in parallel and the partial results summed coordinate-wise.
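For illustration, here is a minimal PyTorch sketch of the LoRA parameterization in Equation 2.1. The zero-initialization of $B$, the initialization scale of $A$, and the wrapper name are common conventions assumed here, not details taken from this report:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """h = W0 x + B A x (Eq. 2.1): W0 stays frozen, only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze pretrained W0
        d, k = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)  # A in R^{r x k}
        self.B = nn.Parameter(torch.zeros(d, r))         # B in R^{d x r}; zero init => ΔW = 0 at start

    def forward(self, x):
        # the two terms can be computed in parallel and summed coordinate-wise
        return self.base(x) + x @ self.A.t() @ self.B.t()
```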
2.3 Quantization

Quantization is a well-established technique in the field of Deep Learning, employed to increase the efficiency of neural networks. Historically, neural networks were often trained using low-precision numerical representations (Hubara et al., 2017). A more recent trend, particularly in NLP, is post-training quantization, which applies quantization to models after they have been trained at regular, higher precision. This approach has gained traction as it offers the dual benefits of reducing inference latency and model size. Post-training quantization has found widespread use in various domains, including machine translation and language models (Bondarenko et al., 2021; Liang et al., 2021; Menghani, 2023; Wu et al., 2020). Quantized NLP models have yielded promising results, making it an appealing approach.

One of the most widely adopted techniques for post-training quantization in both the NLP and speech communities is the LLM.int8() algorithm (Dettmers et al., 2022). This method implements quantization in the feed-forward and attention projection layers of the Transformer architecture and has two parts: vector-wise quantization and mixed-precision decomposition. In vector-wise quantization, conversion constants are determined that allow the recovery of original numbers from 8-bit to 16-bit floating-point representations, enabling matrix multiplication to be carried out at the lower 8-bit precision. In the mixed-precision decomposition, potential outliers that could be adversely impacted by reduced precision are identified, and that part of the matrix multiplication is executed at 16-bit precision. While initially designed for decoder-only large language models (LLMs), this quantization method, along with its 4-bit variation (Dettmers & Zettlemoyer, 2023), has gained widespread adoption for various Transformer-based models. It is readily available in the Transformers library by Hugging Face (Wolf et al., 2020), contributing to its popularity. Additionally, it is becoming common to combine this quantization technique with domain adaptation methods; for instance, the QLoRA method (Dettmers et al., 2023) incorporates LoRA adapters on top of a quantized Transformer model.

2.4 Knowledge Distillation

Knowledge distillation (KD) was initially proposed by Hinton et al. (2015) to distill knowledge from an ensemble of models into a single model. Over time, KD has evolved toward distilling knowledge from a large teacher model into smaller student models (Mohammadshahi et al., 2022; Sanh et al., 2020; Shen et al., 2023). Knowledge distillation can be approached in two primary ways: representation matching or distribution matching. In this work, our focus is on distribution matching.

Traditional distribution-matching knowledge distillation minimizes the Kullback-Leibler (KL) divergence between a teacher model and a student model, as represented by Equation 2.2:

$$J_{KL} = D_{KL}(p \| q_\theta) = E_{Y \sim p}\left[\log \frac{p(Y)}{q_\theta(Y)}\right] \quad (2.2)$$

where $p$ is the teacher distribution, $q_\theta$ is the student distribution, and $Y$ is sampled from the teacher distribution. However, learning based on the KL divergence at the sequence level can often lead to the student distribution becoming overly smooth, as it attempts to cover the entire support of the teacher distribution. This behavior arises from the asymmetric nature of the KL divergence, a phenomenon sometimes referred to as the mode-averaging problem, as demonstrated by Wen et al. (2023). Recent research (Go et al., 2023; Wen et al., 2023) has shown that symmetric divergences, such as the Jensen-Shannon (JS) divergence, exhibit fewer borderline behaviors and tend to yield improved results in sequence-level distillation. The JS divergence is expressed in Equation 2.3:

$$J_{JS} = D_{JS}(p \| q_\theta) = \frac{1}{2} E_{Y \sim p}\left[\log \frac{p(Y)}{m(Y)}\right] + \frac{1}{2} E_{Y' \sim q_\theta}\left[\log \frac{q_\theta(Y')}{m(Y')}\right] \quad (2.3)$$

where $p$ is the teacher distribution, $q_\theta$ is the student distribution, and $Y$ and $Y'$ are sampled from the teacher's and student's distributions respectively and compared against their average $m(\cdot) = \frac{1}{2} p(\cdot) + \frac{1}{2} q_\theta(\cdot)$.
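As a rough sketch of the distribution-matching objective in Equation 2.3, here is a tokenwise JS loss between teacher and student logits. The tokenwise decomposition and the clamping constant are practical assumptions, not the exact sequence-level objective discussed above:

```python
import torch
import torch.nn.functional as F

def js_loss(student_logits, teacher_logits, eps=1e-9):
    """Tokenwise Jensen-Shannon divergence between teacher p and student q_theta.

    Both inputs: (batch, seq_len, vocab_size). Returns a scalar loss.
    """
    p = F.softmax(teacher_logits, dim=-1)
    q = F.softmax(student_logits, dim=-1)
    m = 0.5 * (p + q)                                   # m = (p + q_theta) / 2
    kl_pm = (p * (p.clamp_min(eps).log() - m.clamp_min(eps).log())).sum(-1)
    kl_qm = (q * (q.clamp_min(eps).log() - m.clamp_min(eps).log())).sum(-1)
    return 0.5 * (kl_pm + kl_qm).mean()                 # 1/2 KL(p||m) + 1/2 KL(q||m)
```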
2.5 Datasets for Multilingual ASR

Here we present two widely used massively multilingual datasets that will be used in this work: CommonVoice 13.0 and FLEURS.

2.5.1 CommonVoice 13.0

The CommonVoice 13.0 (CV-13) corpus (Ardila et al., 2020) is the latest iteration of a massively multilingual collection of transcribed speech and a valuable resource for research and development in speech technology. While primarily designed for Automatic Speech Recognition (ASR) applications, the dataset also finds utility in other domains, such as language identification. Its utterances are sourced from Wikipedia articles and supplemented with utterances contributed by language communities; these are subsequently narrated by contributors through Mozilla's website or iPhone app. To ensure data quality, contributions undergo validation by other volunteers, with only validated data being incorporated into the train, validation, and test splits of the dataset. As of the current version, the dataset encompasses 110 languages, though the number of utterances per language varies significantly.

2.5.2 FLEURS

FLEURS (Conneau et al., 2023) is an n-way parallel speech dataset in 102 languages built on top of the FLoRes-101 machine translation benchmark (Goyal et al., 2022), with approximately 12 hours of speech supervision per language. It is meant for few-shot learning on a variety of speech tasks, including Automatic Speech Recognition, Speech Language Identification, Speech Translation, and Retrieval. The creation of this dataset involved recording all the publicly available sentences from FLoRes-101 (from the dev and devtest splits). Each sentence was recorded by three paid native-speaker experts per language. Subsequently, these spoken sentences underwent a thorough evaluation by paid evaluators to ensure the overall quality and accuracy of the recorded content. The dataset is unbalanced, as not all sentences were validated, but most languages have between 2400 and 3300 utterances in the train split, with an average of 12 seconds per audio sample.
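Both corpora are distributed through the Hugging Face Hub; as a sketch, loading one language of each might look as follows (the dataset identifiers, config names, and column names below follow the Hub conventions at the time of writing and should be double-checked):

```python
from datasets import load_dataset

# CommonVoice 13.0, e.g. Catalan; the corpus is gated, so its terms
# must be accepted on the Hub first
cv13 = load_dataset("mozilla-foundation/common_voice_13_0", "ca", split="test")

# FLEURS, e.g. Catalan (configs follow a <lang>_<region> pattern)
fleurs = load_dataset("google/fleurs", "ca_es", split="test")

print(cv13[0]["sentence"])         # CV-13 transcript column
print(fleurs[0]["transcription"])  # FLEURS transcript column
```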
2.6 The Whisper Model

In this section we present Whisper (Radford et al., 2023), the base model for the studies conducted in this work.

Figure 1 The Whisper model architecture (Source: Radford et al. (2023))

2.6.1 Overview

Whisper is designed to serve as a versatile end-to-end Automatic Speech Recognition (ASR) model suitable for a wide range of applications and languages. When it comes to ASR, previous research has predominantly focused on two key approaches: large-scale Unsupervised Learning (Y. Wang et al., 2022) and Supervised Learning, as discussed in Section 2.1. In the case of large-scale Unsupervised Learning, models benefit from training on vast, low-cost, unlabeled datasets, which helps in building a high-quality encoding component. However, these models generate output that is not directly usable for ASR applications and requires further fine-tuning. Supervised Learning approaches, on the other hand, produce pre-trained models that can be used directly for ASR tasks. However, they often struggle to generalize when faced with shifts in the data distribution, primarily due to the limited size of the datasets they were originally trained on. Additionally, creating large-scale human-labeled datasets for these models can be prohibitively expensive.

Whisper takes a unique approach by introducing Weakly Supervised Learning, striking a balance between data quality and quantity. The Whisper training dataset is curated by collecting pairs of audio and corresponding transcripts from the internet (mainly YouTube videos). After some minimal processing, which included language identification with the model proposed by Valk and Alumäe (2021), this dataset comprises a substantial 680,000 hours of highly diverse audio content.
Notably, it encompasses 96 languages besides English, with approximately 17.2% of the dataset consisting of audio and transcript pairs in the same language (ASR). Additionally, around 18.4% of the pairs have English-translated transcripts. This unique approach provides Whisper with several advantages. Firstly, the Whisper encoder benefits from the rich and diverse dataset, making it perform exceptionally well, similar to unsupervised settings. Secondly, Whisper is trained with relatively clean labels, allowing it to be used in a zero-shot manner without the need for extensive fine-tuning.

2.6.2 Architecture

The architecture of Whisper consists of the original Transformer architecture (Vaswani et al., 2017) preceded by a dimension-reduction layer called the stem. The architecture is visually depicted in Figure 1.

Stem

The stem comprises a pair of 1-dimensional convolution layers, each followed by a GELU activation. Both convolution layers employ filters of size 3 and produce d output channels; the value of d varies across the different Whisper architecture sizes. The first convolution layer operates with a stride of 1, while the second employs a stride of 2, effectively halving the length of the input sequence. Consequently, the output of the stem is a sequence of 1500 elements, each of dimension d. As the self-attention layers in a Transformer exhibit quadratic complexity with respect to sequence length, for a fixed hidden representation size d the stem reduces this computational cost by a factor of 4. A code sketch of the stem is given below.
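For concreteness, a minimal PyTorch sketch of the stem under the assumptions above (80 mel bins and 3000 input frames for 30 seconds of audio; module and variable names are ours, not Whisper's source code):

```python
import torch
import torch.nn as nn

class WhisperStem(nn.Module):
    # Two 1-D convolutions with GELU activations; the second convolution
    # uses stride 2 and halves the sequence length.
    def __init__(self, n_mels=80, d_model=384):  # d_model=384 matches "tiny"
        super().__init__()
        self.conv1 = nn.Conv1d(n_mels, d_model, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=3, stride=2, padding=1)

    def forward(self, mel):  # mel: (batch, n_mels, 3000) for 30 s of audio
        x = nn.functional.gelu(self.conv1(mel))
        x = nn.functional.gelu(self.conv2(x))
        return x.transpose(1, 2)  # (batch, 1500, d_model)

stem = WhisperStem()
out = stem(torch.randn(1, 80, 3000))
print(out.shape)  # torch.Size([1, 1500, 384])
```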
Transformer

In their work, Radford et al. (2023) primarily highlight the impact of scaled weak supervision on ASR system performance, with less emphasis on architectural modifications. The base architecture employed for Whisper is the encoder-decoder Transformer, which is renowned for its scalability and reliability in several sequence-to-sequence tasks. However, the Whisper Transformer does introduce a few key modifications compared to the original architecture. Sinusoidal encodings are added to the input representations of the encoder, while the positional encodings in the decoder are learned. Additionally, GELU activation functions are used instead of ReLU, and these activations are applied following the residual blocks. Moreover, a normalization layer is included at the encoder's output. Furthermore, Whisper offers a range of five different architecture sizes, as detailed in Table 1.

Model    Layers (L)   Width (d)   Parameters
Tiny         4           384          39M
Base         6           512          74M
Small        12          768         244M
Medium       24          1024        769M
Large        32          1280        1550M

Table 1 Architectural specifications for the Whisper model family. L denotes the number of layers per block, indicating that, for example, the tiny model with L = 4 consists of 4 Transformer layers in the encoder and 4 in the decoder.

Tokenization

To tokenize transcripts, the Whisper model employs the BPE (Byte Pair Encoding) tokenizer originally introduced in GPT-2 by Radford et al. (2019). For languages other than English, the tokenizer is adapted by refining it until the vocabulary size matches that of English.

2.6.3 Multitasking

Whisper is trained and operates as a multitask model, capable of handling various sub-tasks within a single end-to-end architecture. These sub-tasks encompass Voice Activity Detection, Language Identification, Text Alignment, Transcription, Translation, and more. To delineate each task and the expected format of the subsequent predictions, specific tokens are employed, as listed in Table 2. These tokens are positioned at the start of the output sequence, providing task context (see Figure 1). Token generation follows an auto-regressive process, reliant on prior tokens. For example, when the detected language is French, the model computes the likelihood of token w at position k' as illustrated in Equation 2.4:

P(w_{k'} = w \mid \ldots, \texttt{<|fr|>}, \texttt{<|transcribe|>}, \ldots, w_{k'-1}, X)   (2.4)

Consequently, the generated tokens will most likely belong to the French vocabulary, as these have higher conditional probabilities than tokens belonging to other languages.

Task                       Tokens
Language Identification    <|LANGUAGE|>, e.g. <|en|>, <|gl|>, <|fr|>, <|fa|>, etc.
Voice Activity Detection   <|nospeech|>
Transcribe                 <|transcribe|>
Translate                  <|translate|>
Alignment                  <|notimestamps|>

Table 2 Subset of special tokens associated with Whisper's multitasking. For Language Identification, each language is specified with its own token, and exactly one such token is added to the sequence; this token is required. For Voice Activity Detection, the corresponding token is present in the output only when the audio does not contain clear speech. The Transcribe and Translate tasks are mutually exclusive, but one of them is required.

Additionally, certain special tokens can be predefined to simplify predictions. In our work, we specifically enforce the transcription and language tokens, thereby eliminating any dependency on Language Identification quality for under-represented languages. Tasks not pertinent to our study are also disregarded, as the sketch below illustrates.
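With the Transformers library, forcing these tokens amounts to prefixing the decoder prompt; a minimal sketch (audio loading is elided, Galician is just an example language, and the exact generation API has varied across Transformers versions):

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Pin the language and task tokens so decoding does not depend on Whisper's
# own language identification for under-represented languages.
forced_ids = processor.get_decoder_prompt_ids(language="galician",
                                              task="transcribe")

# inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
# predicted_ids = model.generate(inputs.input_features,
#                                forced_decoder_ids=forced_ids)
# text = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
```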
3 Bias Analysis on Quantized Speech Models

In this chapter, we address the first two objectives of the internship: understanding the biases present in Whisper models, and investigating how these are impacted by the use of quantization.

3.1 Experimental Setup

3.1.1 Dataset preparation

In our research, we employed the two widely recognized datasets described in Section 2.5: FLEURS and Common Voice 13.0 (CV-13). These datasets provide valuable speaker-related information, including gender, language group (in the case of FLEURS), accent (exclusive to CV-13), and age (exclusive to CV-13). Building upon the information available in FLEURS, we curated a gender-balanced benchmark, which we refer to as Balanced-FLEURS. The primary goal here was to mitigate the influence of confounding variables such as sentence complexity and gender imbalance (certain languages exhibit a higher percentage of speakers from one gender). To achieve this, we merged the train, validation, and test sets of FLEURS and filtered them meticulously to ensure that each sentence was narrated by both a male and a female speaker. We also ran a Voice Activity Detection model on the dataset, as we encountered a notable number of empty audio files in Spanish, Norwegian, and Malay¹. We include in the experiments only the languages for which we were able to find at least 200 utterances. In addition to Balanced-FLEURS, we made use of the Common Voice 13.0 dataset, specifically its validation set. In this case, we leveraged gender and age information. While we attempted to incorporate accent information in our study as well, we encountered challenges in aggregating a sufficiently large dataset, even after merging the train, test, and validation splits. Consequently, we do not report results with respect to accents.

¹ We have reported this issue to the Google Team via HuggingFace, listing all problematic files. The corresponding issue can be found here: https://huggingface.co/datasets/google/fleurs/discussions/16#6442a217f8b647fa4f50c489

3.1.2 Resourcefulness categorization

In the course of our experiments, we introduce a resourcefulness classification system specifically tailored to weakly supervised speech models, with a primary focus on the transcription task (ASR). This categorization groups languages based on the amount of training data used in model pre-training. The classification involves clustering languages into categories with similar amounts of training data; the intervals used for this classification can be found in Table 3.

Resourcefulness           ASR Training data (h)
Super High-Resource       >= 5000
High-resource             [1000, 5000)
Mid-to-high-resource      [500, 1000)
Low-to-mid-resource       [100, 500)
Low-resource              [10, 100)
Extremely Low-Resource    (0, 10)

Table 3 Proposed language resourcefulness categorization for weakly supervised ASR models.

It is worth noting that our proposed classification system has a limitation in the context of Whisper. Specifically, it does not account for the volume of training data available for the speech translation task. While this data does not directly impact the quality of the generated text for a language (since in Whisper the available translation data is to English only), it does play a role in enhancing the model's speech encoding capabilities.

3.2 Bias evaluation on Whisper

In this section, we present preliminary experiments conducted on the Whisper model. Our aim here is to investigate whether bias exists in the original versions of Whisper. To achieve this, we compare Whisper's performance on the validation split of CV-13 and on Balanced-FLEURS. Our analysis adopts an aggregate approach, in which we average the metrics across languages. Figures 2 (Balanced-FLEURS) and 3 (CV-13) show the Word Error Rate (WER) across the languages covered in the two datasets for whisper-large-v2. These results reveal a clear correlation between performance and resourcefulness, with lower-resource languages (Low- and Extremely Low-Resource) consistently exhibiting the poorest performance. Naturally, the impact varies among languages, possibly due to their complexity or the amount of training data available for closely related languages. These findings collectively suggest a bias linked to resourcefulness.

Figure 2 Performance across languages of whisper-large-v2 on Balanced-FLEURS. Languages are ranked on the x-axis by performance.

Figure 4 illustrates the average relative WER difference between male and female speakers on Balanced-FLEURS for whisper-large-v2. This metric, already employed in a similar previous study by Boito et al. (2022), is relevant here because the sentences are consistently the same across genders. Meanwhile, Figure 5 displays the absolute difference (following Costa-jussà et al. (2022)) in WER between male and female speakers on CV-13. In both cases, the results show varying degrees of gender bias across languages; a sketch of both gap metrics follows below.
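Concretely, the per-group WER and the two gap metrics can be computed as follows; this is a minimal sketch assuming predictions and references have already been split by reported speaker gender (variable names are ours), and assuming the relative gap is taken with respect to the mean of the two group scores:

```python
import evaluate

wer_metric = evaluate.load("wer")

def group_wer(references, predictions):
    # Corpus-level WER (in %) for one speaker group.
    return 100 * wer_metric.compute(references=references,
                                    predictions=predictions)

wer_f = group_wer(refs_female, hyps_female)
wer_m = group_wer(refs_male, hyps_male)

abs_gap = wer_f - wer_m                            # absolute gap (Figure 5)
rel_gap = (wer_f - wer_m) / ((wer_f + wer_m) / 2)  # relative gap (Figure 4)
```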
Remarkably, these biases are consistent across the different datasets, implying that each language possesses its own bias, likely attributable to the quality and diversity of its training data. While the model does exhibit gender bias, it is essential to note that, for the most part, this bias remains within a maximum average WER difference of 3 for the majority of languages (in the case of CV-13).

Figure 6 extends the analysis by presenting WER performance across languages on Balanced-FLEURS, mirroring Figure 2, but this time considering all available model sizes within the Whisper family, with languages ranked by resourcefulness. These results unveil two significant findings: (i) the performance trend aligns across nearly all languages, suggesting a consistent ranking of languages based on performance across all models; and (ii) a clear correlation emerges between smaller model sizes and reduced performance, with the model curves closely overlapping. This phenomenon likely stems from the curse of multilinguality, wherein less resourceful languages exhibit larger performance disparities among model sizes. Additionally, it is worth noting the differences between the large and large-v2 models: although both share the same size, the latter benefits from more extensive training, additional optimization steps, and data augmentation techniques. Collectively, these findings shed light on a bias associated with architecture size, despite the models being trained on the same dataset.

Figure 3 Performance across languages of whisper-large-v2 on CV-13. Languages are ranked on the x-axis by performance.

Figure 4 Average relative WER difference between male and female voices on Balanced-FLEURS. Languages are ranked on the x-axis by relative difference and resourcefulness.

Figure 5 Absolute WER difference between male and female voices on CV-13. Languages are ranked on the x-axis by absolute difference.

Figure 6 Performance across languages and across Whisper sizes on Balanced-FLEURS. Languages are ranked on the x-axis by resourcefulness.

3.3 Bias evaluation on quantized Whisper

Now, we delve into the quantized version of Whisper. In this set of experiments, we apply the LLM.int8() method (Dettmers et al., 2022) (described in Section 2.3) to Whisper. The primary objective of this study is to investigate whether the biases observed in the original Whisper model persist, diminish, or intensify after quantization. In essence, we seek to understand what model features may be forgotten due to quantization. A sketch of how the quantized model can be obtained follows below.
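For reference, with recent versions of Transformers the 8-bit model can be obtained through the bitsandbytes integration; a minimal sketch (the exact flag names have changed across library versions, so treat this as indicative rather than our exact setup):

```python
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration

# Apply LLM.int8() weight quantization via the bitsandbytes integration.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
model_8bit = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2",
    quantization_config=quant_config,
    device_map="auto",
)
print(model_8bit.get_memory_footprint() / 1e9, "GB")
```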
In contrast to the previous section, our analysis here adopts a sentence-level approach: we compare the model's performance on individual sentences before and after quantization. To ensure a fair evaluation, we exclude sentences with an initial Word Error Rate (WER) of 100 or more. For this sentence-level analysis, we create histograms based on the difference in WER before and after compression, categorizing sentences into three groups: those that worsened (WER increased by more than 5), those that remained similar (WER difference of less than 5), and those that improved (WER decreased by more than 5). The bucketing is sketched below.
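A minimal sketch of this categorization, assuming per-sentence WER values have already been computed for the original and quantized models (variable names are ours):

```python
from collections import Counter

def bucket(wer_before, wer_after, threshold=5.0):
    # Categorize one sentence by its WER change after quantization.
    delta = wer_after - wer_before
    if delta > threshold:
        return "worsened"
    if delta < -threshold:
        return "improved"
    return "similar"

# pairs: list of (wer_before, wer_after), pre-filtered to wer_before < 100.
counts = Counter(bucket(before, after) for before, after in pairs)
print(counts)
```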
Figures 7 (Balanced-FLEURS) and 8 (CV-13) present histograms categorized by gender for the whisper-large-v2 model, and Figure 9 presents histograms categorized by age group for CV-13. The data clearly indicate that quantization impacts all genders and age groups equally, implying that gender and age biases remain unchanged after quantization.

Figure 7 Histogram of performance degradation by quantization, per gender, on Balanced-FLEURS.

Figure 8 Histogram of performance degradation by quantization, per gender, on CV-13.

Figure 9 Histogram of performance degradation by quantization, per age group, on CV-13.

In Figures 10 (Balanced-FLEURS) and 11 (CV-13), we illustrate histograms categorized by language resourcefulness for whisper-large-v2. Here, a distinct pattern emerges: lower-resource languages are more significantly affected by quantization. While almost all sentences in super high-resource languages maintain their performance, approximately 25% of sentences in extremely low-resource languages are impacted (in the case of Balanced-FLEURS). Consequently, quantization amplifies the resourcefulness bias.

Figure 10 Histogram of performance degradation by quantization, per resourcefulness group, on Balanced-FLEURS.

Figure 11 Histogram of performance degradation by quantization, per resourcefulness group, on CV-13.

Lastly, in Figure 12 (Balanced-FLEURS) and ?? (CV-13), we present histograms considering all available model sizes within the Whisper family, grouped by model size. The results highlight significant differences in how quantization affects models of varying sizes: while only a small proportion of sentences are impacted for whisper-large-v2, there is a striking contrast with whisper-tiny, for which almost half of the sentences are affected. This shows that the bias related to architecture size is significantly amplified by quantization, and indicates that smaller models are generally more susceptible to the effects of quantization. This observation is particularly concerning, as many parameter-efficient domain adaptation methods in use today in NLP and speech apply quantization first, without considering the model size. This calls for practitioners to exercise caution when adapting pre-trained models, to avoid adding unintended bias.

Figure 12 Histogram of performance degradation by quantization, per model size, on Balanced-FLEURS.

3.4 Summary of the main findings

Here we present the key takeaways from this chapter. First, Whisper exhibits certain speaker-related biases, such as gender and age biases; these remain unchanged after applying quantization to the model. On the other hand, biases associated with the model itself (model-related biases), including language resourcefulness and architecture size, are amplified by quantization. Overall, low-resource languages are the most adversely affected by quantization. Moreover, there is a clear pattern with respect to architecture size, with smaller models experiencing more significant performance degradation than larger ones. This is concerning because current parameter-efficient approaches (such as QLoRA, presented in Section 2.3) mostly apply quantization first, regardless of the model size. This presents a significant challenge: can we enhance the performance of smaller models for languages where they currently perform poorly, even though the best model performs well? We therefore aim to find an alternative to quantization for reducing model size.

4 DistilWhisper

One prominent observation is the significant Automatic Speech Recognition (ASR) performance gap between the whisper-large-v2 model and its smaller counterparts when applied to a diverse set of languages. This performance gap is noticeable across a wide spectrum of languages, including low-resource ones but also many mid- and high-resource languages. As our earlier analysis in Chapter 3 revealed, the "lower" resource languages are also the most affected by lightweight compression techniques. This phenomenon is often referred to as the curse of multilinguality (as discussed by Arivazhagan et al. (2019), Conneau et al. (2020), and Goyal et al. (2021)). It stems from the inherent challenge that arises when attempting to cover an extensive array of languages within a single model: performance inevitably suffers unless the model is significantly scaled up. This leads us to the central question that has motivated our research: can we improve the performance of smaller models for languages in which they currently perform poorly, but the best model performs well?

A common approach to achieving efficient inference would be to distill knowledge from a larger multilingual teacher model into a smaller pre-existing one, as in prior work by Sanh et al. (2020) and Mohammadshahi et al. (2022). However, when it comes to applying such knowledge distillation (KD) to whisper-large-v2, the best and largest Whisper model, we face a significant hurdle: maintaining the original model's robustness would require access to information that is not readily available, such as comprehensive training data spanning all tasks and languages, and the original learning objective.

Recent research, exemplified by Pfeiffer et al. (2022) and Pratap et al. (2023), has demonstrated an alternative solution to the curse of multilinguality. This approach involves equipping moderately sized models with language-specific (LS) modules. This sparse architectural design permits extending the model parameters with additional modules as more languages are incorporated. Consequently, it ensures consistent performance across languages without incurring substantial additional computational cost at inference.

In light of the overarching goal of enhancing model performance for various languages within the constraints of limited model capacity, our work introduces the DistilWhisper approach. We incorporate conditional language-specific routing (CLSR) modules, as described by B. Zhang et al. (2021), into a smaller version of Whisper, and we then optimize these modules jointly through ASR fine-tuning and knowledge distillation from a larger Whisper model (whisper-large-v2).

Figure 13 The DistilWhisper optimization approach (left), and its architecture (right). The feed-forward is replaced by a CLSR module, where the LS gates (g) learn to alternate between the pre-trained frozen multilingual representation and the LS layer.
For a visual representation of our architecture, please refer to Figure 13; in the subsequent sections, we delve into the key components of our approach. In the remainder of this chapter, we detail the elements that make up the approach. Then, in Chapter 5, we present how we validate it and the results obtained.

4.1 Conditional Language-Specific Routing

We extend the Conditional Language-Specific Routing (CLSR) modules proposed by B. Zhang et al. (2021), commonly used in multilingual neural machine translation, for the first time to the speech domain. This module, which introduces sparsity to the Transformer architecture, learns a hard binary gate g(·) for each input token from its hidden embedding z_l. These decisions enable a layer to selectively guide information through either a language-specific path h_lang or a shared path h_shared, as in Equation 4.1:

\mathrm{CLSR}(z_l) = g(z_l) \cdot h_{\mathrm{lang}}(z_l) + (1 - g(z_l)) \cdot h_{\mathrm{shared}}(z_l)   (4.1)

In contrast to the original CLSR, in this work we use language-specific gates, as shown in Figure 13, instead of sharing them across languages. This allows us to train language-specific components individually (i.e., in parallel) and to load only the relevant modules at inference. Moreover, our approach also differs from the original CLSR in positioning: supported by previous work (Pfeiffer et al., 2022; B. Zhang et al., 2021), we limit CLSR to the feed-forward network (corresponding to the feature domain of the Transformer architecture), which we replace entirely by the CLSR module, reducing the increase in the number of parameters.

Following the proposal of B. Zhang et al. (2021), each gate g(·) is implemented as a two-layer bottleneck network, whose output is summed with a zero-mean Gaussian noise of increasing magnitude during training to encourage discrete gate decisions:

g(z_l) = \sigma\big(G(z_l) + \alpha(t) \cdot \mathcal{N}(0, 1)\big)   (4.2)

with

G(z_l) = \mathrm{ReLU}(z_l W_1)\, w_2   (4.3)

where σ(·) is the logistic-sigmoid function, and W_1 and w_2 are trainable parameters. α is increased linearly with the training step t. At inference time, we adopt hard gating:

g(z_l) = \delta\big(G(z_l) \geq 0\big)   (4.4)

where δ(·) is a Dirac measure, i.e., the gate is open exactly when G(z_l) ≥ 0. A sketch of the gate and routing logic is given below.
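The following is a minimal PyTorch sketch of this mechanism under our reading of Equations 4.1-4.4; the module names, the bottleneck width, and the noise handling are illustrative rather than the exact implementation:

```python
import torch
import torch.nn as nn

class CLSRGate(nn.Module):
    # Two-layer bottleneck gate (Eqs. 4.2-4.4): a scalar score G(z) per token,
    # perturbed by annealed Gaussian noise at train time, hard 0/1 at inference.
    def __init__(self, d_model, d_bottleneck=64):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_bottleneck, bias=False)
        self.w2 = nn.Linear(d_bottleneck, 1, bias=False)

    def forward(self, z, alpha=0.0):
        score = self.w2(torch.relu(self.w1(z)))      # G(z), Eq. 4.3
        if self.training:
            noise = alpha * torch.randn_like(score)  # alpha(t) grows with t
            return torch.sigmoid(score + noise)      # soft gate, Eq. 4.2
        return (score >= 0).float()                  # hard gate, Eq. 4.4

class CLSRLayer(nn.Module):
    # Routes each token between a frozen shared feed-forward path and a
    # trainable language-specific copy (Eq. 4.1).
    def __init__(self, shared_ffn, lang_ffn, d_model):
        super().__init__()
        self.shared_ffn, self.lang_ffn = shared_ffn, lang_ffn
        self.gate = CLSRGate(d_model)

    def forward(self, z, alpha=0.0):
        g = self.gate(z, alpha)  # (batch, seq_len, 1)
        return g * self.lang_ffn(z) + (1 - g) * self.shared_ffn(z)
```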
4.2 DistilWhisper approach

Figure 13 presents our proposed DistilWhisper architecture. Our student is enriched with a CLSR module at each feed-forward layer, for each language. All experts in each CLSR layer are initialized identically from the frozen weights of the corresponding feed-forward layer. At training time, for each language, the model updates only the corresponding language-specific experts and gates. At inference time, the model loads the shared (multilingual) layers and the language-specific experts and gates for the languages of interest, resulting in a limited parameter overhead. We highlight that the use of CLSR modules brings more flexibility to our architecture compared to adapters, as it allows routing at the token level. This makes the approach more capable of leveraging pre-existing knowledge (the shared frozen module), activating the language-specific path only when doing so is likely to increase performance.

4.3 DistilWhisper optimization

The optimization of our DistilWhisper architecture consists of a standard cross-entropy loss along with two new elements: a gate budget loss and knowledge distillation. In the following, we detail these new elements.

4.3.1 Gate budget loss

Following B. Zhang et al. (2021), when learning the CLSR module parameters, in addition to the standard cross-entropy loss L_CE we optimize a gate budget loss L_g to balance the model's usage of language-specific and shared modules. It relies on the gate activation values g(·) for a pair (audio, text) (X, Y) in a batch B, expressed as:

G_{(X,Y)} = \sum_{x \in X} \sum_{m \in M_{enc}} g_m(x) + \sum_{y \in Y} \sum_{m \in M_{dec}} g_m(y)   (4.5)

where M_enc and M_dec are the sets of encoder and decoder layers respectively, and g_m(·) = 1 when the LS expert is selected in layer m, or g_m(·) = 0 otherwise. The average of this gate usage, representing the proportion of language-specific experts used by the model in the batch, is constrained to a budget b. The final gate budget loss is thus expressed as:

L_g = \left| \frac{\sum_{(X,Y) \in B} G_{(X,Y)}}{\sum_{(X,Y) \in B} \big( |X|\,|M_{enc}| + |Y|\,|M_{dec}| \big)} - b \right|   (4.6)

For regularization, a skip-gate probability s is also used: a randomly chosen proportion s of the gates is closed (using only the shared path) during training. A sketch of the budget loss is given below.
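A minimal PyTorch sketch of Equation 4.6, assuming the per-layer gate tensors have been collected during the forward pass (function and variable names are ours):

```python
import torch

def gate_budget_loss(enc_gates, dec_gates, budget=0.5):
    # enc_gates / dec_gates: lists of gate tensors g_m(.) in [0, 1], one per
    # encoder / decoder layer, each of shape (batch, seq_len, 1).
    total = sum(g.sum() for g in enc_gates) + sum(g.sum() for g in dec_gates)
    count = sum(g.numel() for g in enc_gates) + sum(g.numel() for g in dec_gates)
    # | average LS usage - b |, Eq. 4.6
    return (total / count - budget).abs()
```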
4.3.2 Knowledge Distillation

For knowledge distillation (KD), following recent research (Go et al., 2023; Wen et al., 2023), we employ the Jensen–Shannon divergence (JS), whose loss is detailed in Equation 4.7:

L_{KD} = \frac{1}{2}\,\mathbb{E}_{Y \sim p}\!\left[ \log \frac{p(Y)}{m(Y)} \right] + \frac{1}{2}\,\mathbb{E}_{Y' \sim q_\theta}\!\left[ \log \frac{q_\theta(Y')}{m(Y')} \right]   (4.7)

where p is the teacher distribution, q_θ is the student distribution, and Y and Y' are sampled from the teacher's and student's distributions respectively and compared with their average m(·) = ½ p(·) + ½ q_θ(·).

4.3.3 Final Learning Objective

The final learning objective leverages the dataset labels through the cross-entropy loss L_CE, enforces the use of a specific budget via the gate budget loss L_g, and mirrors the behavior of the teacher through the knowledge distillation loss L_KD. The CLSR module parameters are thus learned by minimizing the final loss:

L = L_{CE} + L_g + \beta L_{KD}   (4.8)

where β is a constant defined based on the quality of the teacher; it could also be scheduled or learned (with the addition of new constraints on its magnitude).

5 Experiments and Results on DistilWhisper

In the former chapter we presented the DistilWhisper approach. In this chapter we present how we validate the architecture and the method as a whole, showing that our approach outperforms both classical fine-tuning and adapters on whisper-small, providing better generalization through lightweight ASR fine-tuning and knowledge distillation from the teacher model. Code and models produced in this study will soon be made available on Hugging Face and GitHub.

5.1 Experimental Setup

In this section we give an overview of our validation setup, which includes the data we use for training and evaluating models, as well as the languages and baselines we consider. We also discuss some implementation details.

5.1.1 Datasets

To validate the proposed architecture, we make use of samples of the two widely used massively multilingual datasets presented in Section 2.5: CommonVoice 13.0 and FLEURS. In our experiments, we applied downsampling to both the train and validation sets of CV-13, ensuring an equal allocation of training data for each selected language in each experiment. For our primary experiment, we employed 10,000 utterances for training (approximately 14 hours of audio data) and 1,000 for validation. Additionally, we explored variations in dataset size, using downsampled sets of 3,000 and 28,000 utterances in scalability experiments. The selection of data for downsampling was guided by the number of up-votes utterances received from annotators; a sketch follows below. Notably, we did not apply downsampling to the test set.
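A minimal sketch of this selection, assuming CV-13's "up_votes" metadata column and Catalan as the example language:

```python
from datasets import load_dataset

cv_train = load_dataset("mozilla-foundation/common_voice_13_0", "ca",
                        split="train")

# Keep the 10,000 most up-voted utterances for the primary setting
# (3,000 and 28,000 are used in the scalability experiments).
cv_train = cv_train.sort("up_votes", reverse=True).select(range(10_000))
```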
For most of our experiments, FLEURS serves as a valuable resource for conducting out-of-domain evaluations. It offers a favorable degree of language overlap with CV-13, making it a suitable choice for comparative analysis, and it provides an effective out-of-domain setting in the context of ASR evaluation: for instance, while the average number of tokens per sample in CV-13 is 36, FLEURS exhibits a substantially higher average of 97 tokens per sample.

5.1.2 Language Selection

In this work we focus on bridging the performance gap for a subset of under-performing languages of the whisper-small model through lightweight ASR fine-tuning and knowledge distillation from the whisper-large-v2 model, as proposed in Chapter 4. To validate our method, we consider all Whisper languages with a WER gap of more than 11 between the large and small models on CV-13. For our validation experiments we then narrow this list by requiring: 1) a minimum of 10k utterances; and 2) an overlap with the FLEURS dataset for out-of-domain evaluation. For scalability experiments we loosen the first requirement to a minimum of 3k utterances, so that we can include a more diverse set of languages. We also experiment with a setting using 28k utterances.

Resourcefulness          ASR Train data (h)   3k                   10k              28k
High-resource            [1000, 5000)         ca, fi, id, pl       ca, pl           ca
Mid-to-high-resource     [500, 1000)          uk, vi               uk
Low-to-mid-resource      [100, 500)           cs, hu, ro, th, ta   cs, hu, th, ta   ta, th
Low-resource             [10, 100)            bg, hi, sk, sl
Extremely Low-Resource   (0, 10)              gl                   gl

Table 4 Languages used in the experiments for the validation of DistilWhisper, grouped by resourcefulness and by training-set size (3k, 10k, 28k utterances).

The final list of languages is: Bulgarian (bg), Catalan (ca), Czech (cs), Finnish (fi), Galician (gl), Hindi (hi), Hungarian (hu), Indonesian (id), Polish (pl), Romanian (ro), Slovak (sk), Slovenian (sl), Tamil (ta), Thai (th), Ukrainian (uk), and Vietnamese (vi).² These languages belong to 7 distinct language sub-families and exhibit significant variation in their representation within the Whisper training data, extending from a substantial 4,300 hours for certain languages, such as Polish (pl), to a mere 9 hours for languages like Galician (gl). For a detailed overview of these languages and their distribution across the three dataset sizes (3k, 10k, 28k), categorized by resourcefulness (following the classification proposed in Section 3.1.2), please refer to Table 4. Additionally, Table 5 organizes these languages into groups based on their respective sub-families.

² Although Arabic would also qualify under our criteria, we find that the dialect in FLEURS differs from the ones present in CV-13.

Sub-families                   3k                       10k       28k
Slavic (Indo-European)         bg, cs, pl, sk, sl, uk   cs, pl
Romance (Indo-European)        ca, gl, ro               ca, gl    ca
Finno-Ugrian (Uralic)          fi, hu                   hu
Austroasiatic                  id, vi
Dravidian                      ta                       ta        ta
Tai (Kra–Dai)                  th                       th        th
Indo-Iranian (Indo-European)   hi

Table 5 Languages used in the experiments for the validation of DistilWhisper, grouped by language sub-family.

5.1.3 Models and Baselines

In our evaluation, we compare our approach against several baseline models. These include the whisper-small model, serving as our pre-trained student and starting point, and the whisper-large-v2 model, acting as the teacher and, ultimately, as the target. Additionally, we explore two fine-tuning (FT) approaches for the student model: standard fine-tuning, where all model weights are updated, and LoRA adaptation, which refines only the feed-forward layers. Moreover, we study the effect of the Conditional Language-Specific Routing (CLSR) layer on its own, without knowledge distillation (KD), referred to as CLSR-FT. This allows us to isolate the influence of KD from the impact of the CLSR layer on the model's overall robustness.

5.1.4 Implementation details

We conducted our experiments using the Transformers library (Wolf et al., 2020) and leveraged the pre-trained weights of the whisper-small and whisper-large-v2 models, sourced from HuggingFace³ ⁴. Unless stated otherwise, our training protocol consisted of ten epochs, a learning rate of 10⁻⁴ with linear decay, a one-epoch warm-up phase, a batch size of 16, and a label smoothing factor of 0.1. For LoRA adaptation, we tested two scenarios: 1) we first adopted the hyperparameters proposed by M. Wang et al. (2023), notably r = 32, the value most commonly used for this type of adapter; 2) we increased the hidden dimension of the adapters to r = 64, so that their size is comparable to the language-specific modules in DistilWhisper. A sketch of this baseline configuration is given below.

³ https://huggingface.co/openai/
⁴ https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013
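For reference, the LoRA-FT baseline can be set up with the PEFT library roughly as follows; this is a minimal sketch assuming LoRA is attached to Whisper's feed-forward projections (module names fc1/fc2 in the Transformers implementation), and lora_alpha and lora_dropout are illustrative choices of ours:

```python
from peft import LoraConfig, get_peft_model
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# LoRA on the feed-forward projections only, mirroring the LoRA-FT baseline;
# r=32 (scenario 1) or r=64 (scenario 2).
lora_config = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.05,
                         target_modules=["fc1", "fc2"])
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```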
In the case of CLSR training, we set the gate budget b to 0.5 and the skip-gate probability s to 0.2. For knowledge distillation (KD), we employed the Jensen–Shannon divergence (JS) with a temperature τ of 1, unless stated otherwise, weighted such that the learning objective consisted of the cross-entropy loss, the gate loss, and twice the KD loss: L = L_CE + L_g + 2 L_KD. We report the normalized Word Error Rate (WER) using the Whisper normalization method, with a slight modification to prevent the splitting of numbers and Latin-scripted text into individual characters in languages that do not employ space delimitation (e.g., Thai). Further details, including the modified normalization method, implementation scripts, and model weights, will soon be made available on GitHub and HuggingFace. Throughout our experiments, we selected the best-performing model based on its WER on the downsampled CV-13 validation set.

5.2 DistilWhisper versus other adaptation approaches

Table 6 presents the results of our first experiment. The top portion shows the whisper-large-v2 (upper bound) and whisper-small (lower bound) pre-trained scores, which should not be directly compared to the other adaptation techniques (middle and bottom), as these two models were not trained on CV-13 (a fully out-of-domain setting for them). The middle portion presents standard fine-tuning (FT) and LoRA adaptation at the feed-forward layers (LoRA-FT). Our results are presented at the bottom: CLSR-FT corresponds to the setting without L_KD, while DistilWhisper is the complete setting in which both the CLSR and KD losses are leveraged.

For whisper-small, we observe that both the standard fine-tuning method (FT) and the LoRA adapters (LoRA-FT) (middle portion of Table 6) enhance performance on the in-domain test set (CV-13). However, as anticipated, employing FT leads to a decline in performance on the out-of-domain test set, with an average WER increase of 1.6. This is likely attributable to catastrophic forgetting, resulting in a tendency to over-specialize in the specific domain. In contrast, LoRA-FT is a more lightweight adaptation technique that preserves the pre-trained representation. Remarkably, it improves performance on both the in-domain (average decrease of 12.8) and out-of-domain (average decrease of 5.6) test sets compared to whisper-small. Notably, experimenting with a larger hidden dimension (r) for the LoRA adapters did not yield any perceptible improvement on average.

Common Voice 13.0 (in-domain for FT only)
Model              # params   avg    ca     th     ta     hu     cs     pl     gl     uk
whisper-large-v2   1.5 B      14.9   16.9   9.3    17.3   18.6   14.5   8.1    19.0   15.6
whisper-small      244 M      31.4   30.1   20.3   30.1   45.5   38.6   18.8   35.7   32.3
+FT                244 M      22.0   19.0   10.9   17.3   30.4   29.2   21.4   19.3   28.8
+LoRA-FT (r=32)    256 M      18.6   15.7   9.2    15.3   30.5   25.0   15.4   12.8   24.8
+LoRA-FT (r=64)    267 M      18.6   15.5   9.2    15.5   30.6   25.2   15.4   13.0   24.6
+CLSR-FT           269 M      16.4   13.9   7.4    13.6   24.9   20.9   16.0   11.2   23.5
DistilWhisper      269 M      16.1   13.8   7.2    12.5   24.1   19.9   16.1   11.6   23.2

FLEURS (out-of-domain)
Model              # params   avg    ca     th     ta     hu     cs     pl     gl     uk
whisper-large-v2   1.5 B      12.6   5.6    12.6   19.3   17.9   14.4   5.9    16.8   8.3
whisper-small      244 M      29.2   14.6   22.7   36.2   42.9   40.3   18.2   33.5   24.8
+FT                244 M      30.8   19.1   28.2   31.6   51.3   38.9   26.1   23.2   27.9
+LoRA-FT (r=32)    256 M      23.6   15.5   17.6   25.5   38.5   33.4   18.5   17.7   22.3
+LoRA-FT (r=64)    267 M      23.6   15.7   17.6   25.7   38.2   33.9   18.5   17.3   22.1
+CLSR-FT           269 M      23.6   15.5   15.7   23.2   37.6   31.2   22.9   16.9   25.9
DistilWhisper      269 M      22.8   15.4   15.1   21.6   37.2   29.8   21.4   16.7   25.1

Table 6 WER (↓) for the 10k setting with dataset averages (avg) for baselines (top), adaptation approaches (middle), and our method (bottom) on the in-domain (CV-13, FT only) and out-of-domain (FLEURS, all) test sets. Best results for whisper-small in bold.

Our approach, DistilWhisper, yields notable performance enhancements. Compared to whisper-small, it achieves a substantial improvement on in-domain data, with an average WER decrease of 15.3. The improvement is also evident when compared to LoRA-FT, where an average decrease of 2.2 is observed. Additionally, DistilWhisper exhibits superior adaptability in out-of-domain scenarios compared to the original whisper-small, with an average improvement of 6.4, and more effective out-of-domain adaptation than LoRA-FT, with an average improvement of 0.8. We observe that both versions of our approach, with and without KD, outperform all other adaptation approaches (FT, LoRA-FT) in-domain and out-of-domain in all languages but two (pl and uk) (bottom portion of Table 6). These findings highlight the robustness of our approach, showcasing that the proposed architecture with the addition of CLSR layers on Whisper provides a strong solution. Notably, all of these improvements are achieved with a mere 25 million parameter overhead at inference (10% of the original model size).
5.3 Impact of knowledge distillation

In this analysis, we compare the two versions of our approach: one optimizes the lightweight CLSR-based architecture without knowledge distillation (CLSR-FT), while the other incorporates the knowledge distillation loss (DistilWhisper). Across the examined languages, we observe some interesting trends. Firstly, for in-domain performance, as shown in Table 6, the DistilWhisper model exhibits a slight average improvement of 0.3 WER, performing better in all languages but Polish and Galician. In out-of-domain scenarios, however, DistilWhisper consistently outperforms CLSR-FT across all languages, with an average improvement of 0.8 WER. This observation confirms our initial hypothesis that the inclusion of knowledge distillation leverages the robustness imparted by the teacher model, preventing over-specialization in the CV-13 domain. Collectively, these results underscore the effectiveness of the proposed architecture. Notably, we managed to bridge the out-of-domain performance gap between large-v2 and small by a substantial 39%, reducing it from 16.6 to 10.2. All of this was achieved with only a modest 10% parameter overhead at inference (25 million parameters).

5.4 DistilWhisper Scalability

In the previous sections we showed that our architecture improves scores on both in-domain and out-of-domain datasets compared to other adaptation approaches. In this section we investigate the effectiveness of our method with respect to the amount of data available for training. For this, we select a subset of languages for which more training data is available in CV-13 (ca, th, ta). Table 7 presents results for our approach in a lower-resource training setting (3k utterances; approx. 4 hours) and a higher-resource setting (28k utterances; approx. 40 hours). The 10k results, as well as the results for whisper-large-v2 and whisper-small, are repeated from Table 6. We observe that, as expected, increasing the number of training examples leads to superior ASR performance for both approaches, with the KD-based variant (DistilWhisper) consistently superior to CLSR-FT and coming closer to closing the out-of-domain performance gap. For the 28k setup, we are able to reduce the out-of-domain WER gap between whisper-large-v2 and whisper-small by 75.8%, from 12.0 to 2.9.

                           FLEURS (out-of-domain)        CV-13 (in-domain)
Model              Train   avg    ca     ta     th       avg    ca     ta     th
whisper-large-v2           12.5   5.6    19.3   12.6     14.5   16.9   17.3   9.3
whisper-small              24.5   14.6   36.2   22.7     26.8   30.1   30.1   20.3
+LoRA-FT (r=64)    3k      22.7   17.7   28.6   21.7     17.0   19.4   19.0   12.5
+CLSR-FT           3k      20.4   17.8   25.4   17.9     15.2   19.2   16.7   9.7
DistilWhisper      3k      20.2   17.2   25.7   17.6     14.8   18.9   15.9   9.6
+LoRA-FT (r=64)    10k     19.7   15.7   25.7   17.6     13.4   15.5   15.5   9.2
+CLSR-FT           10k     18.1   15.5   23.2   15.7     11.6   13.9   13.6   7.4
DistilWhisper      10k     17.4   15.4   21.6   15.1     11.2   13.8   12.5   7.2
+LoRA-FT (r=64)    28k     17.2   13.6   23.0   15.1     11.1   12.5   13.5   7.3
+CLSR-FT           28k     15.6   13.5   19.6   13.8     9.7    11.5   11.3   6.2
DistilWhisper      28k     15.4   13.1   19.2   14.0     9.3    11.3   10.9   5.7

Table 7 WER (↓) for different training data sizes (3k, 10k, and 28k utterances) on the in-domain (CV-13) and out-of-domain (FLEURS) test sets. Best results in bold.
Furthermore, our approach demonstrates commendable robustness with respect to the number of training examples. Even with as few as 3,000 utterances (equivalent to 4 hours of training data), we are able to reduce the out-of-domain WER performance gap by 35.8%. This suggests that our method holds promise for enhancing ASR performance in low-resource languages, where training data availability is limited. Across all three settings, our approaches consistently outperform LoRA adapters by a significant margin. Additionally, it is worth noting that, in nearly all cases within these settings, the inclusion of knowledge distillation proved more beneficial than fine-tuning alone, reinforcing the findings discussed in Section 5.3.

5.5 Gate Activation Analysis

To better understand how the model uses the routing mechanism, we analyze gate activation statistics for both CLSR-FT and DistilWhisper in the experiment discussed in Section 5.4. These results are presented in Figure 14. Firstly, we observe a tendency for the models to rely more heavily on the newly introduced language-specific modules in out-of-domain scenarios. This could be attributed to the greater complexity and larger sentence sizes prevalent in the FLEURS dataset.

Figure 14 Ratio of LS layers chosen by the models (CLSR-FT and DistilWhisper) for Catalan, Thai, and Tamil, depending on (1) the amount of training data; (2) in-domain (CV-13) or out-of-domain (FLEURS) evaluation; (3) the language.

Also, as expected, enlarging the training dataset consistently results in more reliable language-specific modules, leading to increased utilization of these modules. The only exception is Thai in the 28k setup with CLSR-FT, which might be due to dataset quality and requires further investigation. The comparison of the three languages reveals that Catalan displays a notably higher reliance on language-specific routes. This characteristic might be linked to the superior data quality available for Catalan in CV-13, to which a substantial number of contributors have contributed. Also, the distilled version uses more LS modules, probably because the teacher, whisper-large-v2, is a very strong model for this language. For languages with a weaker teacher (Thai, Tamil), we observe that the model may receive contradictory signals in lower-resource settings (3k, 10k), leading to less language-specific routing usage with knowledge distillation. However, in the higher-resource setting (28k), KD systematically leads to more reliable language-specific modules and therefore higher LS routing. Finally, we observe a common trend across the three languages: models tend to employ more language-specific routes when learning with knowledge distillation (DistilWhisper vs. CLSR-FT). This suggests that KD imparts valuable information and enhances the out-of-domain generalization capabilities of the learned language-specific representation. The sketch below shows one way such gate statistics can be collected.
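One possible way to collect these statistics, sketched under the assumption that the model's CLSR layers expose their gate as a submodule whose name ends in "gate" (as in the earlier CLSR sketch); this is illustrative instrumentation, not our exact tooling:

```python
import torch

@torch.no_grad()
def ls_activation_ratio(model, batch):
    # Average the binary gate decisions over all CLSR gates and tokens for
    # one batch: the quantity plotted in Figure 14. `batch` holds the model
    # inputs (e.g. input_features and labels for a Whisper-style model).
    gate_outputs = []
    hooks = [
        module.register_forward_hook(lambda mod, inp, out: gate_outputs.append(out))
        for name, module in model.named_modules() if name.endswith("gate")
    ]
    model.eval()
    model(**batch)
    for hook in hooks:
        hook.remove()
    return torch.cat([g.flatten() for g in gate_outputs]).mean().item()
```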
Common Voice 13.0 (in-domain for FT only)
Model              avg    bg     ca     cs     fi     gl     hi     hu     id     pl     ro     sk     sl     ta     th     uk
whisper-large-v2   17.0   19.9   16.9   14.5   14.4   19.0   24.6   18.6   8.5    8.1    15.8   31.9   20.6   17.3   9.3    15.6
whisper-small      34.2   44.8   30.1   38.6   30.5   35.7   43.6   45.5   22.5   18.8   33.2   42.0   45.5   30.1   20.3   32.3
+CLSR-FT           22.9   26.1   19.2   25.7   25.1   15.3   18.8   31.6   19.2   18.3   23.4   36.6   28.6   16.7   9.7    29.5
DistilWhisper      22.6   25.9   18.9   26.2   24.8   14.7   18.3   31.0   18.6   18.6   21.5   36.8   27.7   15.9   9.6    30.0

FLEURS (out-of-domain)
Model              avg    bg     ca     cs     fi     gl     hi     hu     id     pl     ro     sk     sl     ta     th     uk
whisper-large-v2   13.7   14.6   5.6    14.4   9.7    16.8   23.8   17.9   7.1    5.9    14.4   11.7   23.1   19.3   12.6   8.3
whisper-small      32.8   39.9   14.6   40.3   26.8   33.5   47.9   42.9   18.6   18.2   34.6   35.8   54.5   36.2   22.7   24.8
+CLSR-FT           29.2   43.8   17.8   35.4   33.7   19.8   22.8   40.1   19.0   21.9   33.4   35.3   50.8   25.4   17.9   21.6
DistilWhisper      29.2   42.8   17.2   35.6   32.0   18.7   21.8   41.1   19.1   21.9   33.1   35.2   50.5   25.7   17.6   25.9

                   FLEURS (out-of-domain)                                 CV-13 (in-domain)
Model              High   Mid-to-high  Low-to-mid  Low    Extremely Low   High   Mid-to-high  Low-to-mid  Low    Extremely Low
whisper-large-v2   7.1    8.3          15.7        18.3   16.8            12.0   15.6         15.1        24.3   19.0
whisper-small      19.6   24.8         35.3        44.5   33.5            25.5   32.3         33.5        44.0   35.7
+CLSR-FT           23.1   21.6         30.4        38.2   19.8            20.4   29.5         21.4        27.5   15.3
DistilWhisper      22.5   25.9         30.6        37.6   18.7            20.2   30.0         20.8        27.2   14.7

Table 8 WER (↓) for the 3k setting with dataset averages (avg) for baselines (top) and our method (bottom) on the in-domain (CV-13, upper portion) and out-of-domain (FLEURS, middle portion) test sets. In the lower portion, the same results are grouped by resourcefulness. Best results for whisper-small in bold.

5.6 Considerations on Resourcefulness

Our observations so far indicate that both versions of our approach, with and without knowledge distillation (KD), consistently outperform all other adaptation methods (FT and LoRA-FT). This improvement holds for both in-domain and out-of-domain scenarios across all languages, with only two exceptions in the 10k setting (Polish and Ukrainian), as indicated in the lower portion of Table 6. The challenges encountered in these two languages can be attributed to their higher resource status, with Polish being a high-resource language and Ukrainian categorized as mid-to-high resource, as detailed in Table 4. To deepen this analysis, we conducted experiments across a broader range of languages, widening the selection to those with a minimum of 3,000 utterances available for training. The outcomes of these experiments are presented in Table 8, where we have also aggregated the results into resourcefulness clusters (in the lower portion) based on the classification provided in Table 4. Examining the results, we observe that more substantial out-of-domain improvements are seen in languages with lower resource availability (the Low-to-mid-, Low-, and Extremely Low-Resource clusters). This aligns with the initial motivation behind our work, which aimed to address the curse of multilinguality: we expect lower-resource languages to be more significantly impacted by this phenomenon during the pre-training of whisper-small, and consequently to benefit more from the integration of language-specific modules in the feature domain. In contrast, for languages with higher resource availability, further enhancements may be necessary, such as adjustments to the attention weights (corresponding to the time domain).
This is because the original model already performs reasonably well for them. Additionally, achieving better out-of-domain performance may require a larger training dataset. This is exemplified by the case of Catalan in Table 7: there, the CLSR modules yielded performance superior to the original whisper-small only when trained with 28,000 utterances, losing to their starting point with 3,000 and 10,000 training utterances.

5.7 Effect of temperature and distillation loss

In this set of experiments, our goal is to examine the impact of the chosen distillation optimization on the results. We start by exploring the effect of temperature, which plays a crucial role in determining the learning behavior of the model. A lower temperature, such as 1, makes the learning focus primarily on replicating the top option from the teacher's logits for each token. Conversely, a higher temperature, such as 3 or 4, encourages the learning to take the other options into account, thereby mitigating the cost of incorrect predictions; however, it may also lead to over-smoothing of the distribution and a reduced ability to effectively rank similar logits (see the sketch below).
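The effect is easy to visualize by flattening the teacher's distribution before the divergence is computed; a minimal sketch, with temperature values mirroring the ones compared in this section:

```python
import torch
import torch.nn.functional as F

def soft_targets(teacher_logits, tau):
    # tau = 1 keeps the teacher's original sharpness; tau = 3 flattens the
    # distribution, exposing secondary hypotheses to the student.
    return F.softmax(teacher_logits / tau, dim=-1)

logits = torch.tensor([4.0, 2.0, 1.0, 0.5])
print(soft_targets(logits, tau=1.0))  # peaked on the top option
print(soft_targets(logits, tau=3.0))  # noticeably flatter
```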
Common Voice 13.0 (in-domain)
              avg    ca     th     ta     hu     cs     pl     gl     uk
JS w/ τ = 1   16.1   13.8   7.2    12.5   24.1   19.9   16.1   11.6   23.2
JS w/ τ = 3   16.3   14.1   7.5    13.1   23.5   21.1   16.2   11.6   23.6

FLEURS (out-of-domain)
              avg    ca     th     ta     hu     cs     pl     gl     uk
JS w/ τ = 1   22.8   15.4   15.1   21.6   37.2   29.8   21.4   16.7   25.1
JS w/ τ = 3   23.4   17.0   15.6   21.5   36.0   31.4   22.4   16.8   26.2

Table 9 WER (↓) for the 10k setting with dataset averages (avg) for the JS loss with temperatures 1 and 3, on the in-domain (CV-13, upper portion) and out-of-domain (FLEURS, lower portion) test sets. Best results in bold.

Tables 9 and 10 present the results of comparing different temperatures (1 or 3) with the Jensen–Shannon loss for the 10k and 28k settings respectively. These results reveal that using a temperature of 1 generally results in better in-domain and out-of-domain performance than a temperature of 3, although for Tamil and Hungarian, temperature 3 showed better out-of-domain performance. These results suggest that whisper-large-v2 serves as an effective teacher, justifying the use of a temperature of 1. Nevertheless, the optimal temperature value may vary depending on the quality of the teacher model for each specific language.

              FLEURS (out-of-domain)        CV-13 (in-domain)
              avg    ca     ta     th       avg    ca     ta     th
JS w/ τ = 1   15.4   13.1   19.2   14.0     9.3    11.3   10.9   5.7
JS w/ τ = 3   16.3   14.8   20.1   14.1     9.7    11.8   11.3   5.9
KL w/ τ = 1   15.6   14.6   18.7   13.3     10.8   14.9   11.3   6.2
KL w/ τ = 3   16.5   15.8   19.8   14.0     9.7    12.2   11.1   5.9

Table 10 WER (↓) for the 28k setting for the JS and KL losses with temperatures 1 and 3, on the in-domain (CV-13) and out-of-domain (FLEURS) test sets. Best results in bold.

Table 10 also compares the use of the Jensen–Shannon (JS) loss with the traditional Kullback–Leibler (KL) loss discussed in Section 2.4, specifically for the 28k setting. Once again, the results favor a temperature of 1 in both cases, with a slight advantage for the JS loss over KL, driven out-of-domain primarily by Catalan and more pronounced in-domain.

These findings indicate the presence of the mode-averaging problem introduced in Section 2.4, although they are not definitive. They raise the question of whether these behaviors change when working with larger or smaller fine-tuning datasets and different levels of language resourcefulness. Unfortunately, due to time constraints, we could not explore these aspects in this study, leaving them as potential directions for future research.

5.8 Multi-domain training

In our final experiment, we investigate the impact of incorporating the train split of the FLEURS dataset into our training data in the previously explored settings. The objective here is to use the validated architecture to produce models that are more useful to the scientific community: in real-world scenarios, the models developed here are likely to be used in domains other than FLEURS or CV-13, so our hypothesis is that training on more than one dataset yields a better model.

Common Voice 13.0
Model              Train data   avg    ca     th     ta     hu     cs     pl     gl     uk
whisper-large-v2                14.9   16.9   9.3    17.3   18.6   14.5   8.1    19.0   15.6
whisper-small                   31.4   30.1   20.3   30.1   45.5   38.6   18.8   35.7   32.3
DistilWhisper      CV10k        16.1   13.8   7.2    12.5   24.1   19.9   16.1   11.6   23.2
+CLSR-FT           CV10k + F    15.5   15.1   6.8    12.4   21.9   18.4   16.3   11.3   22.2
DistilWhisper      CV10k + F    14.6   13.2   6.4    11.6   21.6   15.3   15.8   11.2   21.6

FLEURS
Model              Train data   avg    ca     th     ta     hu     cs     pl     gl     uk
whisper-large-v2                12.6   5.6    12.6   19.3   17.9   14.4   5.9    16.8   8.3
whisper-small                   29.2   14.6   22.7   36.2   42.9   40.3   18.2   33.5   24.8
DistilWhisper      CV10k        22.8   15.4   15.1   21.6   37.2   29.8   21.4   16.7   25.1
+CLSR-FT           CV10k + F    17.2   11.8   10.1   16.0   28.1   23.2   17.1   12.9   18.7
DistilWhisper      CV10k + F    16.7   11.9   9.4    14.6   27.7   22.1   17.7   12.7   17.3

Table 11 WER (↓) for the setting trained with 10k utterances from CV-13 plus the FLEURS train split, with dataset averages (avg), for baselines (top), adaptation approaches (middle), and our method (bottom) on the CV-13 and FLEURS test sets (both in-domain here). Best results for whisper-small in bold.

Table 11 showcases the outcomes of training the model with 10k sentences from CV-13 along with the entire FLEURS train split. In this setting, we once again experiment with CLSR fine-tuning. For reference, the table also repeats results from Section 5.2. The results reaffirm the better performance of the setting with knowledge distillation compared to CLSR-FT. More significantly, they demonstrate a substantial in-domain improvement when FLEURS is incorporated into the training data: training with FLEURS reduces the WER on CV-13 by 1.5. This improvement is likely due to FLEURS' greater sentence complexity and larger average token count per line, which enhance the diversity of the training data.

In Table 12, we repeat the same experiment using settings with 3k and 28k sentences from CV-13, both added to the full FLEURS train split. The results allow us to draw the same conclusion: the addition of out-of-domain training data (FLEURS) results in superior in-domain generalization on CV-13. Nevertheless, it is evident that the size of the training data remains a limiting factor, as CV3k+F (approximately 6k sentences) was insufficient to surpass CV10k alone, and similarly for CV10k+F (around 13k sentences) in comparison to CV28k alone. In this section, we have presented the best models attainable for each setting using these two datasets. These models will be made open-source, and we hope they contribute to the development of speech recognition applications in these languages.
Table 12: WER (↓) for the 3k setting (upper two blocks) and the 28k setting (lower block), with dataset averages (avg), for baselines and our method on the in-domain (CV-13) and out-of-domain (FLEURS) test sets. Best results for whisper-small-based models in bold.

Common Voice 13.0

| Train data | avg | bg | ca | cs | fi | gl | hi | hu | id | pl | ro | sk | sl | ta | th | uk |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| whisper large-v2 | 17.0 | 19.9 | 16.9 | 14.5 | 14.4 | 19.0 | 24.6 | 18.6 | 8.5 | 8.1 | 15.8 | 31.9 | 20.6 | 17.3 | 9.3 | 15.6 |
| whisper-small | 34.2 | 44.8 | 30.1 | 38.6 | 30.5 | 35.7 | 43.6 | 45.5 | 22.5 | 18.8 | 33.2 | 42.0 | 45.5 | 30.1 | 20.3 | 32.3 |
| DistilWhisper CV3k | 22.6 | 25.9 | 18.9 | 26.2 | 24.8 | 14.7 | 18.3 | 31.0 | 18.6 | 18.6 | 21.5 | 36.8 | 27.7 | 15.9 | 9.6 | 30.0 |
| DistilWhisper CV3k + F | **19.3** | **21.8** | **15.0** | **21.7** | **22.4** | **14.2** | **15.8** | **26.4** | **17.0** | **17.2** | **18.0** | **29.3** | **22.9** | **13.4** | **7.8** | **27.0** |

FLEURS

| Train data | avg | bg | ca | cs | fi | gl | hi | hu | id | pl | ro | sk | sl | ta | th | uk |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| whisper large-v2 | 13.7 | 14.6 | 5.6 | 14.4 | 9.7 | 16.8 | 23.8 | 17.9 | 7.1 | 5.9 | 14.4 | 11.7 | 23.1 | 19.3 | 12.6 | 8.3 |
| whisper-small | 32.8 | 39.9 | 14.6 | 40.3 | 26.8 | 33.5 | 47.9 | 42.9 | 18.6 | 18.2 | 34.6 | 35.8 | 54.5 | 36.2 | 22.7 | 24.8 |
| DistilWhisper CV3k | 29.2 | 42.8 | 17.2 | 35.6 | 32.0 | 18.7 | 21.8 | 41.1 | 19.1 | 21.9 | 33.1 | 35.2 | 50.5 | 25.7 | 17.6 | 25.9 |
| DistilWhisper CV3k + F | **18.3** | **21.0** | **12.3** | **24.2** | **19.7** | **13.9** | **13.5** | **29.0** | **13.0** | **16.6** | **21.4** | **19.4** | **27.5** | **15.1** | **10.3** | **18.1** |

28k setting (grouped by test set)

| Training data | FLEURS avg | CV-13 avg | FLEURS ca | FLEURS ta | FLEURS th | CV-13 ca | CV-13 ta | CV-13 th |
|---|---|---|---|---|---|---|---|---|
| whisper large-v2 | 12.5 | 14.5 | 5.6 | 19.3 | 12.6 | 16.9 | 17.3 | 9.3 |
| whisper-small | 24.5 | 26.8 | 14.6 | 36.2 | 22.7 | 30.1 | 30.1 | 20.3 |
| DistilWhisper CV28k | 15.4 | 9.3 | 13.1 | 19.2 | 14.0 | 11.3 | 10.9 | 5.7 |
| DistilWhisper CV28k + F | **11.4** | **9.0** | **10.8** | **14.2** | **9.4** | **10.9** | **10.5** | **5.6** |

6 Conclusion

This internship focused on investigating bias in Whisper, a family of large speech models, specifically examining speaker-related (gender, age, accent) and model-related (model size, resourcefulness, similar languages) biases. Additionally, we explored whether these biases are mitigated or exacerbated by quantization and proposed an alternative compression approach.

Our findings revealed that Whisper exhibits both speaker-related and model-related biases. Speaker-related biases remain unchanged after quantization, while model-related biases are amplified by this compression technique. Low-resource languages are particularly affected, and smaller models experience significant performance degradation. This is concerning because current parameter-efficient approaches typically apply quantization uniformly across models, introducing unintended bias.

To address this challenge, we introduced DistilWhisper, a parameter-efficient distillation approach that enhances the performance of whisper-small by transferring the robustness of whisper-large-v2 into the smaller model. This is achieved by incorporating language-specific gated modules and jointly optimizing an ASR fine-tuning loss and a knowledge-distillation loss. Our results consistently showed performance improvements across various languages and test sets, with a minimal parameter increase at inference time. We believe this approach will democratize the use of Whisper models, making them accessible to a wider audience of researchers and practitioners. This work was written up as a paper submitted to ICASSP 2024 (Ferraz et al., 2024). The code and models produced in this study will be made available soon on Hugging Face and GitHub.
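The joint objective described above combines a cross-entropy ASR fine-tuning loss with a temperature-scaled knowledge-distillation term. A minimal PyTorch sketch follows; the weighting alpha, the temperature T, and the omission of the language-specific gating are simplifying assumptions, not the exact implementation of this work:

```python
# Sketch of a joint ASR fine-tuning + knowledge-distillation objective.
# alpha and T are assumed hyperparameters; language-specific gated
# modules are omitted for brevity.
import torch
import torch.nn.functional as F

def joint_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0, pad_id=-100):
    # Standard cross-entropy fine-tuning loss on the gold transcript tokens.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=pad_id,
    )
    # KL divergence between temperature-softened teacher and student
    # distributions (whisper-large-v2 -> whisper-small transfer).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * ce + alpha * kd
```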
6.1 Future Work

There are several promising directions for future research in this area.

Firstly, it would be beneficial to expand upon the analysis presented in Chapter 3, including an investigation into other quantization methods, such as 4-bit quantization. Exploring these methods across various model families would help determine whether the conclusions drawn here apply more broadly; this could be an important contribution to the community and would help ensure the correct usage of these techniques. One possible starting point is sketched below.

Additionally, further research into the DistilWhisper approach could yield valuable insights. Examining the effects of several hyperparameters, such as the gate budget, the KD loss weight, and the temperature, would provide a deeper understanding of the approach's behavior and could help identify the settings that yield optimal performance.

Furthermore, it would be valuable to assess the impact of the proposed approach on tasks beyond transcription (ASR), particularly speech translation. Investigating whether language-specific paths can enhance translation performance into English, and exploring the potential for new zero-shot capabilities in many-to-many translation scenarios, could open up exciting possibilities for the field.
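As a concrete illustration of the first direction, a hypothetical starting point for 4-bit experiments using Hugging Face transformers with bitsandbytes (tooling not used in this report, and an assumption about current APIs) could look as follows:

```python
# Hypothetical 4-bit loading sketch for the quantization direction above;
# this was not part of the study itself.
import torch
from transformers import BitsAndBytesConfig, WhisperForConditionalGeneration

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small",
    quantization_config=quant_config,
)
# The bias analysis of Chapter 3 could then be repeated on this model.
```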
}
title_10K/test_title_short_2405.00970v1.json ADDED
@@ -0,0 +1,18 @@
{
"url": "http://arxiv.org/abs/2405.00970v1",
"title": "How Can I Get It Right? Using GPT to Rephrase Incorrect Trainee Responses",
"abstract": "One-on-one tutoring is widely acknowledged as an effective instructional\nmethod, conditioned on qualified tutors. However, the high demand for qualified\ntutors remains a challenge, often necessitating the training of novice tutors\n(i.e., trainees) to ensure effective tutoring. Research suggests that providing\ntimely explanatory feedback can facilitate the training process for trainees.\nHowever, it presents challenges due to the time-consuming nature of assessing\ntrainee performance by human experts. Inspired by the recent advancements of\nlarge language models (LLMs), our study employed the GPT-4 model to build an\nexplanatory feedback system. This system identifies trainees' responses in\nbinary form (i.e., correct/incorrect) and automatically provides template-based\nfeedback with responses appropriately rephrased by the GPT-4 model. We\nconducted our study on 410 responses from trainees across three training\nlessons: Giving Effective Praise, Reacting to Errors, and Determining What\nStudents Know. Our findings indicate that: 1) using a few-shot approach, the\nGPT-4 model effectively identifies correct/incorrect trainees' responses from\nthree training lessons with an average F1 score of 0.84 and an AUC score of\n0.85; and 2) using the few-shot approach, the GPT-4 model adeptly rephrases\nincorrect trainees' responses into desired responses, achieving performance\ncomparable to that of human experts.",
"authors": "Jionghao Lin, Zifei Han, Danielle R. Thomas, Ashish Gurung, Shivang Gupta, Vincent Aleven, Kenneth R. Koedinger",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.HC"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "How Can I Get It Right? Using GPT to Rephrase Incorrect Trainee Responses",
"main_content": "Introduction One-on-one tutoring has been recognized as a highly effective strategy for enhancing student learning, with substantial evidence supporting its impact [1, 2]. However, there are significant challenges associated with the scalability of one-on-one tutoring, primarily due to the scarcity of skilled tutors, including certified teachers and paraprofessionals. This shortage has left an estimated 16 million students in the United States in need of individualized support, as highlighted by [1]. In response to this shortage, there has been a strategic shift towards effectively training novice tutors, including community volunteers, retired individuals, and college students, to fulfill tutoring role [2]. The growing demand for skilled tutors has resulted in the development of various professional development programs tailored to the unique needs of nonprofessional and novice tutors [2]. Driven by this need, researchers have explored the use of online scenario-based training to simulate real-life tutoring scenarios for novice tutors [3] and pre-service teachers [4]. Fig. 1 illustrates a scenario on Giving Effective Praise. It demonstrates how tutors can fail to appropriately acknowledge the student\u2019s efforts by providing outcome-based praise as opposed to effort-based praise. For instance, saying \u201cKevin, good job getting the problem correct!\u201d fails to acknowledge the student\u2019s efforts and persistence. As indicated in previous research [5, 6], the availability of realtime explanatory feedback within the scenario-based training lessons can help tutors provide effective praise. Particularly, real-time feedback on learners\u2019 errors, similar to the feedback received while engaging in the deliberate practice of responding to Fig. 1: An example of a trainee (i.e., novice tutor) incorrectly responding to an openended question on how to best reply to a student by giving effective praise. In this particular example, the trainee is praising the student for getting the problem correct, which is achievement or outcomes-based praise and not based on effort. 2 \fsituational judgment tests, is described as a favorable learning condition and can lead to better learning outcomes [7, p. 5]. While the benefits of real-time explanatory feedback in enhancing tutor learning outcomes are well-documented, crafting such feedback presents substantial challenges due to its labor-intensive nature. Traditionally, providing this level of specialized training, replete with personalized explanatory feedback, warrants a substantial investment of effort and time. The process of providing personalized feedback to novice tutors requires considerable time and effort from skilled tutors to ensure feedback effectiveness and relevance. Moreover, beyond the substantial investment of time and effort, the feasibility of scaling such training protocols to meet the high demand across educational settings significantly compounds the challenge. However, recent breakthroughs in large language models (LLMs) offer a promising avenue for streamlining this process. Models such as the Generative Pre-trained Transformer (GPT) could potentially automate the generation of personalized, real-time feedback for tutors [6, 8]. This automation not only has the potential to alleviate the resource burden but also to enhance the specificity and precision of the feedback by accurately identifying the personalized needs of the tutors [6]. 
Currently, the quality of automated explanatory feedback is lacking, with many systems failing to provide learners with accurate feedback on their constructed responses [5, 6]. We argue that the quality of feedback for tutor training can be further improved. Inspired by the feedback research [9\u201311], where learners interpret performance-related information to enhance their understanding, we postulate that presenting desired tutoring responses within feedback to novice tutors can enhance the effectiveness of the training. However, rephrasing incorrect tutor responses into the correct or desired form often necessitates a substantial investment of time and effort from experienced tutors\u2014hence introducing scalability constraints associated with tutor training. Thus, we aim to explore approaches to improve our ability and accuracy in providing tutors with explanatory feedback while also mitigating the time and effort requirements of human graders by automating the process of generating explanatory feedback and correction to their responses. The automation requires the development of classification systems that can effectively analyze tutor responses or, in other words, classification systems that determine the correctness of tutor responses to scenario-specific requirements of the learners. However, there is useful learner information within appropriate classified incorrect responses. These incorrect learner-sourced responses can be used to provide tutors corrective, explanatory feedback by taking an incorrect response and rephrasing or modifying it to make it a desired, or correct, response. Research supports when learners are given specific feedback related to their responses, such as taking incorrect tutor responses and personalizing them by making them correct, they gain a better understanding of their learning [12, 13]. We aim to explore how GPT models can serve as supplementary tools to deliver synchronous feedback to tutors on their responses of how to best respond to specific training scenarios (e.g., praising a student for effort) leveraging useful tutor incorrect responses. We propose two Research Questions: RQ1: Can a large language model accurately identify trainees\u2019 incorrect responses where trainees failed to effectively guide students in specific training scenarios? 3 \fRQ2: Can GPT-4 be harnessed to enhance the effectiveness of trainees\u2019 responses in specific training scenarios? We initially developed a binary classifier to determine tutor\u2019s correct and incorrect responses from three training lessons: Giving Effective Praise, Reacting to Errors, and Determining What Students Know. We employed zero-shot and few-shot learning approaches to classify the trainees\u2019 responses. Our result demonstrated that the fiveshot learning approach achieved acceptable performance in identifying the incorrect responses. Building upon the results of RQ1, we selected the incorrect responses identified by our optimal few-shot learning classifier, which is further used for the RQ2. We explored the idea of rephrasing incorrect trainees\u2019 responses to determine if we can prompt GPT-4 to effectively make them correct. An example of an incorrect response from the lesson Giving Effective Praise is shown in Fig. 1), e.g., \u201cKevin, good job getting the problem correct!\u201d. Through extensive experiments, we obtained an effective prompt to secure the rephrased responses presented in an accurate form with minimal changes of the words from the original incorrect responses. 
Building upon the result from RQ1 and RQ2, we build a feedback system to provide explanatory feedback to the incorrect trainee\u2019s response shown in Fig. 2. Fig. 2: Explanatory feedback for novice tutor responses. 2 Related Work 2.1 Significance of Feedback on Learning Feedback plays a crucial role in improving the students\u2019 learning outcomes and performance [9, 10, 14]. In the field of feedback research, theoretical models have been developed to explain the impact of feedback on learning and to identify the core principles that underpin effective feedback design. Hattie and Timperley [10] defined feedback as the information about the correctness of a learner\u2019s actions or decisions, along with explanations about why those actions or decisions are right or wrong, underlines the significance of feedback. As emphasized in their work [10], the influence of feedback on learning varies based on the type and timing of its delivery. Effective feedback should assist learners in understanding the rationale behind the feedback, which is crucial for deeper learning [9]. Moreover, including the correct answer within the feedback substantially enhances its efficacy by offering learners the information needed to correct their errors [11]. This is especially relevant when learners 4 \fanswer open-ended questions, as simply knowing that their response is incorrect may not suffice to improve their understanding [11]. By presenting the correct answer (or correct responses to open-ended question) in the feedback, learners can compare their responses with the correct responses, identify areas for improvement, and gain guidance on how to approach similar questions in the future [12, 13]. To help learners identify their misconception in the open-ended question, we posit that it is necessary to include the correct responses in the feedback. However, providing timely explanatory feedback faces challenges since crafting effective explanatory feedback is often timeconsuming and labor-intensive nature [6, 8, 14]. To address this issue, it is necessary to develop automated feedback generation system. 2.2 Feedback Generation The development of automated feedback has received significant attention from educational researchers [5, 6, 8, 15, 16]. For example, Ontask [15] is a rule-based feedback provision system designed to assist instructors in delivering personalized feedback based on specific conditions of learners (e.g., the duration spent on the learning system). Additionally, Demszky et al. [16] developed a feedback system that automatically delivers explanatory feedback to instructors via email within two to four days after their tutoring sessions. Their study results [16] indicate that timely explanatory feedback enhanced learners\u2019 satisfaction. Lin et al. [5] used sequence labeling techniques to provide automated explanatory feedback, which demonstrated the potential of the large language models on identifying the effective components of feedback. Despite demonstrating the effectiveness of automated feedback systems, the provision of feedback with correct responses to open-ended question is still under-explored, which are needed to advance feedback systems 2.3 Using Large Language Models for Feedback Generation Inspired by recent research on using large language models for feedback generation [5, 6, 17\u201320], we posit that GPT-based large language models hold potential for advancing the development of automated feedback. For example, Dai et al. 
[8] investigated the capability of GPT-3.5 model (ChatGPT) to generate feedback for students\u2019 writing assignment and they [8] found that GPT-3.5 could produce feedback that was more readable than that of human instructors. Subsequently, Dai et al. [20] found that GPT-4 outperformed both GPT-3.5 and human instructors in providing effective feedback based on the feedback attributes proposed by [10]. Then, Hirunyasiri et al. [6] leveraged the GPT-4 model to provide timely feedback for human tutors\u2019 training. Their results [6] indicated that GPT-4 outperformed human educational experts in identifying a specific tutoring practice, giving effective praise. While these studies have demonstrated the feasibility of GPT-based models in feedback generation, none have ventured into generating explanatory feedback with correct responses to open-ended questions. Given that GPT-4 has shown remarkable performance on various educational tasks (e.g., generating high-quality answer responses for middle school math 5 \fquestions [17] and providing feedback for multiple-choice questions at the middleschool math level [18]), our study also leveraged the GPT-4 model to further explore its capabilities in automatically generating explanatory feedback. 3 Method 3.1 Data We developed an online learning platform1 to facilitate training for the novice tutors in the form of brief scenario-based lessons. Within the scope of this study, we refer to the novice tutors participating in the training activities as trainees. Aligning with previously demonstrated competencies of effective tutoring [21], each lesson presents scenario-based questions to facilitate an authentic and contextually relevant tutor learning opportunity. These scenarios challenged the tutors to apply their knowledge and skills by simulating real-world tutoring situations (see Fig. 1). We examined the trainees\u2019 performance and understanding across three lessons: Giving Effective Praise, Reacting to Errors, and Determining What Students Know. These lessons are based on the skillsets that were identified to be crucial for tutors in previous work [3, 21]. Each lesson consisted of two scenarios. Across all trainees, we collected 410 responses: 140 responses from the 70 trainees who took the Giving Effective Praise lesson, 118 responses from Reacting to Errors (59 trainees), and 152 responses from Determining What Students Know (76 trainees). Before analysis, we removed 10, 4, and 13 responses respectively from each lesson because they were either empty or contained incoherent or meaningless content (e.g., \u201cad;fajkl\u201d, \u201ctest test test\u201d or \u201cI have no idea\u201d), resulting in a total of 383 analyzed responses. We also collected demographic information about the trainees, including their experience as tutors, as presented in Table 1. For each lesson, tutors provided self-reported demographic details, including information regarding their race, gender, age, and tutoring experience. 3.2 Annotation for Trainee\u2019s Responses In the lesson Giving Effective Praise, trainees practice their skills in engaging students by offering effort-based praise. The praise provided by trainees should effectively acknowledge students\u2019 efforts and aim to enhance their motivation and desire to keep learning. A tutoring scenario was depicted where a student was struggling to persevere on an assignment (See the scenario in Table 2). 
The tutor trainee\u2019s responses were expected to show the components of effective praise as suggested by research recommendation [3]. Effective praise should be: 1) timely, positive, and sincere, 2) highlighting what student did well during the tutoring, 3) genuine and avoiding generic comments like \u201cgreat job\u201d, and 4) focus on the learning process rather than on the student or the outcome. In short, correct praise responses should be supportive, positive, encouraging, and acknowledging the student\u2019s effort during the learning process. In Table 2, we demonstrate some praise responses with an explanation of the rationale for labeling responses as either Correct or Incorrect. 1https://www.tutors.plus/solution/training 6 \fTable 1: Demographic information of participants Demographic Categories Giving Effective Praise (n = 70) Reacting to Errors (n = 59) Determining What Students Know (n = 76) Gender Female 27 23 32 Male 34 30 34 Non-binary 1 1 1 Unknown 8 5 9 Age 18-24 8 8 11 25-34 11 6 9 35-50 12 9 9 51-64 21 22 25 65+ 12 12 16 Unknown 6 2 6 Ethnicity Asian 12 10 18 White 34 30 34 Others 8 8 9 Unknown 16 11 15 Tutoring Experience* Level 1 4 3 4 Level 2 14 11 14 Level 3 22 18 24 Level 4 20 21 24 Level 5 5 4 5 Unknown 5 6 5 Tutoring Experience*: The tutors were asked to rate their prior tutoring experience on a five-point Likert scale, where Level 1 indicated a tutor with limited experience, and Level 5 signified an expert tutor. In the lesson Reacting to Errors, trainees practice their skills in responding to student errors. Trainees employ various pedagogical strategies aimed towards addressing gaps in the learners\u2019 knowledge through constructive feedback. Instead of overt criticism, the emphasis is on fostering a positive approach to errors. This approach seeks to shift students\u2019 perception towards errors by underscoring their importance in the learning process. A tutoring scenario was depicted where a student made a mistake in solving a problem (See the scenario in Table 3). The tutor trainee\u2019s responses to students\u2019 errors should help students develop their critical thinking skills and encourage students to correct their mistakes. According to [3], to effectively respond to students\u2019 errors, one should: 1) indirectly inform students about their mistake in the problemsolving process, 2) guide the student towards self-correction, and 3) show praise for the student\u2019s effort or attempt. Responses that directly highlight the student\u2019s error or inform the student what to do are not desired in the tutoring practice [3]. In Table 3, we demonstrated some responses of reacting to errors with the explanation of the rationale for labeling responses as either Correct or Incorrect. 7 \fTable 2: Examples of correct and incorrect trainee responses for the lesson Giving Effective Praise with annotation rationale. Scenario You\u2019re tutoring a student named Kevin. He is struggling to understand a math problem. When he doesn\u2019t get the answer correct the first time, he wants to quit. After trying several different approaches, Kevin gets the problem correct. As Kevin\u2019s tutor, you want him to continue working through solving more problems on his math assignment. Trainee Response Interpretation You are making steady progress and it is good to see the results of your efforts Correct Response The response expresses the sense of positive and sincere. 
The praise focuses on the student\u2019s perseverance and acknowledges the students for working hard and the process of learning. You did a great job, well done! Incorrect Response This response is sincere and positive but the praise does not focus on student learning efforts or learning actions (e.g., demonstrated the problem-solving procedural). Table 3: Examples of both correct and incorrect trainee responses for the lesson Reacting to Errors with annotation rationale. Scenario Imagine you are a mentor to a student, Aaron, who has a long history of struggling with math. Aaron is not particularly motivated to learn math. He just finished a math problem adding a 3-digit and 2-digit number and has made a common mistake (shown below). Trainee Response: Interpretation: Lucy, very well, but I have to point that we have another way of doing the math problem, we can repeat the math together, what do you think? Correct Response This response avoids using direct words about the student\u2019s mistake and implicitly clears up the misconception. Instead, the response encourages the student to make another attempt instead of explicit answers This is very close! I see one issue, can you walk me through the how you worked through the problem? Incorrect Response This response asks the student to walk through the steps but it still uses the word \u201cissue\u201d, which may be frustrating. In the lesson Determining What Students Know, this lesson is designed to enhance the tutor trainees\u2019 skills in discerning the current knowledge level of the students by distinguishing what the students have comprehended and what still needs to be learned. A tutoring scenario was depicted where a student was given a math problem they did not know how to solve (see the scenario in Table 4). The tutor trainee\u2019s responses were used to gauge the student\u2019s prior knowledge at the start of the session and provide instruction based on what students already know as a launching point for 8 \fthe rest of the session. According to [3], effective response of determining what students know should be: 1) prompting students to demonstrate what they have already done or explain what they know, 2) presenting in an open-ended form and avoiding asking student\u2019s understanding of specific knowledge concept, 3) guiding the tutoring conversation to locate the student\u2019s misunderstanding, 4) providing instructional support to help students find the correct answer. To summarize, correct response of determining what students know should assess a student\u2019s prior knowledge, guide the conversation to catch student\u2019s misconceptions or errors and support productive struggle. In Table 4, we demonstrated some responses of determining what students know with the explanation of the rationale for labeling responses as either Correct or Incorrect. Table 4: Examples of both correct and incorrect trainee responses for the lesson Determining What Students Know with annotation rationale. Scenario You are working with a student named Cindy on her math homework. She is having trouble solving a geometry problem dealing with triangles. She shows you the following diagram displaying a triangle and states that she has to determine the value of angle x. Cindy says, \u201dI don\u2019t know what to do.\u201d Trainee Response: Interpretation: What have you tried so far? 
Correct Response This response asks an open-ended question to understand what students have tried so far and gauge the student\u2019s knowledge Do you know what PEMDAS means? Incorrect Response Tutors\u2019 responses can contain questions to students but they must be open-ended and non-specific to assess student\u2019s knowledge of an individual knowledge component. 3.3 Identifying desired trainee responses One of the motivations for this study is the creation of a classifier capable of discerning desired attributes in a tutor\u2019s responses to scenario-based prompts. The goal is to determine whether the tutors can adapt to the specific scenarios and integrate scenario-specific instructional practices when supporting the learners. For instance, should a trainee fail to acknowledge the learner\u2019s effort when working on an activity requiring effective praise, the classifier would categorize the tutor\u2019s feedback as Incorrect (less desirable). Identifying these scenarios presents an opportunity to personalize training activities for trainees, enhancing their ability to learn from and rectify specific instructional methodologies. In addressing RQ1, we first employed two expert raters, both specialists in educational instruction and feedback, to annotate trainees\u2019 responses as either Correct (desirable) or Incorrect (less-desirable). Using Cohen\u2019s \u03ba, we determined inter-rater 9 \freliability, obtaining scores of 0.85, 0.81, and 0.64 for Giving Effective Praise, Reacting to Errors, and Determining What Students Know, respectively. These scores of inter-rater reliability are considered sufficient [22]. Disagreements between the raters prompted input from a third expert to ensure consistency in annotations. Then, recognizing the typical need for a large amount of data when training classifiers from scratch for natural language processing tasks, we turned to recent advances in machine learning. As documented in [23, 24], zero-shot and few-shot learning methods can effectively discern patterns in datasets, even when they are limited or absent. These methods leverage the inherent capability of pre-trained models, which is crucial for ensuring classification performance and generalizability. The principle mirrors human cognition, as explored in [23, 24], where individuals apply their generalized knowledge to identify unfamiliar objects or concepts. Further details of these methods are described below: \u2022 Zero-shot Learning: In zero-shot learning, the classifier is trained to perform tasks for which it has seen no labeled examples at all. This is achieved by transferring knowledge from related tasks and using semantic relationships between classes. The model\u2019s prior knowledge, often in the form of embeddings or representations that capture semantic meanings, is crucial for it to make predictions in unseen classes [24]. \u2022 Few-shot Learning: In few-shot learning, the classifier is trained to perform tasks using a limited amount of labeled data. The underlying principle is to leverage the knowledge acquired by the model from previous and related tasks to facilitate effective generalization to a new task, even when provided with minimal data. The prior knowledge enables the classifier to adapt to new tasks with only a few examples [23]. 
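For reference, the inter-rater reliability quoted above (Cohen's kappa of 0.85, 0.81, and 0.64 across the three lessons) can be computed with scikit-learn; the rater labels below are made up for illustration:

```python
# Illustrative Cohen's kappa computation; labels are invented, not study data.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["correct", "incorrect", "correct", "correct", "incorrect"]
rater_2 = ["correct", "incorrect", "incorrect", "correct", "incorrect"]
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")
```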
Additionally, given that our classifier is designed to categorize trainees\u2019 responses into two categories (i.e., correct or incorrect), the few-shot learning with two classification categories is commonly termed \u201ctwo-way few-shot learning\u201d. For instance, a two-way two-shot contains two correct responses and two incorrect responses. Upon a thorough review of existing literature [25], we found that most studies implemented few-shot learning with the number of shots less than or equal to five. In line with this consensus, our study also sets five shots as the maximum threshold for the number of shots. As described, both zero-shot and few-shot learning methods rely on a robust pretrained model. These pre-trained models, having been exposed to extensive training corpora, inherently possess base knowledge that allows them to discern generalized patterns even from minimal datasets. Inspired by the effectiveness of GPT-4 models in existing educational tasks [6, 17, 18], we adopted the state-of-the-art GPT-4 model [26] as the foundational model for conducting binary classification of trainees\u2019 responses. A GPT prompt is a sentence or phrase provided to the GPT model to produce a response [8, 27]. Our prompt strategies are detailed in Table 5. The prompt strategies are in the form of Chat-Completion, which refers to the generated response produced by the GPT-4 model during a conversation. When a user provides a prompt, GPT-4 processes the prompt and generates a relevant response, known as the \u201cCompletion\u201d. The Chat-Completion is set up to generate the label for each trainee\u2019s response. For Zero-shot implementation, as presented in Table 5, 10 \fTable 5: Prompt strategies for a binary classifier. We used Chat-Completion to process the trainees\u2019 responses in batch Zero Shot Few-shot Role Content Role Content System \u201cYou are a binary classifier.\u201d System \u201cYou are a binary classifier.\u201d User {Lesson Principle} + \u201cAccording to the lesson principle, please determine if the following response contains\u201d + {Lesson Name} + \u201cplease respond YES; if not, please respond NO.\u201d User {Lesson Principle} + \u201cAccording to the lesson principle, please determine if the following response contains\u201d + {Lesson Name} + \u201cplease respond YES; if not, please respond NO.\u201d Assistant \u201cSure, please enter the response from tutor\u201d Assistant \u201cPlease provide some examples of correct and incorrect response\u201d User {Textual response} User {Correct example} + {Incorrect example} Assistant \u201cSure, please enter the response from tutor\u201d User {Textual response} the Chat-Completion has three different chat roles: System, User, and Assistant. The role of System represents the assigned default character for the machine. In our case, GPT-4 facilitates the role of a \u201cbinary classifier\u201d. The role of User represents human input. The role of Assistant denotes a machine-generated response, which is to frame the prompting process as a conversation. Compared to the Zero-shot learning approach, the few-shot learning approach provides a limited number of correct and incorrect examples for the GPT-4 model to understand the classification patterns (Table 5). Subsequently, our proposed prompt requires specific inputs from the User. The input of {Lesson Principle} is based on the principles of a correct response from the lesson materials created by Thomas et al. [3]. The input of {Textual response} is the trainee\u2019s response. 
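A sketch of how the Table 5 prompts might be issued through the OpenAI chat API follows; the message wording paraphrases Table 5, and the lesson-principle text is a placeholder rather than the exact prompt used in the study:

```python
# Sketch of the zero-shot Chat-Completion classification from Table 5.
# Prompt strings are paraphrased; lesson texts are placeholders.
from openai import OpenAI

client = OpenAI()

def classify_response(lesson_principle: str, lesson_name: str, response: str) -> str:
    messages = [
        {"role": "system", "content": "You are a binary classifier."},
        {"role": "user", "content": (
            f"{lesson_principle} According to the lesson principle, please "
            f"determine if the following response contains {lesson_name}. "
            "If so, please respond YES; if not, please respond NO."
        )},
        {"role": "assistant", "content": "Sure, please enter the response from tutor"},
        {"role": "user", "content": response},
    ]
    out = client.chat.completions.create(model="gpt-4", messages=messages)
    return out.choices[0].message.content.strip()  # "YES" or "NO"
```

For the few-shot variant, the example turns from Table 5 ({Correct example} + {Incorrect example}) would be appended before the final user message, and the Table 6 rephrasing prompt can be issued in the same way.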
As there are three distinct lessons, the input of {Lesson Name} in the instruction prompt is substituted with the appropriate lesson name. 3.4 Enhancing the trainee responses by GPT models To explore RQ2, we used the GPT-4 model to rephrase incorrect responses into correct forms effectively. We designed the prompt strategies presented in Table 6. For the Zero-shot learning, we assigned a role with GPT-4 to rephrase the trainee\u2019s response (i.e., \u201cYou are rephrasing tutor\u2019s response\u201d). For the role of User, similar to RQ1, we used {Lesson Principle} to enable GPT-4 to understand the correct form of tutor responses. To effectively rephrase the trainees\u2019 responses, we believe that providing context about the scenario in which the responses were given might lead GPT-4 to generate more accurate rephrased outputs. Thus, in the prompt, we also added the input of {Lesson Scenario}, which was the actual text of the scenario-based question, as demonstrated in Table 2, 3, & 4. In the context of the few-shot learning approach, we supplied two examples of rephrased incorrect responses in their correct forms provided in the training lessons to help the GPT-4 model infer the rephrasing rules (see Table 5). The GPT-4 Chat-Completion is presented in Table 6. 11 \fTable 6: Prompt strategies for binary classifier. We used Chat-Completion to process the trainees\u2019 responses in batch Zero Shot Few-shot Role Content Role Content System \u201cYou are rephrasing tutor\u2019s response.\u201d System \u201cYou are rephrasing tutor\u2019s response.\u201d User {Lesson Principle}+\u201cThe provided response attempts to answer to the following scenario.\u201d+{Lesson Scenario}+\u201cPlease rephrase the tutor\u2019s response according to the principle mentioned above to create a better example of\u201d+{Lesson Name}+\u201cRetain words and ideas from the tutor\u2019s response. Limit changes to the original tutor\u2019s response to a minimum. Maintain the same length as the original tutor\u2019s response. Please rephrase as less words as possible from the original tutor\u2019s response. Highest priority is to make sure to follow the principle of the correct response when rephrasing.\u201d User {Lesson Principle}+\u201cThe provided response attempts to answer to the following scenario.\u201d+{Lesson Scenario}+\u201cPlease rephrase the tutor\u2019s response according to the principle mentioned above to create a better example of\u201d+{Lesson Name} + \u201cRetain words and ideas from the tutor\u2019s response. Limit changes to the original tutor\u2019s response to a minimum. Maintain the same length as the original tutor\u2019s response. Please rephrase as less words as possible from the original tutor\u2019s response. Highest priority is to make sure to follow the principle of the correct response when rephrasing.\u201d Assistant \u201cSure, please enter the response\u201d Assistant \u201cPlease provide some examples of how you will rephrase the given incorrect response to make it correct\u201d User {Textual response} User {Rephrased examples} Assistant \u201cSure, please enter the response\u201d User {Textual response} 3.5 Evaluation approach Evaluation for RQ1. We employ both the F1 score and the Area under the ROC curve (AUC) for evaluating the performance of our classification model. Furthermore, given our specific focus on identifying incorrect feedback, we incorporate two additional metrics: the Negative Predictive Value (NPV) and the True Negative Rate (TNR). 
These measures are crucial for determining the model's efficacy in minimizing false negatives. Minimizing such errors is critical, as a false identification can result in incorrect feedback, which can undermine the training's effectiveness, potentially eroding trust and changing how trainees engage with the training activities. We provide the formulas for NPV and TNR in Equations 1 and 2, respectively. Both NPV and TNR range from 0 to 1, with higher values signifying a model's enhanced capability to correctly identify true negative instances.

$$\text{NPV} = \frac{\text{True Negatives}}{\text{True Negatives} + \text{False Negatives}} \quad (1)$$

$$\text{TNR} = \frac{\text{True Negatives}}{\text{True Negatives} + \text{False Positives}} \quad (2)$$

Evaluation for RQ2. After rephrasing the trainees' responses, we evaluate the accuracy and quality of the rephrased responses. To do this, we first applied the most effective binary classifier developed in RQ1 to the rephrased responses. We then compared the number of correct responses among the rephrased responses with the number of correct responses among the original responses. Specifically, we wanted to investigate the extent to which the GPT-4 model can improve the accuracy of the trainees' responses: when the rephrased responses contain more correct labels than the original responses, this indicates that the GPT-4 model can accurately rephrase the trainees' responses and that the classifier developed in RQ1 is generally consistent with the rephrased results. Additionally, we aim to compare the quality of the responses rephrased by GPT-4 with those rephrased by human experts. To do so, we first hired three experienced human tutors who had completed the training for the three lessons. These three experts were asked to rephrase the incorrect responses based on the research recommendations provided in the lessons. Afterward, we invited a fourth human educational expert to assess the quality of the rephrased responses along two dimensions: Accuracy and Responsiveness. Accuracy measures the correctness of the rephrased responses. Responsiveness evaluates how the rephrased response selectively changes some words to improve the trainee's original response while largely preserving the original words and ideas. We designed the question for evaluating Accuracy as "The rephrased response is a better example of {Lesson Name} than the trainee's response" and the question for evaluating Responsiveness as "The rephrased response changes some words to improve the trainee's response, but otherwise keeps words and ideas from the trainee's response". The educational expert answered the questions using a five-point Likert scale (i.e., Strongly Disagree to Strongly Agree).
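The four metrics of this section can be computed directly from binary predictions; a sketch with scikit-learn, assuming labels coded as 1 = correct and 0 = incorrect (the study's own AUC computation may have differed):

```python
# Sketch of the evaluation metrics from Sec. 3.5; label coding is assumed.
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def report_metrics(y_true, y_pred):
    # For labels [0, 1], ravel() yields (tn, fp, fn, tp).
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "F1": f1_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_pred),
        "NPV": tn / (tn + fn),  # Equation (1)
        "TNR": tn / (tn + fp),  # Equation (2)
    }
```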
4 Results

4.1 Results for RQ1: Binary Classifier for Correct Responses

For RQ1, we explored the zero-shot and few-shot approaches to train a binary classifier using the GPT-4 model, as detailed in Sec. 3.3. The classifier's performance is presented in Table 7. For the lesson Giving Effective Praise, the zero-shot approach resulted in an F1 score of 0.761 and an AUC of 0.743. When leveraging a two-way few-shot learning approach, we observed an improvement in performance. The F1 scores remained consistently high, ranging from 0.856 to 0.872, with the 3-shot model achieving the peak performance. In parallel, the AUC scores were also robust, varying from 0.851 to 0.865, with the 5-shot model outperforming the others. Despite these improvements, the NPV and TNR metrics showed greater variability: the NPV spanned from 0.80 to 0.88, with the 3-shot model again taking the lead, whereas the TNR fluctuated between 0.744 and 0.851, with the 5-shot configuration achieving the strongest performance. For the lesson on Reacting to Errors, the zero-shot learning approach resulted in an F1 score of 0.767 and an AUC of 0.768.

Table 7: Classification performance of the responses from three lessons. AUC = Area under the ROC Curve; NPV = Negative Predictive Value; TNR = True Negative Rate.

| Lesson | Metric | Zero-shot | 1-shot | 2-shot | 3-shot | 4-shot | 5-shot |
|---|---|---|---|---|---|---|---|
| Giving Effective Praise | F1 | 0.761 | 0.870 | 0.845 | 0.872 | 0.856 | 0.860 |
| | AUC | 0.743 | 0.858 | 0.836 | 0.863 | 0.851 | 0.865 |
| | NPV | 0.666 | 0.841 | 0.853 | 0.881 | 0.841 | 0.800 |
| | TNR | 0.680 | 0.787 | 0.744 | 0.787 | 0.787 | 0.851 |
| Reacting to Errors | F1 | 0.767 | 0.779 | 0.821 | 0.840 | 0.823 | 0.867 |
| | AUC | 0.768 | 0.778 | 0.819 | 0.838 | 0.822 | 0.866 |
| | NPV | 0.911 | 0.892 | 0.866 | 0.857 | 0.823 | 0.880 |
| | TNR | 0.585 | 0.622 | 0.736 | 0.792 | 0.792 | 0.830 |
| Determining What Students Know | F1 | 0.660 | 0.712 | 0.718 | 0.747 | 0.798 | 0.805 |
| | AUC | 0.668 | 0.712 | 0.719 | 0.748 | 0.799 | 0.806 |
| | NPV | 0.630 | 0.714 | 0.733 | 0.733 | 0.818 | 0.821 |
| | TNR | 0.828 | 0.714 | 0.786 | 0.785 | 0.771 | 0.786 |

It is worth noting that the zero-shot learning approach had an impressive NPV score of 0.911, the highest NPV score for the Reacting to Errors activity, indicating the model's robustness in identifying true negative outcomes. Among the two-way few-shot learning approaches, the 5-shot approach presented the highest F1, AUC, and TNR scores at 0.867, 0.866, and 0.830, respectively. Lastly, for the lesson on Determining What Students Know, the zero-shot learning approach resulted in an F1 score of 0.660 and an AUC of 0.668, the lowest across the three lessons. Interestingly, the zero-shot model had a higher TNR score of 0.828, indicating that the model was adept at identifying true negative cases for this lesson. Performance across the F1, AUC, and NPV metrics generally improved with the adoption of the two-way few-shot learning method, with the 5-shot variant demonstrating the largest gains, reflected by F1, AUC, and NPV scores of 0.805, 0.806, and 0.821, respectively.

4.2 Results for RQ2: Using GPT-4 to Rephrase Incorrect Responses

For RQ2, we examine the application of GPT-4 in transforming trainees' incorrect responses into a preferred format that exemplifies effective feedback, thereby demonstrating the correct manner of meeting learner needs through feedback revision. To accomplish this, we utilized the most effective binary classifier identified in RQ1, the 5-shot classifier, to pinpoint incorrect responses within the three lessons. The identified responses were then compared with the responses identified by the expert human raters as described in Sec. 3.3. The intersection of the responses identified as incorrect by both the classifier and the human raters resulted in 36 responses for Giving Effective Praise, 42 responses for Reacting to Errors, and 53 responses for Determining What Students Know.
The overlap between the five-shot classifier and human raters was 14 \f85%, 83%, and 78.6% for Giving Effective Praise, Reacting to Errors, and Determining What Students Know, respectively, as indicated by the TNR scores for the 5-shot approach shown in Table 7. As each training activity across the three lessons contained two paired examples to illustrate effective feedback in each scenario, we utilized the two paired examples per lesson to take a two-shot learning approach in exploring the effectiveness of GPT-4 in rephrasing student feedback. In this section, we report on the accuracy and responsiveness of the rephrased trainee responses by comparing the responses generated using zero-shot and two-shot GPT-4 models with responses rephrased by humans across the three lessons. The responses were assessed using a five-point Likert scale, i.e., Strongly Disagree (represented by -2), Disagree (represented by -1), Neutral (represented by 0), Agree (represented by 1), and Strongly Agree (represented by 2), as described in Sec. 3.5. Given the ordinal nature of Likert scale data, we utilize the Mann-Whitney U test, a non-parametric statistical method, to ascertain if the accuracy and responsiveness of the rephrased responses are statistically different. First, we examined the accuracy and responsiveness of the rephrased trainee responses for the lesson Giving Effective Praise, as presented in Fig. 3. We observed a higher median accuracy score of 1 for responses rephrased by GPT-4 (both Zeroshot and Few-shot) whereas the human rephrased responses received a median score of -1. As shown in Table 8, the accuracy scores of the rephrased responses generated using both GPT models (zero-shot and few-shot) were significantly higher than the responses rephrased by the humans (p < 0.001) indicating that the GPT-4 models Fig. 3: Distribution of accuracy and responsiveness scores from the lesson Giving Effective Praise Table 8: Statistics for rephrased responses from the lesson Giving Effective Praise. Metrics Zero-shot Few-shot Human Mean Accuracy 0.61 0.92 -0.36 Mean Responsiveness 0.22 0.44 0.44 words/response (mean) 17.28 21.72 12.28 words/response (SD) 6.28 18.94 5.88 15 \fwere more effective at rephrasing the responses to the desired format in comparison to humans. While we did not observe a significant difference in the accuracy of the two GPT-based models, we observed a higher variance in the score of the zero-shot approach in comparison to the accuracy scores for the two-shot approach. When analyzing the responsiveness of the rephrased responses, we did not observe a significant difference between the responsiveness score of the GPT-4 rephrased responses and human rephrased responses; however, the human rephrased responses had a higher variance in comparison to the responsiveness scores of GPT-4 rephrased responses. The result demonstrated that the few-shot learning approach performed significantly better than the human in terms of the accuracy of the rephrased responses, while there was no significant difference in the responsiveness of the rephrased responses between the rephrased responses from the humans and the GPT-4 models. It indicated the effectiveness of few-shot learning on rephrasing the incorrect trainees\u2019 responses on the lesson of Giving Effective Praise. Similarly, we evaluated the rephrased responses provided by both GPT-4 models and human for the Reacting to Errors lesson, presented in Fig. 4. 
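The Mann-Whitney U comparisons reported in this section can be run with SciPy; the Likert scores below are illustrative values on the -2 to 2 coding described in Sec. 3.5, not the study's data:

```python
# Sketch of the Mann-Whitney U test on Likert ratings; scores are invented.
from scipy.stats import mannwhitneyu

gpt_accuracy = [1, 2, 1, 0, 2, 1, 1, 2]
human_accuracy = [-1, 0, 1, -2, 0, -1, 1, 0]
stat, p_value = mannwhitneyu(gpt_accuracy, human_accuracy, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```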
The GPT-4-generated responses achieved a median accuracy score of 1, outperforming the human-revised responses, which held a median score of 0. Upon examining the rating further, as presented in Table 9, the accuracy of responses rephrased using the few-shot approach was significantly higher than those rephrased by humans (p < 0.01). Even the zero-shot rephrased responses were more accurate than human alterations (p < 0.05). As for the responsiveness, most of the scores from the GPT-revised and human-revised responses were clustered between 0 and 1, with no significant difference in responsiveness between them. Additionally, the table also indicated that the average word count per response remained consistent between the GPT and human revisions, demonstrating that the GPT models, especially the few-shot approach, are adept at effectively rephrasing incorrect responses to Reacting to Errors without extensive modification to the original wording and sentence structure provided by the trainees. Fig. 4: Distribution of accuracy and responsiveness scores from the lesson Reacting to Errors 16 \fTable 9: Statistics for rephrased responses from the lesson Reacting to Errors. Metrics Zero-shot Few-shot Human Mean Accuracy 0.62 0.86 0.07 Mean Responsiveness 0.17 0.17 0.21 words/response (mean) 15.79 15.40 15.26 words/response (SD) 6.93 7.24 5.35 Finally, our evaluation of the rephrased responses from the lesson Determining What Students Know, as illustrated in Fig. 5 and Table 10, revealed no significant difference in the dimensions of accuracy and responsiveness across the three approaches. Notably, unlike the accuracy in the other two chapters, the responsiveness scores from the few-shot method were marginally higher than those rephrased by humans (p = 0.08), indicating comparable performance between the automated few-shot and zero-shot approaches and human expertise. At the same time, no statistical significance was observed across conditions for responsiveness. Interestingly, it was in the Determining What Students Know lesson that the classification model had its weakest performance among the three lessons. Fig. 5: Distribution of accuracy and responsiveness scores from the lesson Determining What Students Know Table 10: Statistics for rephrased responses from the lesson Determining What Students Know. Metrics Zero-shot Few-shot Human Mean Accuracy 0.68 0.70 0.85 Mean Responsiveness 0.28 0.30 0.06 words/response (mean) 22.72 20.83 20.09 words/response (SD) 18.51 16.12 8.10 17 \f5 Discussion Providing explanatory feedback is a fundamental requirement for delivering personalized feedback to learners. Our study explored the use of large language models (GPT-4 model) to automate the facilitation of explanatory feedback to novice tutors, where the main findings can be summarized in two folds: Firstly. GPT-4 models, especially for the few-shot approach, have the potential to accurately identify the correct and incorrect trainees\u2019 responses, which can be used to provide corrective feedback when training novice tutors on the scenario-based tasks. Our results indicate that despite a limited number of samples, the GPT-4 model can accurately identify the incorrect trainees\u2019 responses across three different tutor training lessons (i.e., Giving Effective Praise, Reacting to Errors, and Determining What Students Know). 
By comparing the classification performance with zero-shot learning, the few-shot learning approach, especially with increasing shots, generally tends to improve the model\u2019s classification performance. This improvement suggests that more examples might increase GPT\u2019s capability to recognize the many different ways to express a target concept like effort-based praise (e.g., \u201cGood effort on solving the problem\u201d), and distinguish it from a related concept, like outcome-based praise (e.g., \u201cGood job\u201d). The implications of this finding is profound, especially when considered alongside existing research on neural network learning in humans. Previous research [28] has illustrated that both the quantity and diversity of examples play a significant role in the learning process, with optimal outcomes achieved through exposure to a range of examples that are internally diverse yet distinct from other categories. Applying this principle to the context of LLM training suggests a strategy where examples within a category (e.g., praising effort) are maximally diverse, whereas examples across categories are closely aligned (e.g., comparing praise for effort with praise for outcomes). Pursuing this line of inquiry in future research could yield valuable insights into the mechanisms underpinning effective learning in both human and artificial neural networks. By systematically exploring the interplay between example diversity and learning efficacy, we can refine our understanding of how best to structure training data for LLMs like GPT-4, ultimately enhancing their utility in educational applications. Secondly, the capability of GPT-4, particularly when employing the few-shot learning approach, extends to effectively rephrasing trainees\u2019 incorrect responses into a desired format. Notably, GPT-4\u2019s performance in rephrasing incorrect responses to correct ones is on par with, and sometimes surpasses, that of experienced human tutors. This proficiency likely stems from GPT-4\u2019s advanced understanding of context and language nuances [26], enabling it to reconstruct trainees\u2019 incorrect responses to align more closely with the desired responses. The practical implications of the GPT4\u2019s capabilities are significant. The classified and rephrased responses generated by GPT-4 can be integrated into template-based feedback systems. Such integration facilitates the provision of real-time and explanatory feedback to novice tutors (or trainees) during their training sessions. 18 \f5.1 Implications The incorporation of the binary classifier and its generalizability in terms of performance holds significant implications for providing explanatory feedback. The classified results (i.e., correct or incorrect responses) on trainee\u2019s responses can be further integrated into the provision of corrective feedback as shown in Fig. 2. Specifically, by identifying the incorrect responses, our feedback system can use the template-based feedback to provide suggestions for trainees to consider, i.e., \u201cAI-generated feedback suggests that your response could focus more on praising the student for their efforts in the learning process. Providing corrective feedback is essential in the learning process for tutor training since it can assist the tutors in identifying their errors and improving the quality of their feedback [11]. Furthermore, this study demonstrated the potential of prompting GPT-4 models in rephrasing incorrect trainees\u2019 responses into the desired form. 
We measured the quality of rephrased responses from GPT-4 models and human experts in terms of their accuracy and responsiveness as described in Sec. 3.5. Based on our observations, the rephrased responses consistently rated higher in accuracy while the responsiveness of the GPT-4 generated responses were comparable to the human-generated responses. For instance, a tutor response classified as incorrect, \u201cGreat Job! But there is a tiny mistake, can you tell what was it ?\u201d from the lesson Reacting to Errors was rephrased in the following manner: Zero-shot: \u201cGood effort, Aaron! There\u2019s a small detail that needs tweaking. Can you spot what it might be?\u201d Few-shot: \u201cGood effort! Let\u2019s take another look at this, can you walk me through your process?\u201d Human: \u201cGreat Job! But there is a tiny mistake, can you tell what was it by going over each step?\u201d The incorrect response used the word \u201cmistake\u201d in the response, which failed to reframe the mistakes as a learning opportunity effectively and is considered to be an ineffective tutoring strategy as indicated by [3, 21]. In contrast, both zero-shot and few-shot responses were more effective in accounting for the student\u2019s errors when rephrasing the feedback. The zero-shot rephrased response avoids using direct words about the student\u2019s mistake and implicitly mentions the misconception \u201cThere\u2019s a small detail that needs tweaking.\u201d whereas the few-shot approach encourages the student to revisit the problem Let\u2019s take another look at this which is also in line with the effective practice of reacting to errors [3, 21]. It should be noted that both zeroshot and few-shot rephrased responses also changed the praise on student\u2019s outcome (i.e., \u201cGreat Job\u201d) to student learning effort (i.e., \u201cGood effort\u201d). However, the human expert failed to rephrase the incorrect response into the desired form even though the expert changed the words from the original incorrect response. Additionally, when the accuracy of rephrased responses was rated at 1 (i.e., Agree on a five-point Likert scale) or 2 (i.e., Strongly Agree) but the responsiveness was rated from -2 (i.e., Strongly Disagree) to 0 (i.e., Neutral), the rephrased responses still were presented effective responses. For example, an example of incorrect response \u201cI think you are doing great.\u201d from Giving Effective Praise was rephrased as below: 19 \fZero-shot: \u201cYou\u2019re showing excellent effort in tackling your math work, Carla. Your perseverance is commendable.\u201d Few-shot: \u201cI see you\u2019re making good progress with your efforts. Keep it up!\u201d Human: \u201cYou are doing a great job!\u201d The incorrect response \u201cI think you are doing great.\u201d failed to praise student on their learning efforts or learning actions but on their learning outcome, which is considered ineffective praise as indicated in [3, 21]. Both zero-shot and few-shot rephrased responses were rated accuracy of 2 (i.e., Strongly Agree) but responsiveness of 0 (i.e., Neutral). Both shot and few-shot rephrased responses demonstrated praise on the student learning efforts as underlined in the examples, but both responses changed many words, which was not similar to the original incorrect responses. In comparison, the responsiveness of human rephrased responses was rated at 1 since there was only several words were changed from the original incorrect response. 
However, the human expert failed to revise the praise correctly, and the rephrased response was rated at -1 (i.e., Disagree). The rephrased praise still focused on the student\u2019s learning outcome (i.e., \u201cgreat job!\u201d) rather than their learning efforts, which is not considered an effective response for praising a student, as indicated by [3]. Based on the evaluation results of both the GPT-4 and human rephrased responses, we propose a framework for determining the quality of rephrased responses, shown in Fig. 6. Fig. 6: Framework for determining the quality of the rephrased responses. This framework (Fig. 6) aims to guide future work in understanding the extent to which rephrased responses are considered high quality. When the accuracy of a rephrased response is rated at 1 or 2, the rephrased response is considered acceptable. Based on our observations, the optimal rephrased responses should be high in both accuracy and responsiveness (i.e., the Excellent area in Fig. 6), which could guide trainees to understand the desired form of the responses and also help them see where they did not perform well when providing their scenario-specific feedback. Since the responsiveness dimension aims to minimize the changes of words in the responses, we expect trainees to be able to locate the parts of the sentence that are incorrect and rephrase them accordingly. Similarly, high accuracy with lower responsiveness (i.e., the Good area in Fig. 6) could still guide the trainee to recognize the desired quality of the feedback. However, as shown in the example above, low responsiveness of a rephrased response indicates the extent of the modifications made to the original incorrect response, which may be less helpful to trainees if the rephrasing results in major structural and semantic changes that are harder to learn and retain. Finally, we defined responses in two areas as undesirable responses, as illustrated in Fig. 6. Undesirable responses, marked by a low accuracy score (\u22640), undermine the effectiveness of the feedback [3]. While such rephrased responses might demonstrate high responsiveness, their low accuracy is still detrimental to their effectiveness and, as such, is not desirable. The rephrased feedback (\u201cYou are doing a great job!\u201d), as presented above, is an example of a rephrased response with a low accuracy but high responsiveness score.
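The Fig. 6 regions can be summarized as a small decision rule. Below is a sketch in code: the accuracy thresholds come directly from the text (accuracy of 1 or 2 is acceptable, 0 or below is undesirable), while the responsiveness cutoff of 1 for the Excellent region is an assumption for illustration.

```python
def quality_region(accuracy: int, responsiveness: int) -> str:
    """Map five-point ratings (-2 = Strongly Disagree .. 2 = Strongly Agree)
    to the quality regions of Fig. 6."""
    if accuracy <= 0:
        # Low accuracy undermines the feedback even if few words changed.
        return "undesirable"
    if responsiveness >= 1:  # assumed cutoff for the Excellent region
        # Accurate and close to the trainee's original wording.
        return "excellent"
    # Accurate but heavily reworded, so harder for the trainee to map back.
    return "good"

# The worked examples above fall out as expected:
assert quality_region(2, 0) == "good"          # GPT-4 zero-/few-shot rephrasings
assert quality_region(-1, 1) == "undesirable"  # human "You are doing a great job!"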
5.2 Limitations and Future Work Evaluating the impact of the proposed feedback system on tutoring practice. While our current findings demonstrated the potential of GPT models in providing explanatory feedback and appropriately rephrased responses, there is a need for a more comprehensive evaluation of such feedback\u2019s effectiveness in tutor training. In future work, we plan to investigate the influence of the feedback on tutoring practice. Specifically, we will examine the direct effects of our feedback on tutors\u2019 skill acquisition, retention, and application in real-world tutoring scenarios. By conducting longitudinal studies with both control and experimental groups, we aim to gain a clearer understanding of the long-term advantages and possible challenges of our approach. Such insights will not only shed light on the efficacy of our feedback system but also inform potential refinements to enhance the training process for novice tutors. Using advanced prompt strategies for explanatory feedback. In our current study, we utilized zero-shot and few-shot prompt strategies to identify correct or incorrect trainees\u2019 responses (RQ1) and to rephrase these incorrect responses appropriately (RQ2). While our proposed prompting strategies demonstrated promising results, there is potential for further improvement. We are considering the adoption of more advanced prompt strategies. Two such strategies that have caught our attention are Tree of Thoughts [29] and Graph of Thoughts [30]. These prompting strategies are expected to offer a more nuanced and structured way of understanding the task context and generating relevant information, potentially leading to more accurate and insightful results. A comprehensive exploration of these advanced prompting strategies is beyond the scope of our current study. Thus, in future work, we aim to delve deeper into these prompt strategies to investigate their efficacy and potential to improve the quality of explanatory feedback. Generalizability across other tutor training lessons. While our study demonstrated promising results in providing explanatory feedback primarily for three lessons, broader evaluations of the feedback system on other lessons, such as Using Motivational Strategies and Ensuring Conceptual Understanding, are also important to further explore its efficacy. All the lessons on our platform introduce tutors to unique teaching scenarios and challenges. Ensuring that our feedback system is equally adept at handling the intricacies of each lesson is crucial for its overall success. Thus, it is important to evaluate the efficacy of our developed feedback system across all lessons, ensuring that the feedback provided is accurate, relevant, and conducive to the tutor training process, continuously guiding tutors towards pedagogical excellence. Enhancing explanatory feedback through sequence labeling. The primary objective of this study is to provide automatic explanatory feedback. We have demonstrated a demo of our developed explanatory feedback system in Fig. 2. To further unlock the potential of automatic explanatory feedback, we propose a significant enhancement: the integration of a sequence labeling method, as originally introduced in the work by [5, 31]. In their research, they employed a color-coded highlighting approach to distinguish between the effective and ineffective components of trainees\u2019 responses, aiming to facilitate a clearer comprehension of correctness or incorrectness. By incorporating this sequence labeling approach into the provision of explanatory feedback, we expect the feedback to convey more corrective information, fostering a deeper understanding among trainees of how to construct effective responses. Enhancing trainee response evaluation beyond binary classification. Our study leveraged GPT-4\u2019s capabilities to categorize trainee responses into binary classes: correct or incorrect. However, this dichotomous approach may be overly simplistic and potentially limiting for real-world applications where a more nuanced understanding is required. Acknowledging this, we recognize the necessity of developing a more granular evaluation scale. A tiered ranking system, perhaps on a five- or ten-point scale, could provide a more detailed and effective assessment of trainee responses, aligning more closely with the complexities of real-world scenarios.
This insight highlights a limitation in our current methodology and underscores the potential for future research to explore more sophisticated classification frameworks that can capture the varied spectrum of trainee performance more accurately. Strategies for safeguarding private information in real-world tutoring. Our study observed that responses from trainee tutors across the three lessons often included student names, as in \u201cKevin, good job getting the problem correct!\u201d This pattern suggests a tendency among some tutors to personalize their feedback by mentioning students by name during actual tutoring sessions. To further evaluate the practices of novice tutors within real-world tutoring contexts, it is necessary to collect and archive transcripts of tutoring dialogues in our database. To protect data privacy, we intend to anonymize any sensitive information, such as names, locations, and ages, contained within these transcripts. Enhancing automated explanatory feedback quality through human-in-the-loop design. In our future work, we aim to explore the enhancement of automated explanatory feedback quality through the incorporation of a human-in-the-loop design. This approach will involve integrating human interaction directly into the feedback loop, enabling a ranking system where responses generated by Large Language Models (LLMs) are reviewed and prioritized based on human judgment. Such a mechanism is expected to provide stronger signals to the AI, guiding it towards producing outputs that are more aligned with human expectations. Crowdsourcing the evaluation of rephrased responses from trainees. Inviting educational experts to evaluate the quality of rephrased responses is often time-consuming and impractical, especially when dealing with a large volume of tutor responses. To address this issue, we suggest a crowdsourcing approach for rating the rephrased responses. We plan to include the question (shown in Table 11) in the lesson and invite tutor trainees to answer it. Table 11 presents the scenario question and a response from a previous trainee that was identified as an incorrect response. We will employ large language models to rephrase the incorrect trainee response while also keeping the original incorrect response in the question. New trainees are invited to rate the quality of the responses on accuracy and responsiveness using a five-point scale. We also include the original response for trainees to rate. Since our binary classifier is not perfect and misclassified responses might exist, we also want trainees to provide their ratings on the original responses. By doing so, we can obtain their ratings of the rephrased responses, and we expect our trainees to gain a better understanding of the effective form of responses in different training lessons. Table 11: Sample question for crowdsourcing ratings of rephrased responses from trainee tutors. Scenario: What exactly would you say to Cindy to begin helping her solve the math problem? Response: Are you familiar with the definition and notation for angle and side congruence? Rephrased responses, each rated for Accuracy and Responsiveness: 1. How would you define when angles or sides in a triangle are congruent? What does that mean to you? (Accuracy: 4, Responsiveness: 3) 2. Can you explain your understanding of angle and side congruence, and their notations? (Accuracy: 4) 3. What do you understand about the concept of congruence in relation to sides and angles? 4.
Are you familiar with the definition and notation for angle and side congruence? Explanatory feedback for synchronous tutoring sessions. Our study demonstrated the capability of GPT-4 models to provide explanatory feedback and adeptly rephrase tutor responses into a desired format. As shown in Sec. 4.2, our proposed few-shot learning approach could achieve performance comparable to human experts in rephrasing responses appropriately, which could help reduce the use of inappropriate instructional responses during the student learning process. Given our current findings, we expect that integrating our explanatory feedback system into synchronous text-based online tutoring could facilitate the tutoring process. Previous studies [32\u201334] have emphasized the importance of showing effective responses to students. Given the growing demand for qualified tutors, our feedback system, when integrated with synchronous tutoring platforms, can equip novice tutors to deliver timely and appropriate instructional feedback. To assess the influence of our explanatory feedback system on tutoring, we recommend conducting randomized controlled experiments to further examine its efficacy. In the experimental setup, tutors in the experimental group would use our explanatory feedback system to provide instructional responses, whereas tutors in the control group would follow business-as-usual tutoring. The investigation aims for a comprehensive understanding of the system\u2019s strengths and areas needing improvement. 6 Conclusion We aimed to provide automatic explanatory feedback to enhance tutor training. Our study explored the potential of the GPT-4 model in delivering real-time explanatory feedback for open-ended questions selected from three tutor training lessons. We first prompted the GPT-4 model to act as a binary classifier to identify incorrect tutor responses. With well-designed prompting strategies, the GPT-4 model, using a few-shot approach, accurately identified incorrect trainee responses across all three lessons we examined. We then used the GPT-4 model to rephrase incorrect responses into the desired form. Our results demonstrated that the quality of rephrased responses provided by GPT-4, using a few-shot approach, was comparable to that of human experts. These results indicate that our proposed automatic explanatory feedback system shows promise in providing real-time feedback. Our study sheds light on the development of feedback provision for learners. By integrating our feedback system, we expect to facilitate the tutor training process and further alleviate the challenges associated with recruiting qualified tutors. Acknowledgments. This work is supported by funding from the Richard King Mellon Foundation (Grant #10851). Any opinions, findings, and conclusions expressed in this paper are those of the authors. We also wish to express our gratitude to Dr. Ralph Abboud for his invaluable guidance and recommendations, and to the members of Ken\u2019s lab for their insightful feedback on this work. Special thanks to Ms. Jiarui Rao for her assistance in verifying the rating scheme. Declarations Ethics Approval The study presented in this paper obtained Institutional Review Board (IRB) approval from Carnegie Mellon University. Conflicts of Interest The authors have no relevant financial or non-financial interests to disclose, nor any conflicting or competing interests."
}
title_10K/test_title_short_2405.00972v1.json
ADDED
@@ -0,0 +1,20 @@
{
"url": "http://arxiv.org/abs/2405.00972v1",
"title": "CACTUS: Chemistry Agent Connecting Tool-Usage to Science",
"abstract": "Large language models (LLMs) have shown remarkable potential in various\ndomains, but they often lack the ability to access and reason over\ndomain-specific knowledge and tools. In this paper, we introduced CACTUS\n(Chemistry Agent Connecting Tool-Usage to Science), an LLM-based agent that\nintegrates cheminformatics tools to enable advanced reasoning and\nproblem-solving in chemistry and molecular discovery. We evaluate the\nperformance of CACTUS using a diverse set of open-source LLMs, including\nGemma-7b, Falcon-7b, MPT-7b, Llama2-7b, and Mistral-7b, on a benchmark of\nthousands of chemistry questions. Our results demonstrate that CACTUS\nsignificantly outperforms baseline LLMs, with the Gemma-7b and Mistral-7b\nmodels achieving the highest accuracy regardless of the prompting strategy\nused. Moreover, we explore the impact of domain-specific prompting and hardware\nconfigurations on model performance, highlighting the importance of prompt\nengineering and the potential for deploying smaller models on consumer-grade\nhardware without significant loss in accuracy. By combining the cognitive\ncapabilities of open-source LLMs with domain-specific tools, CACTUS can assist\nresearchers in tasks such as molecular property prediction, similarity\nsearching, and drug-likeness assessment. Furthermore, CACTUS represents a\nsignificant milestone in the field of cheminformatics, offering an adaptable\ntool for researchers engaged in chemistry and molecular discovery. By\nintegrating the strengths of open-source LLMs with domain-specific tools,\nCACTUS has the potential to accelerate scientific advancement and unlock new\nfrontiers in the exploration of novel, effective, and safe therapeutic\ncandidates, catalysts, and materials. Moreover, CACTUS's ability to integrate\nwith automated experimentation platforms and make data-driven decisions in real\ntime opens up new possibilities for autonomous discovery.",
"authors": "Andrew D. McNaughton, Gautham Ramalaxmi, Agustin Kruel, Carter R. Knutson, Rohith A. Varikoti, Neeraj Kumar",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG",
"physics.chem-ph",
"q-bio.QM"
],
"label": "Original Paper",
"paper_cat": "LLM AND Agent",
"gt": "CACTUS: Chemistry Agent Connecting Tool-Usage to Science",
"main_content": "Introduction Large Language Models (LLMs) are foundation models that are combined under a single paradigm to support various tasks or services. Despite being trained on vast corpora of data, these transformer-based LLMs have a limited understanding of the curated or parsed text.[Chiesurin et al., 2023]. Current research has revealed the possibility of augmenting LLMs with tools that aid in efficiently solving various problems and tasks [Mialon et al., 2023, Xu et al., 2023, Qin et al., 2023]. Previous work has also shown that providing specific prompts, curated towards a specific task, can enhance the time and quality of the text generated by the models [Cai et al., 2023]. Combining these two approaches is the Tool Augmented Language Model (TALM) framework, detailed in Parisi et al. [2022], which outperforms existing models on the tasks it is configured for. However with any of these approaches, although the generated answers may \[email protected] arXiv:2405.00972v1 [cs.CL] 2 May 2024 \fCACTUS A PREPRINT appear correct, LLMs fail to reason or demonstrate subject knowledge as is typically demonstrated by humans [Huang et al., 2023, Kambhampati, 2024]. Mistakes made by the model due to the statistical relationships it learned from data might appear in a similar way across different applications [Bommasani et al., 2021]. If foundation models become integrated with important systems that leverage the foundation model\u2019s ability to quickly adapt to many different tasks and situations, failures could result in significantly unwanted outcomes. The resourceful LLMs like GPT4 [OpenAI, 2023], LLaMA [Touvron et al., 2023], Gemma [Team et al., 2024], MPT [Team, 2023], Falcon [Almazrouei et al., 2023], and Mistral [Jiang et al., 2023] show improved performance over a range of activities [Chiang et al., 2024, Zheng et al., 2023, Hendrycks et al., 2020]. Despite these strides, the inherent limitations of such models become apparent when faced with challenges that require access to dynamic, real-time, or confidential data, which remain inaccessible within their static training datasets. This gap underscores a critical need for LLMs to evolve beyond their current capacities, leveraging external APIs to fetch or interact with live data, thereby extending their utility in real-world applications [Parisi et al., 2022]. In the domain-specific applications, particularly within the chemical, biological and material sciences, the limitations of LLMs are even more pronounced. The intricate nature of chemical data coupled with the dynamic landscape of drug discovery and development, presents a complex challenge that pure computational models alone cannot address effectively. Recognizing this, the integration of cheminformatics tools with the cognitive and analytical ability of LLMs offers a promising pathway. At the forefront of this transformation are Intelligent Agents, autonomous entities capable of designing, planning, and executing complex chemistry-related tasks with exceptional efficiency and precision [Boiko et al., 2023]. These systems are not only capable of utilizing a variety of LLMs for specific tasks but also adept at employing APIs and internet search tools to gather relevant material and data. For example, integrating an Agent into large, tool-based platforms such as KNIME [Berthold et al., 2007] or Galaxy [Goecks et al., 2010] could form a natural language interface between the user and their analysis. 
By acting as intermediaries, these Agents could significantly streamline the process of scientific discovery and autonomous experimentation, with or without a human in the loop. Towards that end, and taking inspiration from ChemCrow [Bran et al., 2023], an LLM-assisted chemistry synthesis planner, we have developed an Intelligent Cheminformatics Agent focused on assisting scientists with de novo drug design and molecular discovery. Cheminformatics focuses on storing, retrieving, analyzing, and manipulating chemical data. It provides the framework and methodologies to connect computational linguistics with chemical science. This synergistic approach aims to leverage the strengths of both domains by facilitating a more comprehensive and effective exploration of therapeutic compounds, streamlining the drug development process, and ultimately accelerating the discovery from conceptualization to clinical application. In this work, we developed CACTUS (Chemistry Agent Connecting Tool-Usage to Science), an LLM-powered agent that possesses the ability to intelligently determine the most suitable tools for a given task and the optimal sequence in which they should be applied, effectively optimizing workflows for chemical research and development. The implications of these intelligent agents are far-reaching. They enable the autonomous operation of complex tasks, from data analysis to experimental planning, hypothesis generation, and testing, and push the boundaries of what can be achieved through computational chemistry. The synergistic relationship between human intelligence, artificial intelligence, and specialized software tools holds the potential to transform the landscape of drug discovery, catalysis, material science, and beyond. This combination of domains makes the molecular discovery process more efficient, accurate, and innovative. As we stand on the precipice of this new era in cheminformatics, the integration of LLMs and computational tools through intelligent agents like CACTUS promises to unlock a future where the limits of scientific discovery are bound only by the depths of our imagination. 2 Methods Tool-augmented language models consist of two major components: external tools and language models. This section will discuss the approaches used to implement the language model agent and provide a focused look at the tools used. We will also go into great detail about the strategies used when prompting our agent and how we performed benchmarking. Each of these steps is a critical component of forming a complete intelligent agent able to solve a wide range of problems, with the added ability of quick model swapping. 2.1 The Agent An important consideration when building a TALM is the framework in which it will be implemented. We have selected the commonly used open-source platform LangChain [Chase, 2022] for this purpose. This framework simplifies the integration of prompts with LLMs through a comprehensive set of pre-built Python modules known as \"chains\". It also provides convenient integration with popular LLM hosting/inference platforms such as the OpenAI API and HuggingFace Transformers [Wolf et al., 2020]. CACTUS utilizes LangChain\u2019s implementation of a custom MRKL agent [Karpas et al., 2022], which can be broken into three parts: tools, LLMChain, and agent class. The tools in this instance are a collection of cheminformatics helper functions that wrap well-known Python libraries into well-described tools for an agent to use. 
These tools are explained in much more detail in Section 2.2. The LLMChain is a LangChain-specific feature that helps chain the tools and the agent together. This is the prompt provided to the LLM when running any inference; it helps to instantiate the model and parse the user input. In CACTUS, we provide a prompt that guides the agent to answer cheminformatics questions by describing the typical steps involved in answering such questions. The last requirement for CACTUS is the agent class. These are also LangChain-implemented functions that are used to interpret the user input after the initial prompt and make decisions on which actions to take to best solve the question. CACTUS uses a general-purpose implementation of the zero-shot agent class that uses the ReAct [Yao et al., 2022] framework to determine which tool to use from the tool\u2019s description. This combination of tools, LLMChain, and zero-shot agent makes CACTUS an extensible LLM tool that can quickly integrate new tools to solve a range of cheminformatics questions. Here, we introduce a mathematical formulation to describe the key components and processes of the CACTUS framework. Let T = {t1, t2, . . . , tn} be the set of cheminformatics tools available to CACTUS, as discussed above, where each tool ti is a function that takes an input xi and produces an output yi: ti(xi) = yi (1). The LLMChain is represented as a function L that takes a user input u and a set of tools T as input, and outputs a sequence of actions A = (a1, a2, . . . , am): L(u, T) = A (2). Each action ai in the sequence A corresponds to the application of a specific tool tj on an input xj, resulting in an output yj: ai = tj(xj) = yj (3). The zero-shot agent class is modeled as a function Z that takes the user input u, the set of tools T, and the LLMChain output A as input, and produces a final output o: Z(u, T, A) = o (4). The final output o is the result of executing the sequence of actions A determined by the LLMChain, given the user input u and the available tools T. The ReAct framework used by the zero-shot agent class is represented as a function R that takes the user input u, the set of tools T, and the tool descriptions D = {d1, d2, . . . , dn} as input, and outputs the most appropriate tool tk to use: R(u, T, D) = tk (5). Figure 1: General workflow of the CACTUS Agent, detailing how the LLM interprets an input to arrive at the correct tool to use to obtain an answer. Starting from the user input, CACTUS follows a standard \"Chain-of-thought\" reasoning method with a Planning, Action, Execution, and Observation phase to obtain an informed output. 
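The following is a minimal sketch of the tools + LLMChain + zero-shot agent pattern described above, assuming the classic LangChain agent API, RDKit, and a vLLM-served model. The model name, tool body, and description text are illustrative placeholders, not CACTUS's actual implementation.

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_community.llms import VLLM
from rdkit import Chem
from rdkit.Chem import Descriptors

def molecular_weight(smiles: str) -> str:
    """A tool t_i: the input x_i is a SMILES string and the output y_i is
    the molecular weight, returned as text for the agent to read."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return "Invalid SMILES"
    return f"{Descriptors.MolWt(mol):.2f}"

tools = [
    Tool(
        name="MolWt",
        func=molecular_weight,
        description="Computes the molecular weight of a molecule from its SMILES string.",
    ),
]

# Placeholder model; any HuggingFace model served through vLLM would do here.
llm = VLLM(model="mistralai/Mistral-7B-Instruct-v0.2")

# Zero-shot ReAct agent: tool choice is driven by the tool descriptions.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is the molecular weight of CCO?")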
2.2 Cheminformatics Tools For the purpose of creating a robust LLM agent able to answer a variety of cheminformatics questions, CACTUS includes a wide range of tools integrating common functions found in Python libraries such as RDKit [Landrum et al., 2013] and SciPy [Virtanen et al., 2020], along with interfaces to databases such as PubChem [Kim et al., 2023], ChEMBL [Davies et al., 2015], and ZINC [Irwin et al., 2020]. These tools allow for a chat-based analysis of molecules, starting with a SMILES string and ending with information such as molecular descriptors, similarity, or absorption, distribution, metabolism, and excretion (ADME) attributes. The model consists of ten different tools providing information on various descriptors for any given chemical compound used as input. Table 1 contains a list of currently available tools that can assist in obtaining different physicochemical properties and molecular descriptors of the input chemical compounds. This includes molecular weight, log of the partition coefficient (LogP), topological polar surface area (TPSA), quantitative estimate of drug-likeness (QED), and synthetic accessibility (SA) of the input chemical compounds. Moreover, using the BOILED-Egg method, CACTUS can also estimate pharmacokinetic properties such as blood-brain barrier permeability and gastrointestinal absorption of any given chemical compound [Daina and Zoete, 2016]. Our model also implements drug-likeness, PAINS, and Brenk filters to identify structural and toxicity alerts. All these tools will assist in identifying and screening both currently available and new lead compounds. CACTUS is currently restricted to a simple SMILES string as input; future releases will allow for varied user input (compound name, molecular formula, InChI key, CAS number, SMILES, ChEMBL ID, or ZINC ID), which the agent will first convert to SMILES notation before passing to the available tools. Table 1: Cheminformatics tools currently supported by CACTUS (Tool | Output | Description): MolWt | Float [0, \u221e] | Molecular weight; LogP | Float [\u2212\u221e, \u221e] | Predicted partition coefficient; TPSA | Float [0, \u221e] | Topological Polar Surface Area; QED | Float [0, 1] | Quantitative Estimate of Druglikeness; SA | Float [1, 10] | Synthetic Accessibility; BBB Permeant | String [Yes, No] | Is in \"yolk\" of BOILED-Egg model; GI Absorption | String [Low, High] | Is in \"white\" of BOILED-Egg model; Druglikeness | Boolean | Passes Lipinski Rule of 5; Brenk Filter | Boolean | Passes Brenk filter; PAINS Filter | Boolean | Passes PAINS filter. These tools provide a comprehensive assessment of a molecule\u2019s physicochemical properties. Apart from conversions between different molecular representations, all tools require input in the SMILES format. By leveraging these tools, CACTUS enables researchers to make informed decisions in the molecular discovery process and prioritize compounds with the most promising characteristics. 
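Several of the Table 1 descriptors can be computed with standard RDKit calls, as in the sketch below. These are well-known RDKit functions, but the helper name and Lipinski thresholds shown are an illustrative reading of the Rule of 5, not necessarily CACTUS's exact tool bodies.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, QED

def table1_properties(smiles: str) -> dict:
    """Compute a few of the Table 1 descriptors for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    props = {
        "MolWt": Descriptors.MolWt(mol),  # molecular weight
        "LogP": Crippen.MolLogP(mol),     # predicted partition coefficient
        "TPSA": Descriptors.TPSA(mol),    # topological polar surface area
        "QED": QED.qed(mol),              # drug-likeness estimate in [0, 1]
    }
    # Lipinski Rule of 5 as a simple Boolean druglikeness flag.
    props["Druglikeness"] = (
        props["MolWt"] <= 500
        and props["LogP"] <= 5
        and Descriptors.NumHDonors(mol) <= 5
        and Descriptors.NumHAcceptors(mol) <= 10
    )
    return props

print(table1_properties("CCO"))  # ethanol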
2.3 Prompting Strategy One important aspect investigated was the significance of the prompt for the agent. Through the LangChain implementation of LLM agents, there is a default prompt that provides a generic instruction of what tools are available and what the task of the LLM is. However, this is not necessarily primed for understanding domain-specific information. To test this hypothesis, we ran two scenarios: one where we left the default prompt unchanged and only included tool descriptions (Minimal Prompt), and one where we modified the prompt to align the agent more with the domain of chemistry (Domain Prompt). The belief is that a domain-aligned prompt will steer the LLM towards better interpretation of the questions being asked and therefore be more effective in answering user queries. Since we were using a wide range of LLMs for testing, the minimal prompt also included model-specific tokens so that we weren\u2019t unfairly evaluating models against the domain prompt. 2.4 Benchmarking Evaluation of domain-specific TALMs can be a difficult task, but we can follow the examples set by general benchmarking suites [Li et al., 2023, Farn and Shin, 2023, Gen, 2023, Xu et al., 2023]. Therefore, we rely on sets of questions that replicate the typical questions the agent would see and score how many the agent is able to answer correctly without requiring extra prompting effort from the user (i.e., having to rephrase the typed question to get a correct answer). To evaluate CACTUS, we created three sets of cheminformatics questions, distinguished by the output of the tool under test. The first set consists of qualitative questions, represented by questions that return answers like Yes/No or True/False. The second is quantitative, representing tools that return numerical values to be interpreted by the agent. The third is a combination of both qualitative and quantitative questions, which we call the full or combined set. Table 2 highlights examples of questions passed as user input to the CACTUS agent. Table 2: Examples of the questions asked of the CACTUS agent in the cheminformatics benchmark used in this paper (Question | Step | Answer). Qualitative: Does CCON=O pass the blood brain barrier? | Use BBB Tool w/ SMILES | Yes; What is the GI absorption of C#C? | Use GI tool w/ SMILES | Low. Quantitative: What is the QED of CCCC=O? | Use QED Tool w/ SMILES | 0.44; What is the TPSA of C(CS)O? | Use TPSA Tool w/ SMILES | 20.23. The qualitative and quantitative datasets each contain 500 questions, and the combined dataset contains 1000. Most tests will be done on the combined dataset, as we want to test the LLM agent\u2019s ability to perform a diverse set of tasks. 
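A minimal sketch of the scoring loop this benchmark implies is shown below. The CSV layout, the numeric tolerance, and the substring match for qualitative answers are assumptions for illustration; the agent object is expected to expose a LangChain-style run() method.

```python
import csv

def run_benchmark(agent, path: str, tolerance: float = 0.01) -> float:
    """Score an agent on a question file with 'question' and 'answer' columns.
    Numeric answers are compared within a tolerance; string answers (Yes/No,
    Low/High) are matched case-insensitively inside the reply."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    correct = 0
    for row in rows:
        reply = agent.run(row["question"]).strip()
        try:
            ok = abs(float(reply.split()[0]) - float(row["answer"])) <= tolerance
        except ValueError:
            ok = row["answer"].lower() in reply.lower()
        correct += ok
    return correct / len(rows)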
3 Results and Discussion The implementation of CACTUS represents a significant step forward in the field of cheminformatics, offering a powerful and flexible tool for researchers and chemists engaged in molecular discovery and drug design. The benchmarking studies conducted on various 7b parameter models demonstrate the robustness and efficiency of the CACTUS framework, highlighting its potential to streamline and accelerate the drug discovery process as an example. 3.1 Benchmarking and Performance Evaluation The performance of CACTUS was evaluated using a comprehensive set of 1000 questions covering 10 different tools (Table 1), with and without the domain prompt, on each 7b parameter model, as shown in Figure 2. Correct answers were scored as correct, while wrong answers, inability to converge on an answer, or inability to use the provided tool correctly were marked as incorrect. In this paper, we did not differentiate between incorrect tool usage and simply providing a wrong answer. Any answers that did not coherently address the question were considered incorrect. We accepted correct answers that contained additional formatted text after the correct answer, although this is not the preferred format. This additional information can be programmatically removed before returning the response to the user, or further prompts can be engineered to reduce additional text. Each type of question in the full question set was asked 100 times, resulting in 10 types of questions corresponding to the 10 tools provided in Table 1. This approach allowed us to identify which tools posed a greater challenge for the model, and where improvements to either the tool description or model prompt could be made. The results shown in Figure 2 highlight the importance of domain-specific prompting in improving the accuracy of the model\u2019s responses, particularly for qualitative questions. This finding aligns with recent research emphasizing the role of prompt engineering in enhancing the performance of language models [Liu et al., 2023]. Figure 2: Comparison of the Gemma-7b model with different prompting strategies on the full question set benchmark, with (a) benchmark performance using a minimal prompt and (b) using a domain prompt, each over the 10 question types; the domain prompt (Figure 2b) shows significant improvement over the minimal prompt (Figure 2a) on the qualitative question set, with similar performance on the quantitative question set. In the progression of AI and its applications in scientific inquiry, it is crucial to analyze the comparative effectiveness of various models in handling domain-specific tasks. The benchmarking analysis presented in Figure 3 offers significant insights into the performance of different language models when prompted with both minimal and domain-specific information. A comprehensive review of the performance data across the full spectrum of question types reveals that the Gemma-7b and Mistral-7b models showcase robustness and versatility, performing admirably regardless of the nature of the prompt. Their consistent accuracy across different types of questions, ranging from physicochemical properties like druglikeness and blood-brain barrier permeability to more complex metrics like the quantitative estimate of drug-likeness (QED), highlights their reliability for a broad range of inquiries within the domain of molecular science. In contrast, models like Falcon-7b exhibit a noticeable disparity between performances with minimal and domain prompts. This variability suggests that Falcon-7b, while capable, may require more fine-tuned prompting to leverage its full potential effectively. The substantial difference in performance based on the prompt type points to an intrinsic model sensitivity to input structure and content, which can be pivotal in crafting effective inquiry strategies. Furthermore, the successful deployment of smaller models, such as Phi2 and OLMo-1b, on consumer-grade hardware (Figure 4) highlights the potential for democratizing access to powerful cheminformatics tools, enabling researchers with limited computational resources to harness the capabilities of CACTUS. 3.1.1 Open Source Models in Varied Settings This comprehensive model comparison and analysis has broader implications for the employment of open-source models in scientific environments. The ability of models to perform well with domain-specific prompts is particularly encouraging, as it implies that with proper configuration, open-source models can be highly effective tools. The adaptability demonstrated by the Gemma-7b and Mistral-7b models indicates their potential for widespread applicability across various computational settings, from high-performance clusters to more modest research setups. Moreover, the ability to effectively prompt open-source models opens the door to their use in a variety of scientific contexts. It allows researchers to customize models to their specific domain, potentially bridging the gap between generalized AI capabilities and specialized knowledge areas. Figure 3: Comparison of model performance among 7B parameter models using minimal and domain-specific prompts. 
The Gemma-7b and Mistral-7b models demonstrate strong performance and adaptability across prompting strategies, highlighting their potential for widespread applicability in various computational settings, from high-performance clusters to more modest research setups. The flexibility and performance of these models have significant implications for scientific research, particularly in fields like synthetic organic chemistry and drug discovery. For researchers in these domains, the ability to utilize open-source models effectively can accelerate the discovery process, enhance predictive accuracy, and optimize computational resources. The insights from this benchmarking study provide a roadmap for selecting and tailoring models to specific research needs, thereby maximizing their utility in advancing scientific goals. The benchmarking study of the selected 7b parameter models serves as a testament to the progress in AI-driven research tools. It highlights the necessity of prompt optimization and the promise of open-source models in diverse scientific inquiries. The analysis underscores the potential of these models to become integral components in the computational chemist\u2019s toolkit, paving the way for innovative breakthroughs in molecular design and drug discovery. 3.2 Hardware Performance and Model Efficacy The deployment of CACTUS models through vLLM offers a significant advantage by optimizing performance across a variety of GPUs used for LLM inference. In our benchmarking studies, we utilized three types of NVIDIA GPUs: the A100 80GB, V100, and RTX 2080 Ti. Our objective was to evaluate the performance of models under different combinations of model size, GPU type, and prompting strategy (minimal or domain-specific). The performance metric was determined by the inference speed in relation to the model\u2019s accuracy. Figure 4 shows a summary of LLMs deployed under different conditions (GPU hardware, prompt, and benchmark set used) and how well they performed. The efficiency of these models across diverse hardware highlights their potential for widespread implementation in a range of research settings. The models evaluated include Gemma-7b, Falcon-7b, MPT-7b, Llama2-7b, and Mistral-7b, as well as two smaller models, Phi2 and OLMo-1b. The inclusion of these smaller models highlights the potential for successfully deploying models on local resources with limited computational power (e.g., consumer-grade GPUs like the RTX 2080 Ti) while still achieving accurate results. Overall, model performance was found to be relatively quick on both the 500-question sets (Qualitative/Quantitative) and the 1000-question combined set (Full). One notable outlier was the Llama2-7b model with domain prompting, which took 185 minutes to complete inference on the full dataset; however, its accuracy was similar to the minimally prompted version. This model is considered an outlier and is therefore not included in Figure 4. A full list of the data used to plot these summary figures can be found in the Appendix. Figure 4: Comparison of model performance using accuracy and execution time as key metrics, evaluating various open-source models available on HuggingFace, including Gemma-7b, Falcon-7b, MPT-7b, Llama2-7b, Mistral-7b, Phi2, and OLMo-1b. 
Different combinations of conditions, such as model type, prompting strategy (minimal or domain-specific), GPU hardware (A100, V100, or consumer-grade), and benchmark size (small or large), were used to assess the models\u2019 capabilities. The most interesting outcome is that smaller models deployed on consumer-grade hardware do not perform drastically worse than their larger-parameter counterparts. Looking at the performance of the Phi2 model (2.7B parameters), it quickly and accurately tackles the 500-question quantitative benchmark, with similar performance regardless of the GPU used and with the A100 80GB version, unsurprisingly, the fastest. Another interesting outcome is the performance of the OLMo-1b parameter model on the combined question set and the RTX 2080 Ti GPU. While unable to obtain any correct answers with the minimal prompt, it jumps up to a surprising 52.2% accuracy when provided a domain prompt. These results suggest that smaller models can be deployed locally by users and still interpret questions well, possibly by providing more specialized prompts. In general, inference time increased as question set size increased (e.g., from quantitative/qualitative to full), while accuracy tended to decrease with longer inference times. Domain prompts achieved faster inference and higher accuracy than minimal prompts for models like Falcon-7b, MPT-7b, and Mistral-7b. However, there was an exception in the case of the Phi2 model on the full question set, where the minimal prompt resulted in faster inference but lower accuracy. The hardware performance analysis highlights the importance of considering the interplay between model size, GPU capabilities, and prompting strategies when deploying CACTUS models for molecular property prediction and drug discovery. The ability to achieve accurate results with smaller models on consumer-grade hardware opens up the possibility of wider adoption and accessibility of CACTUS for researchers with limited computational resources. Furthermore, the impact of domain-specific prompting on both inference speed and accuracy emphasizes the need for carefully designed prompts tailored to the specific application domain. As CACTUS continues to evolve and integrate with other computational tools and autonomous discovery platforms, optimizing hardware performance will remain a critical consideration. Future research should explore the development of more efficient algorithms and architectures (energy efficiency) for deploying CACTUS models on a variety of hardware configurations, ensuring that the benefits of this powerful tool can be realized across a wide range of research settings and computational resources. 3.3 Issues Encountered and Resolutions During the development and benchmarking of the CACTUS agent using open-source models and the LangChain framework, several key challenges were identified. These issues, along with the solutions implemented, provide valuable insights for researchers and developers working on similar workflows. One of the primary issues encountered was slow inference speed when hosting open-source language models locally on machines utilizing CPUs. Most APIs quickly provide inference results when called, and this is not something locally hosted models typically replicate well, especially when running on CPUs rather than GPUs. For this work, we initially used models from HuggingFace deployed through the HuggingFace Pipelines Python package. 
This allowed us to serve models, but the inference time was quite slow when wrapped in the LangChain agent. To address this, we began utilizing vLLM to host HuggingFace models instead. This substantially decreased our inference time and allowed for API-like response times from models, even those hosted on less powerful, consumer-grade GPU hardware. 
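For readers unfamiliar with vLLM, the sketch below shows its offline inference API for serving a HuggingFace model, which is the general mechanism described above. The model name, dtype, and prompt are examples only, not the configuration used in this work.

```python
from vllm import LLM, SamplingParams

# Offline inference with vLLM; model name and dtype are illustrative.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2", dtype="half")
params = SamplingParams(temperature=0.0, max_tokens=256)

outputs = llm.generate(["What is the TPSA of C(CS)O?"], params)
print(outputs[0].outputs[0].text)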
The second major challenge was related to prompt engineering. Our results shown previously highlight that, for some models, the prompt has a great effect not only on model accuracy but also on inference time. We spent a good amount of time trying to hone our prompting strategy to yield consistently accurate and efficient results, with mixed effects. We ended up needing specialized prompts for each open-source LLM we used, as some were fine-tuned much differently than others and required a very specific prompt style to return usable results. These challenges highlight the need for continued research and development in the areas of model deployment and prompt engineering. Future work will be focused on optimizing the deployment of open-source models on various hardware configurations, including CPUs and GPUs, to ensure that CACTUS can be efficiently utilized across a wide range of computational resources. This may involve the development of novel algorithms and architectures that can better leverage the capabilities of different hardware setups, as well as the creation of more user-friendly tools and frameworks for model deployment and management. In terms of prompt engineering, the development of standardized prompt templates and best practices for prompt engineering in the context of molecular property prediction and drug discovery could help streamline the development process and improve the consistency of results across different models and datasets. 3.4 Future Outlook: Molecular Design CACTUS has already demonstrated its potential in estimating basic metrics for input chemical compounds, but its future lies in its evolution into a comprehensive, open-source tool specifically designed for chemists and researchers working on therapeutic drug design and discovery. This will be achieved by the integration of physics-based molecular AI/ML models, such as 3D-scaffold [Joshi et al., 2021], reinforcement learning [McNaughton et al., 2022], and graph neural networks (GNNs) [Knutson et al., 2022], accompanied by molecular dynamics simulations, quantum chemistry calculations, and high-throughput virtual screening [Joshi et al., 2021, Knutson et al., 2022, Joshi et al., 2023, Varikoti et al., 2023, Joshi and Kumar, 2021]. Such capabilities are essential for accurately modeling molecular interactions and predicting the efficacy and safety of potential therapeutic agents [Jiang et al., 2021]. The development plan also includes implementing advanced functionalities for identifying compounds that exhibit structural and chemical similarities, as well as pinpointing key fragments crucial for biological activity. This feature will allow researchers to explore a vast chemical space more efficiently, identifying lead compounds with higher precision. These additions are expected to significantly accelerate and deepen the agent\u2019s ability to understand compound behaviors in 3D spaces and allow researchers to develop more comprehensive and effective workflows for drug discovery and materials design. Additionally, we plan to include tools that identify key fragments and compounds with similar structural and chemical features from the vast available chemical databases. Tools that can calculate physicochemical and pharmacokinetic properties, along with about sixty other descriptors, will be added to the agent to identify quantitative structure-activity relationships (QSAR) and quantitative structure-property relationships (QSPR), helping us screen compounds and identify toxic groups. Beyond these technical enhancements, there\u2019s a focus on making CACTUS more explainable and capable of symbolic reasoning. The aim is to address common criticisms of LLMs, particularly their struggle with reasoning and providing explainable outputs. By integrating more advanced symbolic reasoning capabilities, CACTUS will not only become more powerful in its predictive and analytical functions but also provide users with understandable, logical explanations for its recommendations and predictions. This feature would automate the process of predicting how small molecules, such as drug candidates, interact with targets like proteins, thereby providing invaluable insights into the potential efficacy of new compounds. The applications of CACTUS extend beyond drug discovery and can be leveraged in other domains such as chemistry, catalysis, and materials science. In the field of catalysis, CACTUS could aid in the discovery and optimization of novel catalysts by predicting their properties and performance based on their structural and chemical features [Goldsmith et al., 2018]. Similarly, in materials science, CACTUS could assist in the design of new materials with desired properties by exploring the vast chemical space and identifying promising candidates for further experimental validation [Agrawal and Choudhary, 2016]. The future development of CACTUS is geared towards creating an intelligent, comprehensive cheminformatics tool for molecular discovery that not only aids in the identification and design of therapeutic drugs but also ensures a high degree of safety and efficacy. Through the integration of advanced computational techniques and models, alongside improvements in usability and explainability, CACTUS is set to become an indispensable resource in the quest for novel, effective, and safe therapeutic agents, as well as in the discovery and optimization of catalysts and materials. 4 Conclusions In this paper, we have introduced CACTUS, an innovative open-source agent that leverages the power of large language models and cheminformatics tools to revolutionize the field of drug discovery and molecular property prediction. By integrating a wide range of computational tools and models, CACTUS provides a comprehensive and user-friendly platform for researchers and chemists to explore the vast chemical space for molecular discovery and identify promising compounds for therapeutic applications. We assessed CACTUS\u2019s performance using various open-source LLMs, including Gemma-7b, Falcon-7b, MPT-7b, Llama2-7b, and Mistral-7b, across a set of one thousand chemistry questions. Our findings indicate that CACTUS outperforms baseline LLMs significantly, with the Gemma-7b and Mistral-7b models achieving the highest accuracy regardless of the prompting strategy employed. 
Additionally, we investigated the impact of domain-specific prompting and hardware configurations on model performance, highlighting the importance of prompt engineering and the potential for deploying smaller models on consumer-grade hardware without significant loss in accuracy. The ability to achieve accurate results with smaller models such as Phi2 on consumer-grade hardware opens up the possibility of wider adoption and accessibility of CACTUS, even for researchers with limited computational resources. One of the key takeaways from the development and benchmarking of CACTUS is the importance of addressing the challenges associated with model deployment and prompt engineering. The solutions implemented in this work, such as the use of vLLM for hosting models and the development of tailored prompts for each open-source LLM, serve as a valuable foundation for future efforts in this field. As the field of AI continues to evolve rapidly, it is essential to keep abreast of new developments in language modeling and related technologies to further enhance the capabilities and performance of CACTUS. The development and benchmarking of CACTUS also highlighted key challenges in integrating open-source LLMs with domain-specific tools, such as optimizing inference speed and developing effective prompting strategies. We discussed the solutions implemented to address these challenges, including the use of vLLM for model hosting and the creation of tailored prompts for each LLM. Looking ahead, the future of CACTUS is incredibly promising, with the potential to transform not only drug discovery but also various other domains such as chemistry, catalysis, and materials science. The integration of advanced physics-based AI/ML models, such as 3D-scaffold, reinforcement learning, and graph neural networks, will enable a deeper understanding of compound behaviors in 3D spaces, leading to more accurate predictions of molecular interactions and of the efficacy and safety of potential therapeutic agents. Moreover, the addition of tools for identifying key fragments, calculating molecular properties, and screening compounds for toxic groups will significantly enhance the efficiency and precision of the drug discovery process. The focus on improving the explainability and symbolic reasoning capabilities of CACTUS will address common criticisms of large language models and provide users with understandable, logical explanations for the tool\u2019s recommendations and predictions. As CACTUS continues to evolve and integrate with other computational tools and autonomous discovery platforms, it has the potential to revolutionize the way we approach drug discovery, catalyst design, and materials science. By leveraging the power of AI and machine learning, CACTUS can help researchers navigate the vast parameter spaces associated with complex chemical systems, identifying promising candidates for experimental validation and optimization. The future development of CACTUS is geared towards creating an intelligent, comprehensive cheminformatics tool that ensures a high degree of safety and efficacy in the identification and design of therapeutic drugs, catalysts, and materials for various applications. Through the integration of advanced computational techniques and models, alongside improvements in usability and explainability, CACTUS is set to become an indispensable resource for researchers across various scientific disciplines. 
In summary, CACTUS represents a significant milestone in the field of cheminformatics, offering a powerful and adaptable tool for researchers engaged in drug discovery, molecular property prediction, and beyond. As we continue to advance AI-driven scientific discovery, agents like CACTUS will play a pivotal role in shaping the future of research, innovation, and human health. By embracing the potential of open-source language models and cheminformatics tools, we can accelerate the pace of scientific advancement and unlock new frontiers in the quest for novel, effective, and safe therapeutic agents, catalysts, and materials. Code and Data Availability The code to run CACTUS and the associated benchmark data can be found on GitHub: https://github.com/pnnl/cactus. Acknowledgments This research was supported by the I3T Investment, under the Laboratory Directed Research and Development (LDRD) Program at Pacific Northwest National Laboratory (PNNL). The computational work was performed using PNNL\u2019s research computing at Pacific Northwest National Laboratory. The initial concept of integrating LLMs and tools received support from the Exascale Computing Project (17-SC-20-SC), a collaborative effort of two U.S. Department of Energy organizations (Office of Science and the National Nuclear Security Administration) responsible for the planning and preparation of a capable exascale ecosystem, including software, applications, hardware, advanced system engineering, and early testbed platforms, in support of the nation\u2019s exascale computing imperative. PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830. Conflict of Interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."
}
title_10K/test_title_short_2405.00977v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.00977v1",
"title": "Distillation for Multilingual Information Retrieval",
"abstract": "Recent work in cross-language information retrieval (CLIR), where queries and\ndocuments are in different languages, has shown the benefit of the\nTranslate-Distill framework that trains a cross-language neural dual-encoder\nmodel using translation and distillation. However, Translate-Distill only\nsupports a single document language. Multilingual information retrieval (MLIR),\nwhich ranks a multilingual document collection, is harder to train than CLIR\nbecause the model must assign comparable relevance scores to documents in\ndifferent languages. This work extends Translate-Distill and propose\nMultilingual Translate-Distill (MTD) for MLIR. We show that ColBERT-X models\ntrained with MTD outperform their counterparts trained ith Multilingual\nTranslate-Train, which is the previous state-of-the-art training approach, by\n5% to 25% in nDCG@20 and 15% to 45% in MAP. We also show that the model is\nrobust to the way languages are mixed in training batches. Our implementation\nis available on GitHub.",
"authors": "Eugene Yang, Dawn Lawrie, James Mayfield",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.IR",
"cats": [
"cs.IR",
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "Distillation",
"gt": "Distillation for Multilingual Information Retrieval",
"main_content": "INTRODUCTION We define Multilingual Information Retrieval (MLIR) as search over a multilingual collection of monolingual documents to produce a single ranked list [29, 42, 44, 49, 50]. The retrieval system must retrieve and rank documents based only on query relevance, independent of document language. This is challenging in part because cross-language systems may be unable to exploit surface forms. Our evaluation uses CLEF data [5] with English queries and French, German, Spanish, and English documents; CLEF data [2\u20135] with English queries and French, German, and Italian documents; and Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0431-4/24/07. https://doi.org/10.1145/3626772.3657955 TREC NeuCLIR data [26, 27] with English queries and Chinese, Persian, and Russian documents. Dual-encoder retrieval models such as ColBERT [23] that matches token embeddings, and DPR [22] that matches query and document embeddings, have shown good results in both monolingual [46] and cross-language [32, 37, 53, 56] retrieval. These approaches use pre-trained language models like multilingual BERT [10] and XLMRoBERTa [6] as text encoders to place queries and documents into a joint semantic space; this allows embedding distances to be calculated across languages. Multilingual encoders are generally trained monolingually on multiple languages [7, 10], which leads to limited cross-language ability. Therefore, careful fine-tuning, such as Translate-Train [37], C3 Pretraining [54] and Native-Train [38], are essential to be able to match across languages [32, 48, 53]. Generalizing from one to multiple document languages is not trivial. Prior work showed that Multilingual Translate-Train (MTT) [29] of ColBERT-X using training data translated into all document languages is more effective than BM25 search over documents translated into the query language. Searching translated documents with the English ColBERT model is even more effective than MTT, but incurs a high translation cost at indexing time compared to MTT\u2019s amortized cost of translating the training corpus. This work aims to develop MLIR training that produces more effective models than its monolingual English counterparts. Knowledge distillation has shown success monolingually [12, 43, 46], so we adapt this concept to train MLIR models. In TranslateDistill [53], a way to train CLIR ColBERT-X models, a teacher model scores monolingual training data using text in whichever language produces its best results. Then when training the student ColBERTX model, training data is translated into the languages that match the final CLIR task. That work showed that the student model is on par with or more effective than a retrieve-and-rerank system that uses that same teacher model as a reranker. We propose Multilingual Translate-Distill (MTD), a multilingual generalization of TranslateDistill. Instead of training with a single document language, we translate training passages into all document languages. 
This opens a design space of how to mix languages in training batches. This paper contributes (1) an effective training approach for an MLIR dual-encoder that combines translation and distillation; (2) models trained with MTD that are more effective than the previously reported state-of-the-art MLIR model, ColBERT-X trained with MTT; and (3) a robustness analysis of mini-batch passage mixing strategies. Models and implementation are available on Huggingface Models (https://huggingface.co/collections/hltcoe/multilingual-translate-distill66280df75c34dbbc1708a22f) and GitHub (https://github.com/hltcoe/colbert-x). [Figure 1: Three language mixing strategies for Multilingual Translate-Distill. Each row indicates an entry with a query and a list of sampled passages in the training mini-batch. Circles, diamonds, and squares represent different document languages. Panels: (a) Mix Passages, (b) Mix Entries, (c) Round Robin Entries.] 2 BACKGROUND An IR problem can be \u201cmultilingual\u201d in several ways. For example, Hull and Grefenstette [18] described a multilingual IR problem of monolingual retrieval in multiple languages, as in Blloshmi et al. [1], or alternatively, multiple CLIR tasks in several languages [3\u20135, 28, 36]. We adopt the Cross-Language Evaluation Forum (CLEF)\u2019s notion of MLIR: using a query to construct one ranked list across documents in several languages [41]. We acknowledge that this definition excludes mixed-language or code-switched queries and documents, other cases to which \u201cmultilingual\u201d has been applied. Prior to neural retrieval, MLIR systems generally relied on cross-language dictionaries or machine translation models [9, 24, 35]. Translating documents into the query language casts MLIR as monolingual in that language [14, 33, 44]. While translating queries into each document language is almost always computationally more economical than translating the documents, it casts the MLIR problem as multiple monolingual problems whose results must be merged to form the final MLIR ranked list [42, 49, 50]. Moreover, quality differences between translation models could bias results by systematically ranking documents in some languages higher [17, 29]. Recent work in representation learning for IR [12, 13, 45] and fast dense vector search algorithms [19, 21, 34] spawned a new class of models called dual-encoders. These models encode queries and documents simultaneously into one or more dense vectors representing tokens, spans, or entire sequences [22, 23, 30, 31]. While replacing the underlying language model with a multilingual one, such as multilingual BERT [10] and XLM-RoBERTa [7], produces systems that accept queries and documents in multiple languages, zero-shot transfer of a model trained only monolingually to a CLIR or MLIR problem is suboptimal; it leads to systems even less effective than BM25 over document translations [29, 37]. Therefore, designing an effective fine-tuning process for transforming multilingual language models into multilingual IR models is critical. Various retrieval fine-tuning approaches have been explored, such as contrastive learning [22, 23, 46], hard-negative mining [12, 16], and knowledge distillation [12, 43, 46]. 
Knowledge distillation has demonstrated more effective results in both monolingual and cross-language IR [32, 53] than the others. The recently proposed Translate-Distill approach decoupled the input languages of the teacher and student models. This allowed large English rerankers to train ColBERT-X for CLIR, leading to state-of-the-art CLIR effectiveness measured on the NeuCLIR 2022 benchmark [26]. Recent work by Huang et al. [17] proposes a language-aware decomposition for prompting (or augmenting) the document encoder. In this work, we explore the simple idea of relying on translations of MS MARCO and distilling the ranking knowledge from a large MonoT5 model with mT5-XXL underneath [20, 40, 52]. 3 MULTILINGUAL TRANSLATE-DISTILL Our proposed Multilingual Translate-Distill (MTD) training approach requires a monolingual training corpus consisting of queries and passages; no relevance labels are required. 3.1 Knowledge Distillation To train a student dual-encoder model for MLIR, we first use two teacher models: a query-passage selector and a query-passage scorer. Following Yang et al. [53], the query-passage selector retrieves k passages for each query. This can be replaced by any hard-negative mining approach [16, 43] or by adapting publicly available mined passages (for example, https://huggingface.co/datasets/sentence-transformers/msmarco-hardnegatives). The query-passage scorer then scores each query-passage pair with high accuracy. The scorer is essentially a reranker from which we would like to distill ranking knowledge implicit in an expensive model such as MonoT5 [40] that is generally too slow to apply by itself. The final product from the two teachers is a set of tuples, each containing a query, a passage, and the associated teacher score. We use these data to train the student dual-encoder model. Specifically, for each training mini-batch of size n, we select n training queries and sample m retrieved passage IDs. To teach the student model to rank documents across languages, we translate each passage into all of the target languages. When constructing the mini-batch, we determine the language for each passage ID, which we discuss in more detail in the next section. Finally, the loss function is the KL divergence between the teacher and student scores on the query and the translated passages. 3.2 Language Mixing Strategies To train an effective ColBERT-X model for MLIR, each training batch must include documents in more than one language [29]. Training with MTD opens a design space for selecting languages for the mini-batch passages. We experiment with three mixing strategies (see Figure 1): Mix Passages. In each training batch entry, all passages are randomly assigned to one of the document languages. In this case, each language is equally likely to be present during training. Each language also has an equal probability of being assigned to any passage, in such a way that language representation is balanced; thus a language is just as likely to be assigned to a passage with a high score as a low score. 
This mixing method directly trains the student model to rank passages in different languages. Mix Entries. Alternatively, we can assign the same randomly selected language to all passages associated with a query. This method ensures the translation quality does not become a possible feature that the student model could rely on if there is a language with which the machine translation model struggles. While not directly learning MLIR, this model jointly learns multiple CLIR tasks with distillation and eventually learns the MLIR task. Round Robin Entries. To ensure the model equally learns the ranking problem for all languages, we experiment with training query repetition to present passages from all languages. In this case, the model learns the CLIR tasks using the same set of queries instead of a random subset when mixing entries. However, this reduces the number of queries per mini-batch given some fixed GPU memory size. Given this memory constraint, round robin may not be feasible if the number of document languages exceeds the number of entries the GPU can hold at once. 4 EXPERIMENTS We evaluate our proposed model on four MLIR evaluation collections: a subset of CLEF00-03 curated by Huang et al. [17]4; CLEF03 with German, French, Spanish, and English [5]; and NeuCLIR 2022 [26] and 2023 [27]. Collection statistics are summarized in Table 1. Queries are English titles concatenated with descriptions. We use MS MARCO [39] to train the MLIR ColBERT-X models with MTD, for which we adopt the PLAID-X implementation released by Yang et al. [53].5 We use the English ColBERTv2 model released by Santhanam et al. [46] that was also trained with knowledge distillation6 and MonoT5 with mT5XXL released by Jeronymo et al. [20]7 as query-passage selector and scorer, respectively. Both the selector and the scorer received English MS MARCO queries and passages to generate training teacher scores. To support MTD training, we translated the MS MARCO passages with Sockeye v2 [11, 15] into the document languages. Student ColBERT-X models are fine-tuned from the XLM-RoBERTa large models [7] using 8 NVidia V100 GPUs (32GB memory) for 200,000 gradient steps with a mini-batch size of 8 entries each associated 4The collection is reconstructed by using the author-provided document IDs, which excludes a large portion of unjudged documents. Documents added in subsequent years are also excluded. Thus some judged relevant documents are also excluded. 5https://github.com/hltcoe/ColBERT-X 6https://huggingface.co/colbert-ir/colbertv2.0 7https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k with 6 passages on each GPU. We use AdamW optimizer with a 5 \u00d7 10\u22126 learning rate and half-precision floating points. Documents are split into 180 token passages with a stride of 90 before indexing. The number of resulting passages is reported in Table 1. We index the collection with PLAID-X using one residual bit. At search time, PLAID-X retrieves passages, and document scores are aggregated using MaxP [8]. For each query, we return the top 1000 documents for evaluation. To demonstrate MTD effectiveness, we report baseline ColBERT models that are trained differently: English ColBERT [46], ColBERTX with Multilingual Translate-Train (MTT) [29], and ColBERT-X with English Distillation (ED). Since English ColBERT does not accept text in other languages, we index the collection with documents machine-translated into English (marked \u201cDT\u201d in Table 2). 
ColBERTX models trained with MTT use the training triples released by MS MARCO with hyperparameters similar to the MTD ones except for the number of queries per batch per GPU is increased to 32. Finally, the English Distillation models are only exposed to English queries and passages during fine-tuning instead of the translated text. It performs a zero-shot language transfer at indexing and search time. We also compare our models to the recently published KDSPD [17], which is a language-aware MLIR model that encodes the entire text sequence as a single vector. To provide a broader context, we report sparse retrieval baselines PSQ-HMM [9, 51, 55] and BM25 with translated documents, which are two strong MLIR baselines reported in NeuCLIR 2023 [27]. We report nDCG@20, MAP, and Recall at 1000 for the CLEF03 and NeuCLIR collections. To enable comparison to Huang et al. [17], we report nDCG@10, MAP@100, and Recall@100 on the CLEF00-03 subset. To test statistical superiority between two systems, we use a one-sided paired t-test with 95% confidence on the per-topic metric values. When testing for statistical \u201cequivalence\u201d where the null hypothesis is that the effectiveness of the two systems differ, we use a paired Two One-sided T-Tests (TOST) [25, 47] with a threshold of 0.05 and 95% confidence. 5 RESULTS Table 2 summarizes our experiments. ColBERT-X models trained with MTD are more effective than those with MTT across all four evaluation collections, demonstrating a 5% (CLEF03 0.643 to 0.675 with mix passages) to 26% (NeuCLIR22 0.375 to 0.474 with round robin entries) improvement in nDCG@20 and 15% (CLEF03 0.451 to 0.520 with mix passages) to 47% (NeuCLIR22 0.236 to 0.347 with mix entries) in MAP. MTD-trained ColBERT-X models over documents in their native form are significantly more effective than translating all documents into English and searching with English ColBERT. Since the languages in the two CLEF collections are closer to English than those in NeuCLIR, the ColBERT-X model trained with English texts (Row 5) still provides reasonable effectiveness using (partial) zero-shot language transfer during inference. MTD yields identical effectiveness to ED based on the TOST equivalence test in the two CLEF collections by measuring MAP (Table 2). In contrast, NeuCLIR languages do not benefit from this phenomenon. Instead, training directly with text in document languages enhances both the general language modeling and retrieval ability of the student models. In NeuCLIR 2022 and 2023, student ColBERT-X models \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Eugene Yang, Dawn Lawrie, and James Mayfield Table 2: MLIR system effectiveness. Numbers in superscripts indicate the system of the row is statistically better than the systems in the superscript with 95% confidence by conducting a one-sided paired t-test. Numbers in subscripts indicate the system of the row is statistically identical within 0.05 in value to the systems in the subscripts with 95% confidence by conducting paired TOSTs. Bonferroni corrections are applied to both sets of statistical tests. 
CLEF00-03 Subset [17] CLEF 2003 NeuCLIR 2022 MLIR NeuCLIR 2023 MLIR Measure nDCG MAP Recall nDCG MAP Recall nDCG MAP Recall nDCG MAP Recall Rank Cutoff 10 100 100 20 1000 1000 20 1000 1000 20 1000 1000 Baselines (0) KD-SPD[17] 0.416 0.220 0.469 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 (1) PSQ-HMM 0.5290 0.3390 0.6170 0.445 0.282 0.711 0.315 0.193 0.594 0.289 0.225 0.693 (2) DT \u00bb BM25 0.5680 0.38801 0.66201 0.6361 0.4531 0.8571 0.338 0.215 0.633 0.316 0.275 0.756 (3) DT \u00bb ColBERT 0.60901 4 0.42201 4 0.70001 4 0.6691 0.4971 0.88914 0.4031 0.28512 0.708124 0.3611 0.2981 4 0.7861 (4) ColBERT-X MTT 0.61301 3 0.41101 3 0.68701 3 0.6431 0.4511 0.8271 0.375 0.236 0.612 0.330 0.2811 3 0.760 (5) ColBERT-X ED 0.638012 8 0.45701234 678 0.73201234 678 0.69914 8 0.530124 678 0.920124 78 0.393 0.263 0.68714 0.3571 0.3171 0.827124 ColBERT-X MTD with Different Mixing Strategies (6) Mix Passages 0.66601234 78 0.47101234 578 0.74701234 578 0.6751 0.52014 57 0.90114 7 0.44412 0.3401245 78 0.7621245 78 0.4041245 78 0.36712345 78 0.86812345 78 (7) Mix Entries 0.674012345 68 0.46901234 568 0.74501234 568 0.6861 0.52214 56 0.911124 568 0.4611245 0.34712345 68 0.76812345 68 0.397124 68 0.37212345 68 0.87712345 678 (8) Round Robin Entries 0.65601234 567 0.47601234 567 0.751012345 567 0.69912 5 0.5351234 5 0.9221234 57 0.47412345 0.34112345 67 0.7611245 67 0.388124 67 0.34712345 67 0.8561234 67 Table 3: nDCG@20 on training with more languages Training Languages Evaluation Collection CLEF03 NeuCLIR Both Mix Passages CLEF 2003 0.675 0.688 0.694 NeuCLIR 2022 MLIR 0.437 0.444 0.431 NeuCLIR 2023 MLIR 0.377 0.404 0.406 Mix Entries CLEF 2003 0.686 0.679 0.680 NeuCLIR 2022 MLIR 0.424 0.461 0.445 NeuCLIR 2023 MLIR 0.359 0.397 0.379 trained with MTD (Rows 6 to 8) are 9% (NeuCLIR23 0.317 to 0.347 with round robin entries) to 32% (NeuCLIR22 0.263 to 0.347 with mix entries) more effective than ED (Row 5) by measuring MAP. 5.1 Ablation on Language Mixing Strategies Since the TOST equivalence tests show that the three mixing strategies demonstrate statistically similar MAP and Recall for all collections except for a few cases in CLEF 2003 (CLEF 2003 may be an outlier because it has English documents, a known source of bias in MLIR [29]). We conclude that MTD is robust to how languages are mixed during training as long as multiple languages are present in each training mini-batch [29]. Such robustness provides operational flexibility to practitioners creating MLIR models. Since passage translation might not be available for all languages, mixing passages allows selecting passages only from a subset of languages. Mixing entries also allows training entries to be filtered for specific languages if relevance is known to drop after translation. When evaluating with nDCG@20, the differences are larger but less consistent. For the two CLEF collections and NeuCLIR 2022, topics were developed for a single language before obtaining relevance judgments across all languages. These topics may not be well-attested in all document languages, resulting in some CLIR topics with few relevant documents. For these three collections, models trained with mixed CLIR tasks (mix and round-robin entries) are more effective at the top of the ranking. High variation among topics leads to inconclusive statistical significance results, suggesting opportunities for result fusion. 
NeuCLIR 2023 topics were developed bilingually, so topics are not socially or culturally tied to a single language; this leads to statistically equivalent nDCG@20 results. 5.2 Training Language Ablation Finally, we explore training with languages beyond the ones in the document collection. Table 3 shows MTD-trained models for CLEF 2003, NeuCLIR, and both on each collection. Due to GPU memory constraints, we exclude the round-robin strategy from this ablation. We observe that models trained with the mix passages strategy are more robust than the mix-entries variants when training on CLEF and evaluating on NeuCLIR and vice versa. This shows smaller degradation when facing language mismatch between training and inference. Surprisingly, training on NeuCLIR languages with the mix passage strategy yields numerically higher nDCG@20 than training on CLEF (0.675 to 0.688). When training both CLEF and NeuCLIR languages, effectiveness is generally worse than only training on the evaluation languages. This trend suggests the models might be facing capability limits in the neural model, or picking up artifacts from the quality differences in the translation. This observation demands more experimentation on MLIR dual-encoder models, which we leave for future work. 6 CONCLUSION We propose Multilingual Translate-Distill (MTD) for training MLIR dual-encoder models. We demonstrated that ColBERT-X models trained with the proposed MTD are more effective than using previously proposed MLIR training techniques on four MLIR collections. By conducting statistical equivalence tests, we showed that MTD is robust to the mixing strategies of the languages in the training mini-batch. \fDistillation for Multilingual Information Retrieval SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA"
}
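As a reading aid for the MTD training procedure described in the record above (Secs. 3.1-3.2), the following sketch shows one way the language mixing strategies and the KL-divergence distillation loss could be implemented. It is our illustration, not the authors' PLAID-X code (https://github.com/hltcoe/colbert-x); the language codes, tensor names, and shapes are assumptions.

```python
# Sketch of MTD mini-batch language mixing and the KL distillation loss.
import random
import torch
import torch.nn.functional as F

LANGS = ["zho", "fas", "rus"]  # illustrative document languages

def mix_passages(passage_ids):
    # Mix Passages: each passage independently gets a random language.
    return [(pid, random.choice(LANGS)) for pid in passage_ids]

def mix_entries(passage_ids):
    # Mix Entries: one random language shared by all passages of the entry.
    lang = random.choice(LANGS)
    return [(pid, lang) for pid in passage_ids]

def distill_loss(student_scores, teacher_scores):
    # KL divergence between teacher and student score distributions over the
    # m sampled (translated) passages of each query; both tensors are [n, m].
    return F.kl_div(
        F.log_softmax(student_scores, dim=-1),
        F.softmax(teacher_scores, dim=-1),
        reduction="batchmean",
    )

# Toy check: 2 queries with 4 sampled passages each.
student = torch.randn(2, 4, requires_grad=True)
teacher = torch.randn(2, 4)
loss = distill_loss(student, teacher)
loss.backward()
print(mix_passages([101, 102, 103]), loss.item())
```

Round Robin Entries would instead repeat each query once per language within the batch, which is why the paper notes it multiplies the per-batch memory cost.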
title_10K/test_title_short_2405.00978v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.00978v1",
"title": "Language Fairness in Multilingual Information Retrieval",
"abstract": "Multilingual information retrieval (MLIR) considers the problem of ranking\ndocuments in several languages for a query expressed in a language that may\ndiffer from any of those languages. Recent work has observed that approaches\nsuch as combining ranked lists representing a single document language each or\nusing multilingual pretrained language models demonstrate a preference for one\nlanguage over others. This results in systematic unfair treatment of documents\nin different languages. This work proposes a language fairness metric to\nevaluate whether documents across different languages are fairly ranked through\nstatistical equivalence testing using the Kruskal-Wallis test. In contrast to\nmost prior work in group fairness, we do not consider any language to be an\nunprotected group. Thus our proposed measure, PEER (Probability of\nEqualExpected Rank), is the first fairness metric specifically designed to\ncapture the language fairness of MLIR systems. We demonstrate the behavior of\nPEER on artificial ranked lists. We also evaluate real MLIR systems on two\npublicly available benchmarks and show that the PEER scores align with prior\nanalytical findings on MLIR fairness. Our implementation is compatible with\nir-measures and is available at http://github.com/hltcoe/peer_measure.",
"authors": "Eugene Yang, Thomas J\u00e4nich, James Mayfield, Dawn Lawrie",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.IR",
"cats": [
"cs.IR",
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Language Fairness in Multilingual Information Retrieval",
"main_content": "INTRODUCTION Multilingual information retrieval searches a multilingual document collection and creates a unified ranked list for a given query [4\u2013 6, 16, 21, 25]. In tasks like navigational search [7], known item retrieval [1, 22], and retrieval for question-answering [10, 19], the user only needs a handful or just one relevant document to satisfy Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0431-4/24/07. https://doi.org/10.1145/3626772.3657943 the information need, and the language of that document does not matter. In contrast, users interested in gaining a broad problem understanding prefer seeing how coverage varies across languages. Analysis of retrieval results has shown that MLIR systems often show a preference for certain languages [16, 21]; we call this the MLIR Fairness problem. Such preference can bias a user\u2019s understanding of the topic [29, 31]. This problem is particularly apparent in models built on top of multilingual pretrained language models (mPLM) [21], which inherit bias from the text used to build them [11, 17]. This paper presents a new metric to allow quantitative study of MLIR Fairness. Prior work in fairness evaluation focuses on either individual or group fairness [34]. Individual fairness ensures that similar documents receive similar treatment; this often corresponds to a Lipschitz condition [3, 15]. Group fairness ensures that a protected group receives treatment at least as favorable as unprotected groups [28, 32, 33]. Group fairness metrics designed for protecting specific groups are not directly applicable to the MLIR fairness problem because the latter has no protected language; we want all languages to be treated equally in a ranked list. To operationalize our notion of MLIR fairness, we propose the Probability of Equal Expected Rank (PEER) metric. By adopting the Kruskal-Wallis \ud835\udc3btest, which is a rank-based, non-parametric variance analysis for multiple groups, we measure the probability that documents of a given relevance level for a query are expected to rank at the same position irrespective of language. We compare PEER to previous fairness metrics, and show its effectiveness on synthetic patterned data, on synthetic assignment of language to real retrieval ranked lists, and on system output for the CLEF 2003 and NeuCLIR 2022 MLIR benchmarks. 2 RELATED WORK There is no universally accepted definition of fairness. This paper views languages as groups within a ranking, and characterizes MLIR Fairness as a group fairness problem. Existing group fairness metrics fall into two categories: those that assess fairness independent of relevance, and those that take relevance into account. Ranked group fairness, based on statistical parity proposed by Zehlike et al. [32, 33], demands equitable representation of protected groups in ranking without explicitly considering relevance through statistical testing. Attention Weighted Ranked Fairness (AWRF), introduced by Sapiezynski et al. 
[28], compares group exposure at certain rank cutoffs against a pre-defined target distribution. It uses the same distribution for both relevant and nonrelevant documents. This means, for example, that if utility is defined as finding the most relevant documents, a system can gain utility by including more documents from the language with the most relevant documents early in the rankings. In doing so, more nonrelevant documents from that language are placed above relevant documents from the other languages. From a fairness perspective, this should be penalized as unfair. Our proposed metric does not rely on a target distribution, so it does not suffer from this utility/fairness tradeoff. Among metrics that incorporate relevance, Singh and Joachims [30] introduced the Disparate Treatment Ratio, which measures the equality of exposure of two groups. This metric is not well suited to MLIR, though, since it handles only two groups. Adjacent to fairness, Clarke et al. [13] extended Normalized Discounted Cumulative Gain (nDCG) to incorporate diversity. Their metric, $\alpha$-nDCG, assigns document weights based on both relevance and diversity. Diversity, though, applies to user utility, where fairness applies to documents (in our case, the languages of the returned documents) [8]. We nonetheless report $\alpha$-nDCG to contextualize our results. Related work on fairness over sequences of rankings [14, 24] requires both more evidence and distributional assumptions compared to fairness of a specific ranking. While similar, our method assumes the position of each document is a random variable. 3 PROBABILITY OF EQUAL EXPECTED RANK In this section, we describe the proposed measure \u2013 PEER: Probability of Equal Expected Rank. We first introduce our notation and the fairness principle, followed by forming the statistical hypothesis of the system\u2019s fairness across document languages. Finally, we define PEER as the $p$-value of the statistical test. Let $d_i \in \mathcal{D}$ be the $i$-th document in the collection $\mathcal{D}$ of size $N$. We define the language that $d_i$ is written in as $l_{d_i} \in \{L_1, \dots, L_M\}$. For convenience, we define the set $L_j = \{d_i \mid l_{d_i} = L_j\}$ to be all documents in language $L_j$. For a given query $q \in \mathcal{Q}$, we define the degree of document $d_i$ being relevant (or the relevance grade) to the query $q$ as $y_i^q \in \{R(0), R(1), \dots, R(K)\}$, where $R(0)$ indicates not relevant and $R(K)$ is the most relevant level, i.e., graded relevance with $K$ levels. Similarly, we define the set $R^{(q,k)} = \{d_i \mid y_i^q = R(k)\}$ to be all documents at the $R(k)$ relevance level. Furthermore, we define the documents in $L_j$ with relevance level $R(k)$ for a query $q$ as $D_j^{(q,k)} = L_j \cap R^{(q,k)}$. 
In this work, we consider a ranking function $\pi: \mathcal{D} \times \mathcal{Q} \to [1 \dots N]$ that produces the rank $r_i^q \in [1 \dots N]$. 3.1 Fairness through Hypothesis Testing We define MLIR fairness using the following principle: Documents in different languages with the same relevance level, in expectation, should be presented at the same rank. We measure the satisfaction level of this principle by treating it as a testable hypothesis. For relevance level $R(k)$, assuming $r_i^q$ is a random variable over $[1 \dots N]$, we implement the principle using the null hypothesis: $$H_0: \mathbb{E}_{d_i \in D_a^{(q,k)}}[r_i^q] = \mathbb{E}_{d_j \in D_b^{(q,k)}}[r_j^q] \quad \forall\, L_a \neq L_b, \qquad (1)$$ which is the equivalence of the expected rank among documents in each language with the given relevance level and given query $q$. Such null hypotheses can be tested with the Kruskal-Wallis $H$ test (K-W test) [18]. The null hypothesis of this test is that all groups have the same mean (i.e., equivalent mean ranks). The K-W test is like a non-parametric version of the ANOVA F-test, which tests whether each group (languages in our case) comes from the same distribution. Since the K-W test does not assume any particular underlying distribution, it uses the ranking of the data points to make this determination. Unlike prior work such as Zehlike et al. [33] that assumes a binomial distribution for each document over the groups, not assuming the distribution of the query-document scores used for ranking and instead operating directly on ranks yields a robust statistical test. Conceptually, the test statistic $H$ for the K-W test is the ratio between the sum of group rank variance and the total rank variance. The variance ratio obeys a chi-squared distribution; we use its survival function to derive the $p$-value. Specifically, we can express the test statistic $H$ as $$H = \left(|R^{(q,k)}| - 1\right) \frac{\sum_{j=1}^{M} |D_j^{(q,k)}|\, \big(\bar{r}_j^{q,k} - \bar{r}\big)^2}{\sum_{j=1}^{M} \sum_{d_i \in D_j^{(q,k)}} \big(r_i^q - \bar{r}\big)^2}, \qquad (2)$$ where $$\bar{r}_j^{q,k} = \frac{1}{|D_j^{(q,k)}|} \sum_{d_i \in D_j^{(q,k)}} r_i^q \qquad (3)$$ and $$\bar{r} = \frac{1}{|R^{(q,k)}|} \sum_{j=1}^{M} \sum_{d_i \in D_j^{(q,k)}} r_i^q \qquad (4)$$ for a given query $q$ and relevance level $R(k)$. Recall that $R^{(q,k)}$ and $D_j^{(q,k)}$ are sets. For each query $q$ and given relevance level, we report the $p$-value of the K-W test, which is the Probability of documents in all languages with given relevance level having Equal Expected Rank, by comparing the $H$ statistic against a chi-squared distribution with $M-1$ degrees of freedom. The $p$-value provides us with the probability that documents in different languages are ranked fairly within a given relevance level. We denote the $p$-value for a given query $q$ and a relevance level $R(k)$ as $p^{(q,k)}$. Our fairness notion is similar to the one proposed by Diaz et al. [14]. However, we operationalize the principle by treating each document as a sample from a distribution given the language and relevance level, instead of assuming the entire ranked list is a sample from all possible document permutations. 3.2 Fairness at Each Relevance Level The impact of unfairly ranking documents in different languages may differ at each relevance level. Such differences can be linked to a specific user model or application. For example, for an analyst actively seeking information for which each language provides different aspects, ranking nonrelevant documents of a particular language at the top does not degrade fairness; finding disproportionately fewer relevant documents in a certain language, on the other hand, may yield biased analytical conclusions. In contrast, for a user seeking answers to a specific question who views the language as just the content carrier, reading more nonrelevant documents from a language may degrade that language\u2019s credibility, leading the user eventually to ignore all content in that language. In this case, we do consider the language fairness of the ranking of nonrelevant documents; in the former case we do not. To accommodate different user models, we define the PEER score as a linear combination of the $p$-value of each relevance level $R(k)$. [Figure 1: Ranked lists with different fairness patterns between two languages and binary relevance. Panels: (a) Shifting, (b) Moving Single, (c) Interleaving, (d) Increasing Length, (e) Score Sampling w/ Non-Relevant Docs = 1.0, (f) Score Sampling w/ Relevant Docs = 1.0; curves shown separately for nonrelevant and relevant documents.] Let $w^{(k)} \in [0, 1]$ be the weights with $\sum_{k=1}^{K} w^{(k)} = 1$; the overall weighted PEER for query $q$ is $$PEER(q) = \sum_{k=1}^{K} w^{(k)} p^{(q,k)}. \qquad (5)$$ 3.3 Rank Cutoff and Aggregation While a ranking function $\pi$ ranks each document in collection $\mathcal{D}$, in practice, a user only examines results up to a certain cutoff. Some IR effectiveness measurements consider elaborate browsing models, such as exponential-decreasing attention in Rank-biased Precision (RBP) [23] or patience-based attention in Expected Reciprocal Rank (ERR) [9]; user behavior, though, is perpendicular to language fairness, so we consider only a simple cutoff model. With a rank cutoff $X$, we treat only the top-$X$ documents as the sampling universe for the K-W test. However, since disproportionately omitting documents of a certain relevance level is still considered unfair, before conducting the hypothesis test, we concatenate unretrieved documents (or those ranked below the cutoff) at that relevance level to the ranked list, assigning them a tied rank of $X+1$. This is optimistic, since these documents might rank lower in the actual ranked list. However, this provides a robust penalty for any ranking model that provides only a truncated ranked list. We define the $p$-value as 1.0 when no document is retrieved at a given relevance level in spite of their presence in the collection; from the user perspective, no document at that level is presented, so it is fair (albeit ineffective) across languages. We denote the weighted $p$-value calculated on the top-$X$ documents as $PEER(q)@X$. Overall, we report the average weighted PEER over all queries at rank $X$, i.e., $PEER@X = |\mathcal{Q}|^{-1} \sum_{q \in \mathcal{Q}} PEER(q)@X$. Since we treat each document as a random variable of position in a ranked list, what we are measuring is how likely a system is fair between languages instead of how fair each ranked list is. A higher PEER score indicates that the measured MLIR system is more likely to place documents written in different languages but with the same relevance level at similar ranks. 4 EXPERIMENTS AND RESULTS 4.1 Synthetic Data To demonstrate PEER behavior, we create ranked lists of two languages and binary relevance with four methods, each creating lists from very unfair to very fair. Results are illustrated in Figure 1. Shifting starts with all documents in one language ranking higher than those in the other, and slowly interleaves them until the two are alternating. Figure 1(a) shows for fifty documents that when no documents are interleaved (left), fairness is low, with PEER close to 0. As alternation increases, the PEER score increases. In Moving Single, the ranked list consists entirely of one language except for one document. That single document moves from the top (unfair) to the middle of the ranking (fair). In Figure 1(b) with 99 majority-language documents, the PEER scores increase as the singleton moves from the top to the middle. Figure 1(c) shows Interleaving, in which the languages alternate and the number of retrieved documents slowly increases. With odd lengths, the highest and the lowest ranked documents are in the same language, giving them the same average and 1.0 PEER scores. With even lengths, one language has a slightly higher rank than the other. The difference shrinks with longer ranked lists, resulting in increased PEER scores. In Increasing Length, 100 retrieved documents comprise first an alternating section followed by all documents in a single language, and the size of the alternating section is gradually increased. 
This is similar to shifting, but with overlapping languages at the top instead of in the middle of the rank list. At the left of Figure 1(d) only the document at rank 1 is minority language, followed by minority language at ranks 1 and 3, and so on. The right of the graph is identical to the right of Figure 1(a). These four patterns demonstrate that PEER scores match our intuition of fairness between languages. The next section evaluates real MLIR retrieval systems on two MLIR evaluation collections. 4.2 Assigning Languages to a Real Ranked List We used NeuCLIR\u201922 runs to create new synthetic runs with relevant documents in the same positions, but with languages assigned to the relevant documents either fairly or unfairly. We randomly \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Eugene Yang, Thomas J\u00e4nich, James Mayfield, and Dawn Lawrie Table 1: Effectiveness and fairness results. Both AWRF and PEER exclude nonrelevant documents. They are removed in AWRF calculation. For PEER, the importance weights of nonrelevant documents are set to 0. Collection Rank Cutoff 20 1000 Measure nDCG \ud835\udefc-nDCG AWRF PEER Recall \ud835\udefc-nDCG AWRF PEER CLEF 2003 QT \u00bb BM25 0.473 0.444 0.513 0.239 0.743 0.579 0.788 0.202 DT \u00bb BM25 0.636 0.640 0.623 0.243 0.857 0.747 0.895 0.299 DT \u00bb ColBERT 0.669 0.674 0.658 0.293 0.889 0.768 0.904 0.328 ColBERT-X ET 0.591 0.592 0.610 0.215 0.802 0.695 0.845 0.327 ColBERT-X MTT 0.643 0.658 0.649 0.318 0.827 0.748 0.860 0.362 NeuCLIR 2022 QT \u00bb BM25 0.305 0.447 0.537 0.453 0.557 0.569 0.752 0.383 DT \u00bb BM25 0.338 0.448 0.542 0.497 0.633 0.580 0.809 0.421 DT \u00bb ColBERT 0.403 0.539 0.635 0.449 0.708 0.652 0.842 0.426 ColBERT-X ET 0.299 0.447 0.578 0.458 0.487 0.561 0.745 0.421 ColBERT-X MTT 0.375 0.545 0.621 0.425 0.612 0.644 0.786 0.386 selected how many relevant documents would be assigned to each language, and created that many language labels. For each label we drew from a normal distribution with either the same mean for the two languages (fair), or different means (unfair). We assigned a drawn number to each label, sorted the labels by that number, and assigned the labels to the relevant documents in the resulting order. We did the same for the nonrelevant documents, ensuring that each language was assigned at least 45% of those documents. Figures 1(e) and (f) vary the sampling mean of the second language\u2019s relevant and nonrelevant documents, respectively, while keeping a first language sampling mean of 1.0. The figures show that PEER captures fairness independently for each relevance level. Since there are far fewer relevant documents, the evidence for fairness is also weaker, resulting in slower decay when changing the sampling mean for relevant documents. 4.3 Real MLIR Systems We evaluate five MLIR systems, including query translation (QT) and document translation (DT) with BM25, DT with English ColBERT [27], and ColBERT-X models trained with English triples (ET) and multilingual translate-train (MTT) [21], on CLEF 2003 (German, Spanish, French, and English documents with English queries) and NeuCLIR 2022 (Chinese, Persian, and Russian documents with English queries). For QT, English queries are translated into each document language and monolingual search results from each language are fused by score. We report \ud835\udefc-nDCG, AWRF (with number of relevant documents as target distribution), and the proposed PEER with rank cutoffs at 20 and 1000. 
Along with nDCG@20 and Recall@1000, we summarize the results in Table 1. Logically, merging ranked lists from each monolingual BM25 search with translated queries purely by scores is inviting unfair treatment, as scores from each language are incompatible with different query lengths and collection statistics [26]. We observed this trend in both PEER and AWRF, while \ud835\udefc-nDCG strongly correlates with the effectiveness scores and does not distinguish fairness. Neural MLIR models trained with only English text and transferred zero-shot to MLIR with European languages exhibit a strong language bias compared to those trained with document languages [16, 21]. PEER exhibits a similar trend in CLEF 2003, showing ColBERT-X ET is less fair than the MTT counterpart, while AWRF is less sensitive. Lawrie et al. [21] show that preference for English documents in the ET model causes this unfair treatment; this suggests that MLIR tasks without the training language (English) in the document collection would not suffer from such discrepancy. In fact, both PEER and AWRF indicate that MTT model is less fair among the three languages in NeuCLIR 2022, which is likely caused by the quality differences in machine translation [20]. AWRF and PEER disagree on the comparison between English ColBERT on translated documents (DT) and ColBERT-X models. While AWRF suggests DT \u00bb ColBERT 1 is fairer than ColBERT-X MTT in CLEF03, DT creates a larger difference among languages [21]. PEER, in contrast, aligns with prior analysis, giving a lower score to DT \u00bb ColBERT. According to Huang et al. [16], QT \u00bb BM25 has a similar language bias compared to mDPR [12], which was trained with English MS MARCO [2]. PEER suggests a similar conclusion between QT \u00bb BM25 and ColBERT-X ET, which AWRF assigns a larger difference between the two with a rank cutoff of 20. With a rank cutoff of 1000, AWRF strongly correlates with recall (Pearson \ud835\udc5f= 0.93 over both collections), while PEER does not (Pearson \ud835\udc5f= \u22120.55). The 0.904 AWRF value (range 0-1) of DT \u00bb ColBERT on CLEF03 suggests a fair system, while the ranked list does not. This strong relationship shows that AWRF, with target distribution being the ratio of relevant documents, is indeed measuring recall instead of fairness. While it is an artifact of the choice of target distribution, the need to define a target distribution reduces the robustness of AWRF in measuring MLIR Fairness. 5 SUMMARY We propose measuring the Probability of Equal Expected Rank (PEER) for MLIR fairness. As PEER measures the weighted \ud835\udc5d-value of a non-parametric group hypothesis test, it neither requires a target distribution nor makes distributional assumptions; this makes the metric robust. Through comparison to prior analytical work in MLIR Fairness, we conclude that PEER captures the differences and nuances between systems better than other fairness metrics. 1The (ET)+ITD setting in Lawrie et al. [21]. \fLanguage Fairness in Multilingual Information Retrieval SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA"
}
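To illustrate how the per-query PEER score defined above (Eqs. 1-5) can be computed, here is a small sketch built on SciPy's Kruskal-Wallis test. The official implementation is at http://github.com/hltcoe/peer_measure; the data layout, language codes, and numbers below are our own assumptions.

```python
# Illustrative per-query PEER: weighted Kruskal-Wallis p-values across languages.
from scipy.stats import kruskal

def peer_for_query(ranks_by_level, weights):
    """ranks_by_level: {relevance_level: {language: [ranks of its docs]}}.
    weights: {relevance_level: w_k} with the w_k summing to 1 (Eq. 5)."""
    score = 0.0
    for level, per_lang in ranks_by_level.items():
        groups = [g for g in per_lang.values() if g]
        if len(groups) < 2:
            p_value = 1.0  # one (or no) language retrieved at this level: treated as fair
        else:
            _, p_value = kruskal(*groups)  # H test across the language groups
        score += weights.get(level, 0.0) * p_value
    return score

# Two languages, binary relevance: relevant docs of one language rank far higher.
ranks = {1: {"zho": [1, 2, 3], "rus": [40, 45, 50]},
         0: {"zho": [10, 12], "rus": [11, 13]}}
print(peer_for_query(ranks, {1: 1.0, 0: 0.0}))  # small value -> likely unfair
```

Since the K-W test is rank-based (kruskal ranks its inputs internally), passing rank positions directly is consistent with the paper's formulation; the Section 3.3 cutoff handling (tied rank X+1 for unretrieved documents) would be applied when building ranks_by_level.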
title_10K/test_title_short_2405.00981v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.00981v1",
"title": "Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation",
"abstract": "Designing preference elicitation (PE) methodologies that can quickly\nascertain a user's top item preferences in a cold-start setting is a key\nchallenge for building effective and personalized conversational recommendation\n(ConvRec) systems. While large language models (LLMs) constitute a novel\ntechnology that enables fully natural language (NL) PE dialogues, we\nhypothesize that monolithic LLM NL-PE approaches lack the multi-turn,\ndecision-theoretic reasoning required to effectively balance the NL exploration\nand exploitation of user preferences towards an arbitrary item set. In\ncontrast, traditional Bayesian optimization PE methods define theoretically\noptimal PE strategies, but fail to use NL item descriptions or generate NL\nqueries, unrealistically assuming users can express preferences with direct\nitem ratings and comparisons. To overcome the limitations of both approaches,\nwe formulate NL-PE in a Bayesian Optimization (BO) framework that seeks to\ngenerate NL queries which actively elicit natural language feedback to reduce\nuncertainty over item utilities to identify the best recommendation. We\ndemonstrate our framework in a novel NL-PE algorithm, PEBOL, which uses Natural\nLanguage Inference (NLI) between user preference utterances and NL item\ndescriptions to maintain preference beliefs and BO strategies such as Thompson\nSampling (TS) and Upper Confidence Bound (UCB) to guide LLM query generation.\nWe numerically evaluate our methods in controlled experiments, finding that\nPEBOL achieves up to 131% improvement in MAP@10 after 10 turns of cold start\nNL-PE dialogue compared to monolithic GPT-3.5, despite relying on a much\nsmaller 400M parameter NLI model for preference inference.",
"authors": "David Eric Austin, Anton Korikov, Armin Toroghi, Scott Sanner",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation",
"main_content": "INTRODUCTION Personalized conversational recommendation (ConvRec) systems require effective natural language (NL) preference elicitation (PE) strategies that can efficiently learn a user\u2019s top item preferences in cold start settings, ideally requiring only an arbitrary set of NL item descriptions. While the advent of large language models (LLMs) has introduced the technology to facilitate NL-PE conversations [13, 19] we conjecture that monolithic LLMs have limited abilities to strategically conduct active, multi-turn NL-PE dialogues about a set of arbitrary items. Specifically, we hypothesize that LLMs lack the sequential decision-theoretic reasoning to interactively generate queries that avoid over-exploitation or over-exploration of user-item preferences, thus risking over-focusing on already revealed item preferences or wastefully exploring preferences over low-value items. Further challenges faced by monolithic LLM NL-PE approaches include the need to jointly reason over large, potentially unseen sets of item descriptions, and the lack of control and interpretability in system behaviour even after prompt engineering or fine-tuning [24]. In contrast, conventional PE algorithms [17, 18, 23, 36, 37], including Bayesian optimization methods [2, 5, 11, 28, 33], establish formal decision-theoretic policies such as Thompson Sampling (TS) and Upper Confidence Bound (UCB) [15] to balance exploration and exploitation with the goal of quickly identifying the user\u2019s most preferred items. However, these techniques typically assume a user can express preferences via direct item ratings or comparisons \u2013 an unrealistic expectation when users are unfamiliar with most items [1]. While recent work has extended Bayesian PE to a fixed set of template-based queries over pre-defined keyphrases [33], no arXiv:2405.00981v1 [cs.AI] 2 May 2024 \fConference acronym \u2019XX, June 03\u201305, 2018, Woodstock, NY Austin and Korikov, et al. Figure 1: PEBOL\u2019s belief updates over a cold-start user\u2019s item utilities during three turns of NL dialogue. Bayesian preference beliefs not only facilitate recommendation, but also enable Bayesian optimization policies to guide LLM query generation, avoiding over-exploration (asking about clearly low-value items) and over-exploitation (over-focusing on known preferences). existing work extends Bayesian methodologies to generative NL-PE over a set of generic NL item descriptions. In this paper, we make the following contributions: \u2022 We introduce the first Bayesian optimization formalization of NL-PE for arbitrary NL dialogue over a generic set of NL item descriptions \u2013 establishing a framework for research on augmenting LLMs with decision-theoretic reasoning. \u2022 We present PEBOL (Preference Elicitation with Bayesian Optimization augmented LLMs), a novel NL-PE algorithm which 1) infers item preferences via Natural Language Inference (NLI) [34] between dialogue utterances and item descriptions to maintain Bayesian preference beliefs and 2) introduces LLM-based acquisition functions, where NL query generation is guided by decision-theoretic strategies such as TS and UCB over the preference beliefs. \u2022 We numerically evaluate PEBOL against monolithic GPT-3.5 NL-PE methods via controlled NL-PE dialogue experiments over multiple NL item datasets and levels of user noise. 
\u2022 We observe that PEBOL achieves up to 131% improvement in MAP@10 after 10 turns compared to GPT-3.5, despite using a much smaller 400M NLI model for preference inference. 2 BACKGROUND AND RELATED WORK 2.1 Bayesian Optimization Given an objective function $f: \mathcal{X} \to \mathbb{R}$, (standard) optimization systematically searches for a point $x^* \in \mathcal{X}$ that maximizes $f$ (we take the maximization direction since this paper searches for items with maximum utility for a person). Bayesian optimization focuses on settings where $f$ is a black-box function which does not provide gradient information and cannot be evaluated exactly \u2013 rather, $f$ must be evaluated using indirect or noisy observations which are expensive to obtain [9, 26]. To address these challenges, Bayesian optimization maintains probabilistic beliefs over $f(x)$ and its observations to guide an uncertainty-aware optimization policy which decides where to next observe $f(x)$. Bayesian optimization begins with a prior $p(f)$ which represents the beliefs about $f$ before any observations are made. Letting $y_i$ represent a noisy or indirect observation of $f(x_i)$, and collecting a sequence of observations into a dataset $\mathcal{D} = (\mathbf{x}, \mathbf{y})$, an observation model defines the likelihood $p(\mathcal{D}|f)$. We then use the observed data and Bayes theorem to update our beliefs and obtain the posterior $$p(f|\mathcal{D}) = \frac{p(f)\, p(\mathcal{D}|f)}{p(\mathcal{D})}. \qquad (1)$$ This posterior informs an acquisition function $\gamma(x|\mathcal{D})$ which determines where to next observe $f(x)$ in a way that balances exploitation (focusing observations where $f$ is likely near its maximum) with exploration (probing areas where $f$ has high uncertainty). 2.2 Preference Elicitation PE has witnessed decades of research, and includes approaches based on Bayesian optimization (e.g., [3, 7, 10, 12, 16]), Bandits (e.g., [5, 20, 21, 37]), constrained optimization [25], and POMDPs [2]. In the standard PE setting, a user is assumed to have some hidden utilities $\mathbf{u} = [u_1, \dots, u_N]$ over a set $\mathcal{I}$ of $N$ items, where item $i$ is preferred to item $j$ if $u_i > u_j$. The goal of PE is typically to search for an item $i^* \in \arg\max_i u_i$ that maximizes user utility in a minimal number of PE queries, which most often ask a user to express item preferences as item ratings (e.g., [3, 5, 20, 21, 37]) or relative preferences between item pairs or sets (e.g., [2, 7, 10, 11, 13, 28]). An alternative form of PE asks users to express preferences over predefined item features, also through rating- or comparison-based queries [18, 23, 36]. Central to the above PE methods are query selection strategies that balance the exploration and exploitation of user preferences, with TS and UCB algorithms (cf. Sec. 4.2) often exhibiting strong performance [5, 23, 33, 36, 37]. However, none of these methods are able to interact with users through NL dialogue. 2.3 Language-Based Preference Elicitation Yang et al. 
2.3 Language-Based Preference Elicitation

Yang et al. [33] introduce Bayesian PE strategies using TS and UCB for keyphrase rating queries, where keyphrases are first mined from NL item reviews and then co-embedded with user-item preferences in a recommendation system. Handa et al. [13] propose using LLMs to interface with a conventional Bayesian PE system, suggesting a preprocessing step to extract features from NL descriptions and a verbalization step to fluidly express pairwise item comparison queries. Li et al. [19] prompt an LLM to generate PE queries for some specific domain (e.g., news content, morals), observe user responses, and evaluate LLM relevance predictions for a single item. While these works make progress towards NL-PE, they do not study how LLM query generation can strategically explore user preferences towards an arbitrary item set outside the realm of item-based or category-based feedback.

[Figure 2: The PEBOL NL-PE algorithm, which maintains a Bayesian belief state over a user's item preferences given an arbitrary set of NL item descriptions $\mathbf{x}$. This belief is used by a decision-theoretic policy to balance the exploration and exploitation of preferences by strategically selecting an item description $x_{i_t}$ as the basis for LLM query generation. Belief updates are computed through Bayesian inference with NLI entailment scores between item descriptions and query-response pairs.]

2.4 Conversational Recommendation

Recent work on ConvRec uses language models (earlier systems, e.g., [4, 22], use relatively small RNN-based language models) to facilitate NL dialogue while integrating calls to a recommender module which generates item recommendations based on user-item interaction history [4, 22, 30, 32]. He et al. [14] report that on common datasets, zero-shot GPT-3.5/4 outperforms these ConvRec methods, which generally use older language models and require user-item interaction history for their recommendation modules.

2.5 Natural Language Inference

Binary Natural Language Inference (NLI) [34] models predict the likelihood that one span of text, called the premise, entails (i.e., implies) a second span, called the hypothesis. For example, an effective NLI model should predict a high likelihood that the premise "I want to watch Iron Man" entails the hypothesis "I want to watch a superhero movie". As illustrated by this example, the hypothesis typically must be more general than the premise. NLI models are trained by fine-tuning encoder-only LLMs on NLI datasets [6, 27, 31], which typically consist of short text spans for the premise and hypothesis – thus enabling relatively efficient performance on similar tasks with a fairly small number of LLM parameters.

3 PROBLEM DEFINITION

We now present a Bayesian optimization formulation of NL-PE. The goal of NL-PE is to facilitate a NL dialogue which efficiently discovers a user's most preferred items out of a set of $N$ items. Each item $i \in \mathcal{I}$ has a NL description $x_i$, which might be a title, long-form description, or even a sequence of reviews, with the item set $\mathcal{I}$ collectively represented by $\mathbf{x} \in \mathcal{X}$ with $\mathbf{x} = [x_1, \ldots, x_N]$. We assume the user has some (unknown) utility function $f: \mathcal{X} \to \mathbb{R}$
establishing hidden utilities $\mathbf{u} = f(\mathbf{x})$, so that item $i$ is preferred to item $j$ if $u_i > u_j$. Our goal is to find the most preferred item(s): $i^* \in \arg\max_{i \in \mathcal{I}} u_i$ (2). In contrast to standard Bayesian PE formalisms (cf. Sec. 2.2), we do not assume that the user can effectively convey direct item-level preferences by either: 1) providing item ratings (i.e., utilities) or 2) pairwise or listwise item comparisons. Instead, we must infer user preferences by observing utterances during a NL system-user dialogue. At turn $t$ of a dialogue, we let $q_t$ and $r_t$ be the system and user utterance, respectively, with $\mathbf{q}_t = [q_1, \ldots, q_t]$ and $\mathbf{r}_t = [r_1, \ldots, r_t]$ representing all system and user utterances up to $t$. In this paper, we call $q_t$ the query and $r_t$ the response, though extensions to more generic dialogues (e.g., when users can also ask queries) are discussed in Section 7. We let $\mathcal{H}_t = (\mathbf{q}_t, \mathbf{r}_t)$ be the conversation history at turn $t$. To formulate NL-PE as a Bayesian optimization problem, we place a prior belief on the user's utilities, $p(\mathbf{u}|\mathbf{x})$, conditioned on item descriptions since they are available before the dialogue begins. We then assume an observation model that gives the likelihood $p(\mathbf{r}_t|\mathbf{x}, \mathbf{u}, \mathbf{q}_t)$, letting us define the posterior utility belief as $p(\mathbf{u}|\mathbf{x}, \mathcal{H}_t) \propto p(\mathbf{r}_t|\mathbf{x}, \mathbf{u}, \mathbf{q}_t)\,p(\mathbf{u}|\mathbf{x})$ (3). This posterior informs an acquisition function $\gamma(\mathbf{x}, \mathcal{H}_t)$ which generates a new NL query $q_{t+1} = \gamma(\mathbf{x}, \mathcal{H}_t)$ (4) to systematically search for $i^*$ (to represent the generative acquisition of NL outputs, we deviate from the conventional definition of acquisition functions as mappings to $\mathbb{R}$). The preference beliefs also let us define an Expected Utility (EU) $\mu_i^t$ for every item as $\mu_i^t = \mathbb{E}_{p(\mathbf{u}|\mathbf{x},\mathcal{H}_t)}[u_i]$ (5), which allows the top-$k$ items to be recommended at any turn based on their expected utilities.

[Figure 3: Cherry-picked system-generated dialogues from our NL-PE experiments. The monolithic GPT-3.5 dialogue (left) demonstrates over-exploitation, with $q_3$ directly extending $q_2$ after a positive user preference is observed and leading to the extreme case of query repetition ($q_4 = q_3$). In contrast, PEBOL (right) continues exploring even after a positive response, while focusing on promising aspects (three out of four queries elicit a positive response) by using UCB-guided query generation.]

Our Bayesian optimization NL-PE paradigm lets us formalize several key questions, including:
(1) How do we represent beliefs $p(\mathbf{u}|\mathbf{x}, \mathcal{H}_t)$ in user-item utilities $\mathbf{u}$, given NL item descriptions $\mathbf{x}$ and a dialogue $\mathcal{H}_t$?
(2) What are effective models for the likelihood $p(\mathbf{r}_t|\mathbf{x}, \mathbf{u}, \mathbf{q}_t)$ of observed responses $\mathbf{r}_t$ given $\mathbf{x}$, $\mathbf{q}_t$, and user utilities $\mathbf{u}$?
(3) How can our beliefs inform the generative acquisition of NL queries $q_{t+1}$ given $\mathcal{H}_t$ to strategically search for $i^*$?

These questions reveal a number of novel research directions discussed further in Section 7. In this paper, we present PEBOL, a NL-PE algorithm based on the above Bayesian optimization NL-PE formalism, and numerically evaluate it against monolithic LLM alternatives through controlled, simulated NL dialogues (cf. Sec. 6).

4 METHODOLOGY

Limitations of Monolithic LLM Prompting. An obvious NL-PE approach, described further as a baseline in Section 5.1, is to prompt a monolithic LLM with all item descriptions $\mathbf{x}$, the dialogue history $\mathcal{H}_t$, and instructions to generate a new query at each turn. However, providing all item descriptions $[x_1, \ldots, x_N]$ in the LLM context window is very computationally expensive for all but the smallest item sets. While item knowledge could be internalized through fine-tuning, each item update would imply system retraining. Critically, an LLM's preference elicitation behaviour cannot be controlled other than by prompt engineering or further fine-tuning, with neither option offering any guarantees of predictable or interpretable behaviour that balances the exploitation and exploration of user preferences.

PEBOL Overview. We propose to address these limitations by augmenting LLM reasoning with a Bayesian optimization procedure in a novel algorithm, PEBOL, illustrated in Figure 2. At each turn $t$, our algorithm maintains a probabilistic belief state over user preferences as a Beta belief state (cf. Sec. 4.1). This belief state guides an LLM-based acquisition function to generate NL queries explicitly balancing exploration and exploitation to uncover the top user preferences (cf. Sec. 4.2). In addition, our acquisition function reduces the context needed to prompt the LLM in each turn from all $N$ item descriptions $\mathbf{x}$ to a single strategically selected item description $x_{i_t}$. PEBOL then uses NLI over elicited NL preferences and item descriptions to map dialogue utterances to numerical observations (cf. Sec. 4.3).

4.1 Utility Beliefs

4.1.1 Prior Beliefs. Before any dialogue, PEBOL establishes a prior belief $p(\mathbf{u}|\mathbf{x})$ on user-item utilities. We assume item utilities are independent, so that $p(\mathbf{u}|\mathbf{x}) = \prod_{i=1}^{N} p(u_i|x_i)$ (6), and that the prior for each utility $u_i$ is a Beta distribution, $p(u_i|x_i) = \mathrm{Beta}(\alpha_i^0, \beta_i^0)$ (7). Since this paper focuses on fully cold-start settings, we assume a uniform Beta prior with $(\alpha_i^0, \beta_i^0) = (1, 1)$. Beta distributions, illustrated in Figure 1, lie in the domain $[0, 1]$ – a normalized interval for bounded ratings in classical recommendation systems. We can thus interpret utility values of $u_i = 1$ or $u_i = 0$ to represent a complete like or dislike of item $i$, respectively, while values $u_i \in (0, 1)$ provide a strength of preference between these two extremes.
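Concretely, the factorized Beta belief of Eqs. (6)-(7) reduces to two parameter arrays, from which the expected utilities of Eq. (5), used for top-k recommendation, follow in closed form. A minimal sketch (ours, not the authors' implementation):

```python
import numpy as np

N = 100             # number of items (matching the 100-item samples used later)
alpha = np.ones(N)  # Beta alpha parameters: uniform Beta(1, 1) cold-start prior
beta = np.ones(N)   # Beta beta parameters

def expected_utilities(alpha, beta):
    """Eq. (5) for independent Beta beliefs: mu_i = alpha_i / (alpha_i + beta_i)."""
    return alpha / (alpha + beta)

def top_k(alpha, beta, k=10):
    """Recommend the top-k items by expected utility at any turn."""
    return np.argsort(-expected_utilities(alpha, beta))[:k]
```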
4.1.2 Observation Model. To perform a posterior update on our utility beliefs given observed responses $\mathbf{r}_t$, we need an observation model that represents the likelihood $p(\mathbf{r}_t|\mathbf{x}, \mathbf{u}, \mathbf{q}_t)$. Modelling the likelihood of $\mathbf{r}_t$ is a challenging task, so we will require some simplifying assumptions. Firstly, we assume that the likelihood of a single response $r_t$ is independent from any previous dialogue history $\mathcal{H}_{t-1}$, so that $p(\mathbf{r}_t|\mathbf{x}, \mathbf{u}, \mathbf{q}_t) = \prod_{t'=1}^{t} p(r_{t'}|\mathbf{x}, \mathbf{u}, q_{t'})$ (8). Note that this independence assumption will allow incremental posterior belief updates, so that $p(\mathbf{u}|\mathbf{x}, \mathcal{H}_t) \propto p(r_t|\mathbf{x}, \mathbf{u}, q_t)\,p(\mathbf{u}|\mathbf{x}, \mathcal{H}_{t-1})$ (9).

4.1.3 Binary Item Response Likelihoods and Posterior Update. With the factorized distributions over item utilities and observational likelihood history now defined, we simply have to provide a concrete observational model of the response likelihood conditioned on the query, item descriptions, and latent utility: $p(r_t|\mathbf{x}, \mathbf{u}, q_t)$.

[Figure 4: MAP@10 for MonoLLM and PEBOL-P with uncertainty-informed policies (UCB, TS, ER) – all methods show preference learning over time. MonoLLM is outperformed by PEBOL on Yelp and MovieLens, but remains competitive with PEBOL up to the fourth turn on Recipe-MPR until demonstrating a sharp performance drop that takes several turns to recover from.]

Because the prior is factorized over conditionally independent $u_i$ (cf. Eq. (6)), we can likewise introduce individual per-item factorized binary responses $r_i^t \in \{0\ \text{(dislike)}, 1\ \text{(like)}\}$ to represent the individual relevance of each item $i$ to the preference elicited at turn $t$. Critically, we won't actually require an individual response per item — this will be computed by a natural language inference (NLI) model [6], to be discussed shortly — but we'll begin with an individual binary response model for $r_i^t$ for simplicity: $p(r_i^t|x_i, u_i, q_t) = \mathrm{Bernoulli}(u_i)$ (10). With our response likelihood defined, this now leads us to our first pass at a full posterior utility update, which we term PEBOL-B for observed Binary rating feedback.
Specifically, given observed binary ratings $r_i^t$, the update at $t = 1$ uses the Beta prior (7) with the Bernoulli likelihood (10) to form a standard Beta-Bernoulli conjugate pair and compute the posterior utility belief $p(u_i|x_i, \mathcal{H}_1) \propto p(u_i|x_i)\,p(r_i^1|x_i, u_i, q_1)$ (11) $= \mathrm{Beta}(\alpha_i^1, \beta_i^1)$ (12), where $\alpha_i^1 = \alpha_i^0 + r_i^1$ and $\beta_i^1 = \beta_i^0 + (1 - r_i^1)$. Subsequent incremental updates follow Eq. (9) and use the same conjugacy to give $p(u_i|x_i, \mathcal{H}_t) = \mathrm{Beta}(\alpha_i^t, \beta_i^t)$ (13), where $\alpha_i^t = \alpha_i^{t-1} + r_i^t$ and $\beta_i^t = \beta_i^{t-1} + (1 - r_i^t)$.

4.1.4 Natural Language Inference and Probabilistic Posterior Update. As hinted above, effective inference becomes slightly more nuanced since we don't need to observe an explicit binary response per item in our PEBOL framework. Rather, we receive general preference feedback $r_t$ on whether a user generically prefers a text description $q_t$, and then leverage an NLI model [6] to infer whether the description $x_i$ of item $i$ would be preferred according to this feedback. For instance, for a $(q_t, r_t)$ pair ("Want to watch a children's movie?", "Yes"), NLI should infer a rating of $r_1^t = 1$ for $x_1$ = "The Lion King" and $r_2^t = 0$ for $x_2$ = "Titanic". To deal with the fact that NLI models actually return an entailment probability, our probabilistic observation variant, PEBOL-P, leverages the entailment probability that $r_i^t = 1$ (cf. Sec. 4.3), which we denote as $w_i^t \in [0, 1]$. We then simply relax our previous binary observations $r_i^t$ to continuous $w_i^t$, leading to the soft posterior updates $\alpha_i^t = \alpha_i^{t-1} + w_i^t$ and $\beta_i^t = \beta_i^{t-1} + (1 - w_i^t)$. To visually illustrate how this posterior inference process works in practice, Figure 1 shows the effect of PEBOL's posterior utility belief updates based on NLI for three query-response pairs – we can see the system gaining statistical knowledge about useful items for the user from the dialogue.
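Both the hard PEBOL-B update of Eqs. (11)-(13) and the soft PEBOL-P relaxation amount to a few array operations under the representation above; a sketch of ours:

```python
import numpy as np

def update_beliefs(alpha, beta, w, binary=False):
    """Posterior update after one turn of dialogue.

    w: entailment probabilities w_i^t in [0, 1] for every item (Sec. 4.3.2).
    binary=True rounds w to {0, 1} first (PEBOL-B); otherwise the soft
    PEBOL-P update alpha_i += w_i, beta_i += 1 - w_i is applied.
    """
    r = np.rint(w) if binary else np.asarray(w, dtype=float)
    return alpha + r, beta + (1.0 - r)
```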
4.2 LLM-Based Acquisition Functions

Recall from Sec. 2.1 that in Bayesian optimization, the posterior informs an acquisition function which determines where to make the next observation. PEBOL generates a new query $q_t$ with a two-step acquisition function $\gamma$, first using Bayesian optimization policies (step 1) based on the posterior utility beliefs $p(\mathbf{u}|\mathbf{x}, \mathcal{H}_t)$ to select NL context, and then using this selected context to guide LLM prompting (step 2). We express the overall acquisition function $\gamma = \gamma_G \circ \gamma_C$ as a composition of a context acquisition function $\gamma_C$ (cf. Sec. 4.2.1) and a NL generation function $\gamma_G$ (cf. Sec. 4.2.2).

4.2.1 Context Acquisition via Bayesian Optimization Policies. First, PEBOL harnesses Bayesian optimization policies to select an item description $x_{i_t}$ which will be used to prompt an LLM to generate a query about an aspect described by $x_{i_t}$ (cf. Sec. 4.2.2). Selecting an item $i_t$ whose utility $u_{i_t}$ is expected to be near the maximum, $u_{i^*}$, will generate exploitation queries asking about properties of items that are likely to be preferred by the user. In contrast, selecting an item $i_t$ associated with high uncertainty in its utility $u_i^t$ will generate exploration queries that probe into properties of items for which user preferences are less known. Thus, strategically selecting $x_{i_t}$ allows PEBOL to balance the exploration and exploitation behaviour of NL queries, decreasing the risks of becoming stuck in local optima (over-exploitation) or wasting resources exploring low-utility item preferences (over-exploration). We define the item selected by the context acquisition function as $i_t = \gamma_C(\mathbf{x}, \mathcal{H}_t)$ (14), and list several alternatives for $\gamma_C$ below, including the well-known strategies of TS and UCB [26] (a code sketch of these policies appears after the list):
(1) Thompson Sampling (TS): First, a sample of each item's utility $\hat{u}_i^t$ is taken from the posterior, $\hat{u}_i^t \sim p(u_i|x_i, \mathcal{H}_t)$. Then, the item with the highest sampled utility is selected: $i_t = \arg\max_i \hat{u}_i^t$ (15). TS explores more when beliefs have higher uncertainty and exploits more as the system becomes more confident.
(2) Upper Confidence Bound (UCB): Let $P_k(\alpha, \beta)$ represent the $k$'th percentile of $\mathrm{Beta}(\alpha, \beta)$, which provides a confidence bound on the posterior. UCB selects the item with the highest confidence bound, $i_t = \arg\max_i P_k(p(u_i|x_i, \mathcal{H}_t))$ (16), following a balanced strategy because confidence bounds are increased by both high utility and high uncertainty.
(3) Entropy Reduction (ER): An explore-only strategy that selects the item with the most uncertain utility: $i_t = \arg\max_i \mathrm{Var}(p(u_i|x_i, \mathcal{H}_t))$
(17).
(4) Greedy: An exploit-only strategy that selects the item with the highest expected utility $\mu_i^t$ (Eq. 5): $i_t = \arg\max_i \mu_i^t$ (18).
(5) Random: An explore-only heuristic that selects the next item randomly.

[Figure 5: MAP@10 for PEBOL-P with various context acquisition policies.]
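The following sketch (our illustration) shows how the five context-acquisition alternatives for $\gamma_C$ can be realized over the Beta belief state; the UCB percentile k is a free parameter, and SciPy's Beta distribution supplies percentiles and variances.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def select_item(alpha, beta, policy="TS", k=0.9, rng=None):
    """Context acquisition gamma_C: choose the item whose description seeds the next query."""
    rng = rng or np.random.default_rng()
    if policy == "TS":       # Eq. (15): sample each utility, take the argmax
        return int(np.argmax(rng.beta(alpha, beta)))
    if policy == "UCB":      # Eq. (16): k'th percentile of each Beta posterior
        return int(np.argmax(beta_dist.ppf(k, alpha, beta)))
    if policy == "ER":       # Eq. (17): highest posterior variance
        return int(np.argmax(beta_dist.var(alpha, beta)))
    if policy == "Greedy":   # Eq. (18): highest expected utility
        return int(np.argmax(alpha / (alpha + beta)))
    return int(rng.integers(len(alpha)))  # Random: explore-only heuristic
```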
4.2.2 Generating Short, Aspect-Based NL Queries. Next, PEBOL prompts an LLM to generate a NL query $q_t$ based on the selected item description $x_{i_t}$, while also using the dialogue history $\mathcal{H}_t$ to avoid repetitive queries. We choose to generate "yes-or-no" queries asking if a user prefers items with some aspect $a_t$, which is a short text span extracted dynamically from $x_{i_t}$ to be different from any previously queried aspects $a_1, \ldots, a_{t-1}$. We adopt this query generation strategy to: 1) reduce cognitive load on the user, who may be frustrated by long and specific queries about unfamiliar items, and 2) better facilitate NLI through brief, general phrases [34]. Letting $\phi$ represent the query generation prompt, we let $q_t, a_t = \gamma_G(x_{i_t}, \mathcal{H}_t, \phi)$ (19) be the LLM-generated query and aspect at turn $t$, with prompting details discussed in Section 5.2.2. An example of such a query and aspect (bold) is "Are you interested in movies with patriotic themes?", generated by PEBOL in our movie recommendation experiments and shown in Figure 2.

4.3 NL Item-Preference Entailment

4.3.1 Preference Descriptions from Query-Response Pairs. Next, PEBOL receives a NL user response $r_t$, which it must convert to individual item preference observations. Since the LLM is instructed to generate "yes-or-no" queries $q_t$ asking a user if they like aspect $a_t$, we assume the user response will be a "yes" or a "no", and create a NL description of the user's preference $\rho_t$, letting $\rho_t = a_t$ if $r_t$ = "yes", and $\rho_t = \mathrm{concat}(\text{"not "}, a_t)$ if $r_t$ = "no". For example, given a query that asks if the user prefers the aspect "patriotism" in an item, if the user response is "yes", then the user preference $\rho_t$ is "patriotism", and "not patriotism" otherwise. This approach produces short, general preference descriptions that are well suited for NLI models [34].

4.3.2 Inferring Item Ratings from NL Preferences. Given a NL preference $\rho_t$, PEBOL must infer whether the user would like an item described by $x_i$. Specifically, PEBOL acquires ratings $\mathbf{w}_t = [w_1^t, \ldots, w_N^t]$ (cf. Sec. 4.1.4) by using NLI to predict whether an item description $x_i$ entails (i.e., implies) the preference $\rho_t$. For example, we expect that an NLI model would predict that $x_i$ = "The Lion King" entails $\rho_t$ = "animated" while $x_j$ = "Titanic" does not, inferring that a user who expressed preference $\rho_t$ would like item $i$ but not $j$. We use an NLI model $g_\theta(x_i, \rho_t)$ to predict the probability $w_i^t$ that $x_i$ entails $\rho_t$, and return $r_i^t = \lfloor w_i^t \rceil$ (i.e., $w_i^t$ rounded to the nearest integer) in the case of binary observations (PEBOL-B) and $w_i^t$ in the case of probabilistic observations (PEBOL-P).
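The entailment model $g_\theta$ can be implemented with an off-the-shelf MNLI cross-encoder; the paper uses facebook/bart-large-mnli with the temperature scaling described in Sec. 5.2.3 below. A hedged sketch with the Hugging Face transformers API (our code; the [contradiction, neutral, entailment] label order follows the bart-large-mnli model card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

def entailment_probs(item_descriptions, preference, T=1.0):
    """w_i^t: probability that item description x_i entails preference rho_t."""
    probs = []
    with torch.no_grad():
        for x in item_descriptions:
            enc = tok(x, preference, return_tensors="pt", truncation=True)
            logits = nli(**enc).logits[0] / T  # [contradiction, neutral, entailment]
            # drop the neutral logit and softmax over contradiction vs. entailment
            two_way = torch.softmax(logits[[0, 2]], dim=0)
            probs.append(float(two_way[1]))
    return probs
```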
4.4 The Complete PEBOL System

This concludes the PEBOL specification – the entire process, from the prior utility belief, to the LLM-based acquisition-function generation of a query, to the posterior utility update, is illustrated in Figure 2.

5 EXPERIMENTAL METHODS

We numerically evaluate our PEBOL variations through controlled NL-PE dialogue experiments across multiple datasets and response noise levels – comparing against a monolithic GPT-3.5-turbo NL-PE system, which we refer to as MonoLLM. We do not compare against ConvRec methods [4, 22, 30, 32] because they are not cold-start systems, requiring observed user-item interaction data to drive their recommendation modules. We also do not base our experiments on ConvRec datasets such as ReDIAL [22], since they are made up of pre-recorded conversation histories and cannot be used to evaluate active, cold-start NL-PE systems.

5.1 MonoLLM Baseline

A major challenge of using MonoLLM for NL-PE is that item descriptions $\mathbf{x}$ either need to be internalized through training or be provided in the context window (cf. Sec. 4). Since we focus on fully cold-start settings, we test the latter approach as a baseline. Specifically, we prompt the LLM to generate new queries to elicit preferences given the full conversation history $\mathcal{H}_t$ and $\mathbf{x}$ (all prompts are shown in the Supplementary Materials). We evaluate recommendation performance after each turn by using another prompt to recommend a list of ten item names from $\mathbf{x}$ given $\mathcal{H}_t$. Due to context window limits, this MonoLLM approach is only feasible for small item sets with short item descriptions; thus, we have to limit $|\mathcal{I}|$ to 100 for fair comparison to the MonoLLM baseline.

[Figure 6: MAP@10 for PEBOL using binary vs. probabilistic entailment scores. PEBOL-P with the best policy (TS on Yelp and MovieLens, UCB on Recipe-MPR) generally outperforms PEBOL-B.]

5.2 Simulation Details

We test PEBOL and MonoLLM through NL-PE dialogues with LLM-simulated users whose item preferences are hidden from the system, evaluating recommendation performance over 10 turns of dialogue, and using GPT-3.5-turbo-0613 for all text generation.

5.2.1 User Simulation. For each experiment, we simulate 100 users, each of which likes a single item $i \in \mathcal{I}$. Each user is simulated by GPT-3.5-turbo-0613, which is given the item description $x_i$ and instructed to provide only "yes" or "no" responses to a query $q_t$, as if it were a user who likes item $i$.

5.2.2 PEBOL Query Generation. Given an item description $x_i$ and previously generated aspects $(a_1, \ldots, a_{t-1})$, an LLM is prompted to generate an aspect $a_t$ describing item $i$ that is no more than 3 words long. The LLM is then prompted again to generate a "yes-or-no" query asking if a user prefers $a_t$.

5.2.3 NLI. We use the 400M-parameter FAIR MNLI model (https://huggingface.co/facebook/bart-large-mnli), which predicts logits for entailment, contradiction, and neutral; we divide these logits by an MNLI temperature $T \in \{1, 10, 100\}$. As per the FAIR guidelines, we pass the temperature-scaled entailment and contradiction scores through a softmax layer and take the entailment probabilities.

5.2.4 User Response Noise. We test three user response noise levels $\in \{0, 0.25, 0.5\}$, corresponding to the proportion of user responses that are randomly selected between "yes" and "no".

5.2.5 Omitting Query History Ablation. We test how tracking query history in PEBOL affects performance with an ablation study that removes previously generated aspects $(a_1, \ldots, a_{t-1})$ from the aspect extraction prompt.

5.3 Datasets

We obtain item descriptions from three real-world datasets: MovieLens-25M (https://grouplens.org/datasets/movielens/25m/), Yelp (https://www.yelp.com/dataset), and Recipe-MPR [35]; example item descriptions from each are shown in Table 1 in the Supplementary Materials. After the filtering steps below for Yelp and MovieLens, we randomly sample 100 items to create $\mathbf{x}$. For Yelp, we filter restaurant descriptions to be from a single major North American city and to have at least 50 reviews and five or more category labels. For MovieLens, we filter movies to be in the top 10% by rating count with at least 20 tags, and let movie descriptions use the title, genre labels, and 20 most common user-assigned tags.

5.4 Research Questions

Our experiments explore the following research questions (RQs):
• RQ1: How does PEBOL perform against the GPT-3.5 MonoLLM baseline?
• RQ2: Does PEBOL perform better with binary or probabilistic observations, and how sensitive is the latter to temperature?
• RQ3: How do PEBOL and MonoLLM perform under user response noise?
• RQ4: How do the context selection policies of TS, UCB, ER, Greedy, and Random affect PEBOL performance?
• RQ5: How much does PEBOL performance depend on access to the query history during query generation?

6 EXPERIMENTAL RESULTS

6.1 RQ1: PEBOL vs. MonoLLM

Figure 4 shows MAP@10 over 10 dialogue turns for MonoLLM and PEBOL (UCB, TS, ER), with all methods starting near random guessing (reflecting a cold start) and showing clear preference learning over time. For each policy, we use the MNLI temperature that performed best on the most datasets with continuous responses (see Supplementary Materials); for all experiments with MovieLens, we use the 16k-context version of GPT-3.5-turbo-0613, due to MonoLLM requiring extra context length for $\mathbf{x}$. PEBOL achieves a 131% improvement over MonoLLM in MAP@10 after 10 dialogue turns on Yelp, an 88% improvement on MovieLens, and a 55% improvement on Recipe-MPR. MonoLLM also displays a large drop in MAP@10 between turns 4 and 5 on Recipe-MPR, which it takes an additional five turns to recover from. In contrast, we do not see any comparably large drops in our methods, indicating that the incremental Bayesian belief-updating mechanism of PEBOL may make it less prone to catastrophic mistakes.
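MAP@10, the metric reported in these figures, has a particularly simple form here because each simulated user likes exactly one item: AP@10 for one dialogue is the reciprocal rank of that item (zero if it falls outside the top 10), and MAP@10 averages over the 100 users. A general sketch (ours):

```python
def average_precision_at_k(ranked_ids, relevant_ids, k=10):
    """Standard AP@k; with one relevant item this reduces to 1/rank (0 past k)."""
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked_ids[:k], start=1):
        if item in relevant_ids:
            hits += 1
            score += hits / rank
    return score / min(len(relevant_ids), k) if relevant_ids else 0.0

def map_at_k(runs, k=10):
    """runs: list of (ranked_ids, relevant_ids) pairs, one per simulated user."""
    return sum(average_precision_at_k(r, rel, k) for r, rel in runs) / len(runs)
```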
6.2 RQ2: Binary vs. Probabilistic Responses

Figure 6 compares PEBOL performance for PEBOL-B versus PEBOL-P. In general, PEBOL-P performed better, likely due to PEBOL-B discarding valuable information from entailment probabilities. Specifically, Figure 6 compares PEBOL-B with each policy against PEBOL-P with the best-performing policy on each dataset. On MovieLens, we see at least a 34% improvement in MAP from probabilistic entailment over binary entailment starting from turn 2, though we note that TS with binary updates performs comparably to the best probabilistic methods on Yelp. To identify our best continuous method, we first compare the performance of PEBOL with PEBOL-P using MNLI temperatures 1, 10, and 100 with each policy. We keep the temperature with the highest MAP@10 at turn 10 on the most datasets: for Yelp and MovieLens, this is TS with MNLI temperature 1; for Recipe-MPR, it is UCB with MNLI temperature 10.

[Figure 7: The effect of including the generated aspect history in the aspect generation prompt. Including the history improves performance, which we hypothesize is due to reducing repeated or uninformative queries.]

[Figure 8: The effect of user response noise on MAP@10 – error bars are 95% confidence intervals. At all noise levels, MonoLLM is outperformed by all PEBOL-P methods on Yelp and MovieLens and by UCB, Greedy, and ER on Recipe-MPR.]

6.3 RQ3: Effect of User Response Noise

Figure 8 shows the impact of user response noise on MAP@10 at turn 10 – PEBOL generally continues to outperform MonoLLM under user response noise. At turn 10, for all noise levels, MonoLLM is the worst-performing method on Yelp and MovieLens, and on Recipe-MPR it is always outperformed by UCB, Greedy, and ER. Methods typically maintain their relative performance under various noise levels – though an exception is MovieLens, where TS is the best method under zero noise and nearly the worst one at 25% noise (though the margin between the best and worst method in this case is small).

6.4 RQ4: Comparison of Context Acquisition Policies

Figure 5 compares the performance of various PEBOL context acquisition policies – all policies show active preference learning, other than random item selection on Recipe-MPR. There is considerable overlap between methods; however, for most turns, TS does well on Yelp and MovieLens while being beaten by Greedy, ER, and UCB on Recipe-MPR. As expected due to the randomness in sampling, TS performance is correlated with random item selection, while UCB performs quite similarly to Greedy.

6.5 RQ5: Effect of Aspect History in Query Generation

We see an improvement in performance from including a list of previously generated aspects in the aspect generation prompt. In Figure 7, the average improvement across all turns and all datasets from including the aspect history is 50% for TS and 49% for UCB. Including the aspect generation history helps to avoid repeated queries, which gain no information and would frustrate a user.

7 CONCLUSION AND FUTURE WORK

This paper presents a novel Bayesian optimization formalization of natural language (NL) preference elicitation (PE) over arbitrary NL item descriptions, introduces and evaluates PEBOL, an algorithm for NL Preference Elicitation with Bayesian Optimization augmented LLMs, and establishes many new research directions.
For instance, while PEBOL uses a pointwise query generation strategy that selects one item description at a time for LLM context, future work can explore LLM-based acquisition functions with pairwise and setwise context selection. Such multi-item context selection would enable contrastive query generation that could better discriminate between item preferences. Another direction for future research is the integration of NL-PE methodologies such as PEBOL into conversational recommendation (ConvRec) system architectures (e.g., [8, 29]), which must balance many tasks including recommendation, explanation, and personalized question answering. PE in ConvRec systems thus implies that algorithms need to elicit preferences given arbitrary pairs of system-user utterances, in addition to queries and responses."
}
title_10K/test_title_short_2405.00982v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.00982v1",
"title": "On the Evaluation of Machine-Generated Reports",
"abstract": "Large Language Models (LLMs) have enabled new ways to satisfy information\nneeds. Although great strides have been made in applying them to settings like\ndocument ranking and short-form text generation, they still struggle to compose\ncomplete, accurate, and verifiable long-form reports. Reports with these\nqualities are necessary to satisfy the complex, nuanced, or multi-faceted\ninformation needs of users. In this perspective paper, we draw together\nopinions from industry and academia, and from a variety of related research\nareas, to present our vision for automatic report generation, and -- critically\n-- a flexible framework by which such reports can be evaluated. In contrast\nwith other summarization tasks, automatic report generation starts with a\ndetailed description of an information need, stating the necessary background,\nrequirements, and scope of the report. Further, the generated reports should be\ncomplete, accurate, and verifiable. These qualities, which are desirable -- if\nnot required -- in many analytic report-writing settings, require rethinking\nhow to build and evaluate systems that exhibit these qualities. To foster new\nefforts in building these systems, we present an evaluation framework that\ndraws on ideas found in various evaluations. To test completeness and accuracy,\nthe framework uses nuggets of information, expressed as questions and answers,\nthat need to be part of any high-quality generated report. Additionally,\nevaluation of citations that map claims made in the report to their source\ndocuments ensures verifiability.",
"authors": "James Mayfield, Eugene Yang, Dawn Lawrie, Sean MacAvaney, Paul McNamee, Douglas W. Oard, Luca Soldaini, Ian Soboroff, Orion Weller, Efsun Kayi, Kate Sanders, Marc Mason, Noah Hibbler",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.IR"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "On the Evaluation of Machine-Generated Reports",
"main_content": "INTRODUCTION The emergence of generative Large Language Models (LLMs) has brought with it the ability to automatically generate all kinds of text. With it, a host of problems\u2014old and new\u2014have (re)emerged that affect these generated texts. The fields of Information Retrieval (IR) and Natural Language Processing (NLP) both have important roles in building new methods to improve text generation and in designing approaches to evaluate the quality of these methods. LLMs can enable new ways for people to satisfy various information needs. Simple information needs (e.g., factoids) can be answered with relatively short generated responses pointing to a single source. However, when information needs are complex, nuanced, or multifaceted, a suitable response must also be more complex. They need to draw together numerous facts gathered from potentially multiple sources to completely and faithfully respond to the information need. We refer to this longer-form answer generation as a \u201creport\u201d on a user-specified topic. More formally, we define a report as a text that attempts to satisfy an explicitly stated information need by finding documents arXiv:2405.00982v1 [cs.CL] 2 May 2024 \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA J. Mayfield et al. in a corpus (potentially a mixture of text, images, tables, etc.) that contain relevant information, expressing that information in the text, and providing appropriate citations from the report to the supporting documents. We envision a high-quality report as the ideal response to a user with a complex task in mind, since such a report would succinctly, coherently, and verifiably cover all the information in a corpus pertinent to their information need. Note that this definition makes the framework better suited to reports that inform an analyst than to reports that generate novel analyses. Report writing can be viewed as a natural downstream task of Retrieval Augmented Generation (RAG), where faithfulness has a focus of study [52, 73, 76]. In this view, an LLM generates the report using the report request as part of the prompt and searches the document collection for relevant information that can be added to the prompt to ensure the report\u2019s accuracy. Report generation can also be thought of as summarization. From the summarization viewpoint, a report is an attributed task-based informative abstractive multi-document summary (see Section 3.1.1 for a more detailed explanation of these categories). Such a report might also include portions that are not summaries at all, but are, for example, introductory material or comparisons of the summarized information. We posit that all of these viewpoints are valid, and each informs evaluation for report generation. This work describes an abstract framework for evaluating automated report generation, ARGUE (Automated Report Generation Under Evaluation), that is built on top of lessons learned from prior evaluation approaches in information retrieval, summarization and text generation. It will be used by the TREC track NeuCLIR in its report generation task.1 The ARGUE framework builds a foundation for a broader research agenda in evaluating automatically generated long-form text beyond reports. Some of ARGUE\u2019s most important features are: \u2022 We use the concept of information nuggets out of the summarization literature to capture the content a report should contain. 
We express each nugget as a question together with a list of acceptable answers to that question.
• Citations are a key report component. A citation is a pointer from a source element in the report (typically a sentence) to a target element in a document (typically the entire document).
• We propose that precision and recall serve as the basis for most content-based measures. ARGUE supports precision measures over the sentences of the report, and recall measures over the information nuggets.

2 REQUIREMENTS

This section defines the requirements of a report evaluation system. We first define the various actors (and one non-actor) in ARGUE:
Report Requester: The person requesting the report. This is the person whose purpose the report should satisfy.
Report Audience: The person who will be reading the report. This is often the same as the report requester.
Report Writer: The automated system that takes a report request and a document collection as inputs and produces the report.
Report Request: A detailed specification of the report to be written. The report request can include:
• User story: explains the report requester's background, situation, and report-writing philosophy, as well as a description of the audience for the report.
• Problem statement: indicates the content that the report is required to contain.
• Background: describes what is already known about the topic that need not appear in the report.
• Constraints: specifies restrictions such as the length of the report or a temporal window for sources.
Assessor: Any person making judgments in producing evaluation materials or scoring submitted runs. Assessors include those selecting report topics, writing report requests, identifying nuggets, binding nuggets to documents in the collection, and making other judgments necessary to assign scores to reports.

The evaluation we advocate has several key attributes. First, it must ensure that the report is responsive to the report request. It must ensure that the key information presented in the report is attested in the document collection, that the report properly cites those documents, and that the information they contain is faithfully captured by the report. It must score a report using evaluation data created by a person. While scoring may be automated, requiring the ground-truth data to be human-generated helps to prevent circularity between report generation and report evaluation, thereby reducing the bias the evaluation might have toward, e.g., a particular generative model. Finally, the evaluation must have the intention of reusability. Producing a reusable evaluation is challenging because of the level of interpretation required to make the required judgments. Reusability is thus often at odds with the other goals of an evaluation. The information retrieval community has thought through many of the issues underlying reusability, and we present ARGUE to try to take advantage of that experience. While it is nearly impossible to accurately claim that any evaluation component is novel, there are points of emphasis in our proposed evaluation style that we think make it stand out from other extant text generation evaluations. First is the amount and detail of the background information provided in the report request. While other evaluations have provided additional information describing inclusion criteria, in practice systems have often focused only on brief specifications.
For example, a narrative giving detailed information about what should and should not be considered relevant, long a part of TREC topics, has rarely been exploited. The arrival of large language models that can easily incorporate such materials makes now an opportune time to focus on including ancillary documentation in a report request, not just for this style of evaluation, but for any text generation evaluation. While we advocate that these ancillary details be made explicit in the evaluation, we acknowledge that in real report-writing applications implicit knowledge might be more practical and adequate for the task. Second, until recently, hallucination in text generation system output was not a major focus, primarily because generative systems were not good enough to create convincing hallucinated text. With the rise of large generative LLMs, hallucination has become a common part of text generation system output; the evaluation must account for this as well. Borrowing from an IR evaluation perspective, we promote the view of nuggets as opinion, not fact. In report evaluation, nuggets play the role that relevant documents play in IR. Were document relevance treated as fact rather than opinion, it would be virtually impossible to come to agreement on which documents were relevant to a given topic; inter-annotator agreement would be too low. Treating relevance as opinion avoids this problem. In exchange, relevance as opinion adds constraints to the evaluation, primarily that the author of the topic should be the relevance assessor. If relevance is not decided until after system submissions, that means that assessor continuity is important; assessors should be selected such that they can create topics at one time, and assess relevance at a later time, possibly months later. We advocate accepting this tradeoff for nuggets in report generation evaluation. For nuggets, the implication is that items reasonably seen by a report writer as nuggets might not be identified in advance by the assessor. A given evaluation might address this issue through a pyramid approach [61] to identify nugget importance if multiple reference reports are available. Or an evaluation might determine that nugget subjectivity will not change the preference order of meaningfully different systems and ignore it. In either case, we recommend that report sentences bearing and accurately reflecting a citation should not be penalized during scoring, precisely because they might be valid nuggets in someone's eyes. Constraints such as maximum document length can discourage intentional overgeneration of sentences that have a small chance of matching assessor nuggets. To meet these requirements, four broad questions should be asked about each report being evaluated:
Q1 Does the report include the information contained in the document collection that the report requires?
Q2 Does it accurately express all such information?
Q3 Does it contain appropriate citations to the collection?
Q4 Has the information been fitted together into a useful form?
Q4 is a crucial part of any text generation evaluation. It covers such attributes as fluency [65], coherence [40, 50], consistency [32], and rhetorical structure [16, 21]. In light of this importance, it has a long history and has been studied in depth elsewhere.
Thus, while we leave a place for this in the overall evaluation in ARGUE, we leave it to others to address it in light of the changing NLP landscape.

3 BACKGROUND

Here we review related work on report writing and evaluation.

3.1 Report Writing

Report writing involves text generation, for which prior work on summarization and RAG provides useful perspectives.

3.1.1 Summarization. In its most general form, a summary is a document whose substantive content is based entirely on the content of other target document(s), and that is more concise than simply presenting the other document(s) in their original form would have been [55]. Summaries have been defined along several axes:
• Single-document or Multi-document [47]: Is the summary built from one document (single-document), or many (multi-document)?
• Extractive or Abstractive [13]: Does the summary primarily draw language from the summarized documents (extractive), or does it generate new language (abstractive)?
• Indicative or Informative [37]: Does the summary help the reader to decide whether to read the summarized document(s) (indicative), or does it include enough content to make it unnecessary to read those document(s) (informative)?
• Generic or Task-Based [83]: Is the summary constructed with no particular task in mind (generic), or is there a specific task that the summary is designed to support (task-based)?
• Attributed or Unattributed [70]: Does the summary include citations to the summarized documents (attributed), or does it lack citations (unattributed)?
• Original or Update [56, 63]: Should the summary include all information (original), or only information that the reader does not already know (update)?
• Closed or Open Domain [26, 92]: Are the documents to summarize supplied (closed domain), or must the system perform a search to identify the appropriate documents (open domain)?
The reports in which we are interested are attributed task-based informative abstractive open-domain multi-document summaries that may call for either original or update summaries.

3.1.2 Retrieval-Augmented Generation. Following preliminary research on furnishing transformer architectures with external knowledge sources, Lewis et al. [41] introduce RAG models as a way to improve language model performance on knowledge-intensive tasks, using an encoded Wikipedia collection as a non-parametric memory system. RAG models have since been used to improve dialogue systems [38, 77], machine translation [7, 8], and text-style transfer [44], among other applications [43]. Various approaches have been proposed to incorporate RAG models into summarization [2, 64] and other document generation tasks. One use of retrieval has been to find an example summary, sometimes with retrieved-summary reranking [9], to serve as a template for the summary of another document. Retrieval can also be used to improve language model factuality. By curating large, high-quality collections, generation can be grounded in supporting documents [4]. This mechanism has been shown to be particularly beneficial for rarer entities and concepts [54]. Finally, RAG enables LLMs to access information that was not available at pre-training time, such as proprietary or copyrighted information [57]. Vision-language modeling [1, 3, 51] enables multimodal retrieval-augmented generation systems that benefit from rich non-textual data [33, 60].
Different modalities facilitate the completion of different tasks, including image understanding [12, 93], open-domain VQA [33, 49], translation [20], and multimodal generation [89].

3.2 Evaluation

As report generation includes elements of several prior tasks, including document retrieval, summarization, question answering, and retrieval-augmented generation, we briefly review salient work on those tasks that we see as related to ARGUE.

3.2.1 Information Retrieval. Evaluation of ad hoc retrieval is typically based on assessor-produced relevance judgments of documents that are selected by pooling system responses in a shared task, or sometimes based on active learning [29, 72]. Obtaining both good precision and good recall is important in real-world systems, so commonly used metrics combine both components (e.g., mean average precision, nDCG [35]). Statistical significance testing can be performed, for example with Student's $t$-test [78]. In a report-writing scenario, recall is important to allow assessment of how comprehensively the report responds to the report request. Precision is also important for automated report generation; reports are a type of multi-document synthesis, and incorporating content from non-pertinent documents can adversely affect the utility of the report. To create evaluation datasets for report writing, care must be taken to develop report requests that match information available in the document collection. If requests are too broadly scoped, or if too much salient information is present in the collection, it will be difficult (i.e., prohibitively expensive in human labor) to determine the full set of correct nuggets present in the collection.

3.2.2 Summarization. Evaluating automatic summarization can require significant manual effort. In 2001, NIST initiated the Document Understanding Conference (DUC) to develop evaluation methods for summarization. DUC continued until 2007 and then became the summarization track of the Text Analysis Conference (TAC) through 2014. The DUC/TAC summarization evaluations were notable for having people write summaries manually, and using those "model" summaries (or "reference texts") as the jumping-off point for metric development. The DUC evaluation procedure measured coverage (that is, recall) through a pairwise comparison between two summaries: the model summary and a "peer" summary (which could be a generated summary or another model). The model was divided into Elementary Discourse Units (EDUs), essentially clauses [45, 80], while the peer was split on sentence boundaries. An assessor would match each EDU with the sentences in the peer that contained that information, and indicate how much of the meaning of the EDU was expressed in the corresponding matched peer units. Unmarked sentences in the peer were then marked for relevance. Harman and Over [31] found that model summaries from different authors were markedly different, and that assessors also did not agree on model unit coverage ratings. Work also began around DUC 2003 on automatic metrics, specifically comparing the model summary to the peer using word n-gram statistics. Lin and Hovy [48] looked at the BLEU measure developed for machine translation, and found that recall on word unigrams correlated better with the DUC assessments than full BLEU scoring, which incorporates longer n-grams.
Following that, they developed ROUGE [46], a recall-oriented metric similar to BLEU. ROUGE has a number of variants depending on how tokens are parsed, how n-grams are selected and assembled, and how scores are aggregated across summaries to obtain a system score. A study by Graham [28] explored a large grid of ROUGE parameters in comparison with BLEU using data from DUC-2004, and found that BLEU and ROUGE-2 (2-grams, stemmed, stopwords removed, computing an average of precision scores) had the highest correlation with human assessment. ROUGE has been used to evaluate summarization [46], Long-Form Question Answering (LFQA) [39, 88], and RAG [41]. ROUGE has well-documented problems as an evaluation metric in, e.g., summarization [28] or LFQA [39]. From our perspective, its main problems as an evaluation metric for report generation are its requirement for reference reports (making it expensive), its poor robustness to hallucination (making it inaccurate), and that it does not handle citations (making it incomplete).
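For reference, ROUGE-2 scores of the kind discussed above can be computed with Google's rouge-score package; a minimal usage sketch (our example, with made-up reference and peer strings):

```python
# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
scores = scorer.score(
    "Two Libyans were indicted in 1991 for the Lockerbie bombing.",           # reference
    "In 1991, two Libyan suspects were indicted for the Lockerbie bombing.",  # peer
)
print(scores["rouge2"])  # Score(precision=..., recall=..., fmeasure=...)
```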
These approaches have two limitations: they only measure ability to complete end tasks, and thus cannot assess intermediate stages or evaluate generation across multiple dimensions; and they are not well-suited to capture failures that can be introduced by current generative models [27]. More recently, techniques have been proposed to evaluate RAG systems more holistically. Gienapp et al. [25] introduce a theoretical framework for evaluating ad hoc generative retrieval. Chen et al. [11] focus on robustness of RAG systems against various perturbations. Thakur et al. [82] benchmark hallucinations and the ability of RAG systems to identify relevant information for 18 languages. Others have introduced benchmarks to measure the ability of RAG systems to provide citations [6, 23, 53, 90]. While not specifically designed for RAG applications, metrics designed to evaluate factuality (e.g., FactScore [58]) or faithful manipulation of long inputs (e.g., BooookScore [10]) can complement application-specific evaluation frameworks. Most approaches to automated evaluation aim to estimate the effectiveness of RAG systems across desirable dimensions (e.g., faithfulness, answer relevance, and context relevance). Techniques include prompting LLMs to evaluate generated summaries [76], and fine-tuning lightweight models on synthetic data [73]. Downstream applications, such as question answering, can also be used to evaluate the effectiveness of RAG systems [74].

² See https://tac.nist.gov/publications/2010/presentations/TAC2010_Summ_Overview.pdf for an example of SCUs as nuggets.

[Figure 1: A pair of example Summary Content Units. Four semantically similar sentences from four different model summaries are grouped into two SCUs highlighting the key facts from those sentences. From Nenkova and Passonneau [61].
A1: In 1998 two Libyans indicted in 1991 for the Lockerbie bombing were still in Libya.
B1: Two Libyans were indicted in 1991 for blowing up a Pan Am jumbo jet over Lockerbie, Scotland in 1988.
C1: Two Libyans, accused by the United States and Britain of bombing a New York-bound Pan Am jet over Lockerbie, Scotland in 1988, killing 270 people, for 10 years were harbored by Libya, who claimed the suspects could not get a fair trial in America or Britain.
D2: Two Libyan suspects were indicted in 1991.
SCU1 (w=4): two Libyans were officially accused of the Lockerbie bombing. A1 [two Libyans]1 [indicted]1; B1 [Two Libyans were indicted]1; C1 [Two Libyans,]1 [accused]1; D2 [Two Libyan suspects were indicted]1.
SCU2 (w=3): the indictment of the two Lockerbie suspects was in 1991. A1 [in 1991]2; B1 [in 1991]2; D2 [in 1991.]2]

4 PROPOSED FRAMEWORK

This section describes our conceptual evaluation framework for automated report generation. We name this abstract framework ARGUE (Automated Report Generation Under Evaluation) for convenience. We model the information need as a report request, which is analogous to the topics in TREC-style ad hoc retrieval evaluation. The report writer is required to respond with a verifiable report, with citations to its information sources. As in retrieval system evaluation, we restrict the system to citing documents in a pre-defined document collection instead of arbitrary information on the web. The framework is thus limited in the range of writing types it can evaluate.
In particular, it does not currently support evaluation of reported information that is not explicitly supported by statements in the document collection. This restriction allows experiments that compare systems across research studies and over time.

4.1 Framework Overview

In ARGUE, creating a report generation benchmark has three phases. The first phase creates evaluation data. We believe that systems should be evaluated over human-curated data so that they are ranked on effectiveness rather than alignment to machine output. System input comprises a document collection and report requests that describe information needs.

The second phase distributes these inputs to participants. Generated reports are expected to be responsive to the information needs statements. A valid report will cite source documents that contain the reported information. Citations are a key attribute of this framework. Other report stylistic requirements might include, for example, a length limit to encourage systems to express information succinctly. If the document collection is in a language different from the report request, or is multilingual, the report may be required to be written in the language of the report request. We envision that the input data will be distributed as part of an evaluation campaign, but this is not required. Assuming an evaluation campaign, generated reports will be received and evaluated by assessors; however, to support reusability, key components will be replaced by automated mechanisms to allow future systems to be scored using the same evaluation data.

The third phase scores reports. Since the goal of this framework is to evaluate systems, each system will need to generate multiple reports based on the various report requests. Report scores will be aggregated to assign system scores. Required information in reports will be expressed by assessors in the form of nugget questions and answers. Answers will be attested in the collection and tied to particular documents that attest those answers, thereby tying the nuggets to supporting documents. During scoring, report citations will be used to determine which nuggets are described in the report. Thus there will be a notion of recall over nuggets, which is a new feature in RAG evaluation. Citations will also be used to ensure that non-required information that is included in the report (facts that are not part of the necessary nuggets) is attested in the collection. A precision score over report segments measures how well the report adheres to information found in the collection. This allows hallucination to be addressed, whether it be false information or true information that is unattested. While traditional recall and precision are set measures, they can be modified to account for some nuggets having greater weight than others or to allow report segments to bear multiple citations.

4.2 Evaluation Inputs and Outputs

4.2.1 Evaluation Inputs. The first system input is the collection of items that will be used as source material for the retrieval task. While these items could be documents written in one or more languages, it is also possible for the items to be images, videos, audio files, or some combination. For the remainder of this paper, we will refer to the items as documents. Because of the importance of having citeable units, the document collection will be divided into target elements, which are typically documents, but can be smaller units of text such as passages, paragraphs, or sentences, depending on the needs of the evaluation. (One hypothetical way to represent these inputs in code is sketched below.)
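To make the framework's bookkeeping concrete, here is a minimal sketch of the evaluation inputs as Python data structures. This is purely illustrative; ARGUE does not prescribe any serialization, and every class and field name here is our own invention.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TargetElement:
    """A citeable unit: a whole document, or a passage/paragraph/sentence."""
    doc_id: str  # e.g., "D1"
    text: str

@dataclass
class ReportRequest:
    """An assessor-developed information need, analogous to a TREC topic."""
    request_id: str
    statement: str             # the report request text
    language: str = "en"       # required report language
    max_length: Optional[int] = None  # optional stylistic constraint

@dataclass
class Collection:
    """The pre-defined collection that systems are restricted to citing."""
    elements: dict = field(default_factory=dict)  # doc_id -> TargetElement
```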
In this paper we will assume that an entire document has been selected as the target element. Segmentation into target elements should be done once and distributed with the collection to ensure that all systems are evaluated on an even footing. The document collection should include documents that contain sufficient information relevant to the desired report. Following most information retrieval-based evaluations, documents are assumed to be truthful; verifying the truthfulness of document contents is orthogonal to and beyond the scope of the framework. Instead, the framework focuses on citation, requiring that all reported information cites supporting documents from the evaluation document collection. Information that cites a document incorrectly or that is missing a required citation is appropriately penalized.

The second system input is a set of assessor-developed information needs referred to as report requests. A report will be generated for each report request. Report requests are more extensive and subtler than information needs for previous IR or summarization tasks. See Section 2 for the full report request description. Creation of report requests is a complex process that tries to satisfy multiple, sometimes conflicting goals. It bears many similarities to topic creation for a TREC-style IR evaluation [85]. In topic identification for ARGUE, the topic creator must be familiar both with information retrieval, and with any special requirements of the document collection. For example, a bilingual document collection would require that the topic creator be at least bilingual. A document collection on medical topics would require topic creators who were well-versed in the medical domain. In addition, an IR evaluation typically tries to control the number of documents that are relevant to the topic being developed, in part because doing so can improve reusability. An ARGUE evaluation must control not only the number of documents that contain relevant information, but also the number of nuggets and the number of target elements that align to each nugget. Having too many items in any of these categories leads to high assessment costs; having too few leads to higher score variance and lower ability to distinguish systems. That said, assessors need not capture all information that might satisfy the information need. It is up to the assessor to determine what, in their opinion, is the essential information.

4.2.2 Evaluation Output. The report will be generated by an automated report writer. Reports produced by the report writer should satisfy the constraints listed in Section 2. For the purposes of this framework, we make a convenience assumption that the report requester and the report audience are the same. As an example, the assessor could have the role of analyst, with the purpose of the report being to support the process of drawing analytic conclusions. The generated report will be segmented into report segments, either manually or automatically. For convenience, we will assume in this work that a report segment is a sentence, but it could be some other well-defined portion of report text. Finer-grained segments may enable more nuanced distinctions. Given that precision scores operate over report segments, and given that automated sentence segmentation is imperfect, we believe that it is important that the report writer control the segmentation.
Thus, each report must be segmented into sentences by the report writer prior to evaluation. The evaluation should include guidelines on sentence segmentation. The report must also include appropriate citations, pointers from source elements (sentences) to target elements (documents). Each report sentence will bear zero or more citations, as described below.

4.3 Citations

Each substantive sentence of a submitted report must cite the document target element(s) from which it was derived. Which sentences are substantive may vary according to the goals of the evaluation. A citation then is a pointer from one report segment to one target element. A given report segment may bear more than one citation, and a given target element may be cited more than once. By traversing such citations the evaluation system can map sentences in the report to documents and then to nuggets. Note that the report writer must know nothing about the nuggets that will be used to evaluate the report; they are known exclusively to the assessor. The assessor may choose to require just one citation per sentence, or, if completeness is to be measured, all valid and salient citations.

The validity of a citation has three components. First, the report segment must be supported by the target element. That is, reading the target element should verify the sentence's accuracy. In a manual evaluation, the assessor decides whether a given sentence is supported by the target element. In an automated evaluation, support of a report segment for a target element could be measured in several ways. The simplest is a semantic match, testing whether the semantics of the two texts match. A number of such automated metrics are available, such as Sentence-BERT [71] (a toy version of such a check is sketched after this section's discussion). A more accurate but harder measurement would be whether the target element entails the report sentence. Entailment has been a component of evaluation sets such as GLUE [87] and SUPERGLUE [86], and good solutions to the problem have been identified [67].

Second, at the same time, the sentence bearing the citation should be responsive to the report request. This means that the cited target element is linked to a nugget, and that the report segment provides an answer to one of that nugget's questions (see below for nugget questions). Thus the acceptability of a nugget answer depends on which document the report cites. Again, the assessor will determine whether the report segment answers a nugget question. One way to automate assessment of responsiveness might be to use an automated QA system to find answers to a nugget question, then use a semantic matching system to determine whether the report segment matches one of those answers.

Third, some evaluations will also assess whether a talented author in the field of the report would include that citation if they had written the report. An evaluation that simply wants all substantive sentences to bear a citation will omit this component; a more nuanced evaluation of reports in their final form could include it. In either case, judgments will need to be made on which sentences require a citation. Cases where no citation is required include introductory sentences, background sentences that reflect the problem statement, and sentences that summarize other cited sentences. If we are interested only in nugget recall, we can safely ignore whether sentences ought to have citations.
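As a brief aside, here is the minimal semantic-match support check mentioned above, using the sentence-transformers library's published API. The model name, threshold, and function name are our own illustrative choices, not components of ARGUE; a real evaluation might instead use an entailment model.

```python
# Toy citation-support check: does the cited target element semantically
# support the report sentence? The 0.6 threshold is an arbitrary assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def supports(target_element_text: str, report_sentence: str,
             threshold: float = 0.6) -> bool:
    emb = model.encode([target_element_text, report_sentence],
                       convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item() >= threshold
```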
If, on the other hand, we are interested in precision, we would not like to penalize a report for containing such non-citing sentences (except perhaps when measuring the quality of the report as a whole). To handle non-citing sentences, it must be determined whether the sentence should have a citation. If a citation is not needed, the report can be scored as if the identified sentences were not present in the report.

4.4 Nuggets

The proposed evaluation is centered on nuggets. A nugget is a piece of information that should appear in the report and that could be expressed in a variety of ways in the document collection.

4.4.1 Nugget Definition. A nugget in this framework is a combination of a question and one or more answers to that question that address some aspect of the report request and that are expressed in at least one target element in the collection. Nuggets must be expressed at an appropriate level of granularity for the desired report. If the report answers such a question using appropriate citations into the document collection, we deem it to have succeeded in identifying that nugget; evaluation metrics (described in Section 4.5 below) can then use statistics over the correctly answered, incorrectly answered, and unanswered nugget questions to produce a score for a given report. Answers to nugget questions should express the information that a reasonable person would expect in a report written in response to the report request. The concept of nuggets arose from summarization evaluation [61]. New in this framework is the expression of nuggets as questions with allowable answers. We are interested in evaluation data that can be used to automatically evaluate systems, much like relevance assessments can be used to evaluate an IR system even decades after their creation. We believe this formulation will be helpful in automating report generation evaluation. Nuggets need not capture everything any report responding to the report request might legitimately include. Given that reports by necessity will be shorter than the source documents, the assessor will determine the required information and express that as nuggets, reinforcing the idea that nuggets are opinions instead of facts. The set of answers to a nugget question are drawn from all the answers supported by the document collection. Questions and answers will be in the request language even if, for example, the source information comes from an image or is in a different language. (One possible nugget representation is sketched below.)

4.4.2 Nugget Identification. Nuggets are identified by the assessor. Nuggets must be both relevant to the report request and attested in the document collection. In practice, the assessor could either look through retrieved documents to identify important aspects of the topic from the target elements, or identify nuggets a report on the topic ought to include, then search the document collection to see which are attested. A combination of both methods could be used. To ensure reproducibility and enable evaluating recall, it is desirable to identify most (or all) nuggets that should be included. In addition to identifying the set of nuggets for a report request, the assessor must also identify each target element in the document collection that supports an answer to each nugget. To do so, the assessor must have both a way to identify target elements that contain nugget-supporting information, and a way to bind target elements to nugget answers.
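Continuing the illustrative data structures from Section 4.2.1 (again a hypothetical encoding of our own, not part of the framework), a nugget pairs a question with its attested answers, each answer bound to the documents that attest it:

```python
from dataclasses import dataclass, field

@dataclass
class NuggetAnswer:
    text: str                 # e.g., "2019" or "July 20, 2019"
    supporting_docs: set      # ids of target elements attesting this answer

@dataclass
class Nugget:
    question: str             # e.g., "When did Avengers: Endgame become
                              # the highest grossing film?"
    answers: list = field(default_factory=list)  # list of NuggetAnswer
    weight: float = 1.0       # optional assessor-assigned importance

    def answered_by(self, answer_text: str, cited_doc: str) -> bool:
        """A report answers the nugget only if it gives an allowable answer
        AND cites a document known to attest that answer."""
        return any(answer_text == a.text and cited_doc in a.supporting_docs
                   for a in self.answers)
```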
The former problem, identifying target elements that contain nugget-supporting information, is similar to that faced by many IR collection developers of ensuring that all or most relevant items have been discovered. Full collection annotation is not practical for large collections. Three main techniques for identifying relevant documents are interactive search, pooling [36, 81, 94], and active learning [14, 42, 68]. Interactive search is simply having a person use any desired tool to identify relevant documents. In pooling, the assessor judges only documents found in an aggregate of several systems' top results. Either assessors must have access to systems that together are likely to find most of the relevant documents, or this step must wait until task participants have submitted their runs. It is usually desirable to augment the pools manually using interactive search. In active learning, a classifier identifies relevant documents. Each time the assessor judges a document, the classifier is retrained to take the new judgment into account. Any or all of these techniques might be used to restrict the number of documents that must be examined during nugget identification.

The second task, assigning target elements to nuggets, is more challenging. We highlight three challenges here. First is within-nugget variation. For example, one nugget answer might be a superset of another, such as "June" versus "26 June." If the more general answer is acceptable, the more specific answer must be included in the answer set to distinguish it from an incorrect answer such as "12 June." The summarization community introduced hoppers [79] to capture commonality across descriptions that differ in some details. For example, two descriptions of a particular natural disaster might indicate different numbers of casualties; perhaps the descriptions were written at different times or based on two different information sources. Whether hopper-style conflation is used for a given evaluation depends on the desired report type. An overall report on the natural disaster might use hoppers; a report on how different news services covered the disaster might need to distinguish differing descriptions. As with decisions on nugget creation, if hoppers are used, the choice of hoppers is left to the assessor.

A second challenge is a single report segment or target element expressing information about more than one nugget. This is handled through multiple citations borne by a single report sentence, and/or multiple mappings between target elements and nuggets. This complicates the bookkeeping needed to give appropriate credit to each nugget, but poses no theoretical problems.

A third challenge is a single nugget requiring multiple report sentences or target elements to be fully captured. This challenge arises because nugget question/answer pairs lend themselves well to simple facts expressed in the report, but are less well suited to identifying complex information. Nonetheless we believe that the general framework will be extensible to complex nuggets whose expression is distributed across several report sentences or target elements by allowing complex questions answered by Boolean combinations of target elements, and by exploiting recent research in question answering [17, 18].

4.4.3 Practical considerations. The following considerations are not requirements of the framework, but instead practical tips we have gleaned working to instantiate this and similar evaluation frameworks.
First, we believe that an assessor must be familiar both with IR concepts and any special requirements of the collection and evaluation topic area (such as the aforementioned bilingual or medical settings). Second, it may be advantageous for an assessor to produce a gold standard report to help assemble the information that should be in a satisfactory report. Nugget questions can then be composed from that report. Creating a gold standard report also enables a ROUGE evaluation for comparison. Third, IR evaluations usually limit the number of relevant documents to simplify and reduce the cost of evaluation. Report evaluation would also like to control the number of nuggets and document mappings to ensure the evaluation can distinguish good and bad systems; however, this can eliminate from consideration practical use cases that would otherwise be in scope for the task. This tradeoff has traditionally been considered worthwhile, but it should be remembered that it is a tradeoff. Fourth, LLMs can call on memorized knowledge not found in the document collection. Often the LLM training collection is unknown. If the LLM has not seen the evaluation corpus, it will need to rely on hallucination, which will negatively affect evaluation data quality. Finally, while finding all potential nuggets is unnecessary since nugget worthiness is an assessor's opinion, finding all answers to nugget questions is important for collection reusability, especially as nuggets can only be answered using documents known to have the answer. If nuggets are generated prior to submissions, it might be worth pooling submissions to identify more nugget answers.

[Figure 2: Report sentence scoring. Answers to eight yes/no questions dictate an outcome for each input sentence. "+" indicates that the sentence is rewarded, "-" that it is penalized, and "0" that it does not affect the overall report score.]

[Figure 3: Example evaluation material for a report request.
Report Request: I am a Hollywood reporter writing an article about the highest grossing films Avengers: Endgame and Avatar. My article needs to include when each of these films was considered the highest grossing films and any manipulations undertaken to bring moviegoers back to the box office with the specific goal of increasing the money made on the film.
Gold Standard Report: Avatar originally became the highest grossing film in 2010 [D1]. Avengers: Endgame replaced Avatar as the highest grossing film in 2019 [D1, D2, D3, D8, D10, D12, D13]. It overtook Avatar by adding an additional six minutes of footage to the film to draw viewers back to the movie theater [D4]. Two years later Avatar was re-released in mainland China [D1, D2, D5, D6, D7, D8, D9, D10, D11]. It earned a sufficient amount of money to retake the title of highest-grossing film in 2021 [D5, D11, D6, D7, D2, D8, D9, D1].
Nuggets as Questions and Answers:
(1) When did Avatar first become the highest grossing film?
• 2010 [D1]
(2) When did Avengers: Endgame become the highest grossing film?
• 2019 [D1, D2, D3, D8, D10, D12, D13]
• July 2019 [D3, D12, D13]
• July 20, 2019 [D3]
• July 21, 2019 [D13]†
(3) What did studio executives do to the Avengers: Endgame film to become the highest grossing film?
• Added six minutes of additional footage [D4]
• Added footage [D4]
• Added 6 minutes [D4]
• Additional footage at the end of the film [D14]
(4) When did Avatar retake the title of highest grossing film?
• 2021 [D1, D2, D6, D7, D9, D11]
• March 2021 [D1, D6, D7, D9, D11]
• March 13, 2021 [D1, D6, D9]
• Two years after the Avengers: Endgame became the highest grossing film [D2]
(5) What event led to Avatar becoming the highest grossing film?
• Re-release in Mainland China [D1, D2, D5, D6, D7, D8, D9, D10]
• Re-release in China [D1, D2, D5, D6, D7, D8, D9, D10]
• Release in Mainland China for a second time [D1, D2, D5, D6, D7, D8, D9, D10]
• Returned to theaters in China [D11]
† In Taiwan Time]

4.5 Metrics

Many metrics can be used to assess automatically generated reports. Two common IR measures are recall and precision; we focus on these here because they are well-known, easy to calculate, and highlight most of the important scoring issues we face in generated report evaluation. Recall and precision each require a numerator and a denominator. The recall denominator is the number of distinct assessor-identified nuggets; its numerator is the number of correctly reported nuggets (those supported by one or more of the necessary supporting citations in the report). So recall tells us how many of the concepts central to the report were actually reported on. Precision must account for phenomena below the nugget level, so we calculate it over report segments (which again we assume to be sentences). The denominator is the number of report sentences, minus any sentence that does not require a citation or that properly cites information not part of any nugget. The numerator is the number of sentences deemed to bear accurate citations, plus any sentences specified by the evaluation that correctly bear no citation.

Figure 2 describes a typical approach to sentence evaluation. The rules embedded in the flowchart are not hard-and-fast, but are likely adequate for many evaluations under this framework. In the flowchart, "+" means the sentence is correct and should be rewarded; "-" means that it is incorrect and should be penalized; and "0" means that the sentence is not included in the score. The flowchart shows how each sentence of the report can be scored. We propose these principles to guide sentence scoring:
• Sentences with citations whose target element does not support them should be penalized (Outcome #1 in Figure 2).
• Properly cited and attested sentences that are not relevant to the report should be ignored (Outcome #2).
• A sentence that cites a target element supporting a nugget that the sentence fulfills should be rewarded (Outcome #3).
• Sentences that neither have nor require citations should not affect the score (Outcome #4).
• Sentences that should contain a citation but do not should be penalized the first time their claim occurs (Outcomes #5, #6).
• Sentences that claim the absence of a fact should be rewarded or penalized depending on whether the absence is explicitly stated as a nugget (Outcomes #7, #8). For this, a nugget can be created for information that the report request explicitly asks for but is not attested in the collection.

Most sentences will bear either zero or one citation. A sentence can bear multiple citations, either because the same information is multiply attested in the collection, or because it is complex. Sentences that cite multiple target elements supporting the same nugget are treated as a single citation. Alternatively, the evaluation may macroaverage citation scores if all sentences are to be given equal weight, or microaverage them if the focus is on citation accuracy. Support by multiple report sentences counts only once per nugget.

To automatically score a report, each decision diamond in Figure 2 must be automatable. Some are trivial, such as "Has citation;" others are less so. We believe current technology could do a reasonable job with most of the tasks. For instance, entailment models can likely determine if a document supports a report sentence. Note that originality is not a component of this evaluation; preventing plagiarism, while important, is a specialized area with its own metrics and evaluations [5, 22, 34, 66].

[Figure 4: Example report evaluation result.
(1) Avengers: Endgame and Avatar are two of the highest-grossing films in history. Outcome: #4
(2) Avengers: Endgame surpassed Avatar as the highest-grossing film globally, with a box office revenue of $2.787 billion. [D12] Outcome: #2
(3) This achievement was officially announced in July 2019. [D13] Outcome: #3, Nugget: 2
(4) To bring moviegoers back to the box office and increase the revenue of Avengers: Endgame, Marvel Studios implemented several strategies. Outcome: #4
(5) One of these strategies was re-releasing the film with additional content. Outcome: #6
(6) Marvel Studios President Kevin Feige confirmed plans for a re-release, including previously unseen footage, to generate renewed interest and boost ticket sales. [D101] Outcome: #2
(7) The new version of the film did not include any re-edited scenes within the main feature but instead featured additional scenes after the end credits, including a tribute. [D14] Outcome: #3, Nugget: 3
(8) This approach aimed to entice both existing and new audiences to revisit the theaters and contribute to the film's box office success. Outcome: #6
(9) Similarly, Avatar also made a comeback to reclaim its position as the highest-grossing film. Outcome: #4
(10) The film was re-released in China, where it achieved a single-day box office revenue of $8 million. [D2] Outcome: #3, Nugget: 5
(11) The re-release in China was particularly successful, as it generated significant revenue and propelled Avatar back to the top of the global box office charts. [D1] Outcome: #3, Nugget: 5
(12) The film's re-release in China was attributed to the impact of the COVID-19 pandemic, which led to the delay of many new releases and created an opportunity for Avatar to regain its popularity. [D2] Outcome: #3, Nugget: 5
(13) In conclusion, Avengers: Endgame and Avatar both held the title of the highest-grossing film globally at different points in time. Outcome: #4
(14) Marvel Studios strategically re-released Avengers: Endgame with additional content to attract audiences and boost ticket sales. Outcome: #6
(15) Avatar capitalized on the re-release trend in China, taking advantage of the pandemic-induced delay of new releases. Outcome: #6
(16) These manipulations aimed to increase the films' box office revenue and solidify their positions as record-breaking blockbusters. Outcome: #4]
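To make the arithmetic behind these definitions concrete, here is a small sketch (our own, with hypothetical names) that turns per-sentence outcomes like those in Figure 4 into the recall and precision described above. The class assigned to each outcome number is our reading of the Figure 4 walkthrough, in which outcomes #2, #4, and #6 do not enter the count; the authoritative mapping is the Figure 2 flowchart, which is not reproduced here.

```python
REWARD, PENALTY, IGNORE = "+", "-", "0"
OUTCOME_CLASS = {1: PENALTY, 2: IGNORE, 3: REWARD, 4: IGNORE,
                 5: PENALTY, 6: IGNORE, 7: PENALTY, 8: REWARD}

def score_report(sentence_outcomes, n_nuggets):
    """sentence_outcomes: list of (outcome_number, nugget_id_or_None)."""
    fulfilled = {n for o, n in sentence_outcomes if o == 3 and n is not None}
    rewarded = sum(1 for o, _ in sentence_outcomes if OUTCOME_CLASS[o] == REWARD)
    penalized = sum(1 for o, _ in sentence_outcomes if OUTCOME_CLASS[o] == PENALTY)
    countable = rewarded + penalized             # ignored sentences drop out
    precision = rewarded / countable if countable else 0.0
    recall = len(fulfilled) / n_nuggets          # repeats count once per nugget
    return precision, recall

# The sixteen sentence outcomes from Figure 4:
outcomes = [(4, None), (2, None), (3, 2), (4, None), (6, None), (2, None),
            (3, 3), (6, None), (4, None), (3, 5), (3, 5), (3, 5),
            (4, None), (6, None), (6, None), (4, None)]
print(score_report(outcomes, n_nuggets=5))       # -> (1.0, 0.6), as in Section 5
```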
5 EXAMPLE ASSESSMENT

Figure 3 shows an example of the two items required to do manual or automatic assessment. The report request identifies the desired report content. The nugget questions and answers show how each answer is linked to the documents that attest to that answer. The Gold Standard Report that is shown is optional, but a useful intermediate step for the assessor between source document search and nugget question creation.

Figure 4 is a report generated in response to the example in Figure 3, broken into report segments to illustrate manual evaluation. Each "Outcome: #" indicates how the sentence would be categorized using the flowchart in Figure 2. For Outcome #3, the nugget answer in the sentence is also recorded. In this example, no sentence received a negative assessment because there were no outcomes of #1 or #7. Therefore, precision is 5/(16 − 11) = 1.0. One nugget was repeated in sentences 10, 11, and 12, so recall is 3/5 = 0.6. Finally, for both sentences 2 and 6, the assessor would have needed to refer to the original source document to assess the statement, since the information in the sentence had not been captured in a required nugget. Assessing such sentences will likely be the most time-consuming part of manual assessment.

6 CONCLUSIONS

LLMs have enabled remarkable new ways to satisfy information needs. Rather than simply providing "10 blue links" or an extracted answer snippet, LLMs have the potential to peer into documents to identify information salient to a topic and compile it into highly coherent, long-form text responses. We envision these generated reports will be a central way that some users will satisfy complex, nuanced, or multifaceted information needs. Because we believe that current evaluation methodologies for these report-generation systems are insufficient to maintain quality and guard against known defects, we felt the need for a report evaluation framework based on core principles (responsiveness to the information need, grounding and verifiability in documents, completeness, and reusability) while deliberately omitting aspects of report generation that current systems do not seem to struggle with (e.g., coherence, structure, etc.). Our new perspective on report generation evaluation is IR-centric, pulling together tried-and-true notions of relevance, recall, and user modeling. We have also demonstrated an instantiation of our framework that could be applied either manually or with automatic systems. Evaluation methodologies inform progress and direct attention. We hope our proposed generated report evaluation framework will spur progress in the development of next-generation information access systems that can provide responsive, complete, and verifiable information on complex, nuanced, and multifaceted topics.

WHOSE PERSPECTIVE

This paper represents the perspectives of a group of industry and academic researchers at a variety of career stages.
+}
title_10K/test_title_short_2405.00987v1.json ADDED
@@ -0,0 +1,16 @@
+{
+"url": "http://arxiv.org/abs/2405.00987v1",
+"title": "S$^2$AC: Energy-Based Reinforcement Learning with Stein Soft Actor Critic",
+"abstract": "Learning expressive stochastic policies instead of deterministic ones has\nbeen proposed to achieve better stability, sample complexity, and robustness.\nNotably, in Maximum Entropy Reinforcement Learning (MaxEnt RL), the policy is\nmodeled as an expressive Energy-Based Model (EBM) over the Q-values. However,\nthis formulation requires the estimation of the entropy of such EBMs, which is\nan open problem. To address this, previous MaxEnt RL methods either implicitly\nestimate the entropy, resulting in high computational complexity and variance\n(SQL), or follow a variational inference procedure that fits simplified actor\ndistributions (e.g., Gaussian) for tractability (SAC). We propose Stein Soft\nActor-Critic (S$^2$AC), a MaxEnt RL algorithm that learns expressive policies\nwithout compromising efficiency. Specifically, S$^2$AC uses parameterized Stein\nVariational Gradient Descent (SVGD) as the underlying policy. We derive a\nclosed-form expression of the entropy of such policies. Our formula is\ncomputationally efficient and only depends on first-order derivatives and\nvector products. Empirical results show that S$^2$AC yields more optimal\nsolutions to the MaxEnt objective than SQL and SAC in the multi-goal\nenvironment, and outperforms SAC and SQL on the MuJoCo benchmark. Our code is\navailable at:\nhttps://github.com/SafaMessaoud/S2AC-Energy-Based-RL-with-Stein-Soft-Actor-Critic",
+"authors": "Safa Messaoud, Billel Mokeddem, Zhenghai Xue, Linsey Pang, Bo An, Haipeng Chen, Sanjay Chawla",
+"published": "2024-05-02",
+"updated": "2024-05-02",
+"primary_cat": "cs.LG",
+"cats": [
+"cs.LG"
+],
+"label": "Original Paper",
+"paper_cat": "Model AND Based AND Reinforcement AND Learning",
+"gt": "S$^2$AC: Energy-Based Reinforcement Learning with Stein Soft Actor Critic",
+"main_content": "INTRODUCTION

[Figure 1: Comparing S2AC (ours) to SQL (Haarnoja et al., ICML 17) and SAC (Haarnoja et al., ICML 18). S2AC with a parameterized policy is reduced to SAC if the number of SVGD steps is 0. SQL becomes equivalent to S2AC if the entropy is evaluated explicitly with our derived formula.]

MaxEnt RL (Todorov, 2006; Ziebart, 2010; Haarnoja et al., 2017; Kappen, 2005; Toussaint, 2009; Theodorou et al., 2010; Abdolmaleki et al., 2018; Haarnoja et al., 2018a; Vieillard et al., 2020) has been proposed to address challenges hampering the deployment of RL to real-world applications, including stability, sample efficiency (Gu et al., 2017), and robustness (Eysenbach & Levine, 2022). Instead of learning a deterministic policy, as in classical RL (Sutton et al., 1999; Schulman et al., 2017; Silver et al., 2014; Lillicrap et al., 2015), MaxEnt RL learns a stochastic policy that captures the intricacies of the action space. This enables better exploration during training and eventually better robustness to environmental perturbations at test time, i.e., the agent learns multimodal action space distributions, which enables picking the next best action in case a perturbation prevents the execution of the optimal one. To achieve this, MaxEnt RL models the policy using the expressive family of EBMs (LeCun et al., 2006). This translates into learning policies that maximize the sum of expected future reward and expected future entropy. However, estimating the entropy of such complex distributions remains an open problem. To address this, existing approaches either use tricks to go around the entropy computation or make limiting assumptions on the policy. This results in either poor scalability or convergence to suboptimal solutions. For example, SQL (Haarnoja et al., 2017) implicitly incorporates entropy in the Q-function computation. This requires using importance sampling, which results in high variability and hence poor training stability and limited scalability to high-dimensional action spaces.

[Figure 2: S2AC learns a more optimal solution to the MaxEnt RL objective than SAC and SQL. We design a multigoal environment where an agent starts from the center of the 2-d map and tries to reach one of the three goals (G1, G2, and G3). The maximum expected future reward (level curves) is the same for all the goals but the expected future entropy is different (higher on the path to G2/G3): the action distribution π(a|s) is bi-modal on the path to the left (G2 and G3) and unimodal to the right (G1). Hence, we expect the optimal policy for the MaxEnt RL objective to assign more weights to G2 and G3. We visualize trajectories (in blue) sampled from the policies learned using SAC, SQL, and S2AC. SAC quickly commits to a single mode due to its actor being tied to a Gaussian policy. Though SQL also recovers the three modes, the trajectories are evenly distributed. S2AC recovers all the modes and approaches the left two goals more frequently. This indicates that it successfully maximizes not only the expected future reward but also the expected future entropy.]

SAC (Haarnoja et al., 2018a), on the other hand, follows a variational inference procedure by fitting a Gaussian distribution to the EBM policy. This enables a closed-form evaluation of the entropy but results in a suboptimal solution.
For instance, SAC fails in environments characterized by multimodal action distributions. Similar to SAC, IAPO (Marino et al., 2021) models the policy as a uni-modal Gaussian. Instead of optimizing a MaxEnt objective, it achieves multimodal policies by learning a collection of parameter estimates (mean, variance) through different initializations for different policies. To improve the expressiveness of SAC, SSPG (Cetin & Celiktutan, 2022) and SAC-NF (Mazoure et al., 2020) model the policy as a Markov chain with Gaussian transition probabilities and as a normalizing flow (Rezende & Mohamed, 2015), respectively. However, due to training stability issues, the reported results in Cetin & Celiktutan (2022) show that though both models learn multi-modal policies, they fail to maximize the expected future entropy in positive-reward setups.

We propose a new algorithm, S2AC, that yields a more optimal solution to the MaxEnt RL objective. To achieve expressivity, S2AC models the policy as a Stein Variational Gradient Descent (SVGD) (Liu, 2017) sampler from an EBM over Q-values (target distribution). SVGD proceeds by first sampling a set of particles from an initial distribution, and then iteratively transforming these particles via a sequence of updates to fit the target distribution. To compute a closed-form estimate of the entropy of such policies, we use the change-of-variable formula for pdfs (Devore et al., 2012). We prove that this is only possible due to the invertibility of the SVGD update rule, which does not necessarily hold for other popular samplers (e.g., Langevin Dynamics (Welling & Teh, 2011)). While normalizing flow models (Rezende & Mohamed, 2015) are also invertible, the SVGD-based policy is more expressive, as it encodes the inductive bias about the unnormalized density and incorporates a dispersion term to encourage multi-modality, whereas normalizing flows encode a restrictive class of invertible transformations (with easy-to-estimate Jacobian determinants). Moreover, our formula is computationally efficient and only requires evaluating first-order derivatives and vector products. To improve scalability, we model the initial distribution of the SVGD sampler as an isotropic Gaussian and learn its parameters, i.e., mean and standard deviation, end-to-end. We show that this results in faster convergence to the target distribution, i.e., fewer SVGD steps. Intuitively, the initial distribution learns to contour the high-density region of the target distribution while the SVGD updates result in better and faster convergence to the modes within that region. Hence, our approach is as parameter efficient as SAC, since the SVGD updates do not introduce additional trainable parameters. Note that S2AC can be reduced to SAC when the number of SVGD steps is zero. Also, SQL becomes equivalent to S2AC if the entropy is computed explicitly using our formula (the policy in SQL is an amortized SVGD sampler).

Beyond RL, the backbone of S2AC is a new variational inference algorithm with a more expressive and scalable distribution characterized by a closed-form entropy estimate. We believe that this variational distribution can have a wider range of exciting applications.

We conduct extensive empirical evaluations of S2AC from three aspects.
We start with a sanity check on the merit of our derived SVGD-based entropy estimate on target distributions with known entropy values (e.g., Gaussian) or log-likelihoods (e.g., Gaussian Mixture Models) and assess its sensitivity to different SVGD parameters (kernel, initial distribution, number of steps and number of particles). We observe that its performance depends on the choice of the kernel and is robust to variations of the remaining parameters. In particular, we find that the kernel should be chosen to guarantee inter-dependencies between the particles, which turns out to be essential for invertibility. Next, we assess the performance of S2AC on a multi-goal environment (Haarnoja et al., 2017) where different goals are associated with the same positive (maximum) expected future reward but different (maximum) expected future entropy. We show that S2AC learns multimodal policies and effectively maximizes the entropy, leading to better robustness to obstacles placed at test time. Finally, we test S2AC on the MuJoCo benchmark (Duan et al., 2016). S2AC yields better performance than the baselines on four out of the five environments. Moreover, S2AC shows higher sample efficiency, as it tends to converge with fewer training steps. These results were obtained from running SVGD for only three steps, which results in a small overhead compared to SAC during training. Furthermore, to maximize run-time efficiency during testing, we train an amortized SVGD version of the policy to mimic the SVGD-based policy. Hence, this reduces inference to a forward pass through the policy network without compromising the performance.

2 PRELIMINARIES

2.1 SAMPLERS FOR ENERGY-BASED MODELS

In this work, we study three representative methods for sampling from EBMs: (1) Stochastic Gradient Langevin Dynamics (SGLD) & Deterministic Langevin Dynamics (DLD) (Welling & Teh, 2011), (2) Hamiltonian Monte Carlo (HMC) (Neal et al., 2011), and (3) Stein Variational Gradient Descent (SVGD) (Liu & Wang, 2016). We review SVGD here since it is the sampler we eventually use in S2AC, and leave the rest to Appendix C.1.

SVGD is a particle-based Bayesian inference algorithm. Compared to SGLD and HMC, which have a single particle in their dynamics, SVGD operates on a set of particles. Specifically, SVGD samples a set of m particles {a_j}_{j=1}^m from an initial distribution q^0, which it then transforms through a sequence of updates to fit the target distribution. Formally, at every iteration l, SVGD applies a form of functional gradient descent ∆f that minimizes the KL-divergence between the target distribution p and the proposal distribution q^l induced by the particles, i.e., the update rule for the i-th particle is:

$a_i^{l+1} = a_i^l + \epsilon \, \Delta f(a_i^l)$, with $\Delta f(a_i^l) = \mathbb{E}_{a_j^l \sim q^l}\big[ k(a_i^l, a_j^l) \nabla_{a_j^l} \log p(a_j^l) + \nabla_{a_j^l} k(a_i^l, a_j^l) \big]$.  (1)

Here, ε is the step size and k(·, ·) is the kernel function, e.g., the RBF kernel $k(a_i, a_j) = \exp(-\|a_i - a_j\|^2 / 2\sigma^2)$. The first term within the gradient drives the particles toward the high probability regions of p, while the second term serves as a repulsive force to encourage dispersion.
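To make Eq (1) concrete, here is a minimal NumPy sketch of one SVGD update with an RBF kernel. This is our own toy illustration, not the authors' released code; the function names and the Gaussian example target are assumptions.

```python
import numpy as np

def svgd_step(X, score, sigma=1.0, eps=0.1):
    """One SVGD update (Eq (1)). X is an (m, d) array of particles; `score`
    returns the target's score function grad log p(a) for each particle."""
    diffs = X[:, None, :] - X[None, :, :]        # a_i - a_j, shape (m, m, d)
    sq = (diffs ** 2).sum(-1)                    # squared pairwise distances
    K = np.exp(-sq / (2 * sigma ** 2))           # RBF kernel k(a_i, a_j)
    grad_k = diffs / sigma ** 2 * K[..., None]   # grad_{a_j} k(a_i, a_j)
    drive = K @ score(X)                         # pulls particles toward modes
    repulse = grad_k.sum(axis=1)                 # pushes particles apart
    return X + eps * (drive + repulse) / X.shape[0]

# Toy usage: fit a standard 2-d Gaussian, whose score is -a.
particles = np.random.randn(50, 2) * 3.0 + 2.0
for _ in range(200):
    particles = svgd_step(particles, score=lambda A: -A)
```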
2.2 MAXIMUM-ENTROPY RL

We consider an infinite horizon Markov Decision Process (MDP) defined by a tuple (S, A, p, r), where S is the state space, A is the action space, and p : S × A × S → [0, ∞] is the state transition probability modeling the density of the next state s_{t+1} ∈ S given the current state s_t ∈ S and action a_t ∈ A. Additionally, we assume that the environment emits a bounded reward function r ∈ [r_min, r_max] at every iteration. We use ρ_π(s_t) and ρ_π(s_t, a_t) to denote the state and state-action marginals of the trajectory distribution induced by a policy π(a_t|s_t). We consider the setup of continuous action spaces (Lazaric et al., 2007; Lee et al., 2018; Zhou & Lu, 2023).

MaxEnt RL (Todorov, 2006; Ziebart, 2010; Rawlik et al., 2012) learns a policy π*(a_t|s_t) that, instead of maximizing the expected future reward, maximizes the sum of the expected future reward and entropy:

$\pi^* = \arg\max_\pi \sum_t \gamma^t \, \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\big[ r(s_t, a_t) + \alpha \mathcal{H}(\pi(\cdot|s_t)) \big]$,  (2)

where α is a temperature parameter controlling the stochasticity of the policy and H(π(·|s_t)) is the entropy of the policy at state s_t. The conventional RL objective can be recovered for α = 0. Note that the MaxEnt RL objective above is equivalent to approximating the policy, modeled as an EBM over Q-values, by a variational distribution π(a_t|s_t) (see proof of equivalence in Appendix D), i.e.,

$\pi^* = \arg\min_\pi \sum_t \mathbb{E}_{s_t \sim \rho_\pi}\big[ D_{KL}\big(\pi(\cdot|s_t) \,\|\, \exp(Q(s_t, \cdot)/\alpha)/Z\big) \big]$,  (3)

where D_KL is the KL-divergence and Z is the normalizing constant. We now review two landmark MaxEnt RL algorithms: SAC (Haarnoja et al., 2018a) and SQL (Haarnoja et al., 2017).

SAC is an actor-critic algorithm that alternates between policy evaluation, i.e., evaluating the Q-values for a policy π_θ(a_t|s_t):

$Q_\phi(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma \, \mathbb{E}_{s_{t+1}, a_{t+1} \sim \rho_{\pi_\theta}}\big[ Q_\phi(s_{t+1}, a_{t+1}) + \alpha \mathcal{H}(\pi_\theta(\cdot|s_{t+1})) \big]$  (4)

and policy improvement, i.e., using the updated Q-values to compute a better policy:

$\pi_\theta = \arg\max_\theta \sum_t \mathbb{E}_{s_t, a_t \sim \rho_{\pi_\theta}}\big[ Q_\phi(s_t, a_t) + \alpha \mathcal{H}(\pi_\theta(\cdot|s_t)) \big]$.  (5)

SAC models π_θ as an isotropic Gaussian, i.e., π_θ(·|s) = N(µ_θ, σ_θ I). While this enables computing a closed-form expression of the entropy, it incurs an over-simplification of the true action distribution, and thus cannot represent complex distributions, e.g., multimodal distributions.

SQL goes around the entropy computation by defining a soft version of the value function, $V_\phi(s_t) = \alpha \log \int_A \exp\big(\tfrac{1}{\alpha} Q_\phi(s_t, a')\big) \, da'$. This enables expressing the Q-value (Eq (4)) independently from the entropy, i.e., $Q_\phi(s_t, a_t) = r(s_t, a_t) + \gamma \, \mathbb{E}_{s_{t+1} \sim p}[V_\phi(s_{t+1})]$. Hence, SQL follows a soft value iteration which alternates between the updates of the "soft" versions of the Q and value functions:

$Q_\phi(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma \, \mathbb{E}_{s_{t+1} \sim p}[V_\phi(s_{t+1})], \quad \forall (s_t, a_t)$  (6)

$V_\phi(s_t) \leftarrow \alpha \log \int_A \exp\big(\tfrac{1}{\alpha} Q_\phi(s_t, a')\big) \, da', \quad \forall s_t.$  (7)
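As an aside, the update in Eq (7) behaves like a smooth maximum over actions. A tiny sketch of how it is typically computed stably follows; this is our own illustration, with a discretized action set standing in for the integral (which differs from the true integral by an additive constant tied to the action-space volume).

```python
import numpy as np

def soft_value(q_values, alpha=1.0):
    """V(s) = alpha * log mean_a exp(Q(s, a) / alpha), via stable log-sum-exp."""
    z = np.asarray(q_values) / alpha
    m = z.max(axis=-1, keepdims=True)            # subtract max for stability
    return alpha * (m.squeeze(-1) + np.log(np.exp(z - m).mean(axis=-1)))

def soft_backup(r, gamma, v_next):
    """Eq (6) with a deterministic next state: Q(s, a) <- r + gamma * V(s')."""
    return r + gamma * v_next

q = np.array([[1.0, 2.0, 0.5]])                  # Q(s, a) for 3 sampled actions
print(soft_value(q, alpha=0.5))                  # smooth max; -> max as alpha -> 0
```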
Once the Q_φ and V_φ functions converge, SQL uses amortized SVGD (Wang & Liu, 2016) to learn a stochastic sampling network f_θ(ξ, s_t) that maps noise samples ξ into action samples from the EBM policy distribution $\pi^*(a_t|s_t) = \exp\big(\tfrac{1}{\alpha}(Q^*(s_t, a_t) - V^*(s_t))\big)$. The parameters θ are obtained by minimizing the loss $J_\theta(s_t) = D_{KL}\big(\pi_\theta(\cdot|s_t) \,\|\, \exp\big(\tfrac{1}{\alpha}(Q^*_\phi(s_t, \cdot) - V^*_\phi(s_t))\big)\big)$ with respect to θ. Here, π_θ denotes the policy induced by f_θ. SVGD is designed to minimize such a KL-divergence without explicitly computing π_θ. In particular, SVGD provides the most greedy direction as a functional ∆f_θ(·, s_t) (Eq (1)), which can be used to approximate the gradient ∂J_θ/∂a_t. Hence, the gradient of the loss J_θ with respect to θ is $\partial J_\theta(s_t)/\partial \theta \propto \mathbb{E}_\xi\big[ \Delta f_\theta(\xi, s_t) \, \partial f_\theta(\xi, s_t)/\partial \theta \big]$. Note that the integral in Eq (7) is approximated via importance sampling, which is known to result in high-variance estimates and hence poor scalability to high-dimensional action spaces. Moreover, amortized generation is usually unstable and prone to mode collapse, an issue similar to GANs. Therefore, SQL is outperformed by SAC (Haarnoja et al., 2018a) on benchmark tasks like MuJoCo.

3 APPROACH

We introduce S2AC, a new actor-critic MaxEnt RL algorithm that uses SVGD as the underlying actor to generate action samples from policies represented using EBMs. This choice is motivated by the expressivity of distributions that can be fitted via SVGD. Additionally, we show that we can derive a closed-form entropy estimate of the SVGD-induced distribution, thanks to the invertibility of the update rule, which does not necessarily hold for other EBM samplers. Besides, we propose a parameterized version of SVGD to enable scalability to high-dimensional action spaces and non-smooth Q-function landscapes. S2AC is hence capable of learning a more optimal solution to the MaxEnt RL objective (Eq (2)), as illustrated in Figure 2.

3.1 STEIN SOFT ACTOR CRITIC

Like SAC, S2AC performs soft policy iteration, which alternates between policy evaluation and policy improvement. The difference is that we model the actor as a parameterized sampler from an EBM. Hence, the policy distribution corresponds to an expressive EBM as opposed to a Gaussian.

Critic. The critic's parameters φ are obtained by minimizing the Bellman loss as traditionally:

$\phi^* = \arg\min_\phi \, \mathbb{E}_{(s_t, a_t) \sim \rho_{\pi_\theta}}\big[ (Q_\phi(s_t, a_t) - \hat{y})^2 \big]$,  (8)

with the target $\hat{y} = r_t(s_t, a_t) + \gamma \, \mathbb{E}_{(s_{t+1}, a_{t+1}) \sim \rho_\pi}\big[ Q_{\bar{\phi}}(s_{t+1}, a_{t+1}) + \alpha \mathcal{H}(\pi(\cdot|s_{t+1})) \big]$. Here $\bar{\phi}$ is an exponentially moving average of the value network weights (Mnih et al., 2015).

Actor as an EBM sampler. The actor is modeled as a sampler from an EBM over the Q-values. To generate a set of valid actions, the actor first samples a set of particles {a^0} from an initial distribution q^0 (e.g., Gaussian). These particles are then updated over several iterations l ∈ [1, L], i.e., {a^{l+1}} ← {a^l} + ε h({a^l}, s), following the sampler dynamics characterized by a transformation h (e.g., for SVGD, h = ∆f in Eq (1)). (A schematic of this sampling loop, reusing the toy svgd_step above, is sketched below.)
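Here is a hypothetical sketch of that actor loop, with our own names: `mu`, `sigma`, and `q_grad` stand for the learned initial distribution and the critic's action gradient, and the in-range particle selection anticipates the boundedness discussion that follows.

```python
import numpy as np

def sample_actions(s, mu, sigma, q_grad, alpha=1.0, n_particles=32,
                   n_steps=3, eps=0.1, kernel_sigma=1.0, t=2.0):
    """Draw a^0 ~ N(mu(s), sigma(s)), refine with L SVGD steps against the
    energy exp(Q(s, .)/alpha), then keep particles within t std devs of mu
    (the paper selects in-range particles rather than truncating them)."""
    a = mu + sigma * np.random.randn(n_particles, mu.shape[-1])  # a^0
    score = lambda A: q_grad(s, A) / alpha   # grad log EBM = (grad_a Q) / alpha
    for _ in range(n_steps):                 # L SVGD refinement steps, Eq (1)
        a = svgd_step(a, score, sigma=kernel_sigma, eps=eps)
    in_range = (np.abs(a - mu) <= t * sigma).all(axis=-1)
    return a[in_range]
```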
If q^0 is tractable and h is invertible, it is possible to compute a closed-form expression of the distribution of the particles at the l-th iteration via the change-of-variable formula (Devore et al., 2012):

$q^l(a^l|s) = q^{l-1}(a^{l-1}|s) \, \big|\det\big(I + \epsilon \nabla_{a^l} h(a^l, s)\big)\big|^{-1}, \quad \forall l \in [1, L].$

In this case, the policy is represented using the particle distribution at the final step L of the sampler dynamics, i.e., π(a|s) = q^L(a^L|s), and the entropy can be estimated by averaging log q^L(a^L|s) over a set of particles (Section 3.2). We study the invertibility of popular EBM samplers in Section 3.3.

[Figure 3: S2AC(φ, θ) achieves faster convergence to the target distribution (in orange) than S2AC(φ) by parameterizing the initial distribution N(µ_θ, σ_θ) of the SVGD sampler.]

Parameterized initialization. To reduce the number of steps required to converge to the target distribution (hence reducing computation cost), we further propose modeling the initial distribution as a parameterized isotropic Gaussian, i.e., a^0 ∼ N(µ_θ(s), σ_θ(s)). The reparameterization trick is then used to express a^0 as a function of θ. Intuitively, the actor would learn θ such that the initial distribution is close to the target distribution. Hence, fewer steps are required to converge, as illustrated in Figure 3. Note that if the number of steps L = 0, S2AC is reduced to SAC. Besides, to deal with the non-smooth nature of deep Q-function landscapes, which might lead to particle divergence in the sampling process, we bound the particle updates to be within a few standard deviations (t) from the mean of the learned initial distribution, i.e., −tσ_θ ≤ a^l_θ ≤ tσ_θ, ∀l ∈ [1, L]. Eventually, the initial distribution q^0_θ learns to contour the high-density region of the target distribution and the following updates refine it by converging to the spanned modes. Formally, the parameters θ are computed by minimizing the expected KL-divergence between the policy q^L_θ induced by the particles from the sampler and the EBM of the Q-values:

$\theta^* = \arg\max_\theta \, \mathbb{E}_{s_t \sim \mathcal{D}, \, a^L_\theta \sim \pi_\theta}\big[ Q_\phi(s_t, a^L_\theta) \big] + \alpha \, \mathbb{E}_{s_t \sim \mathcal{D}}\big[ \mathcal{H}(\pi_\theta(\cdot|s_t)) \big] \quad \text{s.t.} \quad -t\sigma_\theta \le a^l_\theta \le t\sigma_\theta, \ \forall l \in [1, L].$  (9)

Here, D is the replay buffer. The derivation is in Appendix E. Note that the constraint does not truncate the particles, as truncation is not an invertible transformation and would violate the assumptions of the change-of-variable formula. Instead, we sample more particles than we need and select the ones that stay within the range. We call S2AC(φ, θ) and S2AC(φ) the two versions of S2AC with and without the parameterized initial distribution. The complete S2AC algorithm is in Algorithm 1 of Appendix A.

3.2 A CLOSED-FORM EXPRESSION OF THE POLICY'S ENTROPY

A critical challenge in MaxEnt RL is how to efficiently compute the entropy term H(π(·|s_{t+1})) in Eq (2). We show that, if we model the policy as an iterative sampler from the EBM, under certain conditions we can derive a closed-form estimate of the entropy at convergence.
Theorem 3.1. Let F : R^n → R^n be an invertible transformation of the form F(a) = a + εh(a). We denote by q^L(a^L) the distribution obtained from repeatedly applying F to a set of samples {a^0} from an initial distribution q^0(a^0) over L steps, i.e., a^L = F ∘ F ∘ ··· ∘ F(a^0). Under the condition ε‖∇_{a^l} h(a^l)‖_∞ ≪ 1, ∀l ∈ [1, L], the distribution of the particles at the L-th step is:

$\log q^L(a^L) \approx \log q^0(a^0) - \epsilon \sum_{l=0}^{L-1} \mathrm{Tr}\big(\nabla_{a^l} h(a^l)\big) + O(\epsilon^2 d L).$  (10)

Here, d is the dimensionality of a, i.e., a ∈ R^d, and O(ε²dL) is the order of the approximation error.

Proof Sketch: As F is invertible, we apply the change-of-variable formula (Appendix C.2) to the transformation F ∘ F ∘ ··· ∘ F and obtain $\log q^L(a^L) = \log q^0(a^0) - \sum_{l=0}^{L-1} \log\big|\det\big(I + \epsilon \nabla_{a^l} h(a^l)\big)\big|$. Under the assumption ε‖∇_{a^l} h(a^l)‖_∞ ≪ 1, we apply the corollary of Jacobi's formula (Appendix C.3) and get Eq (10). The detailed proof is in Appendix F.

Note that the condition ε‖∇_a h(a)‖_∞ ≪ 1 can always be satisfied when we choose a sufficiently small step size ε, or when the gradient of h(a) is small, i.e., h(a) is Lipschitz continuous with a sufficiently small constant. It follows from the theorem above that the entropy of a policy modeled as an EBM sampler (Eq (9)) can be expressed analytically as:

$\mathcal{H}(\pi_\theta(\cdot|s)) = -\mathbb{E}_{a^0_\theta \sim q^0_\theta}\big[ \log q^L_\theta(a^L_\theta|s) \big] \approx -\mathbb{E}_{a^0_\theta \sim q^0_\theta}\Big[ \log q^0_\theta(a^0_\theta|s) - \epsilon \sum_{l=0}^{L-1} \mathrm{Tr}\big(\nabla_{a^l_\theta} h(a^l_\theta, s)\big) \Big].$  (11)

In the following, we drop the dependency of the action on θ for simplicity of the notation.

3.3 INVERTIBLE POLICIES

Next, we study the invertibility of three popular EBM samplers (SVGD, SGLD, and HMC), as well as the efficiency of computing the trace Tr(∇_{a^l} h(a^l, s)) in Eq (10) for the ones that are invertible.

Proposition 3.2 (SVGD invertibility). Given the SVGD learning rate ε and RBF kernel k(·, ·) with variance σ, if ε ≪ σ, the update rule of SVGD dynamics defined in Eq (1) is invertible.

[Figure 4: Entropy evaluation results. (a) Recovering the ground-truth entropy from an initial distribution q^0 = N(0, 6I): SVGD yields H(q^L) = 3.5, versus DLD −25.93, SGLD −11.57, and HMC −54.5. (b) Effect of the kernel variance σ on H(q^L). (c) Effect of the number of particles m and steps L on H(q^L).]
Note that the entropy does not depend on any matrix computation, but only on vector dot products and first-order vector derivatives. The proof is in Appendix H.1. Intuitively, the derived likelihood is proportional to (1) the concavity of the curvature of the Q-landscape, captured by a weighted average of the neighboring particles' Q-value gradients, and (2) the pairwise distances between neighboring particles (∼ ‖a_i^l − a_j^l‖² · exp(−‖a_i^l − a_j^l‖²)), i.e., the larger the distance, the higher the entropy. We elaborate on the connection between this formula and non-parametric entropy estimators in Appendix B.

Proposition 3.4 (SGLD, HMC). The SGLD and HMC updates are not invertible w.r.t. a.

Proof Sketch: SGLD is stochastic (noise term) and thus not injective. HMC is only invertible if conditioned on the velocity v. Detailed proofs are in Appendices G.1-G.2.

From the above theoretical analysis, we can see that the SGLD update is not invertible and hence is not suitable as a sampler for S2AC. While the HMC update is invertible, its derived closed-form entropy involves calculating a Hessian and is hence computationally more expensive. Due to these considerations, we choose SVGD with an RBF kernel as the underlying sampler of S2AC.

4 RESULTS

We first evaluate the correctness of our proposed closed-form entropy formula. Then we present the results of different RL algorithms on multigoal and MuJoCo environments.

4.1 ENTROPY EVALUATION

This experiment tests the correctness of our entropy formula. We compare the estimated entropy for distributions (with known ground-truth entropy or log-likelihoods) using different samplers and study the sensitivity of the formula to different sampler parameters. (1) Recovering the ground-truth entropy. In Figure 4a, we plot samples (black dots) obtained by SVGD, SGLD, DLD, and HMC at convergence to a Gaussian with ground-truth entropy H(p) = 3.41, starting from the same initial distribution (leftmost sub-figure). We also report the entropy values computed via Eq. (11). Unlike SGLD, DLD, and HMC, SVGD recovers the ground-truth entropy. This empirically supports Proposition 3.4 that SGLD, DLD, and HMC are not invertible. (2) Effect of the kernel variance. Figure 4b shows the effect of different SVGD kernel variances σ, where we use the same initial Gaussian from Figure 4a. We also visualize the particle distributions after L SVGD steps for the different configurations in Figure 9 of Appendix I. We can see that when the kernel variance is too small (e.g., σ = 0.1), the invertibility condition is violated, and thus the estimated entropy is wrong even at convergence. At the other extreme, when the kernel variance is too large (e.g., σ = 100), i.e., when the particles are too scattered initially, the particles do not converge to the target Gaussian due to noisy gradients in the first term of Eq. (1). The best configurations hence lie somewhere in between (e.g., σ ∈ {3, 5, 7}). (3) Effect of SVGD steps and particles. Figure 4c and Figure 10b (Appendix I) show the behavior of our entropy formula under different configurations of the number of SVGD steps and particles, in two settings: (i) GMMs with an increasing number of components M, and (ii) distributions with increasing ground-truth entropy values, i.e., Gaussians with increasing variances σ.
Results show that our entropy estimate consistently grows with increasing M (Figure 4c) and increasing σ (Figure 10b), even when a small number of SVGD steps and particles is used (e.g., L = 10, m = 10).

4.2 MULTI-GOAL EXPERIMENTS

Figure 5: Multigoal environment (goal rewards shown as level curves of the Q-landscape).

To check whether S2AC learns a better solution to the max-entropy objective (Eq. (2)), we design a new multi-goal environment, shown in Figure 5. The agent is a 2D point mass at the origin trying to reach one of the goals (in red). Q-landscapes are depicted by level curves. Actions are bounded in [−1, 1] along both axes. Critical states for the analysis are marked with blue crosses. It is built on the multi-goal environment in Haarnoja et al. (2017), with modifications such that all the goals have (i) the same maximum expected future reward (positive) but (ii) different maximum expected future entropy. This is achieved by asymmetrically placing the goals (two goals on the left side and one on the right, leading to a higher expected future entropy on the left side) while assigning the same final rewards to all the goals. The problem setup and hyperparameters are detailed in Appendix J.

(1) Multi-modality. Figure 6 visualizes trajectories (blue lines) collected from 20 episodes of S2AC(ϕ, θ), S2AC(ϕ), SAC, SQL, and SAC-NF (SAC with a normalizing-flow policy, Mazoure et al. (2020)) agents (rows) at test time for increasing entropy weights α (columns). S2AC and SQL consistently cover all the modes for all α values, while this is only achieved by SAC and SAC-NF for large α values. Note that, in the case of SAC, this comes at the expense of accuracy. Although normalizing flows are expressive enough in theory, they are known to quickly collapse to local optima in practice (Kobyzev et al., 2020). The dispersion term in S2AC encodes an inductive bias that mitigates this issue. (2) Maximizing the expected future entropy. We also see that with increasing α, more S2AC and SAC-NF trajectories converge to the left goals (G2/G3). This shows that both models learn to maximize the expected future entropy. This is not the case for SQL, whose trajectory distribution remains uniform across the goals. SAC results do not show a consistent trend. This validates the hypothesis that the entropy term in SAC only helps exploration but does not lead to maximizing future entropy. The quantified distribution over reached goals is in Figure 12 of Appendix J. (3) Robustness/adaptability. To assess the robustness of the learned policies, we place an obstacle (red bar in Figure 7) on the path to G2. We show the test-time trajectories of 20 episodes using S2AC, SAC, SQL, and SAC-NF agents trained with different α's. We observe that, for S2AC and SAC-NF, with increasing α, more trajectories reach the goal after hitting the obstacle. This is not the case for SAC, where many trajectories hit the obstacle without reaching the goal. SQL does not manage to escape the barrier even with higher α. Additional results on (4) the effect of the parameterization of q_0 and (5) the entropy's effect on the learned Q-landscapes are reported in Figure 11 and Figure 14 of Appendix J, respectively.
Figure 6: S2AC and SAC-NF learn to maximize the expected future entropy (biased towards G2/G3) while SAC and SQL do not. S2AC consistently recovers all modes, while SAC-NF with smaller α's does not, indicating its instability.

Figure 7: S2AC and SAC-NF are more robust to perturbations. Obstacle O is placed diagonally at [−1, 1]. Trajectories that did and did not reach the goal after hitting O are shown in green and red, respectively.

Figure 8: (a)-(e): Performance curves (average return vs. environment steps) on the MuJoCo benchmark during training: (a) Hopper-v2, (b) Walker2d-v2, (c) HalfCheetah-v2, (d) Ant-v2, (e) Humanoid-v2. S2AC outperforms SQL and SAC-NF in all environments and SAC in 4 out of 5 environments. (f)-(i): Comparison of Median, IQM, Mean, and Optimality Gap between S2AC and the baseline algorithms. (j): The probabilities of S2AC outperforming the baseline algorithms.

4.3 MUJOCO EXPERIMENTS

We evaluate S2AC on five environments from MuJoCo (Brockman et al., 2016): Hopper-v2, Walker2d-v2, HalfCheetah-v2, Ant-v2, and Humanoid-v2. As baselines, we use (1) DDPG (Gu et al., 2017), (2) PPO (Schulman et al., 2015), (3) SQL (Haarnoja et al., 2017), (4) SAC-NF (Mazoure et al., 2020), and (5) SAC (Haarnoja et al., 2018a). Hyperparameters are in Appendix K.

(1) Performance and sample efficiency. We train five different instances of each algorithm with different random seeds, with each performing 100 evaluation rollouts every 1000 environment steps. Performance results are in Figure 8(a)-(e). The solid curves correspond to the mean returns over the five trials, and the shaded region represents the minimum and maximum. S2AC(ϕ, θ) is consistently better than SQL and SAC-NF across all the environments and outperforms SAC in four out of five environments. The results also show that the initial parameterization was key to ensuring scalability (S2AC(ϕ) performs poorly compared to S2AC(ϕ, θ)). Figure 8(f)-(j) demonstrates the statistical significance of these gains by leveraging statistics from the rliable library (Agarwal et al., 2021), which we detail in Appendix K.

Table 1: Action selection run-time on MuJoCo.

                 Hopper  Walker2d  HalfCheetah  Ant
Action dim            3         6            6    8
State dim            11        17           17  111
SAC               0.723     0.714        0.731  0.708
SQL               0.839     0.828        0.815  0.836
S2AC(ϕ, θ)        3.267     4.622        4.583  5.917
S2AC(ϕ, θ, ψ)     0.850     0.817        0.830  0.837

(2) Run-time. We report the run-time of action selection for SAC, SQL, and S2AC in Table 1. The S2AC(ϕ, θ) run-time increases linearly with the action space.
To improve scalability, we train an amortized version that we deploy at test time, following Haarnoja et al. (2017). Specifically, we train a feed-forward deepnet f_ψ(s, z) to mimic the SVGD dynamics during testing, where z is a random vector that allows mapping the same state to different particles. Note that we cannot use f_ψ(s, z) during training, as we need to estimate the entropy in Eq. (11), which depends on the unrolled SVGD dynamics (details in Appendix K). The amortized version S2AC(ϕ, θ, ψ) has a run-time similar to SAC and SQL, with a slight tradeoff in performance (Figure 8).

5 RELATED WORK

MaxEnt RL (Todorov, 2006; Ziebart, 2010; Rawlik et al., 2012) aims to learn a policy that obtains high rewards while acting as randomly as possible. To achieve this, it maximizes the sum of the expected future reward and the expected future entropy. It differs from entropy regularization (Schulman et al., 2015; O'Donoghue et al., 2016; Schulman et al., 2017), which maximizes entropy at the current time step. It also differs from multi-modal RL approaches (Tang & Agrawal, 2018), which recover different modes with equal frequencies without considering their future entropy. MaxEnt RL has been broadly incorporated into various RL domains, including inverse RL (Ziebart et al., 2008; Finn et al., 2016), stochastic control (Rawlik et al., 2012; Toussaint, 2009), guided policy search (Levine & Koltun, 2013), and off-policy learning (Haarnoja et al., 2018a;b). MaxEnt RL is shown to maximize a lower bound of the robust RL objective (Eysenbach & Levine, 2022) and is hence less sensitive to perturbations in state and reward functions. From the variational-inference lens, MaxEnt RL aims to find the policy distribution that minimizes the KL-divergence to an EBM over the Q-function. The desired family of variational distributions is (1) expressive enough to capture the intricacies of the Q-value landscape (e.g., multimodality) and (2) characterized by a tractable entropy estimate. These two requirements are hard to satisfy simultaneously. SAC (Haarnoja et al., 2018a) uses a Gaussian policy; despite having a tractable entropy, it fails to capture arbitrary Q-value landscapes. SAC-GMM (Haarnoja, 2018) extends SAC by modeling the policy as a Gaussian mixture model, but it requires an impractical grid search over the number of components. Other extensions include IAPO (Marino et al., 2021), which also models the policy as a uni-modal Gaussian but learns a collection of parameter estimates (mean, variance) through different initializations. While this yields multi-modality, it does not optimize a MaxEnt objective. SSPG (Cetin & Celiktutan, 2022) and SAC-NF (Mazoure et al., 2020) improve policy expressivity by modeling the policy as a Markov chain with Gaussian transition probabilities and as a normalizing flow, respectively. Due to training instability, the multi-goal experiments reported in Cetin & Celiktutan (2022) show that, though both models capture multimodality, they fail to maximize the expected future entropy in positive-reward setups. SQL (Haarnoja et al., 2017), on the other hand, bypasses explicit entropy computation altogether via a soft version of value iteration. It then trains an amortized SVGD (Wang & Liu, 2016) sampler from the EBM over the learned Q-values. However, estimating soft value functions requires approximating integrals via importance sampling, which is known to have high variance and poor scalability.
We propose a new family of variational distributions induced by a parameterized SVGD sampler from the EBM over Q-values. Our policy is expressive and captures multi-modal distributions while being characterized by a tractable entropy estimate.

EBMs (LeCun et al., 2006; Wu et al., 2018) are represented as Gibbs densities p(x) = exp(E(x))/Z, where E(x) ∈ R is an energy function describing inter-variable dependencies and Z = ∫ exp(E(x)) dx is the partition function. Despite their expressiveness, EBMs are not tractable, as the partition function requires integrating over an exponential number of configurations. Markov Chain Monte Carlo (MCMC) methods (Van Ravenzwaaij et al., 2018), e.g., HMC (Hoffman & Gelman, 2014) and SGLD (Welling & Teh, 2011), are frequently used to approximate the partition function via sampling. There have been recent efforts to parameterize these samplers via deepnets (Levy et al., 2017; Gong et al., 2018; Feng et al., 2017) to improve scalability. Similarly to these methods, we propose a parameterized variant of SVGD (Liu & Wang, 2016) as an EBM sampler to enable scalability to high-dimensional action spaces. Beyond sampling, we derive a closed-form expression of the sampling distribution as an estimate of the EBM, which yields a tractable estimate of the entropy. This is opposed to previous methods for estimating EBM entropy, which mostly rely on heuristic approximations, lower bounds (Dai et al., 2017; 2019a), or neural estimators of mutual information (Kumar et al., 2019). The idea of approximating the entropy of EBMs via MCMC sampling by leveraging the change-of-variable formula was first proposed in Dai et al. (2019b). The authors apply the formula to HMC and LD, which, as we showed previously, violate the invertibility assumption. To get around this, they augment the EBM family with the noise or velocity variable for LD and HMC, respectively. But the derived log-likelihood of the sampling distribution turns out to be, counter-intuitively, independent of the sampler's dynamics and equal to the initial distribution, which is then parameterized using a flow model (details in Appendix B.2). We show that SVGD is invertible, and hence we sample from the original EBM, so our derived entropy is more intuitive as it depends on the SVGD dynamics.

SVGD (Liu & Wang, 2016) has also been explored in other RL contexts. Liu et al. (2017) use SVGD to learn a distribution over policy parameters. While this leads to learning diverse policies, it is fundamentally different from our approach, as we are interested in learning a single multi-modal policy with a closed-form entropy formula. Castanet et al. (2023) and Chen et al. (2021) use SVGD to sample from multimodal distributions over goals/tasks. We go beyond sampling and use SVGD to derive a closed-form entropy formula for an expressive variational distribution.

6 CONCLUSION

We propose S2AC, an actor-critic algorithm that yields a better solution to the MaxEnt RL objective than previously proposed approaches. S2AC achieves this by leveraging a new family of variational distributions characterized by SVGD dynamics. The proposed distribution has high expressivity, i.e., it is flexible enough to capture multimodal policies in high-dimensional spaces, and a tractable entropy estimate. Empirical results show that S2AC learns expressive and robust policies while achieving superior performance to other MaxEnt RL algorithms.
For future work, we plan to study the application of the proposed variational distribution to other domains and to develop benchmarks for evaluating the robustness of RL agents.

ACKNOWLEDGMENTS

Bo An is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-GC-2023-009). Haipeng Chen is supported by William & Mary FRC Faculty Research Grants."
|
| 16 |
+
}
|
title_10K/test_title_short_2405.00988v1.json
ADDED
|
@@ -0,0 +1,18 @@
|
| 1 |
+
{
|
| 2 |
+
"url": "http://arxiv.org/abs/2405.00988v1",
|
| 3 |
+
"title": "Context-Aware Clustering using Large Language Models",
|
| 4 |
+
"abstract": "Despite the remarkable success of Large Language Models (LLMs) in text\nunderstanding and generation, their potential for text clustering tasks remains\nunderexplored. We observed that powerful closed-source LLMs provide good\nquality clusterings of entity sets but are not scalable due to the massive\ncompute power required and the associated costs. Thus, we propose CACTUS\n(Context-Aware ClusTering with aUgmented triplet losS), a systematic approach\nthat leverages open-source LLMs for efficient and effective supervised\nclustering of entity subsets, particularly focusing on text-based entities.\nExisting text clustering methods fail to effectively capture the context\nprovided by the entity subset. Moreover, though there are several language\nmodeling based approaches for clustering, very few are designed for the task of\nsupervised clustering. This paper introduces a novel approach towards\nclustering entity subsets using LLMs by capturing context via a scalable\ninter-entity attention mechanism. We propose a novel augmented triplet loss\nfunction tailored for supervised clustering, which addresses the inherent\nchallenges of directly applying the triplet loss to this problem. Furthermore,\nwe introduce a self-supervised clustering task based on text augmentation\ntechniques to improve the generalization of our model. For evaluation, we\ncollect ground truth clusterings from a closed-source LLM and transfer this\nknowledge to an open-source LLM under the supervised clustering framework,\nallowing a faster and cheaper open-source model to perform the same task.\nExperiments on various e-commerce query and product clustering datasets\ndemonstrate that our proposed approach significantly outperforms existing\nunsupervised and supervised baselines under various external clustering\nevaluation metrics.",
|
| 5 |
+
"authors": "Sindhu Tipirneni, Ravinarayana Adkathimar, Nurendra Choudhary, Gaurush Hiranandani, Rana Ali Amjad, Vassilis N. Ioannidis, Changhe Yuan, Chandan K. Reddy",
|
| 6 |
+
"published": "2024-05-02",
|
| 7 |
+
"updated": "2024-05-02",
|
| 8 |
+
"primary_cat": "cs.CL",
|
| 9 |
+
"cats": [
|
| 10 |
+
"cs.CL",
|
| 11 |
+
"cs.LG",
|
| 12 |
+
"I.2.7; I.2.m"
|
| 13 |
+
],
|
| 14 |
+
"label": "Original Paper",
|
| 15 |
+
"paper_cat": "LLM Fairness",
|
| 16 |
+
"gt": "Context-Aware Clustering using Large Language Models",
|
| 17 |
+
"main_content": "Introduction Large Language Models (LLMs) have demonstrated human-level performance in text understanding and generation, but their application to text clustering tasks is underexplored. We observed that powerful closed-source LLMs (such as GPT-4 (Achiam et al., 2023) and Claude (Anthropic, 2023)), known for their instruction-following abilities, can provide high-quality clusterings through prompting. However, these models become unaffordable when clustering a large number of sets, due to their high costs. To overcome this limitation, we aim to develop a scalable model based on an open-source LLM that can efficiently and effectively perform the clustering task. We study this problem of transferring the knowledge of clustering task from a powerful closed-source LLM (LLMc) to a scalable open-source LLM (LLMo) under the framework of supervised clustering, where the goal is to learn to cluster unseen entity subsets, given training data comprising several examples of entity subsets with complete clusterings1 (See Figure 1). In this work, we focus particularly on entities described by text. This problem has applications in various domains including e-commerce, news clustering, and email man\u2217Corresponding author: [email protected] 1Complete clustering of a set refers to a clustering in which every entity in the set is assigned to a cluster. 1 arXiv:2405.00988v1 [cs.CL] 2 May 2024 \fPreprint. Under review. agement, among others (Finley & Joachims, 2005; 2008; Haider et al., 2007). However, deep learning approaches for solving the supervised clustering problem remain largely unexplored. Existing methods overlook the specific context provided by an entity subset and often rely on the latent structural loss function (Fernandes et al., 2012) which involves the sequential computation of maximum spanning forests. In our work, Training data Model Test sample Prediction ? sam pl e 1 sam pl e 2 sam pl e N Figure 1: Illustration of the supervised clustering problem: Each training sample contains a subset of entities along with the corresponding ground truth clustering. Given a test sample, which is an unseen entity subset, the goal is to cluster the entities in the test sample. In a sample, color denotes a cluster, and shape denotes an entity. we propose an LLMo-based solution called CACTUS (Context-Aware ClusTering with aUgmented triplet losS) that captures contextual information, introduces an enhanced loss function, and incorporates a self-supervised clustering task. The context of an entity subset refers to the unique circumstances that relate the specific entities occurring in the subset. For example, consider the task of clustering a user\u2019s monthly purchases. A purchase of \u2018magnetic tape\u2019 could signify various intentions, such as for a science project or picture mounting. Examining the user\u2019s other purchases could provide the necessary context to help us determine the use case and place the entity in the appropriate cluster. However, most existing text clustering methods obtain a single embedding for each entity using a language model (Ahmed et al., 2022; Barnabo et al., 2023), thus ignoring the context. In contrast, our model computes entity embeddings that are dependent on the context or entity subset, which allows the model to identify entities with shared themes within the subset. 
The proposed method takes the entire entity subset as input to the LLM and captures inter-entity interactions using a scalable attention mechanism, as traditional full attention over all entities in a subset can become computationally expensive as subsets grow large. Specifically, in each Transformer layer, for each entity, we compute a single representative embedding that participates in inter-entity attention. Previous methods for supervised clustering applied the latent structural loss to pairwise entity features that are either hand-crafted or obtained from a neural network. While the latent structural loss involves sequential computations of spanning forests, the triplet loss can be parallelized (processing all triplets in a clustering in parallel using more memory) but faces the challenge that different triplets may have non-overlapping margin positions (see Section 3.3). To address this issue, we augment the complete graph of entities with a neutral entity, which is connected to all other entities by a learnable similarity score that provides a reference for all margin locations. Additionally, to further improve supervised clustering performance, especially when the availability of ground truth clusterings is limited, we introduce a novel self-supervised clustering task. This task involves randomly sampling seed entities and constructing clusters with different transformations of each seed. This idea is inspired by text data augmentation techniques (Shorten et al., 2021) used in NLP tasks, but we formulate it, for the first time, as a self-supervised clustering task that aligns better with our finetuning phase. To summarize, the main contributions of our work are as follows:

• We propose a novel approach for supervised clustering of entity subsets using context-aware entity embeddings from LLMo with a scalable inter-entity attention mechanism.
• We identify a problem with directly applying the triplet loss to supervised clustering, where different triplets can potentially have non-overlapping margin locations. To address this, we design a novel augmented triplet loss function.
• We also design a self-supervised clustering task to improve LLMo's finetuning performance, especially when only a limited number of ground truth clusterings is available.
• Our experiments demonstrate that the proposed method, CACTUS, outperforms previous unsupervised and supervised clustering baselines on real-world e-commerce query and product clustering datasets. We also conduct ablation studies to show the effectiveness of each of the proposed components.

2 Related work

2.1 Traditional methods for supervised text clustering

The supervised clustering problem can be formulated as a binary pairwise classification task of predicting whether a pair of entities belongs to the same cluster. But this approach suffers from the drawback that the pairs are assumed to be i.i.d. (Finley & Joachims, 2005). Thus, structured prediction approaches have been explored as solutions to this problem. Traditional methods used hand-engineered pairwise features as inputs, where each pair of entities is described by a vector. Methods such as structural SVM (Tsochantaridis et al., 2004; Finley & Joachims, 2005) and the structured perceptron (Collins, 2002) have been applied to this problem, where a parameterized scoring function is learned such that it assigns higher scores to correct clusterings in the training data.
The scoring function depends on the pairwise features and the predicted clustering, and is formulated using correlation clustering (Bansal et al., 2002) or k-means (Finley & Joachims, 2008) frameworks. Observing that many within-cluster entity pairs have weak signals, Yu & Joachims (2009), Fernandes et al. (2012), and Haponchyk et al. (2018) introduce maximum spanning forests over complete graphs of entities as latent structures in the scoring function. The inference stage involves finding a clustering with the highest score for a given entity subset.

2.2 Language models for text clustering

Despite the widespread use of Language Models (LMs) across diverse domains and applications, their application to 'supervised' clustering remains limited. Haponchyk & Moschitti (2021) and Barnabo et al. (2023) utilize encoder-only LMs to obtain pairwise and individual entity representations, respectively, and finetune the LMs using the latent structural loss. The former is not a scalable approach, as each entity pair is passed separately through a conventional Transformer model. In contrast to these existing methods, we propose a novel approach that passes the entire entity set to a language model and efficiently models inter-entity interactions within the Transformer layers, thereby improving clustering performance by capturing the unique context given by an entity subset. Furthermore, we depart from the latent structural loss (used in these existing works), which involves the sequential step of computing maximum spanning forests, and employ an augmented triplet loss function that can be more easily parallelized and also achieves better performance. It is worth noting that LMs have been widely applied to the slightly different but more prevalent problems of unsupervised (Grootendorst, 2022; Zhang et al., 2021a;c; Meng et al., 2022) and semi-supervised clustering (Zhang et al., 2021b; Lin et al., 2020; Zhang et al., 2022; An et al., 2023). These tasks involve clustering a single large entity set, with some pairwise constraints provided for semi-supervised clustering. Some recent works (Viswanathan et al., 2023; Zhang et al., 2023; Nakshatri et al., 2023) take advantage of the latest advances in LLMs by using them as oracles to make key decisions during the clustering process. However, these approaches are not suitable for our problem of clustering several entity subsets, as they require a new optimization problem for every new entity subset. Different from these LLM-based methods, our approach involves prompting LLMc to gather complete clusterings of several small entity subsets, which are subsequently used to fine-tune a scalable LLMo that is adapted to capture the underlying context efficiently.

3 Proposed Method

This section provides a detailed description of the supervised clustering problem and our proposed method. Our approach involves finetuning an open-source pretrained Transformer encoder model, denoted by LLMo, for the task of context-aware clustering in a supervised
manner.

Figure 2: Overview of CACTUS. The entities in the input subset are tokenized and passed through LLMo, where the self-attention layers are modified with scalable inter-entity attention (SIA) to obtain context-aware entity embeddings. Pairwise cosine similarities are used for computing the loss and the predicted clusterings.

Here, 'context' refers to the subset in which an entity occurs, which influences the entity's interpretation. To capture context-awareness efficiently, we modify the self-attention layers of LLMo to implement a scalable inter-entity attention mechanism, which is described in Section 3.2. We identify limitations of directly applying the triplet loss to supervised clustering and propose an augmented triplet loss function as a solution in Section 3.3. We further pretrain LLMo on a dataset-specific self-supervised clustering task before the finetuning phase, which is described in Appendix B due to space constraints. During inference, given an entity subset, we extract context-aware entity embeddings from the finetuned model, compute pairwise similarities, and feed them to an agglomerative clustering algorithm to obtain the predicted clustering. We refer to the overall method as CACTUS (Context-Aware ClusTering with aUgmented triplet losS). Figure 2 provides an overview of the proposed approach.

3.1 Preliminaries

Let ℰ be the universal set of entities in a dataset. For an entity subset E ⊆ ℰ, a clustering C = (C, f) contains a set of clusters C and an entity-to-cluster assignment function f : E ↠ C, where ↠ denotes a surjective function. We say that two clusterings, C = (C, f) and C′ = (C′, f′), over the same entity subset E are equivalent if they induce the same partitioning of items, i.e., if the pairwise co-cluster relationships are preserved. Formally, the clusterings C and C′ are equivalent if and only if

∀ e1, e2 ∈ E:  f(e1) = f(e2) ⟺ f′(e1) = f′(e2).  (1)

A labeled clustering dataset D = {(E1, C1), ..., (E_|D|, C_|D|)} contains |D| samples, where each sample contains an entity subset Ek ⊆ ℰ and the corresponding ground truth clustering Ck. We describe the process of collecting cluster assignments from LLMc in Appendix A. These clusterings serve as ground truth in the dataset, which is partitioned into training, validation, and test splits. Given an entity subset Ek, our goal is to predict a clustering that is equivalent to the ground truth clustering Ck. We use text(e) to denote the textual description of entity e.

3.2 Context-awareness using Scalable Inter-entity Attention (SIA)

Here, we describe the architecture of LLMo, a Transformer encoder model that is finetuned on the supervised clustering task using ground truth clusterings from LLMc. A common approach for text clustering involves obtaining a single embedding vector separately for each entity using a language model and defining a similarity or distance function in the embedding space, which is then used in a clustering algorithm. We refer to this approach as NIA (No Inter-entity Attention), because there is no interaction between different entities in the embedding module. To capture context, i.e., to model entity embeddings that
depend on the entity subset they occur in, we can also pass the entire subset in the input sequence and pool each entity's token embeddings. We refer to this approach as FIA (Full Inter-entity Attention), because all the token pairs among different entities are considered in the attention matrix. This is not very practical, especially when entity descriptions are long and subsets grow large. So, we design a scalable inter-entity attention (SIA) mechanism that computes one representative embedding per entity, which is used for inter-entity interactions. Though there are scalable attention methods for handling long sequences in Transformers (Beltagy et al., 2020; Kitaev et al., 2020; Ainslie et al., 2020), this is the first work to explore scalable attention in the context of clustering. The proposed SIA approach is described in detail below.

We use the encoder of Flan-T5-base (Chung et al., 2022) as the underlying model and modify its attention layers for SIA. Let E = {e_1, ..., e_|E|} be an entity subset, where the tokens of entity e_i are denoted tokenize(text(e_i)) = (e_i^1, ..., e_i^{|e_i|}). A Transformer-based LM gathers initial token embeddings and iteratively updates them using stacked Multi-Head Attention (MHA) and Feed-Forward Network (FFN) layers. The MHA layer traditionally computes all token-token pairwise attention scores, making it computationally intensive for long inputs. In the SIA mechanism, we propose modifications to the MHA layer to make it more scalable for our clustering task. We split the attention computation into intra-entity and inter-entity components and make the latter more efficient by using pooled entity representations. Let e_i^j ∈ R^d denote the embedding of token e_i^j (the j-th token of the i-th entity) in the input to an MHA layer, and let ē_i = (1/|e_i|) Σ_k e_i^k denote the mean-pooled representation of entity e_i. The MHA layer transforms the embedding e_i^j into ẽ_i^j ∈ R^d as follows. For simplicity, we show the computations for a single attention head and skip the projection layer at the end of the MHA:

ẽ_i^j = Σ_{k=1}^{|e_i|} α_intra(e_i^j, e_i^k) W_V e_i^k  [intra-entity attention]  +  Σ_{m=1, m≠i}^{|E|} α_inter(e_i^j, ē_m) W_V ē_m  [inter-entity attention]  (2)

α_intra/inter(e_i^j, ·) = exp(Att_intra/inter(e_i^j, ·)) / ( Σ_{k=1}^{|e_i|} exp(Att_intra(e_i^j, e_i^k)) + Σ_{m=1, m≠i}^{|E|} exp(Att_inter(e_i^j, ē_m)) )  (3)

Att_intra(e_i^j, e_i^k) = (W_Q e_i^j)^⊤ (W_K e_i^k) + φ(k − j)  (4)

Att_inter(e_i^j, ē_m) = (W_Q e_i^j)^⊤ (W_K ē_m)  (5)

where W_Q, W_K, W_V ∈ R^{d×d} are the query, key, and value projection matrices, respectively. Eq. (2) shows that a token within one entity attends to aggregated representations of other entities rather than to individual tokens within those entities. The traditional softmax computation is altered in Eq. (3) to combine the intra- and inter-entity terms under one normalizer. The intra-entity attention (Eq. (4)) includes a relative positional encoding term, denoted by φ(·), while the inter-entity attention (Eq. (5)) does not. This is because the order of tokens within an entity is relevant, while the order of entities in a subset is irrelevant. The token embeddings from the last Transformer layer are mean-pooled entity-wise to obtain the context-aware entity embeddings.
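A minimal PyTorch sketch of this attention pattern is given below, assuming a single head, no output projection, no relative positional term, and an explicit loop over tokens for clarity; the names and dimensions are ours, not the released implementation.

```python
import torch
import torch.nn.functional as F

def sia_attention(tokens, entity_ids, Wq, Wk, Wv):
    """tokens: (T, d) token embeddings for the whole entity subset.
    entity_ids: (T,) entity index of each token.
    Follows Eqs. (2)-(5): full attention within an entity, attention to
    mean-pooled representatives across entities, one shared softmax."""
    T, d = tokens.shape
    ents = entity_ids.unique()
    # Mean-pooled representative per entity (bar e_m).
    reps = torch.stack([tokens[entity_ids == m].mean(0) for m in ents])
    q, k_tok, v_tok = tokens @ Wq, tokens @ Wk, tokens @ Wv
    k_rep, v_rep = reps @ Wk, reps @ Wv
    out = torch.empty_like(tokens)
    for t in range(T):
        same = entity_ids == entity_ids[t]       # tokens of the same entity
        other = ents != entity_ids[t]            # all other entities
        att_intra = q[t] @ k_tok[same].T         # Eq. (4), without phi(.)
        att_inter = q[t] @ k_rep[other].T        # Eq. (5)
        # One softmax over intra- and inter-entity logits together, Eq. (3).
        w = F.softmax(torch.cat([att_intra, att_inter]), dim=0)
        n = att_intra.numel()
        out[t] = w[:n] @ v_tok[same] + w[n:] @ v_rep[other]   # Eq. (2)
    return out

# Usage: two entities of 3 and 2 tokens, d = 8.
d = 8
tokens = torch.randn(5, d)
entity_ids = torch.tensor([0, 0, 0, 1, 1])
Wq, Wk, Wv = (torch.randn(d, d) * d ** -0.5 for _ in range(3))
print(sia_attention(tokens, entity_ids, Wq, Wk, Wv).shape)  # torch.Size([5, 8])
```

A practical implementation would batch these operations, use multiple heads, and add the φ(·) bias of Eq. (4); the masking structure stays the same.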
Complexity: Consider a subset of N entities where each entity contains L tokens, with a fixed embedding dimension d. The computational complexity of self-attention in the NIA embedding method is O(NL²), because there are NL tokens in the entity subset and each token only attends to the L tokens within the same entity. In contrast, the FIA approach increases the complexity to O(N²L²), as each token attends to all NL tokens from all entities. SIA provides a compromise between these two methods: it has O(NL(L + N)) complexity, because each token attends to the L tokens within the same entity and to N − 1 representative entity embeddings.

3.3 Augmented triplet loss

After obtaining context-aware entity embeddings, we compute the cosine similarity between all entity pairs in a subset:

sim(e_i, e_k) = ē_i^⊤ ē_k / (‖ē_i‖ ‖ē_k‖).  (6)

The similarities are used to obtain predicted clusterings using the average-link agglomerative clustering algorithm. For the loss function, using these pairwise similarities as edge weights, we can construct a fully connected graph where each entity is a node. Previous methods for supervised clustering employed the structural loss, which uses a scoring function based on a maximum spanning forest of the fully connected graph. This relies on Kruskal's MST algorithm, which sequentially adds edges to the spanning forest and leads to slower loss computation. In contrast, the triplet loss (Schroff et al., 2015), which was shown to be a competitive baseline in Barnabo et al. (2023), can be easily parallelized, as each triplet can be processed independently of the others. For each entity in the input subset, the triplet loss considers other entities within the same cluster as positives and the remaining entities as negatives. For an entity subset E with ground truth clustering C = (C, f), the triplet loss is given by

L_triplet(E, C, Θ) = (1/|T(C)|) Σ_{(e, e_p, e_n) ∈ T(C)} (γ − sim(e, e_p) + sim(e, e_n))_+  (7)

where Θ are the parameters of the context-aware entity embedding module, γ is the margin (a hyperparameter), and T(C) = {(e, e_p, e_n) : e, e_p, e_n ∈ E; e ≠ e_p; f(e) = f(e_p) ≠ f(e_n)} is the set of triplets.

Figure 3: Example of an entity subset with 3 clusters containing 2 entities each. There exists an intra-cluster (yellow) edge with similarity less than some inter-cluster (green-blue) edges. For margin = 0.3, the triplet loss (Eq. (7)) is at its minimum while the proposed augmented triplet loss (Eq. (8)) is not.

The triplet loss formulation presents a challenge due to potentially non-overlapping margin locations across different triplets. Margin location refers to the range between the similarities from the anchor entity (e) to the positive (e_p) and negative (e_n) entities within a triplet. For example, in Figure 3, with three clusters containing two entities each, the pairwise similarities shown result in the minimum value of the triplet loss. However, there exist inter-cluster edges with higher similarity than an intra-cluster edge, which results in the 'green' and 'blue' clusters being merged by the agglomerative clustering algorithm before the 'yellow' cluster is formed. This phenomenon can also occur for intra- and inter-cluster edges in different entity subsets, which makes it difficult to choose a global threshold for agglomerative clustering during inference.
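The following toy computation illustrates this pathology. The similarity values are hypothetical, chosen (with the margin of 0.3 from Figure 3) so that every triplet satisfies its margin even though an inter-cluster edge still outscores an intra-cluster edge:

```python
import itertools

# Three clusters of two entities each: {0,1} yellow, {2,3} green, {4,5} blue.
labels = [0, 0, 1, 1, 2, 2]
sim = {}  # hypothetical pairwise cosine similarities
for i, j in itertools.combinations(range(6), 2):
    if labels[i] == labels[j]:
        sim[(i, j)] = 0.2 if labels[i] == 0 else 0.9   # intra-cluster edges
    elif 0 in (labels[i], labels[j]):
        sim[(i, j)] = -0.1                             # yellow vs. others
    else:
        sim[(i, j)] = 0.5                              # green vs. blue

def s(i, j):
    return sim[(min(i, j), max(i, j))]

margin, loss, n = 0.3, 0.0, 0
for a, p, ng in itertools.permutations(range(6), 3):
    if labels[a] == labels[p] and labels[a] != labels[ng]:
        loss += max(0.0, margin - s(a, p) + s(a, ng))  # Eq. (7) hinge term
        n += 1
inter = max(v for (i, j), v in sim.items() if labels[i] != labels[j])
intra = min(v for (i, j), v in sim.items() if labels[i] == labels[j])
print("triplet loss:", loss / n)         # 0.0: every triplet meets its margin
print("max inter-cluster sim:", inter)   # 0.5 (a green-blue edge)
print("min intra-cluster sim:", intra)   # 0.2 (the yellow edge)
# Average-link agglomerative clustering would merge green and blue (0.5)
# before forming the yellow cluster (0.2), despite the zero loss.
```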
To avoid such problems, we augment the complete graph with a neutral node that is connected to all other entities via a learnable neutral edge similarity s_neu. The neutral node is incorporated into the augmented triplet loss to encourage intra- and inter-cluster edge similarities to lie on opposite sides of s_neu. The new loss function is given by

L_aug-triplet(E, C, Θ) = 1/(|T(C)| + |P_intra(C)| + |P_inter(C)|) [ Σ_{(e, e_p, e_n) ∈ T(C)} (γ − sim(e, e_p) + sim(e, e_n))_+ + Σ_{(e, e_p) ∈ P_intra(C)} (γ/2 − sim(e, e_p) + s_neu)_+ + Σ_{(e, e_n) ∈ P_inter(C)} (γ/2 − s_neu + sim(e, e_n))_+ ]  (8)

where P_intra(C) = {(e, e_p) : (e, e_p, ·) ∈ T(C)} is the set of entity pairs in the same cluster and P_inter(C) = {(e, e_n) : (e, ·, e_n) ∈ T(C)} is the set of entity pairs in different clusters. The newly added loss terms encourage the intra-cluster (inter-cluster) pairwise similarities to be γ/2 higher (lower) than the neutral edge similarity. Thus, the neutral edge softly constrains the margin location for all triplets.

Table 1: Dataset statistics. (*Since the Gifts dataset is proprietary, we provide approximate numbers for the statistics reported.)

                                     Gifts*  Arts    Games   Instruments  Office
No. of entities                      ~365K   22,595  16,746  10,522       27,532
No. of entity sets                   ~42K    55,629  54,995  27,420       100,775
Avg. size of entity set              ~46     5.4     5.7     5.6          5.0
Avg. no. of clusters per entity set  ~6      2.6     2.8     2.8          2.7
Avg. no. of entities per cluster     ~8      2.1     2.1     2.0          1.9
Avg. no. of words per entity         ~3      11.6    6.9     10.5         13.9

4 Experiments

In this section, we describe the datasets used for our experiments and compare the proposed method to existing unsupervised and supervised clustering baselines using external clustering evaluation metrics. Additionally, we conduct ablation studies to analyze the effectiveness of the different components of our method. Finally, we present a qualitative study to illustrate how context-awareness improves clustering performance.

4.1 Experimental setup

We compile five datasets for our experiments, including four from Amazon product reviews (Ni et al., 2019) and one proprietary dataset called Gifts. The Amazon datasets, comprising Arts, Games, Instruments, and Office, consist of sequences of products reviewed by users, with each user's product sequence treated as one entity subset. We use the preprocessed datasets from Li et al. (2023), considering product titles as the textual descriptions of entities. The Gifts dataset contains search queries related to 'gifts' from an e-commerce platform; each entity subset contains potential next queries for a particular source query. Dataset statistics are summarized in Table 1. On average, the Amazon datasets contain 5 to 6 entities per entity subset, while Gifts contains approximately 46 entities. In each dataset, we randomly sample 3K entity sets for the test split and 1K sets for the validation split, and use the remainder for training. For all datasets, we use a proprietary LLMc to collect ground truth clusterings. We run self-supervised pretraining for the Amazon datasets but not for Gifts, as the queries in Gifts are very short, making it difficult to obtain multiple transformations of a query. We evaluate the predicted clusterings from LLMo by comparing them to ground truth clusterings.
Thus, we use the following extrinsic clustering evaluation metrics: Rand Index (RI), Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), Adjusted Mutual Information (AMI), and F1-score (Haponchyk et al., 2018).

4.2 Comparison with baselines

As unsupervised clustering baselines, we employ the K-Means, Spectral, and Agglomerative clustering algorithms. The entity embeddings for the unsupervised baselines are obtained from the pretrained Flan-T5-base encoder. For K-Means and Spectral clustering, we determine the number of clusters for each entity set using either the silhouette method or the average number from the training set, based on validation metrics. For agglomerative clustering, we use cosine similarity with average linkage and determine the threshold based on the validation set. Given the scarcity of existing supervised clustering baselines, we incorporate only one such method, SCL, from Barnabo et al. (2023). NSC (Haponchyk & Moschitti, 2021) was not included, as it demands substantial GPU memory and often leads to OOM errors. For a fair comparison, we employ the Flan-T5-base encoder as the LLM for all baselines; the results are shown in Table 2. CACTUS significantly outperforms all the unsupervised and supervised baselines. Compared to SCL, CACTUS improves the AMI and ARI metrics by 12.3%-26.8% and 15.3%-28.2%, respectively. Among the unsupervised methods, agglomerative clustering yields the best result in most cases.

Table 2: Comparison of the proposed method to previous unsupervised and supervised clustering baselines. The first three are unsupervised methods and the last two are supervised clustering methods. (Agglo. stands for agglomerative clustering. *For the proprietary Gifts dataset, we report improvements against K-Means.)

Dataset      Model     NMI     AMI     RI      ARI     F1
Gifts*       K-Means   +0.000  +0.000  +0.000  +0.000  +0.000
             Spectral  +0.020  +0.024  -0.002  +0.006  +0.014
             Agglo.    +0.047  +0.009  -0.019  +0.011  +0.027
             SCL       +0.167  +0.196  +0.065  +0.195  +0.114
             CACTUS    +0.207  +0.260  +0.098  +0.263  +0.144
Arts         K-Means   0.660   0.167   0.690   0.250   0.766
             Spectral  0.642   0.192   0.683   0.272   0.790
             Agglo.    0.692   0.219   0.707   0.290   0.781
             SCL       0.725   0.371   0.751   0.435   0.833
             CACTUS    0.764   0.461   0.795   0.540   0.868
Games        K-Means   0.681   0.213   0.712   0.247   0.767
             Spectral  0.688   0.230   0.718   0.263   0.771
             Agglo.    0.640   0.268   0.691   0.291   0.799
             SCL       0.718   0.442   0.763   0.462   0.849
             CACTUS    0.777   0.540   0.813   0.565   0.876
Instruments  K-Means   0.678   0.181   0.705   0.213   0.764
             Spectral  0.686   0.196   0.713   0.229   0.767
             Agglo.    0.707   0.226   0.719   0.257   0.776
             SCL       0.728   0.436   0.765   0.451   0.849
             CACTUS    0.786   0.553   0.817   0.578   0.883
Office       K-Means   0.731   0.267   0.748   0.332   0.808
             Spectral  0.735   0.275   0.752   0.340   0.809
             Agglo.    0.748   0.324   0.760   0.383   0.829
             SCL       0.772   0.445   0.792   0.500   0.866
             CACTUS    0.821   0.562   0.842   0.626   0.902

Table 3: Results on the validation set using different architectures for the entity set encoder. The proposed method (Section 3.2) is indicated by *. The augmented triplet loss is used to train all models.

        Set encoder      AMI    ARI    F1
Arts    NIA              0.354  0.409  0.826
        SIA (KV-mean)    0.398  0.450  0.840
        SIA (first)      0.396  0.461  0.841
        SIA (hid-mean)*  0.398  0.467  0.845
        FIA              0.423  0.494  0.851
Office  NIA              0.442  0.495  0.867
        SIA (KV-mean)    0.470  0.526  0.875
        SIA (first)      0.493  0.552  0.881
        SIA (hid-mean)*  0.513  0.568  0.885
        FIA              0.493  0.553  0.879

Figure 4: GPU memory usage for inference using the NIA, SIA (hid-mean), and FIA methods.
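For reference, the inference step shared by the agglomerative baseline and CACTUS (average-link agglomerative clustering over cosine similarities, with a validation-tuned stopping threshold) can be sketched as follows; the embeddings, threshold, and ground-truth labels here are placeholders rather than values from the paper:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

def cluster_entities(embeddings, threshold):
    """Average-link agglomerative clustering on cosine distance (1 - cos sim)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dist = np.clip(1.0 - normed @ normed.T, 0.0, None)
    np.fill_diagonal(dist, 0.0)
    model = AgglomerativeClustering(
        n_clusters=None,
        metric="precomputed",      # "affinity=" in older scikit-learn versions
        linkage="average",
        distance_threshold=threshold,
    )
    return model.fit_predict(dist)

# Placeholder entity embeddings for one test subset; in practice the
# threshold is picked on the validation split.
emb = np.random.default_rng(0).normal(size=(6, 16))
pred = cluster_entities(emb, threshold=0.8)
gold = [0, 0, 1, 1, 2, 2]  # hypothetical ground-truth clustering
print("ARI:", adjusted_rand_score(gold, pred))
print("AMI:", adjusted_mutual_info_score(gold, pred))
```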
4.3 Ablation studies

We conduct ablation experiments to assess the effectiveness of the various proposed components, including the context-aware entity embeddings, the augmented triplet loss function, and the self-supervised clustering task. For faster training, we utilize 3K training entity sets instead of the whole dataset for the ablation studies. We focus on the AMI, ARI, and F1 scores and skip NMI and RI, as the latter can sometimes be high for random clusterings and are not adjusted for chance, unlike AMI and ARI (Vinh et al., 2009).

Set encoder. We compare five different methods of obtaining entity embeddings; the results are shown in Table 3. The NIA, SIA (hid-mean), and FIA methods are described in Section 3.2. We explore two more scalable attention mechanisms: SIA (KV-mean), where keys and values are pooled instead of the hidden representations, and SIA (first), where the first token in each entity is used as the representative token for inter-entity attention. Both the SIA and FIA methods obtain better results than NIA, which demonstrates the importance of capturing the context given by an entity set. The FIA method achieves the best results on the Arts dataset, while SIA (hid-mean) achieves the best results on the Office dataset. Among the three SIA methods, SIA (hid-mean) yields the highest metrics on both datasets. Figure 4 shows increasing GPU memory usage during inference from NIA to SIA (hid-mean) to FIA. SIA achieves better results than FIA on the Office dataset despite consuming 42% less memory.

Table 4: Results on the validation set using different supervised clustering loss functions for training. The SIA (first) architecture is used for the set encoder.

        Loss               AMI    ARI    F1
Arts    cross-entropy      0.374  0.441  0.832
        structural loss    0.385  0.441  0.835
        triplet            0.389  0.444  0.837
        augmented triplet  0.396  0.461  0.841
Office  cross-entropy      0.488  0.548  0.876
        structural loss    0.494  0.549  0.881
        triplet            0.497  0.543  0.880
        augmented triplet  0.493  0.552  0.881

Table 5: Results on the validation set with and without self-supervision (SS). The SIA (hid-mean) architecture is used for the set encoder.

        SS  AMI    ARI    F1
Arts    ✗   0.398  0.467  0.845
        ✓   0.446  0.502  0.855
Office  ✗   0.513  0.568  0.885
        ✓   0.552  0.608  0.894

Figure 5: Case study: predicted clusterings with pairwise similarities using the SIA and NIA methods, for the entity subset (1) "leather repair doctor complete diy kit premixed glue paint ...", (2) "usa glue sticks 5 lb box approx 90 sticks full size sticks ...", (3) "gelly candle wax 55 ounce clear", and (4) "the candlemakers store 05417002641 natural soy wax 10 lb bag white". The SIA method correctly identifies the common cluster membership of the first two entities, where NIA fails (pairwise similarities shown in the figure for SIA: 0.43, 0.82, 0.34, 0.25, 0.09, 0.15; for NIA: 0.00, 0.83, 0.02, 0.04, 0.16, 0.08). The stopping threshold for agglomerative clustering is chosen based on the results on the validation set.

Loss function. We compare different loss functions, including the triplet and augmented triplet loss functions described in Section 3.3, the structural loss (Haponchyk & Moschitti, 2021), and the binary cross-entropy loss for pairwise classification. The results are shown in Table 4. The augmented triplet loss achieves the highest AMI, ARI, and F1 scores on the Arts dataset and the highest ARI and F1 scores on the Office dataset.

Self-supervision. Table 5 shows the clustering performance of our model with and without the proposed self-supervised pretraining phase described in Appendix B.
We initialize the model with pretrained Flan-T5 weights in both cases, but include an extra dataset-specific pretraining phase before finetuning when using self-supervision. Self-supervised clustering improves AMI, ARI, and F1 on both datasets.

4.4 Qualitative analysis

We qualitatively demonstrate the significance of context-aware embeddings using an example. Referring to Figure 5, using SIA embeddings, our model accurately identifies two products each under the 'Glue Products' and 'Candle Making Supplies' clusters. However, with NIA embeddings, the model fails to capture the similarity between the two glue products. Specifically, in the NIA embeddings, the first product, a leather repair glue paint, is placed closer to other leather repair products in the universal entity set but far away from products containing glue sticks. The SIA approach leverages the context provided by the current entity set and places the leather repair glue paint and the glue sticks (the first two entities) in the same cluster.

5 Conclusion

This paper presented a novel approach for supervised clustering of entity subsets using context-aware entity embeddings from LLMs. Context-awareness is achieved through a scalable inter-entity attention mechanism that facilitates interactions among different entities at each layer of the LLM. We also proposed an augmented triplet loss to address challenges encountered when directly applying the triplet loss to supervised clustering. A self-supervised clustering task is introduced by drawing inspiration from text-augmentation techniques, which helps enhance the fine-tuning performance. We demonstrated that by integrating the proposed components, our model outperforms existing methods by a significant margin on extrinsic clustering evaluation metrics. Future research could investigate alternative techniques for inter-entity attention, explore additional loss functions and self-supervision tasks, and extend the current work to more application domains."
|
| 18 |
+
}
|
title_10K/test_title_short_2405.01008v2.json
ADDED
|
@@ -0,0 +1,16 @@
|
| 1 |
+
{
|
| 2 |
+
"url": "http://arxiv.org/abs/2405.01008v2",
|
| 3 |
+
"title": "On Mechanistic Knowledge Localization in Text-to-Image Generative Models",
|
| 4 |
+
"abstract": "Identifying layers within text-to-image models which control visual\nattributes can facilitate efficient model editing through closed-form updates.\nRecent work, leveraging causal tracing show that early Stable-Diffusion\nvariants confine knowledge primarily to the first layer of the CLIP\ntext-encoder, while it diffuses throughout the UNet.Extending this framework,\nwe observe that for recent models (e.g., SD-XL, DeepFloyd), causal tracing\nfails in pinpointing localized knowledge, highlighting challenges in model\nediting. To address this issue, we introduce the concept of Mechanistic\nLocalization in text-to-image models, where knowledge about various visual\nattributes (e.g., \"style\", \"objects\", \"facts\") can be mechanistically localized\nto a small fraction of layers in the UNet, thus facilitating efficient model\nediting. We localize knowledge using our method LocoGen which measures the\ndirect effect of intermediate layers to output generation by performing\ninterventions in the cross-attention layers of the UNet. We then employ\nLocoEdit, a fast closed-form editing method across popular open-source\ntext-to-image models (including the latest SD-XL)and explore the possibilities\nof neuron-level model editing. Using Mechanistic Localization, our work offers\na better view of successes and failures in localization-based text-to-image\nmodel editing. Code will be available at\nhttps://github.com/samyadeepbasu/LocoGen.",
|
| 5 |
+
"authors": "Samyadeep Basu, Keivan Rezaei, Priyatham Kattakinda, Ryan Rossi, Cherry Zhao, Vlad Morariu, Varun Manjunatha, Soheil Feizi",
|
| 6 |
+
"published": "2024-05-02",
|
| 7 |
+
"updated": "2024-05-08",
|
| 8 |
+
"primary_cat": "cs.CV",
|
| 9 |
+
"cats": [
|
| 10 |
+
"cs.CV"
|
| 11 |
+
],
|
| 12 |
+
"label": "Original Paper",
|
| 13 |
+
"paper_cat": "Diffusion AND Model",
|
| 14 |
+
"gt": "On Mechanistic Knowledge Localization in Text-to-Image Generative Models",
|
| 15 |
+
"main_content": "Introduction In recent years, substantial strides in conditional image generation have been made through diffusion-based text-toimage generative models, including notable examples like Stable-Diffusion (Rombach et al., 2021), Imagen (Saharia *Equal contribution 1University of Maryland 2Adobe Research. Correspondence to: Samyadeep Basu <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). et al., 2022), and DALLE (Ramesh et al., 2021). These models have captured widespread attention owing to their impressive image generation and editing capabilities, as evidenced by leading FID scores on prominent benchmarks such as MS-COCO (Lin et al., 2014). Typically trained on extensive billion-scale image-text pairs like LAION5B (Schuhmann et al., 2022), these models encapsulate a diverse array of visual concepts, encompassing color, artistic styles, objects, and renowned personalities. A recent work (Basu et al., 2023) designs an interpretability framework using causal tracing (Pearl, 2001) to trace the location of knowledge about various styles, objects or facts in text-to-image generative models. Essentially, causal tracing finds the indirect effects of intermediate layers (Pearl, 2001), by finding layers which can restore a model with corrupted inputs to its original state. Using this framework, the authors find that knowledge about various visual attributes is distributed in the UNet, whereas, there exists a unique causal state in the CLIP text-encoder where knowledge is localized. This unique causal state in the text-encoder can be leveraged to edit text-to-image models in order to remove style, objects or update facts effectively. However, we note that their framework is restricted to early Stable-Diffusion variants such as Stable-Diffusion-v1-5. In our paper, we first revisit knowledge localization for textto-image generative models, specifically examining the effectiveness of causal tracing beyond Stable-Diffusion-v1-5. While causal tracing successfully identifies unique localized states in the text-encoder for Stable-Diffusion variants, including v1-5 and v2-1, it fails to do so for recent models like SD-XL (Podell et al., 2023) and DeepFloyd1 across different visual attributes. In the UNet, causal states are distributed across a majority of open-source text-to-image models (excluding DeepFloyd), aligning with findings in Basu et al. (2023). Notably, for DeepFloyd, we observe a lack of strong causal states corresponding to visual attributes in the UNet. To address the universal knowledge localization framework absence across different text-to-image models, we introduce the concept of mechanistic localization that aims to identify a small number of layers which control the generation of distinct visual attributes, across a spectrum of text-to-image 1https://github.com/deep-floyd/IF 1 arXiv:2405.01008v2 [cs.CV] 8 May 2024 \fPreprint Text-Encoder Replace the embedding of original prompt with a target prompt (e.g., \u2018a painting of a house\u2019) for this layer Low Fidelity to \u2019Van Gogh\u2019 style (b) UNet layers Cross-Attn Layers which use a different prompt than other layers Original Prompt: \u2018A house in the style of Van Gogh\u2019 \u201cMechanistic localization\u201d Output Modified! 
Cross-Attn layers Causal Intervention (Prior Works) Text-Encoder Causal Layer Distributed Knowledge No \u201cMechanistic localization\u201d LocoGen (Ours) (a) Prompt: \u2018A house in the style of Van Gogh\u2019 Original Generation Figure 1. LOCOGEN: Identifying UNet layers that, when given different input, can alter visual attributes (e.g., style, objects, facts). (a) Earlier works (Basu et al., 2023) which show distributed knowledge using causal interventions. (b) LOCOGEN where a few cross-attention layers receive a different prompt-embedding than the original, leading to generation of images without the particular style. models. To achieve this, we propose LOCOGEN, a method that finds a subset of cross-attention layers in the UNet such that when the input to their key and value matrices is changed, output generation for a given visual attribute (e.g., \u201cstyle\u201d) is modified (see Figure 1). This intervention in the intermediate layers has a direct effect on the output \u2013 therefore LOCOGEN measures the direct effect of intermediate layers, as opposed to indirect effects in causal tracing. Leveraging LOCOGEN, we probe knowledge locations for different visual attributes across popular open-source text-to-image models such as Stable-Diffusion-v1, StableDiffusion-v2, OpenJourney2, SD-XL (Podell et al., 2023) and DeepFloyd. For all models, we find that unique locations can be identified for visual attributes (e.g., \u201cstyle\u201d, \u201cobjects\u201d, \u201cfacts\u201d). Using these locations, we then perform weight-space model editing to remove artistic \u201cstyles\u201d, modify trademarked \u201cobjects\u201d and update outdated \u201cfacts\u201d in text-to-image models. This weight-space editing is performed using LOCOEDIT which updates the key and value matrices using a closed-form update in the locations identified by LOCOGEN. Moreover, for certain attributes such as \u201cstyle\u201d, we show that knowledge can be traced and edited to a subset of neurons, therefore highlighting the possibilities of neuron-level model editing. Contributions. In summary, our contributions include: \u2022 We highlight the drawbacks of existing interpretability methods such as causal tracing for localizing knowledge in latest text-to-image models. \u2022 We introduce LOCOGEN which can universally identify layers that control for visual attributes across a large spectrum of open-source text-to-image models. \u2022 By examining edited models using LOCOEDIT along 2https://huggingface.co/prompthero/openjourney with LOCOGEN, we observe that this efficient approach is successful across a majority of text-to-image models. 2. Related Works Intepretability of Text-to-Image Models. To our understanding, there\u2019s limited exploration into the inner workings of text-to-image models, such as Stable-Diffusion. DAAM (Tang et al., 2023; Hertz et al., 2022) scrutinizes diffusion models through the analysis of cross-attention maps between text tokens and images, highlighting their semantic precision. (Chefer et al., 2023) understand the decomposition of concepts in diffusion models. (Basu et al., 2023) leverage causal tracing to understand how knowledge is stored in text-to-image models such as Stable-Diffusion-v1. Editing Text-to-Image Models. The capacity to modify a diffusion model\u2019s behavior without starting from scratch was initially investigated in Concept-Ablation (Kumari et al., 2023) and Concept-Erasure (Gandikota et al., 2023). 
Another method, TIME (Orgad et al., 2023), alters all the cross-attention layers\u2019 key and value matrices to translate between concepts, though it lacks interpretability and application to a real use-case of model editing. Basu et al. (2023) edit text-to-image models in the text-encoder space by leveraging a singular causal state. However, existing works overlook newer text-to-image models (e.g., SD-XL and DeepFloyd), which we delve into in detail. 3. Preliminaries Diffusion models start with an initial random real image x_0; the noisy image at time step t is expressed as x_t = \u221a(\u03b1_t) x_0 + \u221a(1 \u2212 \u03b1_t) \u03f5. Here, \u03b1_t determines the strength of the random Gaussian noise, gradually diminishing as the time step increases, ensuring that x_T \u223c N(0, I). The denoising network \u03f5_\u03b8(x_t, c, t) is pre-trained to denoise the noisy image x_t and produce x_{t\u22121}. Typically, the conditional input c for the denoising network \u03f5_\u03b8(.) is a text-embedding derived from a caption c through a text-encoder, denoted as c = v_\u03b3(c). The noising as well as the denoising operation can also occur in a latent space defined by z = E(x) (Rombach et al., 2021) for better efficiency. The pre-training objective learns to denoise in the latent space, as denoted by L(z, c) = E_{\u03f5,t} ||\u03f5 \u2212 \u03f5_\u03b8(z_t, c, t)||_2^2, where z_t = E(x_t) and E is an encoder such as VQ-VAE (van den Oord et al., 2017); a minimal code sketch of this objective follows below.
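For readers who want the objective above in code: a minimal PyTorch sketch of the forward noising step x_t = \u221a(\u03b1_t) x_0 + \u221a(1 \u2212 \u03b1_t) \u03f5 and the denoising loss L(z, c). The unet argument is a placeholder callable standing in for any noise-prediction network; it is not the code of the models studied here.

    import torch
    import torch.nn.functional as F

    def noising_step(z0, t, alpha_bar):
        # z_t = sqrt(alpha_t) * z_0 + sqrt(1 - alpha_t) * eps, with eps ~ N(0, I)
        eps = torch.randn_like(z0)
        a = alpha_bar[t].view(-1, 1, 1, 1)
        return a.sqrt() * z0 + (1.0 - a).sqrt() * eps, eps

    def denoising_loss(unet, z0, cond, alpha_bar):
        # L(z, c) = E_{eps,t} || eps - eps_theta(z_t, c, t) ||_2^2
        t = torch.randint(0, alpha_bar.numel(), (z0.shape[0],), device=z0.device)
        zt, eps = noising_step(z0, t, alpha_bar)
        return F.mse_loss(unet(zt, t, cond), eps)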
4. On the Effectiveness of Causal Tracing for Text-to-Image Models In this section, we empirically examine the effectiveness of causal tracing on models beyond Stable-Diffusion-v1-5. In particular, we assess the ability of causal tracing to identify localized control points in Stable-Diffusion-v2-1, OpenJourney, SD-XL and DeepFloyd.
[Figure 2. Causal tracing for the UNet (layers vs. CLIP-Score for SD-v2-1, SD-XL and DeepFloyd). Similar to Basu et al. (2023), we find that knowledge is causally distributed across the UNet for text-to-image models such as SD-v2-1 and SD-XL. For DeepFloyd we do not observe any significant causal state in the UNet.]
[Figure 3. Causal tracing for the text-encoder (layers vs. CLIP-Score for SD-XL and DeepFloyd). Unlike SD-v1-5 and SD-v2-1, we find that a singular causal state does not exist in the text-encoder for SD-XL and DeepFloyd.]
Causal Tracing in UNet. In Figure 2, we find that knowledge across different visual attributes is distributed in the UNet for all the text-to-image models (except for DeepFloyd), similar to Stable-Diffusion-v1-5. However, the degree of distribution varies between different text-to-image models. While knowledge about various visual attributes is densely distributed in Stable-Diffusion variants, for SD-XL we find that the distribution is extremely sparse (e.g., only 5% of the total layers are causal). For DeepFloyd, we observe that there are no strong causal states in the UNet. We provide more qualitative visualizations on causal tracing across these text-to-image models in Appendix A. Overall, these results reinforce the difficulty of editing knowledge in the UNet directly, due to (i) the distribution of causal states or (ii) the absence of any. Causal Tracing in Text-Encoder. Basu et al. (2023) show that there exists a unique causal state in the text-encoder for Stable-Diffusion-v1-5 and Stable-Diffusion-v2-1 which can be used to perform fast model editing. In Figure 3, we find that such a unique causal state is absent in the text-encoder for DeepFloyd and SD-XL. We note that DeepFloyd uses a T5 text-encoder, whereas SD-XL uses a combination of CLIP-ViT-L and OpenCLIP-ViT-G (Radford et al., 2021). Our empirical results indicate that a unique causal state arises only when a CLIP text-encoder is used by itself in a text-to-image model. 5. LOCOGEN: Towards Mechanistic Knowledge Localization Given the lack of generalizability of knowledge localization using causal tracing as shown in Section 4, we introduce LOCOGEN, which can identify localized control regions for visual attributes across all text-to-image models. 5.1. Knowledge Control in Cross-Attention Layers During the inference process, the regulation of image generation involves the utilization of classifier-free guidance, as outlined in Ho & Salimans (2021), which incorporates scores from both the conditional and unconditional diffusion models at each time-step. Specifically, the classifier-free guidance is applied at each time-step to combine the conditional score estimate \u03f5_\u03b8(z_t, c, t) and the unconditional one \u03f5_\u03b8(z_t, t) into a combined score \u02c6\u03f5(z_t, c, t): \u02c6\u03f5(z_t, c, t) = \u03f5_\u03b8(z_t, c, t) + \u03b1 (\u03f5_\u03b8(z_t, c, t) \u2212 \u03f5_\u03b8(z_t, t)), \u2200t \u2208 [T, 1]. (1) This combined score is used to update the latent z_t using DDIM sampling (Song et al., 2020) at each time-step to obtain the final latent code z_0. We term the model \u03f5_\u03b8(z_t, c, t) the Clean Model and the final generated image I_clean. We note that text is incorporated in the generation process using cross-attention layers, denoted by {C_l}_{l=1}^M, within \u03f5_\u03b8(z_t, c, t), \u2200t \u2208 [T, 1]. These layers include key and value matrices {W_l^K, W_l^V}_{l=1}^M that take the text-embedding c of the input prompt and guide the generation toward the text prompt. Generally, the text-embedding c is the same across all these layers. However, in order to localize and find control points for different visual attributes, we replace the original text-embedding c with a target prompt embedding c\u2032 across a small subset of the cross-attention layers and measure its direct effect on the generated image. 5.1.1. ALTERED INPUTS We say that a model receives altered input when a subset of cross-attention layers C\u2032 \u2282 {C_l}_{l=1}^M receive a different text-embedding c\u2032 than the other cross-attention layers that take c as input. We name these layers controlling layers.
[Figure 4. Interpretability Results: Images generated by intervening on the layers identified by LOCOGEN across various open-source text-to-image models (SD-v1-5, Open-Journey, SD-v2-1, SD-XL), for prompts involving styles (Van Gogh, Monet), objects (R2D2) and facts (President of United States, The British Monarch). We compare the original generation vs. generation by intervening on the layers identified with LOCOGEN along with a target prompt. We find that across various text-to-image models, visual attributes such as style, objects and facts can be manipulated by intervening only on a very small fraction of cross-attention layers.]
We denote by I_altered the image generated using this model and Equation (1) with altered inputs when z_T is given as the initial noise. We denote the model \u03f5_\u03b8(z_t, c, c\u2032, t) with the altered inputs as the Altered Model, with the following inference procedure: \u02c6\u03f5(z_t, c, c\u2032, t) = \u03f5_\u03b8(z_t, c, c\u2032, t) + \u03b1 (\u03f5_\u03b8(z_t, c, c\u2032, t) \u2212 \u03f5_\u03b8(z_t, t)). As an example, to find the layers where style knowledge corresponding to a particular artist is stored, {C_l}_{l=1}^M \u2212 C\u2032 receive text-embeddings corresponding to the prompt \u2018An <object> in the style of <artist>\u2019, whereas the layers in C\u2032 receive text-embeddings corresponding to the prompt \u2018An <object> in the style of painting\u2019. If the image generated with these inputs does not have that particular style, we conclude that the controlling layers C\u2032 are responsible for incorporating that specified style in the output (see Figure 1). In fact, this replacement operation enables finding locations across different cross-attention layers where knowledge of various visual attributes is localized. 5.1.2. LOCOGEN ALGORITHM Our goal is to find controlling layers C\u2032 for different visual attributes. We note that the cardinality of the set |C\u2032| = m is a hyper-parameter and the search space for C\u2032 is exponential. Given |C\u2032| = m, there are (M choose m) possibilities for C\u2032; thus, we restrict our search space to only adjacent cross-attention layers. In fact, we consider all C\u2032 such that C\u2032 = {C_l}_{l=j}^{j+m\u22121} for j \u2208 [1, M \u2212 m + 1]. Selecting the hyper-parameter m. To select the cardinality of the set C\u2032, we run an iterative hyper-parameter search with m \u2208 [1, M], where M is selected based on the maximum number of cross-attention layers in a given text-to-image generative model. At each iteration of the hyper-parameter search, we investigate whether there exists a set of m adjacent cross-attention layers that is responsible for the generation of the specific visual attribute. We find the minimum m for which such controlling layers for the particular attribute exist. To apply LOCOGEN for a particular attribute, we obtain a set of input prompts T = {T_i}_{i=1}^N that include the particular attribute and a corresponding set of prompts T\u2032 = {T\u2032_i}_{i=1}^N, where T\u2032_i is analogous to T_i except that the particular attribute is removed/updated. These prompts serve to create altered images and assess the presence of the specified attribute within them. Let c_i be the text-embedding of T_i and c\u2032_i be that of T\u2032_i. Given m, we examine all M \u2212 m + 1 possible candidates for controlling layers. For each of them, we generate N altered images, where the i-th image is generated by giving c\u2032_i as the input embedding to the selected m layers and c_i to the other ones. Then we measure the CLIP-Score (Hessel et al., 2021) of the original text prompt T_i to the generated image for style and objects, and of the target text prompt T\u2032_i to the generated image for facts. For style and objects, a drop in CLIP-Score shows the removal of the attribute, while for facts an increase in score shows similarity to the updated fact. We take the average of the mentioned score across all 1 \u2264 i \u2264 N. By doing that for all candidates, we report the one with the minimum average CLIP-Score for style and objects, and the maximum average CLIP-Score for facts. These layers are the candidate layers controlling the generation of the specific attribute. Algorithm 1 provides the pseudocode to find the best candidate (an illustrative code sketch follows below), and Figure 5 shows the CLIP-Score across different candidates.
Algorithm 1 LOCOGEN
  Input: m, {T_i}_{i=1}^N, {T\u2032_i}_{i=1}^N, {c_i}_{i=1}^N, {c\u2032_i}_{i=1}^N
  Output: candidate controlling set
  for j = 1, . . . , M \u2212 m do
    C\u2032 \u2190 {C_l}_{l=j}^{j+m\u22121}
    for i = 1, . . . , N do
      s_i \u2190 CLIP-SCORE(T_i, I_altered)
      s\u2032_i \u2190 CLIP-SCORE(T\u2032_i, I_altered)
    a_j \u2190 AVERAGE({s_i}_{i=1}^N)    (for objects, style)
    a_j \u2190 AVERAGE({s\u2032_i}_{i=1}^N)   (for facts)
  j* \u2190 arg min_j a_j    (for objects, style)
  j* \u2190 arg max_j a_j    (for facts)
  return a_{j*}, {C_l}_{l=j*}^{j*+m\u22121}
We set a threshold for the average CLIP-Score and find the minimum m such that there exist m adjacent cross-attention layers whose corresponding CLIP-Score meets the requirement. We point the reader to Appendix G for the values of m selected for different models and thresholds.
[Figure 5. CLIP-Score of the generated images with the original prompt for style and objects, and with the target prompt for facts, after intervening on layers through LOCOGEN. A lower CLIP-Score indicates correct localization for objects and style, whereas a higher CLIP-Score indicates such for facts. (a) For SD-v1-5 (m=2), objects and facts can be controlled from Layer 6, whereas style can be controlled from Layer 8. (b) For SD-v2-1 (m=3), facts are controlled from Layer 7, style and objects from Layer 8. (c, d) For SD-XL, style (m=3) and facts (m=5) are controlled from Layer 45, whereas objects are controlled from Layer 15.]
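The scan in Algorithm 1 can be pictured with the following minimal Python sketch. generate_altered and clip_score are hypothetical helpers standing in for the diffusion pipeline with swapped prompt-embeddings and for the CLIP-Score metric; neither name comes from the paper.

    def locogen_scan(M, m, prompts, targets, attribute="style"):
        # Slide a window of m adjacent cross-attention layers over all M layers.
        best_j, best_score = None, None
        for j in range(M - m + 1):
            window = list(range(j, j + m))  # candidate controlling layers C'
            scores = []
            for T, T_prime in zip(prompts, targets):
                img = generate_altered(T, T_prime, window)   # hypothetical helper
                ref = T_prime if attribute == "facts" else T
                scores.append(clip_score(ref, img))          # hypothetical helper
            avg = sum(scores) / len(scores)
            # Facts: maximize similarity to the target prompt;
            # style/objects: minimize similarity to the original prompt.
            better = (best_score is None or
                      (avg > best_score if attribute == "facts" else avg < best_score))
            if better:
                best_j, best_score = j, avg
        return best_j, best_score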
Dataset for Prompts. We use the prompts from Basu et al. (2023) and Kumari et al. (2023) to extract the locations in the UNet which control various visual attributes such as objects, style and facts. More details are in Appendix C. 5.2. Empirical Results In this section, we provide empirical results highlighting the localized layers across various open-source text-to-image generative models. Stable-Diffusion Variants. Across both models, as depicted qualitatively in Figure 4 and quantitatively in Figure 5-(a), we observe the presence of a distinctive subset of layers that govern specific visual attributes. In the case of both SD-v1-5 and SD-v2-1, the control for \u201cstyle\u201d is centralized at l = 8 with m = 2. In SD-v1-5, the control for \u201cobjects\u201d and \u201cfacts\u201d emanates from the same location: l = 6 and m = 2. However, in SD-v2-1, \u201cobjects\u201d are controlled from l = 8, while \u201cfacts\u201d are influenced by l = 7. Despite sharing a similar UNet architecture and undergoing training with comparable scales of pre-training data, these models diverge in the text-encoder utilized. This discrepancy in text-encoder choice may contribute to the variation in how they store knowledge concerning different attributes. Open-Journey. We note that Open-Journey exhibits control locations similar to SD-v1-5 for various visual attributes.
As illustrated in Figure 4 and Figure 5-(a), \u201cobjects\u201d and \u201cfacts\u201d are governed from l = 6, while \u201cstyle\u201d is controlled from l = 8. Despite the architectural resemblance between Open-Journey and SD-v1-5, it\u2019s important to highlight that Open-Journey undergoes fine-tuning on a subset of images generated from Mid-Journey. This suggests that the control locations for visual attributes are more closely tied to the underlying model architecture than to the specifics of the training or fine-tuning data. SD-XL. Within SD-XL, our investigation reveals that both \u201cstyle\u201d and \u201cfacts\u201d can be effectively controlled from l = 45, with m = 3, as evidenced in Figure 4 and Figure 5-(c). For the attribute \u201cobjects\u201d, control is situated at l = 15, albeit with a slightly larger value of m = 5. In summary, SD-XL, consisting of a total of 70 cross-attention layers, underscores a significant finding: various attributes in image generation can be governed by only a small subset of layers.
[Figure 6. LOCOEDIT (model editing) results at locations identified by LOCOGEN across various open-source text-to-image models (SD-v1-5, Open-Journey, SD-v2-1, SD-XL; original vs. edited generations for removing the style of \u2018Van Gogh\u2019 or \u2018Monet\u2019, modifying trademarked \u2018Snoopy\u2019, and updating to the correct \u2018British Monarch\u2019). We observe that locations identified by our interpretability framework can be edited effectively to remove styles, modify objects and update facts in text-to-image models. We provide more visualizations in Appendix B.]
[Figure 7. Interpretability results for DeepFloyd (original generations vs. interventions at Layers 6, 12 and 16 for prompts such as \u2018President of United States\u2019, \u2018The British Monarch\u2019, \u2018Ocean in Monet Style\u2019 and \u2018Women working in garden in Van Gogh style\u2019). We find the control points for visual attributes to be dependent on the underlying prompts, rather than the visual attribute.]
DeepFloyd. Across SD-v1-5, SD-v2-1, Open-Journey, and SD-XL, our findings indicate that visual attributes like \u201cstyle\u201d, \u201cobjects\u201d and \u201cfacts\u201d, irrespective of the specific prompt used, can be traced back to control points situated within a limited number of layers. However, in the case of DeepFloyd, our observations differ. We find, instead, that all attributes display localization dependent on the specific prompt employed. To illustrate, factual knowledge related to \u201cThe British Monarch\u201d is governed from l = 6 with m = 3, whereas factual knowledge tied to \u201cThe President of the United States\u201d is controlled from l = 12 (see Figure 7). This divergence in localization patterns highlights the nuanced behavior of DeepFloyd in comparison to the other models examined. More results can be found in Appendix B.5. Human-Study Results. We run a human study to verify that LOCOGEN can effectively identify controlling layers for different visual attributes. In our setup, evaluators assess 132 image pairs, each comprising an image generated by the Clean Model and an image generated by the Altered Model whose identified cross-attention layers take different inputs. Evaluators determine whether the visual attribute is changed in the image generated by the Altered Model (for instance, whether the artistic Van Gogh style is removed from the original image or not).
Covering 33 image pairs generated with different prompts per model, with five participating evaluators, our experiments reveal a 92.58% verification rate for the impact of LOCOGEN-identified layers on visual attributes. See more details in Appendix J. 6. LOCOEDIT: Editing to Ablate Concepts In this section, we analyse the effectiveness of closed-form edit updates in the layers identified by LOCOGEN across different text-to-image generative models.
[Figure 8. Quantitative model editing results for text-to-image models (SD-v1-5, SD-v2-1, Open-Journey, SD-XL; original vs. edited). We observe a drop in CLIP-Score for \u201cstyle\u201d and \u201cobjects\u201d, and an increase in CLIP-Score for \u201cfacts\u201d, highlighting correct edits.]
6.1. Method Algorithm 1 extracts the exact set of cross-attention layers from which the knowledge about a particular visual attribute (e.g., style) is controlled. We denote this set as C_loc, where C_loc \u2282 C and |C_loc| = m. This set of extracted cross-attention layers, each containing key and value matrices, is denoted as C_loc = {\u02c6W_l^K, \u02c6W_l^V}_{l=1}^m. The objective is to modify these weight matrices such that they transform the original prompt (e.g., \u2018A house in the style of Van Gogh\u2019) to a target prompt (e.g., \u2018A house in the style of a painting\u2019) in a way that the visual attribute in the generation is modified. Similar to Section 5.1.2, we use a set of input prompts T_orig = {T_i^o}_{i=1}^N consisting of prompts featuring the particular visual attribute. Simultaneously, we create a counterpart set T_target = {T_i^t}_{i=1}^N, where each T_i^t is identical to T_i^o but lacks the particular attribute in focus. Let c_i^o \u2208 R^d be the text-embedding of the last subject token in T_i^o and c_i^t \u2208 R^d be that of T_i^t. We obtain the matrix X_orig \u2208 R^{N\u00d7d} by stacking the vectors c_1^o, c_2^o, . . . , c_N^o, and the matrix X_target \u2208 R^{N\u00d7d} by stacking c_1^t, c_2^t, . . . , c_N^t. To learn a mapping between the key and the value embeddings, we solve the following optimization for each layer l \u2208 [1, m], here written for the key matrices: min_{W_l^K} ||X_orig W_l^K \u2212 X_target \u02c6W_l^K||_2^2 + \u03bb_K ||W_l^K \u2212 \u02c6W_l^K||_2^2, where \u03bb_K is the regularizer. Letting Y_target = X_target \u02c6W_l^K, the optimal closed-form solution for the key matrix is: W_l^K = (X_orig^T X_orig + \u03bb_K I)^{\u22121} (X_orig^T Y_target + \u03bb_K \u02c6W_l^K). The same is applied to obtain the optimal matrix for the value embeddings (a minimal code sketch follows below).
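The closed-form update above is an ordinary ridge-regression solve; under the stated notation it can be written in a few lines. X_orig, X_target and the pre-trained matrix W_hat are assumed to be given tensors here.

    import torch

    def locoedit_update(X_orig, X_target, W_hat, lam):
        # Minimizes ||X_orig W - X_target W_hat||^2 + lam * ||W - W_hat||^2.
        d = X_orig.shape[1]
        Y_target = X_target @ W_hat                    # target key/value embeddings
        A = X_orig.T @ X_orig + lam * torch.eye(d)
        B = X_orig.T @ Y_target + lam * W_hat
        return torch.linalg.solve(A, B)                # A^{-1} B, no explicit inverse

The same call would be made once per identified layer, separately for the key and the value matrices.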
6.2. Model Editing Results Stable-Diffusion Variants, Open-Journey and SD-XL. In Figure 6 and Figure 8, it becomes apparent that LOCOEDIT effectively integrates accurate edits at the locations identified by LOCOGEN. Qualitatively examining the visual edits in Figure 6, our method demonstrates the capability to remove artistic \u201cstyles\u201d, modify trademarked \u201cobjects\u201d, and update outdated \u201cfacts\u201d within a text-to-image model with accurate information. This visual assessment is complemented by the quantitative analysis in Figure 8, where we observe that the CLIP-Score of images generated by the edited model, given prompts containing specific visual attributes, consistently registers lower than that of the clean model for \u201cobjects\u201d and \u201cstyle\u201d. For \u201cfacts\u201d, we gauge the CLIP-Score of images from the model with the correct facts, wherein a higher CLIP-Score indicates a correct edit, as illustrated in Figure 8. Combining both qualitative and quantitative findings, these results collectively underscore the effectiveness of LOCOEDIT across SD-v1-5, SD-v2-1, Open-Journey, and SD-XL. However, it\u2019s noteworthy that the efficacy of closed-form edits varies among different text-to-image models. Specifically, in the case of \u201cstyle\u201d, we observe the most substantial drop in CLIP-Score between the edited and unedited models for SD-v1-5 and Open-Journey, while the drop is comparatively smaller for SD-v2-1 and SD-XL. Conversely, for \u201cfacts\u201d, we find that all models perform similarly in updating with new information. Limitations with DeepFloyd Closed-Form Edits. DeepFloyd, despite revealing distinct locations through LOCOGEN (albeit depending on the underlying prompt), exhibits challenges in effective closed-form edits at these locations. Appendix M provides qualitative visualizations illustrating this limitation. The model employs a T5-encoder with bidirectional attention, diverging from other text-to-image models using CLIP variants with causal attention. Closed-form edits, relying on mapping the last-subject-token embedding to a target embedding, are typically effective in text-embeddings generated with causal attention, where the last subject token holds crucial information. However, the T5-encoder presents a hurdle, as tokens beyond the last subject token contribute essential information about the target attribute. Consequently, restricting the mapping to the last subject token alone proves ineffective for a T5-encoder. While LOCOGEN along with LOCOEDIT makes model editing more interpretable, we also find that localized model editing is better than updating all layers in the UNet, as shown in Appendix F. We also compare our method with existing editing methods (Basu et al., 2023; Kumari et al., 2023; Gandikota et al., 2023) in Appendix I. We find that our editing method is on par with existing baselines, with the added advantage of generalizability to models beyond Stable-Diffusion-v1-5. In Appendix L, we also show the robustness of our method to generic prompts.
[Figure 9. Neuron-Level Model Editing, Qualitative. Results when applying neuron-level dropout on identified neurons in layers specified by LOCOGEN on Stable-Diffusion-v1-5, for prompts such as \u2018A painting of rocky ocean shore under the luminous night sky in the style of Van Gogh\u2019, \u2018Painting of a grieving woman in the style of Pablo Picasso\u2019 and \u2018A painting of a river in the style of Monet\u2019. The second and third columns display images with 50 and 100 modified neurons out of 1280 in the controlling layers, respectively. The last column shows images with a different embedding in the controlling layers.]
7. On Neuron-Level Model Editing In this section, we explore the feasibility of effecting neuron-level modifications to eliminate stylistic attributes from the output of text-to-image models. Within the layers identified by LOCOGEN, our objective is to ascertain whether the selective dropout of neurons at the activation layers within the specified cross-attention layers (key and value embeddings) can successfully eliminate stylistic elements. To accomplish this objective, we first need to identify which neurons are responsible for the generation of particular artistic styles, e.g., Van Gogh. We examine the activations of neurons in the embedding space of key and value matrices in the identified cross-attention layers.
More specifically, we pinpoint neurons that exhibit significant variations when comparing input prompts that include a particular style with input prompts that do not involve the specified style. To execute this process, we collect a set of N_1 prompts that feature the specific style, e.g., Van Gogh. We gather the text-embeddings of the last subject token of these prompts, denoted by c_1, c_2, ..., c_{N_1}, where c_i \u2208 R^d. We also obtain a set of N_2 prompts without any particular style and analogously obtain {c\u2032_1, c\u2032_2, ..., c\u2032_{N_2}}, where c\u2032_i \u2208 R^d. Next, for the key or value matrix W \u2208 R^{d\u00d7d\u2032}, we consider the key or value embeddings of these input prompts, i.e., {z_i}_{i=1}^{N_1} \u222a {z\u2032_i}_{i=1}^{N_2}, where z_i = c_i W and z\u2032_i = c\u2032_i W. We note that z_i, z\u2032_i \u2208 R^{d\u2032}. Subsequently, for each of these d\u2032 neurons, we assess the statistical difference in their activations between input prompts that include a particular style and those without it. Specifically, we compute the z-score for each neuron between the two groups of activations z_1, z_2, ..., z_{N_1} and z\u2032_1, z\u2032_2, ..., z\u2032_{N_2}. The neurons are then ranked based on the absolute value of their z-score, with the top neurons representing those that exhibit significant differences in activations depending on the presence or absence of a particular concept in the input prompt (a small code sketch of this selection follows at the end of this section). During generation, we drop out these neurons and see whether the particular style is removed or not. As seen in Figure 9, neuron-level modification at inference time is effective at removing styles. This shows that knowledge about a particular style can be localized even further, to a few neurons. It is noteworthy that the extent of style removal increases with the modification of more neurons, albeit with a trade-off in the quality of generated images. This arises because modified neurons may encapsulate information related to other visual attributes. To quantify the effectiveness of this approach, we measure the drop in CLIP-Score for modified images across various styles.
[Figure 10. Neuron-Level Model Editing, Quantitative. Average CLIP-Score of generated images to the text prompt \u2018style of <artist>\u2019 for Salvador Dali, Van Gogh, Monet, Pablo Picasso and Greg Rutkowski. Brown bars show similarity to the originally generated image; red, orange, and green bars show similarity to the generated image when 30, 50, and 100 neurons are modified, respectively; and blue bars refer to images when the controlling layers receive the other prompt.]
Figure 10 presents a bar plot illustrating these similarity scores. Notably, the drop in CLIP-Score demonstrates that neuron-level model editing effectively removes the styles associated with different artists in the generated images. We refer to Appendix K.1 for more details on the neuron-level model editing experiments.
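The neuron selection of Section 7 reduces to a two-sample z-statistic per output dimension. A small sketch under the notation above, where Z of shape [N1, d'] stacks the activations z_i and Z_prime of shape [N2, d'] stacks the z'_i:

    import torch

    def top_style_neurons(Z, Z_prime, k):
        # z-score per neuron: (mu1 - mu2) / sqrt(var1/N1 + var2/N2)
        mu1, mu2 = Z.mean(0), Z_prime.mean(0)
        v1, v2 = Z.var(0), Z_prime.var(0)
        z = (mu1 - mu2) / torch.sqrt(v1 / Z.shape[0] + v2 / Z_prime.shape[0] + 1e-8)
        # Rank by |z|; the top-k neurons are the ones dropped (zeroed) at inference.
        return torch.topk(z.abs(), k).indices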
8. Conclusion In our paper, we comprehensively examine knowledge localization across various open-source text-to-image models. We initially observe that while causal tracing proves effective for early Stable-Diffusion variants, its generalizability diminishes when applied to newer text-to-image models like DeepFloyd and SD-XL for localizing control points associated with visual attributes. To address this limitation, we introduce LOCOGEN, capable of effectively identifying locations within the UNet across diverse text-to-image models. Harnessing these identified locations within the UNet, we evaluate the efficacy of closed-form model editing across a range of text-to-image models leveraging LOCOEDIT, uncovering intriguing properties. Notably, for specific visual attributes such as \u201cstyle\u201d, we discover that knowledge can even be traced to a small subset of neurons and subsequently edited by applying a simple dropout layer, thereby underscoring the possibilities of neuron-level model editing. 9. Impact Statement This paper presents work to advance the understanding of the inner workings of open-source text-to-image generative models. Our interpretability method can advance the understanding of how knowledge is represented in generative models and does not have any potential negative implications for society. Our editing method can address societal concerns (e.g., an artist asking the model owner to delete their style) in an effective way and, to the best of our knowledge, does not have any negative societal consequences."
|
| 16 |
+
}
|
title_10K/test_title_short_2405.01029v2.json
ADDED
|
@@ -0,0 +1,17 @@
| 1 |
+
{
|
| 2 |
+
"url": "http://arxiv.org/abs/2405.01029v2",
|
| 3 |
+
"title": "MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts",
|
| 4 |
+
"abstract": "Learning to solve vehicle routing problems (VRPs) has garnered much\nattention. However, most neural solvers are only structured and trained\nindependently on a specific problem, making them less generic and practical. In\nthis paper, we aim to develop a unified neural solver that can cope with a\nrange of VRP variants simultaneously. Specifically, we propose a multi-task\nvehicle routing solver with mixture-of-experts (MVMoE), which greatly enhances\nthe model capacity without a proportional increase in computation. We further\ndevelop a hierarchical gating mechanism for the MVMoE, delivering a good\ntrade-off between empirical performance and computational complexity.\nExperimentally, our method significantly promotes zero-shot generalization\nperformance on 10 unseen VRP variants, and showcases decent results on the\nfew-shot setting and real-world benchmark instances. We further conduct\nextensive studies on the effect of MoE configurations in solving VRPs, and\nobserve the superiority of hierarchical gating when facing out-of-distribution\ndata. The source code is available at:\nhttps://github.com/RoyalSkye/Routing-MVMoE.",
|
| 5 |
+
"authors": "Jianan Zhou, Zhiguang Cao, Yaoxin Wu, Wen Song, Yining Ma, Jie Zhang, Chi Xu",
|
| 6 |
+
"published": "2024-05-02",
|
| 7 |
+
"updated": "2024-05-06",
|
| 8 |
+
"primary_cat": "cs.AI",
|
| 9 |
+
"cats": [
|
| 10 |
+
"cs.AI",
|
| 11 |
+
"cs.LG"
|
| 12 |
+
],
|
| 13 |
+
"label": "Original Paper",
|
| 14 |
+
"paper_cat": "Mixture AND of AND Experts",
|
| 15 |
+
"gt": "MVMoE: Multi-Task Vehicle Routing Solver with Mixture-of-Experts",
|
| 16 |
+
"main_content": "Introduction Vehicle routing problems (VRPs) are a class of canonical combinatorial optimization problems (COPs) in operation research and computer science, with a wide spectrum of 1College of Computing and Data Science, Nanyang Technological University, Singapore 2School of Computing and Information Systems, Singapore Management University, Singapore 3Department of Information Systems, Eindhoven University of Technology, The Netherlands 4Institute of Marine Science and Technology, Shandong University, China 5Singapore Institute of Manufacturing Technology (SIMTech), Agency for Science, Technology and Research (A*STAR), Singapore. Correspondence to: Yaoxin Wu <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). applications in logistics (Cattaruzza et al., 2017), transportation (Wu et al., 2023), and manufacturing (Zhang et al., 2023). The intrinsic NP-hard nature makes VRPs exponentially expensive to be solved by exact solvers. As an alternative, heuristic solvers deliver suboptimal solutions within reasonable time, but need substantial domain expertise to be designed for each problem. Recently, learning to solve VRPs has received much attention (Bengio et al., 2021; Bogyrbayeva et al., 2024), with fruitful neural solvers being developed. Most of them apply deep neural networks to learn solution construction policies via various training paradigms (e.g., reinforcement learning (RL)). Besides gaining decent performance, they are characterized by less computational overhead and domain expertise than conventional solvers. However, prevailing neural solvers still need network structures tailored and trained independently for each specific VRP, instigating prohibitive training overhead and less practicality when facing multiple VRPs. In this paper, we aim to develop a unified neural solver, which can be trained for solving a range of VRP variants simultaneously, and has decent zero-shot generalization capability on unseen VRPs. A few recent works explore similar problem settings. Wang & Yu (2023) applies multi-armed bandits to solve multiple VRPs, while Lin et al. (2024) adapts the model pretrained on one base VRP to target VRPs by efficient fine-tuning. They fail to achieve zero-shot generalization to unseen VRPs due to the dependence on networks structured for predetermined problem variants. Liu et al. (2024) empowers the neural solver with such generalizability by the compositional zero-shot learning (Ruis et al., 2021), which treats VRP variants as different combinations of a set of underlying attributes and uses a shared network to learn their representations. However, it still leverages existing network structure proposed for simple VRPs, which is limited by its model capacity and empirical performance. Motivated by the recent advance of large language models (LLMs) (Kaplan et al., 2020; Floridi & Chiriatti, 2020; Touvron et al., 2023), we propose a multi-task VRP solver with mixture-of-experts (MVMoE). Typically, a mixture-ofexpert (MoE) layer replaces a feed-forward network (FFN) with several \"experts\" in a Transformer-based model, which are a group of FFNs with respective trainable parameters. 
An input to the MoE layer is routed to specific expert(s) by a gating network, and only the parameters in the selected expert(s) are activated (i.e., conditional computation (Jacobs et al., 1991; Jordan & Jacobs, 1994)). In this manner, partially activated parameters can effectively enhance the model capacity without a proportional increase in computation, making the training and deployment of LLMs viable. Therefore, towards a more generic and powerful neural solver, we first propose an MoE-based neural VRP solver, and present a hierarchical gating mechanism for a good trade-off between empirical performance and computational complexity. We choose the setting from Liu et al. (2024) as a test bed due to its potential to solve an exponential number of new VRP variants arising as any combination of the underlying attributes. Our contributions are summarized as follows. 1) We propose a unified neural solver MVMoE to solve multiple VRPs, which first brings MoEs into the study of COPs. The sole MVMoE can be trained on diverse VRP variants, and facilitates a strong zero-shot generalization capability on unseen VRPs. 2) We develop a hierarchical gating mechanism for MVMoE to attain a favorable balance between empirical performance and computational overhead. Surprisingly, it exhibits much stronger out-of-distribution generalization capability than the base gating. 3) Extensive experiments demonstrate that MVMoE significantly improves the zero-shot generalization against baselines on 10 unseen VRP variants, and achieves decent results in the few-shot setting and on real-world benchmark instances. We further provide extensive studies on the effect of MoE configurations (such as the position of MoEs, the number of experts, and the gating mechanism) on the zero-shot generalization performance. 2. Related Work Neural VRP Solvers. Two mainstreams exist in the literature on learning to solve VRPs: 1) Construction-based solvers, which learn policies to construct solutions in an end-to-end manner. Vinyals et al. (2015) proposes the Pointer Network to estimate the optimal solution to the traveling salesman problem (TSP) in an autoregressive way. Follow-up works apply RL to explore better approximate solutions to the TSP (Bello et al., 2017) and the capacitated vehicle routing problem (CVRP) (Nazari et al., 2018). Kool et al. (2018) proposes an attention-based model (AM) that uses a Transformer to solve a series of VRPs independently. By leveraging the symmetry property in solutions, Kwon et al. (2020) proposes policy optimization with multiple optima (POMO) to further promote the performance in solving TSP and CVRP. Other construction-based solvers are often developed on top of AM and POMO (Kwon et al., 2021; Li et al., 2021a; Kim et al., 2022; Berto et al., 2023; Chen et al., 2023; Grinsztajn et al., 2023; Chalumeau et al., 2023; Hottung et al., 2024). Besides the autoregressive manner, several works construct a heatmap to solve VRPs in a non-autoregressive manner (Joshi et al., 2019; Fu et al., 2021; Kool et al., 2022; Qiu et al., 2022; Sun & Yang, 2023; Min et al., 2023; Ye et al., 2023; Kim et al., 2024). 2) Improvement-based solvers, which learn policies to iteratively refine an initial solution until a termination condition is satisfied.
The policies are often trained in the context of classic local search (Croes, 1958; Shaw, 1998) or specialized heuristic solvers (Helsgaun, 2017) to obtain more efficient or effective search components (Chen & Tian, 2019; Lu et al., 2020; Hottung & Tierney, 2020; d O Costa et al., 2020; Wu et al., 2021; Xin et al., 2021; Hudson et al., 2022; Zhou et al., 2023a; Ma et al., 2023). In general, construction-based solvers can efficiently achieve desired performance, whereas improvement-based solvers have the potential to deliver better solutions given prolonged inference time. Recent research uncovers the deficient generalization capability of neural solvers, which suffer from drastic performance decrement on unseen data (Joshi et al., 2021). Previous works mainly focus on cross-size generalization (Fu et al., 2021; Hou et al., 2023; Son et al., 2023; Luo et al., 2023; Drakulic et al., 2023), cross-distribution generalization (Zhang et al., 2022; Geisler et al., 2022; Bi et al., 2022; Jiang et al., 2023), or both (Manchanda et al., 2022; Zhou et al., 2023b; Wang et al., 2024) on a single problem. In this paper, we step further to explore the generalization across different VRP variants (Wang & Yu, 2023; Liu et al., 2024; Lin et al., 2024). Mixture-of-Experts. The original idea of MoEs was proposed three decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994). In early concepts, the expert was defined as an entire neural network, and hence MoEs was similar to an ensemble of neural networks. Eigen et al. (2013) launched the era in which researchers started applying MoEs as components of neural networks. As an early success of MoEs applied in large neural networks, Shazeer et al. (2017) introduces the sparsely-gated MoEs in language modeling and machine translation, achieving state-of-the-art results at the time with only minor losses in computational efficiency. Follow-up works mainly focus on improving the gating mechanism (Lewis et al., 2021; Roller et al., 2021; Zuo et al., 2022; Zhou et al., 2022; Puigcerver et al., 2024; Xue et al., 2024) or applications to other domains (Lepikhin et al., 2020; Riquelme et al., 2021; Fedus et al., 2022b). We refer interested readers to Yuksel et al. (2012); Fedus et al. (2022a) for a comprehensive survey. 3. Preliminaries In this section, we first present the definition of CVRP, and then introduce its variants featured by additional constraints. Afterwards, we delineate recent construction-based neural solvers for VRPs (Kool et al., 2018; Kwon et al., 2020).
[Figure 1. Illustrations of sub-tours with various constraints: open route (O), backhaul (B), duration limit (L), and time window (TW).]
VRP Variants. We define a CVRP instance of size n over a graph G = {V, E}, where V includes a depot node v_0 and customer nodes {v_i}_{i=1}^n, and E includes edges e(v_i, v_j) between nodes v_i and v_j (i \u2260 j). Each customer node is associated with a demand \u03b4_i, and a capacity limit Q is set for each vehicle. The solution (i.e., tour) \u03c4 is represented as a sequence of nodes, consisting of multiple sub-tours. Each sub-tour represents that a vehicle starts from the depot, visits a subset of customer nodes and returns to the depot.
The solution is feasible if each customer node is visited exactly once, and the total demand in each sub-tour does not exceed the capacity limit Q. We consider the Euclidean space with the cost function c(\u00b7) defined as the total length of the tour. The objective is to find the optimal tour \u03c4* with the minimal cost: \u03c4* = arg min_{\u03c4\u2208\u03a6} c(\u03c4|G), where \u03a6 is the discrete search space that contains all feasible tours. On top of CVRP (featured by the capacity constraint (C)), several VRP variants involve additional practical constraints. 1) Open Route (O): The vehicle does not need to return to the depot v_0 after visiting customers. 2) Backhaul (B): The demand \u03b4_i is positive in CVRP, representing a vehicle unloading goods at the customer node. In practice, a customer can have a negative demand, requiring a vehicle to load goods. We name the customer nodes with \u03b4_i > 0 linehauls and the ones with \u03b4_i < 0 backhauls. Hence, VRP with backhaul allows the vehicle to traverse linehauls and backhauls in a mixed manner, without strict precedence between them. 3) Duration Limit (L): To maintain a reasonable workload, the cost (i.e., length) of each route is upper bounded by a predefined threshold. 4) Time Window (TW): Each node v_i \u2208 V is associated with a time window [e_i, l_i] and a service time s_i. A vehicle must start serving customer v_i in the time slot from e_i to l_i. If the vehicle arrives earlier than e_i, it has to wait until e_i. All vehicles must return to the depot v_0 no later than l_0. The aforementioned constraints are illustrated in Fig. 1. By combining them, we can obtain 16 typical VRP variants, which are summarized in Table 3. Note that the combination is not a trivial addition of different constraints. For example, when the open route is coupled with the time window, the vehicle does not need to return to the depot, and hence the constraint imposed by l_0 at the depot is relaxed. We present more details of the VRP variants and the associated data generation process in Appendix A. Learning to Solve VRPs. Typical neural solvers (Kool et al., 2018; Kwon et al., 2020) parameterize the solution construction policy by an attention-based neural network \u03c0_\u03b8, which is trained to generate a solution in an autoregressive way. The feasibility of the generated solution is guaranteed by the masking mechanism during decoding (a minimal sketch of such a mask for CVRP follows below). Without loss of generality, we consider the RL training paradigm, wherein the solution construction process is formulated as a Markov Decision Process (MDP). Given an input instance, the encoder processes it and attains all node embeddings, which, with the context representation of the constructed partial tour, represent the current state. The decoder takes them as inputs and outputs the probabilities of valid nodes (i.e., actions) to be selected. After a complete solution \u03c4 is constructed, its probability can be factorized via the chain rule such that p_\u03b8(\u03c4|G) = \u220f_{t=1}^T p_\u03b8(\u03c0_\u03b8^(t) | \u03c0_\u03b8^(<t), G), where \u03c0_\u03b8^(t) and \u03c0_\u03b8^(<t) denote the selected node and the constructed partial tour at step t, and T is the number of total steps. The reward is defined as the negative tour length, i.e., R = \u2212c(\u03c4|G).
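As an aside on the masking mechanism mentioned above, here is a minimal sketch of a CVRP feasibility mask; the tensor layout is an assumption for illustration, not the authors' implementation.

    import torch

    def cvrp_action_mask(visited, demands, remaining_cap, at_depot):
        # visited: [B, n+1] bool (index 0 is the depot); demands: [B, n+1];
        # remaining_cap: [B, 1]; at_depot: [B] bool. True = node may not be chosen.
        mask = visited.clone()
        mask |= demands > remaining_cap    # customers exceeding remaining capacity
        mask[:, 0] = at_depot              # forbid depot -> depot moves
        return mask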
Given a baseline function b(\u00b7) for training stability, the policy network \u03c0_\u03b8 is often trained by the REINFORCE (Williams, 1992) algorithm, which applies estimated gradients of the expected reward to optimize the policy as below: \u2207_\u03b8 L_a(\u03b8|G) = E_{p_\u03b8(\u03c4|G)} [(c(\u03c4) \u2212 b(G)) \u2207_\u03b8 log p_\u03b8(\u03c4|G)]. (1) 4. Methodology In this section, we present the multi-task VRP solver with MoEs (MVMoE), and introduce the gating mechanism. Without loss of generality, we aim to learn a construction-based neural solver (Kool et al., 2018; Kwon et al., 2020) for tackling VRP variants with the five constraints introduced in Section 3. The structure of MVMoE is illustrated in Fig. 2. 4.1. Multi-Task VRP Solver with MoEs Multi-Task VRP Solver. Given an instance of a specific VRP variant, the static features of each node v_i are expressed by S_i = {y_i, \u03b4_i, e_i, l_i}, where y_i, \u03b4_i, e_i, l_i denote the coordinate, demand, start and end time of the time window, respectively. The encoder takes these static node features as inputs, and outputs d-dimensional node embeddings h_i. At the t-th decoding step, the decoder takes as input the node embeddings and a context representation, including the embedding of the last selected node and the dynamic features D_t = {c_t, t_t, l_t, o_t}, where c_t, t_t, l_t, o_t denote the remaining capacity of the vehicle, the current time, the length of the current partial route, and the presence indicator of the open route, respectively. Thereafter, the decoder outputs the probability distribution of nodes, from which a valid node is selected and appended to the partial solution. A complete solution is constructed in an autoregressive manner by iterating the decoding process.
[Figure 2. The model structure of MVMoE. Green part: Given an input instance, the encoder (multi-head self-attention with MoE, add & normalize) and the decoder (multi-head attention with MoE over static and dynamic features, dot-product & softmax) output node embeddings and probabilities of nodes to be selected, respectively. Gray nodes are masked to satisfy problem-specific constraints for feasibility; a node with a deeper color denotes a later node embedding. Yellow part: In an MoE layer, where we take the (node-level) input-choice Top2 gating as an example, the input x (i.e., node) is routed to the two experts that derive the two largest probabilities from the gating network G.]
In each training step, we randomly select a VRP variant, and train the neural network to solve the associated instances in a batch. In this way, MVMoE is able to learn a unified policy that can tackle different VRP tasks. If only a subset of the static or dynamic features is involved in the currently selected VRP variant, the other features are padded with default values (e.g., zeros; a minimal code sketch follows below). For example, given a CVRP instance, the static features of the i-th customer node are S_i^(C) = {y_i, \u03b4_i, 0, 0}, and the dynamic features at the t-th decoding step are D_t^(C) = {c_t, 0, l_t, 0}.
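The zero-padding of unused attributes is straightforward in code; a minimal sketch following the feature ordering above (all names are illustrative):

    import torch

    def static_features(coords, demands, tw_start=None, tw_end=None):
        # S_i = {y_i, delta_i, e_i, l_i}; attributes absent from the current
        # variant are zero-padded, e.g. CVRP yields S_i = {y_i, delta_i, 0, 0}.
        n = coords.shape[0]
        e = tw_start if tw_start is not None else torch.zeros(n)
        l = tw_end if tw_end is not None else torch.zeros(n)
        return torch.cat([coords, demands[:, None], e[:, None], l[:, None]], dim=-1)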
In summary, motivated by the fact that different VRP variants may include some common attributes (e.g., coordinate, demand), we define the static and dynamic features as the union set of the attributes that exist in all VRP variants. By training on a few VRP variants with these attributes, the policy network has the potential to solve unseen variants, which are characterized by different combinations of these attributes, i.e., the zero-shot generalization capability (Liu et al., 2024). Mixture-of-Experts. Typically, an MoE layer consists of 1) m experts {E_1, E_2, . . . , E_m}, each of which is a linear layer or FFN with independent trainable parameters, and 2) a gating network G parameterized by W_G, which decides how the inputs are distributed to experts. Given a single input x, G(x) denotes the output of the gating network (i.e., an m-dimensional vector), and E_j(x) denotes the output of the j-th expert. In light of this, the output of an MoE layer is calculated as: MoE(x) = \u2211_{j=1}^m G(x)_j E_j(x). (2) Intuitively, a sparse vector G(x) only activates a small subset of experts with partial model parameters, and hence saves computation. Typically, a TopK operator can achieve such sparsity by only keeping the K largest values while setting the others to negative infinity. In this case, the gating network calculates the output as G(x) = Softmax(TopK(x \u00b7 W_G)). Given the fact that larger sparse models do not always lead to better performance (Zuo et al., 2022), it is crucial yet tricky to design effective and efficient gating mechanisms so that each expert is sufficiently trained, given enough training data. To this effect, some works have been put forward in the language and vision domains, such as designing an auxiliary loss (Shazeer et al., 2017) or formulating gating as a linear assignment problem (Lewis et al., 2021) in pursuit of load balancing. MVMoE. By integrating the above parts, we obtain the multi-task VRP solver with MoEs. The overall model structure is shown in Fig. 2, where we employ MoEs in both the encoder and decoder. Specifically, we substitute MoEs for the FFN layer in the encoder, and substitute MoEs for the final linear layer of multi-head attention in the decoder. We refer to Appendix B for more details of the structure of MVMoE. We empirically find our design is effective in generating high-quality solutions, and that employing MoEs in the decoder especially tends to exert a greater influence on performance (see Section 5.2). We jointly optimize all trainable parameters \u0398, with the objective formulated as follows: min_\u0398 L = L_a + \u03b1 L_b, (3) where L_a denotes the original loss function of the VRP solver (e.g., the REINFORCE loss used to train the policy for solving VRP variants in Eq. (1)), L_b denotes the loss function associated with MoEs (e.g., the auxiliary loss used to ensure load balancing in Eq. (19) in Appendix B), and \u03b1 is a hyperparameter to control its strength. 4.2. Gating Mechanism We mainly consider the node-level (or token-level) gating, by which each node is routed independently to experts. (In addition, we also investigate another two gating levels, i.e., instance-level and problem-level gating, which are presented in Section 5.2 and Appendix B.) A minimal code sketch of the gating schemes discussed below is provided at the end of this excerpt.
[Figure 3. An illustration of the score matrix and gating algorithm. Left panel: Input-choice gating. Right panel: Expert-choice gating. The selected experts or nodes are in color. The arrow marks the dimension along which the TopK experts or nodes are selected.]
In each MoE layer, the extra computation originates from the forward pass of the gating network and the distribution of nodes to the selected experts. While employing MoEs in the decoder can significantly improve the performance, the number of decoding steps T increases as the problem size n scales up. It suggests that, compared to the encoder with a fixed number of gating steps N (\u226a T), applying MoEs in the decoder may substantially increase the computational complexity. In light of this, we propose a hierarchical gating mechanism to make better use of MoEs in the decoder, gaining a good trade-off between empirical performance and computational complexity. Next, we detail the node-level and hierarchical gating mechanisms. Node-Level Gating. The node-level gating routes inputs at the granularity of nodes. Let d denote the hidden dimension and W_G \u2208 R^{d\u00d7m} denote the trainable parameters of the gating network in MVMoE. Given a batch of inputs X \u2208 R^{I\u00d7d}, where I is the total number of nodes (i.e., batch size B \u00d7 problem scale n), each node is routed to the selected experts based on the score matrix H = (X \u00b7 W_G) \u2208 R^{I\u00d7m} predicted by the gating network. We illustrate an example of the score matrix in Fig. 3, where x_i denotes the i-th node, and E_j denotes the j-th expert in the node-level gating. In this paper, we mainly consider two popular gating algorithms (Shazeer et al., 2017; Zhou et al., 2022): 1) Input-choice gating: Each node selects the TopK experts based on H. Typically, K is set to 1 or 2 to retain a reasonable computational complexity. The input-choice gating is illustrated in the left panel of Fig. 3, where each node is routed to the two experts with the largest scores (i.e., Top2). However, this method cannot guarantee load balancing. An expert may receive many more nodes than the others, resulting in a dominant expert while leaving the others underfitting. To address this issue, most works employ an auxiliary loss to equalize the quantities of nodes sent to different experts during training. Here we use the importance & load loss (Shazeer et al., 2017) as L_b in Eq. (3) to mitigate load imbalance (see Appendix B). 2) Expert-choice gating: Each expert selects the TopK nodes based on H. Typically, K is set to I\u00d7\u03b2/m, where \u03b2 is the capacity factor reflecting the average number of experts utilized by a node. The expert-choice gating is illustrated in the right panel of Fig. 3, where each expert selects the two nodes with the largest scores given \u03b2 = 2. While this gating algorithm explicitly ensures load balancing, some nodes may not be chosen by any expert. We refer to Appendix B for more details of the above gating algorithms.
[Figure 4. A base gating (i.e., the input-choice gating with K = 2) and its hierarchical gating counterpart. In the latter, the gating network G_1 routes inputs to the sparse layer ({G_2, E_1, E_2, E_3, E_4}) or the dense layer D. If the sparse layer is chosen, the gating network G_2 routes nodes to experts according to the base gating.]
Hierarchical Gating.
Hierarchical Gating. In the VRP domain, it is computationally expensive to employ MoEs in every decoding step, since 1) the number of decoding steps $T$ increases as the problem size $n$ rises, and 2) the problem-specific feasibility constraints must be satisfied during decoding. To tackle these challenges, we propose to employ MoEs in only part of the decoding steps. Accordingly, we present a hierarchical gating, which learns to utilize MoEs effectively and efficiently during decoding. We illustrate the proposed hierarchical gating in Fig. 4. An MoE layer with the hierarchical gating includes two gating networks $\{G_1, G_2\}$, $m$ experts $\{E_1, E_2, \ldots, E_m\}$, and a dense layer $D$ (e.g., a linear layer). Given a batch of inputs $X \in \mathbb{R}^{I \times d}$, the hierarchical gating routes them in two stages. In the first stage, $G_1$ decides to distribute the inputs $X$ to either the sparse or the dense layer, according to the problem-level representation $X_1$. Specifically, we obtain $X_1$ by applying mean pooling along the first dimension of $X$, and process it to obtain the score matrix $H_1 = X_1 \cdot W_{G_1} \in \mathbb{R}^{1 \times 2}$. Then, we route the batch of inputs $X$ to the sparse or the dense layer by sampling from the probability distribution $G_1(X) = \mathrm{Softmax}(H_1)$. Here we employ the problem-level gating in $G_1$ for the generality and efficiency of the hierarchical gating (see Appendix D for further discussion). In the second stage, if $X$ is routed to the sparse layer, the gating network $G_2$ is activated to route nodes to experts at the node level using the aforementioned gating algorithms (e.g., the input-choice gating); otherwise, $X$ is routed to the dense layer $D$ and transformed into $D(X) \in \mathbb{R}^{I \times d}$. In summary, the hierarchical gating learns to output either $G_1(X)_0 \sum_{j=1}^{m} G_2(X)_j E_j(X)$ or $G_1(X)_1 D(X)$, based on both problem-level and node-level representations.
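A compact sketch of the two-stage routing (ours; the sparse branch is stubbed with a single linear layer where MVMoE would use the node-level MoE layer, and training of the sampled decision is simplified to scaling by the gate probability):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalGate(nn.Module):
    """Two-stage gating: G1 routes the whole batch either to a sparse (MoE)
    branch or to a cheap dense branch; inside the sparse branch, G2 would
    route individual nodes to experts."""
    def __init__(self, d: int, sparse_branch: nn.Module, dense_branch: nn.Module):
        super().__init__()
        self.w_g1 = nn.Linear(d, 2, bias=False)   # problem-level gate W_G1
        self.sparse = sparse_branch
        self.dense = dense_branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1 = x.mean(dim=0, keepdim=True)          # problem-level rep X1
        p = F.softmax(self.w_g1(x1), dim=-1)[0]   # G1(X) in R^2
        branch = torch.multinomial(p, 1).item()   # sample sparse vs. dense
        # The output is scaled by the chosen gate probability, which keeps
        # the gate weight G1(X) differentiable through the selected branch.
        if branch == 0:
            return p[0] * self.sparse(x)          # G1(X)_0 * MoE(X)
        return p[1] * self.dense(x)               # G1(X)_1 * D(X)

d = 128
gate = HierarchicalGate(d, sparse_branch=nn.Linear(d, d), dense_branch=nn.Linear(d, d))
print(gate(torch.randn(10, d)).shape)             # torch.Size([10, 128])
```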
Overall, the hierarchical gating improves the computational efficiency with only a minor loss in empirical performance. To balance the efficiency and performance of MVMoE, we use the base gating in the encoder and its hierarchical counterpart in the decoder. Note that the hierarchical gating is applicable to different gating algorithms, such as the input-choice gating (Shazeer et al., 2017) and the expert-choice gating (Zhou et al., 2022). We also explore a more advanced gating algorithm (Puigcerver et al., 2024) for reducing the number of routed nodes and thus the computational complexity, but its empirical performance is unsatisfactory in the VRP domain (see Section 5.3).

Table 1. Performance on 1K test instances of trained VRPs. * marks the best traditional solver (0.000%), against which the gaps are computed. Each entry reads Obj. / Gap / Time, first for n = 50, then for n = 100.

CVRP:
  HGS             10.334 / * / 4.6m        15.504 / * / 9.1m
  LKH3            10.346 / 0.115% / 9.9m   15.590 / 0.556% / 18.0m
  OR-Tools        10.540 / 1.962% / 10.4m  16.381 / 5.652% / 20.8m
  OR-Tools (x10)  10.418 / 0.788% / 1.7h   15.935 / 2.751% / 3.5h
  POMO            10.418 / 0.806% / 3s     15.734 / 1.488% / 9s
  POMO-MTL        10.437 / 0.987% / 3s     15.790 / 1.846% / 9s
  MVMoE/4E        10.428 / 0.896% / 4s     15.760 / 1.653% / 11s
  MVMoE/4E-L      10.434 / 0.955% / 4s     15.771 / 1.728% / 10s
VRPTW:
  HGS             14.509 / * / 8.4m        24.339 / * / 19.6m
  LKH3            14.607 / 0.664% / 5.5m   24.721 / 1.584% / 7.8m
  OR-Tools        14.915 / 2.694% / 10.4m  25.894 / 6.297% / 20.8m
  OR-Tools (x10)  14.665 / 1.011% / 1.7h   25.212 / 3.482% / 3.5h
  POMO            14.940 / 2.990% / 3s     25.367 / 4.307% / 11s
  POMO-MTL        15.032 / 3.637% / 3s     25.610 / 5.313% / 11s
  MVMoE/4E        14.999 / 3.410% / 4s     25.512 / 4.903% / 12s
  MVMoE/4E-L      15.013 / 3.500% / 3s     25.519 / 4.927% / 11s
OVRP:
  LKH3            6.511 / 0.198% / 4.5m    9.828 / * / 5.3m
  OR-Tools        6.531 / 0.495% / 10.4m   10.010 / 1.806% / 20.8m
  OR-Tools (x10)  6.498 / * / 1.7h         9.842 / 0.122% / 3.5h
  POMO            6.609 / 1.685% / 2s      10.044 / 2.192% / 8s
  POMO-MTL        6.671 / 2.634% / 2s      10.169 / 3.458% / 8s
  MVMoE/4E        6.655 / 2.402% / 3s      10.138 / 3.136% / 10s
  MVMoE/4E-L      6.665 / 2.548% / 3s      10.145 / 3.214% / 9s
VRPL:
  LKH3            10.571 / 0.790% / 7.8m   15.771 / * / 16.0m
  OR-Tools        10.677 / 1.746% / 10.4m  16.496 / 4.587% / 20.8m
  OR-Tools (x10)  10.495 / * / 1.7h        16.004 / 1.444% / 3.5h
  POMO            10.491 / -0.008% / 2s    15.785 / 0.093% / 9s
  POMO-MTL        10.513 / 0.201% / 2s     15.846 / 0.479% / 9s
  MVMoE/4E        10.501 / 0.092% / 3s     15.812 / 0.261% / 10s
  MVMoE/4E-L      10.506 / 0.131% / 3s     15.821 / 0.323% / 10s
VRPB:
  OR-Tools        8.127 / 0.989% / 10.4m   12.185 / 2.594% / 20.8m
  OR-Tools (x10)  8.046 / * / 1.7h         11.878 / * / 3.5h
  POMO            8.149 / 1.276% / 2s      11.993 / 0.995% / 7s
  POMO-MTL        8.182 / 1.684% / 2s      12.072 / 1.674% / 7s
  MVMoE/4E        8.170 / 1.540% / 3s      12.027 / 1.285% / 9s
  MVMoE/4E-L      8.176 / 1.605% / 3s      12.036 / 1.368% / 8s
OVRPTW:
  OR-Tools        8.737 / 0.592% / 10.4m   14.635 / 1.756% / 20.8m
  OR-Tools (x10)  8.683 / * / 1.7h         14.380 / * / 3.5h
  POMO            8.891 / 2.377% / 3s      14.728 / 2.467% / 10s
  POMO-MTL        8.987 / 3.470% / 3s      15.008 / 4.411% / 10s
  MVMoE/4E        8.964 / 3.210% / 4s      14.927 / 3.852% / 11s
  MVMoE/4E-L      8.974 / 3.322% / 4s      14.940 / 3.941% / 10s

5. Experiments
In this section, we empirically verify the superiority of the proposed MVMoE and provide insights into the application of MoEs to solving VRPs. We consider 16 VRP variants with five constraints. Due to the page limit, we present more experimental results in Appendix C. All experiments are conducted on a machine with NVIDIA Ampere A100-80GB GPU cards and an AMD EPYC 7513 CPU at 2.6GHz. Baselines. Traditional solvers: we employ HGS (Vidal, 2022) to solve CVRP and VRPTW instances with default hyperparameters (i.e., the maximum number of iterations without improvement is 20000). We run LKH3 (Helsgaun, 2017) to solve CVRP, OVRP, VRPL and VRPTW instances with 10000 trials and 1 run. OR-Tools (Furnon & Perron, 2023) is an open-source solver for complex optimization problems; it is more versatile than LKH and HGS, and can solve all 16 VRP variants considered in this paper.
We use the parallel cheapest insertion as the first-solution strategy, and guided local search as the local-search strategy in OR-Tools. For n = 50/100, we set the search time limit to 20s/40s per instance, and also provide results given 200s/400s (i.e., OR-Tools (x10)). For all traditional solvers, we solve 32 instances in parallel on 32 CPU cores, following Kool et al. (2018). Neural solvers: we compare our method to POMO (Kwon et al., 2020) and POMO-MTL (Liu et al., 2024). While POMO is trained on each single VRP, POMO-MTL is trained on multiple VRPs by multi-task learning. Note that POMO-MTL is the dense-model counterpart of MVMoE, structured with dense layers (e.g., FFNs) rather than sparse MoEs. Specifically, POMO-MTL and MVMoE/4E possess 1.25M and 3.68M parameters, respectively, but they activate a similar number of parameters for each single input. Training. We follow most setups in Kwon et al. (2020). 1) For all neural solvers: the Adam optimizer is used with a learning rate of 1e-4, a weight decay of 1e-6, and a batch size of 128. The model is trained for 5000 epochs, each containing 20000 training instances (i.e., 100M training instances in total). The learning rate is decayed by a factor of 10 for the last 10% of training instances. We consider two problem scales n ∈ {50, 100} during training, following Liu et al. (2024). 2) For multi-task solvers: the training problem set includes CVRP, OVRP, VRPB, VRPL, VRPTW, and OVRPTW (see Appendix C.1 for further discussion). In each training batch, we randomly sample a problem from the set and generate its instances; please refer to Appendix A for details of the generation procedure. 3) For our method: we employ m = 4 experts with K = β = 2 in each MoE layer, and set the weight α of the auxiliary loss L_b to 0.01. The default gating mechanism of MVMoE/4E is the node-level input-choice gating in both the encoder and decoder layers. MVMoE/4E-L is a computationally light version that replaces the input-choice gating with its hierarchical gating counterpart in the decoder. Inference. For all neural solvers, we use greedy rollout with x8 instance augmentation, following Kwon et al. (2020). We report the average results (i.e., objective values and gaps) over a test dataset containing 1K instances, and the total time to solve the entire test dataset. The gaps are computed with respect to the results of the best-performing traditional VRP solvers (i.e., * in Tables 1 and 2).
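As a quick sanity check of how the reported gaps are computed (our arithmetic; the small discrepancy comes from the tables' three-decimal rounding):

```python
# gap = (obj - obj_best) / obj_best, where obj_best is the entry marked *.
# Example: POMO on CVRP with n = 50 in Table 1 (HGS is the best solver).
obj, obj_best = 10.418, 10.334
print(f"{(obj - obj_best) / obj_best:.3%}")  # 0.813%, vs. the 0.806% reported
```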
Table 2. Zero-shot generalization on 1K test instances of unseen VRPs. * marks the best traditional solver (0.000%), against which the gaps are computed. Each entry reads Obj. / Gap / Time, first for n = 50, then for n = 100.

OVRPB:
  OR-Tools        5.764 / 0.332% / 10.4m   8.522 / 1.852% / 20.8m
  OR-Tools (x10)  5.745 / * / 1.7h         8.365 / * / 3.5h
  POMO-MTL        6.116 / 6.430% / 2s      8.979 / 7.335% / 8s
  MVMoE/4E        6.092 / 5.999% / 3s      8.959 / 7.088% / 9s
  MVMoE/4E-L      6.122 / 6.522% / 3s      8.972 / 7.243% / 9s
OVRPL:
  OR-Tools        6.522 / 0.480% / 10.4m   9.966 / 1.783% / 20.8m
  OR-Tools (x10)  6.490 / * / 1.7h         9.790 / * / 3.5h
  POMO-MTL        6.668 / 2.734% / 2s      10.126 / 3.441% / 9s
  MVMoE/4E        6.650 / 2.454% / 3s      10.097 / 3.148% / 10s
  MVMoE/4E-L      6.659 / 2.597% / 3s      10.106 / 3.244% / 9s
VRPBL:
  OR-Tools        8.131 / 1.254% / 10.4m   12.095 / 2.586% / 20.8m
  OR-Tools (x10)  8.029 / * / 1.7h         11.790 / * / 3.5h
  POMO-MTL        8.188 / 1.971% / 2s      11.998 / 1.793% / 8s
  MVMoE/4E        8.172 / 1.776% / 3s      11.945 / 1.346% / 9s
  MVMoE/4E-L      8.180 / 1.872% / 3s      11.960 / 1.473% / 9s
VRPBTW:
  OR-Tools        15.053 / 1.857% / 10.4m  26.217 / 2.858% / 20.8m
  OR-Tools (x10)  14.771 / * / 1.7h        25.496 / * / 3.5h
  POMO-MTL        16.055 / 8.841% / 3s     27.319 / 7.413% / 10s
  MVMoE/4E        16.022 / 8.600% / 4s     27.236 / 7.078% / 11s
  MVMoE/4E-L      16.041 / 8.745% / 4s     27.265 / 7.190% / 10s
VRPLTW:
  OR-Tools        14.815 / 1.432% / 10.4m  25.823 / 2.534% / 20.8m
  OR-Tools (x10)  14.598 / * / 1.7h        25.195 / * / 3.5h
  POMO-MTL        14.961 / 2.586% / 3s     25.619 / 1.920% / 12s
  MVMoE/4E        14.937 / 2.421% / 4s     25.514 / 1.471% / 13s
  MVMoE/4E-L      14.953 / 2.535% / 4s     25.529 / 1.545% / 12s
OVRPBL:
  OR-Tools        5.771 / 0.549% / 10.4m   8.555 / 2.459% / 20.8m
  OR-Tools (x10)  5.739 / * / 1.7h         8.348 / * / 3.5h
  POMO-MTL        6.104 / 6.306% / 2s      8.961 / 7.343% / 8s
  MVMoE/4E        6.076 / 5.843% / 3s      8.942 / 7.115% / 9s
  MVMoE/4E-L      6.104 / 6.310% / 3s      8.957 / 7.300% / 9s
OVRPBTW:
  OR-Tools        8.758 / 0.927% / 10.4m   14.713 / 2.268% / 20.8m
  OR-Tools (x10)  8.675 / * / 1.7h         14.384 / * / 3.5h
  POMO-MTL        9.514 / 9.628% / 3s      15.879 / 10.453% / 10s
  MVMoE/4E        9.486 / 9.308% / 4s      15.808 / 9.948% / 11s
  MVMoE/4E-L      9.515 / 9.630% / 3s      15.841 / 10.188% / 10s
OVRPLTW:
  OR-Tools        8.728 / 0.656% / 10.4m   14.535 / 1.779% / 20.8m
  OR-Tools (x10)  8.669 / * / 1.7h         14.279 / * / 3.5h
  POMO-MTL        8.987 / 3.633% / 3s      14.896 / 4.374% / 11s
  MVMoE/4E        8.966 / 3.396% / 4s      14.828 / 3.903% / 12s
  MVMoE/4E-L      8.974 / 3.488% / 4s      14.839 / 3.971% / 10s
VRPBLTW:
  OR-Tools        14.890 / 1.402% / 10.4m  25.979 / 2.518% / 20.8m
  OR-Tools (x10)  14.677 / * / 1.7h        25.342 / * / 3.5h
  POMO-MTL        15.980 / 9.035% / 3s     27.247 / 7.746% / 11s
  MVMoE/4E        15.945 / 8.775% / 4s     27.142 / 7.332% / 12s
  MVMoE/4E-L      15.963 / 8.915% / 4s     27.177 / 7.473% / 11s
OVRPBLTW:
  OR-Tools        8.729 / 0.624% / 10.4m   14.496 / 1.724% / 20.8m
  OR-Tools (x10)  8.673 / * / 1.7h         14.250 / * / 3.5h
  POMO-MTL        9.532 / 9.851% / 3s      15.738 / 10.498% / 10s
  MVMoE/4E        9.503 / 9.516% / 4s      15.671 / 10.009% / 11s
  MVMoE/4E-L      9.518 / 9.682% / 4s      15.706 / 10.263% / 10s

[Figure 5: Few-shot generalization on unseen VRPs (gap (%) vs. fine-tuning epoch on VRPBLTW and OVRPBLTW for POMO-MTL, MVMoE/4E, and MVMoE/4E-L).]

5.1. Empirical Results
Performance on Trained VRPs. We evaluate all methods on the 6 trained VRPs and gather all results in Table 1. The single-task neural solver (i.e., POMO) achieves better performance than multi-task neural solvers on each single problem, since it is restructured and retrained on each VRP independently.
However, its average performance over all trained VRPs is quite inferior, as shown in Table 4 in Appendix C, since each trained POMO is overfitted to a specific VRP. For example, the average gap of POMO trained solely on CVRP is 16.815%, while POMO-MTL and MVMoE/4E achieve 2.102% and 1.925%, respectively. Notably, our neural solvers consistently outperform POMO-MTL. MVMoE/4E performs slightly better than MVMoE/4E-L at the expense of more computation. Despite that, MVMoE/4E-L exhibits stronger out-of-distribution generalization capability than MVMoE/4E (see Tables 7 and 8 in Appendix C). Generalization on Unseen VRPs. We evaluate multi-task solvers on 10 unseen VRP variants. 1) Zero-shot generalization: we directly test the trained solvers on unseen VRPs. The results in Table 2 reveal that the proposed MVMoE significantly outperforms POMO-MTL across all VRP variants. 2) Few-shot generalization: we also consider the few-shot setting on n = 50, where a trained solver is fine-tuned on the target VRP using 10K instances (0.01% of total training instances) in each epoch. Without loss of generality, we conduct experiments on VRPBLTW and OVRPBLTW following the training setups. The results in Fig. 5 show that MVMoE generalizes more favorably than POMO-MTL.
5.2. Ablation on MoEs
Here we explore the effect of different MoE settings on the zero-shot generalization of neural solvers, and provide insights on how to effectively apply MoEs to solve VRPs. Due to fast convergence, we reduce the number of epochs to 2500 on VRPs of size n = 50, while leaving other setups unchanged. We set MVMoE/4E as the default baseline, and ablate the different components of MoEs below.
[Figure 6: Left three panels: the effect of MoE settings on the average zero-shot generalization performance: (a) the position of MoEs; (b) the number of experts; (c) the gating mechanism. Right two panels: further analyses: (d) average zero-shot generalization performance of each method employing various gating algorithms in the decoder; (e) training efficiency of each gating algorithm.]
Position of MoEs. We consider three positions to apply MoEs in neural solvers: 1) raw feature processing (Raw): the linear layer, which projects raw features into initial embeddings, is replaced by MoEs; 2) encoder (Enc): the FFN in an encoder layer is replaced by MoEs. Typically, MoEs are widely used in every-two or last-two layers (i.e., every or last two layers with even indices ℓ ∈ [0, N−1]) (Riquelme et al., 2021); besides, we further attempt to use MoEs in all encoder layers; 3) decoder (Dec): the final linear layer of the multi-head attention is replaced by MoEs in the decoder. We show the average performance over 10 unseen VRPs in Fig. 6(a).
The results reveal that applying MoEs at a shallow layer (e.g., Raw) may worsen model performance, while using MoEs in all encoder layers (Enc_All) or in the decoder (Dec) can benefit zero-shot generalization. Therefore, in this paper, we employ MoEs in both the encoder and decoder to pursue a strong unified model architecture for solving various VRPs. Number of Experts. We increase the number of experts in each MoE layer to 8 and 16, and compare the derived MVMoE/8E and MVMoE/16E models to MVMoE/4E. We first train all models using the same number (50M) of instances. After that, we also train MVMoE/8E/16E with more data and computation to explore potentially better results, based on the scaling laws (Kaplan et al., 2020). Specifically, we provide MVMoE/8E/16E with more data by using larger batch sizes, which scale up linearly with the number of experts (i.e., MVMoE/4E/8E/16E are trained on 50M/100M/200M instances with batch sizes 128/256/512, respectively). The results in Fig. 6(b) show that increasing the number of experts together with more training data further unleashes the power of MVMoE, indicating the efficacy of MoEs in solving VRPs. Gating Mechanism. We investigate the effect of different gating levels and algorithms, including three levels (node-level, instance-level and problem-level) and three algorithms (input-choice, expert-choice and random gating), with details presented in Appendix B. As shown in Fig. 6(c), the node-level input-choice gating performs best, while the node-level expert-choice gating performs worst. Interestingly, we observe that the expert-choice gating in the decoder makes MVMoE hard to optimize. This may suggest that each gating algorithm has its most suitable position to serve MoEs. However, after an attempt to tune this configuration (i.e., by using MoEs only in the encoder), its performance is still inferior to the baseline, with an average gap of 7.190% on unseen VRPs.
5.3. Additional Results
We further provide experiments and discussions on more advanced gating algorithms, training efficiency, benchmark performance, and scalability. We refer readers to Appendix C for more empirical results (e.g., sensitivity analyses). Advanced Gating. Besides the input-choice and expert-choice gating algorithms evaluated above, we further consider soft MoEs (Puigcerver et al., 2024), a recent advanced gating algorithm. Specifically, it performs an implicit soft assignment by distributing K slots (i.e., convex combinations of all inputs) to each expert, rather than a hard assignment between inputs and experts as done by conventional sparse and discrete gating networks. Since only K (e.g., 1 or 2) slots are distributed to each expert, it can save much computation. We train MVMoE on n = 50 using node-level soft MoEs in the decoder, following the training setups. We also show the result of employing a heuristic (random) hierarchical gating in the decoder. However, their results are unsatisfactory, as shown in Fig. 6(d). Training Efficiency. Fig. 6(e) shows the training time when employing each gating algorithm in the decoder; combined with the results reported in Fig. 6(d), this demonstrates the efficacy of the proposed hierarchical gating in reducing the training overhead with only minor losses in performance. Benchmark Performance. We further evaluate the out-of-distribution (OOD) generalization performance of all neural solvers on CVRPLIB benchmark instances. Detailed results can be found in Tables 7 and 8 in Appendix C.
Surprisingly, we observe that MVMoE/4E performs poorly on large-scale instances (e.g., n > 500). This may be caused by the generalization issue of sparse MoEs when transferring to new distributions or domains, which is still an open question in the MoE literature (Fedus et al., 2022a). In contrast, MVMoE/4E-L mostly outperforms MVMoE/4E, demonstrating the more favourable potential of the hierarchical gating in promoting OOD generalization. It is worth noting that all neural solvers are trained only on simple, uniformly distributed instances of size n = 100. Embracing more varied problem sizes (cross-size) and attribute distributions (cross-distribution) in the multi-task training (cross-problem) may further consolidate their performance. Scalability. Given that supervised-learning-based approaches appear to be more scalable than RL-based approaches in the current literature, we build upon a more scalable method, LEHD (Luo et al., 2023). Concretely, we train a dense model, LEHD, and a light sparse model with 4 experts, LEHD/4E-L, on CVRP. The training setups are kept the same as in Luo et al. (2023), except that we train all models for only 20 epochs for training efficiency. We use the hierarchical MoE in each decoder layer of LEHD/4E-L. The results, shown in Table 8, demonstrate the potential of MoE as a general idea that can further benefit recent scalable methods. Moreover, during the solution-construction process, recent works (Drakulic et al., 2023; Gao et al., 2023) typically constrain the search space to a neighborhood of the currently selected node, which is shown to be effective in handling large-scale instances. Integrating MVMoE with these simple yet effective techniques may further improve large-scale performance.
6. Conclusion
Targeting a more generic and powerful neural solver for VRPs, we propose a multi-task vehicle routing solver with MoEs (MVMoE), which can solve a range of VRPs concurrently, even in a zero-shot manner. We provide valuable insights on how to apply MoEs in neural VRP solvers, and propose an effective and efficient hierarchical gating mechanism. Empirically, MVMoE demonstrates strong generalization capability in zero-shot and few-shot settings and on real-world benchmarks. Although this paper presents a first attempt towards a large VRP model, the scale of parameters is still far less than that of LLMs. We leave 1) the development of scalable MoE-based models for solving large-scale VRPs, 2) the pursuit of generic representations for different problems, 3) the exploration of the interpretability of gating mechanisms (Nguyen et al., 2023; 2024), and 4) the investigation of scaling laws in MoEs (Krajewski et al., 2024) to future work. We hope our work benefits the COP community in developing large optimization (or foundation) models [https://github.com/ai4co/awesome-fm4co]. Acknowledgements. This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG3-RP-2022-031), the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant, the National Natural Science Foundation of China (Grant 62102228), and the Natural Science Foundation of Shandong Province (Grant ZR2021QF063). We would like to thank the anonymous reviewers and (S)ACs of ICML 2024 for their constructive comments and dedicated service to the community.
Jianan Zhou would like to personally express deep gratitude to his grandfather, Jinlong Hu, for his meticulous care and love during the last 26 years. Eternal easy rest in sweet slumber. Impact Statements. This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
+}
title_10K/test_title_short_2405.01035v1.json ADDED
@@ -0,0 +1,18 @@
+{
+"url": "http://arxiv.org/abs/2405.01035v1",
+"title": "LOQA: Learning with Opponent Q-Learning Awareness",
+"abstract": "In various real-world scenarios, interactions among agents often resemble the\ndynamics of general-sum games, where each agent strives to optimize its own\nutility. Despite the ubiquitous relevance of such settings, decentralized\nmachine learning algorithms have struggled to find equilibria that maximize\nindividual utility while preserving social welfare. In this paper we introduce\nLearning with Opponent Q-Learning Awareness (LOQA), a novel, decentralized\nreinforcement learning algorithm tailored to optimizing an agent's individual\nutility while fostering cooperation among adversaries in partially competitive\nenvironments. LOQA assumes the opponent samples actions proportionally to their\naction-value function Q. Experimental results demonstrate the effectiveness of\nLOQA at achieving state-of-the-art performance in benchmark scenarios such as\nthe Iterated Prisoner's Dilemma and the Coin Game. LOQA achieves these outcomes\nwith a significantly reduced computational footprint, making it a promising\napproach for practical multi-agent applications.",
+"authors": "Milad Aghajohari, Juan Agustin Duque, Tim Cooijmans, Aaron Courville",
+"published": "2024-05-02",
+"updated": "2024-05-02",
+"primary_cat": "cs.GT",
+"cats": [
+"cs.GT",
+"cs.AI",
+"cs.LG"
+],
+"label": "Original Paper",
+"paper_cat": "Multi AND Agent AND Reinforcement AND Learning",
+"gt": "LOQA: Learning with Opponent Q-Learning Awareness",
"main_content": "INTRODUCTION A major difficulty in reinforcement learning (RL) and multi-agent reinforcement learning (MARL) is the non-stationary nature of the environment, where the outcome of each agent is determined not only by their own actions but also those of other players von Neumann (1928). This difficulty often results in the failure of traditional algorithms converging to desirable solutions. In the context of general-sum games, independent RL agents often converge to sub-optimal solutions in the Pareto sense, when each of them seeks to optimize their own utility Foerster et al. (2018b). This situation draws parallels with many real-world scenarios, in which individuals pursuing their own selfish interests leads them to a worse outcome than cooperating with others. Thus one of the objectives of MARL research must be to develop decentralized agents that are able to cooperate while avoiding being exploited in partially competitive settings. We call this reciprocity-based cooperation. Previous work has resulted in algorithms that train reciprocity-based cooperative agents by differentiating through the opponent\u2019s learning step (Foerster et al., 2018b; Letcher et al., 2021; Zhao et al., 2022; Willi et al., 2022) or by modeling opponent shaping as a meta-game in the space of agent policies (Al-Shedivat et al., 2018; Kim et al., 2021; Lu et al., 2022; Cooijmans et al., 2023). However, both of these approaches have important drawbacks with respect to computational efficiency. On one hand, differentiating through even just a few of the opponent\u2019s learning steps, can only be done sequentially and requires building large computation graphs. This is computationally costly when dealing with complex opponent policies. On the other hand, meta-learning defines the problem as a meta-state over the product space of policies of the agent and opponent, and learns a meta-policy that maps from the meta-state to the agent\u2019s updated policy. The complexity of the problem then scales with the policy parameterization which is usually a neural network with many parameters. In this paper we introduce Learning with Opponent Q-Learning Awareness (LOQA), which stands because it avoids computing gradients w.r.t. optimization steps or learning the dynamics of a metagame, resulting in significantly improved computational efficiency. LOQA performs opponent shaping by assuming that the opponent\u2019s behavior is guided by an internal action-value function Q. This assumption allows LOQA agents to build a model of the opponent policy that can be shaped by influencing its returns for different actions. Controlling the return by differentiating through stochastic 1 arXiv:2405.01035v1 [cs.GT] 2 May 2024 \fPublished as a conference paper at ICLR 2024 objectives is a key idea in RL and can be done using the REINFORCE estimator Williams (1992). LOQA is strongly inspired by Best Response Shaping Aghajohari et al. (2024). In BRS, the detective approximates the optimal response to an agent by conditioning on the returns from simulated game trajectories between the agent and a random opponent. The agent then differentiates through the detective through these differentiable returns. 2 BACKGROUND We consider general-sum, n-player, Markov games, also referred as stochastic games Shapley (1953). Markov games are defined by a tuple M = (N, S, A, P, R, \u03b3) where S denotes the state space, A := A1 \u00d7 . . . 
2 BACKGROUND We consider general-sum, n-player Markov games, also referred to as stochastic games (Shapley, 1953). Markov games are defined by a tuple $M = (N, S, A, P, R, \gamma)$, where $S$ denotes the state space, $A := A_1 \times \cdots \times A_n$ is the joint action space of all players, $P : S \times A \to \Delta(S)$ maps every state and joint action to a probability distribution over states, $R = \{r_1, \ldots, r_n\}$ is the set of reward functions, where each $r_i : S \times A \to \mathbb{R}$ maps every state and joint action to a scalar reward, and $\gamma \in [0, 1]$ is the discount factor. We use the notation and definitions for standard RL algorithms of Agarwal et al. (2021). Consider two agents, 1 (agent) and 2 (opponent), that interact in an environment with neural network policies $\pi^1 := \pi(\cdot|\cdot;\theta^1)$ and $\pi^2 := \pi(\cdot|\cdot;\theta^2)$, parameterized by $\theta^1$ and $\theta^2$ respectively. We denote by $\tau$ a trajectory with initial state distribution $\mu$ and probability measure
$$\Pr^{\pi^1,\pi^2}_{\mu}(\tau) = \mu(s_0)\,\pi^1(a_0|s_0)\,\pi^2(b_0|s_0)\,P(s_1|s_0,a_0,b_0)\cdots$$
where $b \in A_2$ denotes the action of the opponent. In multi-agent reinforcement learning, each agent seeks to optimize its expected discounted return $R$; for the agent this is given by
$$V^1(\mu) := \mathbb{E}_{\tau \sim \Pr^{\pi^1,\pi^2}_{\mu}}\left[R^1(\tau)\right] = \mathbb{E}_{\tau \sim \Pr^{\pi^1,\pi^2}_{\mu}}\left[\sum_{t=0}^{\infty} \gamma^t\, r^1(s_t, a_t, b_t)\right].$$
The key observation is that, under the definitions above, $V^1$ depends on the policy of the opponent through the reward function $r^1(s_t, a_t, b_t)$. $V^1$ is thus differentiable with respect to the parameters of the opponent via the REINFORCE estimator (Williams, 1992):
$$\nabla_{\theta^2} V^1(\mu) = \mathbb{E}_{\tau \sim \Pr^{\pi^1,\pi^2}_{\mu}}\left[R^1(\tau) \sum_{t=0}^{\infty} \nabla_{\theta^2} \log \pi^2(b_t|s_t)\right] = \mathbb{E}_{\tau \sim \Pr^{\pi^1,\pi^2}_{\mu}}\left[\sum_{t=0}^{\infty} \gamma^t\, r^1(s_t, a_t, b_t) \sum_{k<t} \nabla_{\theta^2} \log \pi^2(b_k|s_k)\right]. \quad (1)$$
We will rely on this observation to influence the opponent's Q-value and incentivize reciprocity-based cooperation in partially competitive environments.
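To illustrate Eq. (1), here is a small self-contained sketch (ours; the tabular opponent policy and the hand-written trajectory are toy stand-ins for neural policies and environment rollouts):

```python
import torch

# Toy setup: opponent policy is a table of logits over 2 actions for 5 states.
theta2 = torch.randn(5, 2, requires_grad=True)

def reinforce_grad(trajectories, gamma=0.96):
    """trajectories: list of [(state, r1, b_opponent), ...].
    Returns an estimate of grad_theta2 V1 following Eq. (1)."""
    total = 0.0
    for traj in trajectories:
        logp_sum = 0.0          # running sum of log pi2(b_k | s_k) for k < t
        for t, (s, r1, b) in enumerate(traj):
            total = total + (gamma ** t) * r1 * logp_sum
            logp_sum = logp_sum + torch.log_softmax(theta2[s], dim=-1)[b]
    # Differentiating the surrogate `total` w.r.t. theta2 yields Eq. (1);
    # we ascend on V1, hence the minus sign on the loss.
    (-total / len(trajectories)).backward()
    return theta2.grad

# One fake 3-step trajectory of (state, agent reward r1, opponent action b).
print(reinforce_grad([[(0, 1.0, 1), (2, -1.0, 0), (3, 0.5, 1)]]))
```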
3 RELATED WORK 3.1 OPPONENT SHAPING Learning with opponent learning awareness (LOLA) (Foerster et al., 2018b) introduces the concept of opponent shaping, i.e., the idea of steering the other agent throughout its learning process. LOLA assumes that the opponent is a naive learner and attempts to shape it by considering one step of its optimization process. Rather than optimizing the value under the current policies at the current iteration $i$, $V^1(\theta^1_i, \theta^2_i)$, LOLA optimizes $V^1(\theta^1_i, \theta^2_i + \Delta\theta^2_i)$, where $\Delta\theta^2_i$ is a learning step of the opponent. $\Delta\theta^2_i$ is a function that depends on the agent's parameters and is thus differentiable with respect to $\theta^1$. Since the derivative of $V^1(\theta^1_i, \theta^2_i + \Delta\theta^2_i)$ is difficult to compute, the authors consider the surrogate value given by its first-order Taylor expansion. POLA (Zhao et al., 2022) builds an idealized version of LOLA that, unlike its predecessor, is invariant to the parameterization of the policy. In a similar fashion to proximal policy optimization (PPO) (Schulman et al., 2017), each agent increases the probability of actions that increase its expected return, while trying to minimize the l2 distance between the updated policy and the old policy. This combined objective of maximizing the return and minimizing the l2 distance in policy space is equivalent to the Proximal Point method, hence the name Proximal-LOLA, or POLA. Other modifications to the original LOLA algorithm attempt to resolve its shortcomings. Consistent learning with opponent learning awareness (COLA) (Willi et al., 2022) attempts to resolve the inherent inconsistency of LOLA assuming that the other agent is a naive learner instead of another LOLA agent. Stable opponent shaping (SOS) (Letcher et al., 2021) introduces an interpolation between LOLA and a more stable variant called look-ahead, which has strong theoretical convergence guarantees. 3.2 META LEARNING Other methods have been used to generate agents with near-optimal behavior in social dilemmas. First used by Al-Shedivat et al. (2018) in this setting, meta-learning redefines the MARL problem as a meta-game in the space of policy parameters in an attempt to deal with the non-stationary nature of the environment. In this meta-game, the meta-state is the joint policy, the meta-reward is the return on the underlying game, and a meta-action is a change to the inner policy (i.e., the policy in the original game). Model-free opponent shaping (M-FOS) (Lu et al., 2022) applies policy-gradient methods to this meta-game to find a strong meta-policy. Meta-Value Learning (Cooijmans et al., 2023) applies value learning to model the long-term effects of policy changes, and uses the gradient of the value as an improvement direction. 4 SOCIAL DILEMMAS Social dilemmas are a type of decision problem where each party's myopic efforts to maximize its own benefit lead to a less favorable outcome compared to when all parties cooperate. Designed primarily as thought experiments, these dilemmas demonstrate the trade-offs that are often inherent in multi-agent decision-making scenarios. As such, they have been used to model real-life situations in diverse fields such as economics, ecology and policy making. One example of such a decision problem is the famous Iterated Prisoner's Dilemma (IPD). The Prisoner's Dilemma (PD). PD is a game in which each of two agents, or prisoners, must decide to either cooperate with one another or defect. The dilemma the prisoners face originates from the reward structure, given in Table 1.

Table 1: Payoff (reward) matrix for the Prisoner's Dilemma game, where each pair gives the payoffs of the Agent and the Opponent respectively.

                     Opponent: Cooperate   Opponent: Defect
  Agent: Cooperate   (-1, -1)              (-3,  0)
  Agent: Defect      ( 0, -3)              (-2, -2)

With this reward structure, a rational agent will choose to defect no matter what the other agent chooses. As a result, both agents become locked in the defect-defect Nash equilibrium, even though they would achieve greater utility by both choosing to cooperate. Iterated Prisoner's Dilemma (IPD). As the name implies, the IPD is simply an (infinitely) repeated version of the Prisoner's Dilemma. Unlike the one-shot PD, the IPD offers some hope for rational cooperative behaviour. Originally popularized by Axelrod (1980), the IPD has been used to model many hypothetical and real-world scenarios. It has also become a popular test bed for MARL algorithms attempting to achieve reciprocity-based cooperative policies. A simple but effective strategy in the IPD is tit-for-tat (TFT), which consists in cooperating on the first turn and copying the opponent's action thereafter.
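A quick numeric check of this dominance argument, using the Table 1 payoffs:

```python
# Agent payoffs from Table 1, indexed [agent_action][opponent_action],
# with 0 = cooperate, 1 = defect.
R1 = [[-1.0, -3.0],
      [ 0.0, -2.0]]

for opp in (0, 1):
    best = max((0, 1), key=lambda a: R1[a][opp])
    print(f"opponent={'C' if opp == 0 else 'D'}: "
          f"best response is {'D' if best == 1 else 'C'}")
# Defect is the best response to both actions (0 > -1 and -2 > -3), so
# (D, D) is the one-shot Nash equilibrium even though (C, C) pays more.
```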
The Coin Game. First described in Lerer & Peysakhovich (2018), the Coin Game is a grid-world environment in which two agents take turns taking coins. At the beginning of each episode a coin of a particular color (red or blue), corresponding to one of the two agents, spawns at a random location in the grid. Agents are rewarded for any coin taken, but are punished if the other agent takes the coin corresponding to their color. The reward structure of the Coin Game is designed to incentivize cooperation between agents, as each one would be better off if both take only the coin corresponding to their color. In this way the Coin Game mirrors the IPD, and policies that cooperate reciprocally are therefore highly desirable, as they achieve better individual and social outcomes. However, unlike the IPD, the Coin Game is embedded in a non-trivial environment and requires non-trivial policy models. It can be seen as an extension of the IPD towards more complex and realistic scenarios. 5 METHOD DESCRIPTION Intuitively, a change in either the agent's or the opponent's policy results in a change in the probability measure over the trajectories observed when the two interact in an environment. Since the value function of the opponent is an expectation over this probability measure, it is controllable by the agent's policy (and vice versa). LOQA leverages this observation to exert influence over the policy that the opponent will learn. As an illustration, consider an instance of the IPD where a LOQA agent and the opponent are initialized as random agents, i.e., they sample actions from a uniform distribution. If the LOQA agent increases its probability of defection after the opponent defects, it implicitly decreases the opponent's action-value for defecting. The opponent will then learn this and reduce its probability of defecting. Similarly, if the LOQA agent cooperates after the opponent cooperates, it increases the action-value of cooperation for the opponent. In response, the opponent will learn to cooperate more. This reciprocity-based cooperative behavior is the structure behind tit-for-tat. 5.1 MODELING THE OPPONENT'S POLICY Let $\pi^1(a|s) := \pi(a|s;\theta^1)$ refer to the policy of the agent and $\pi^2(b|s) := \pi(b|s;\theta^2)$ refer to the policy of the opponent, which are neural networks with parameters $\theta^1$ and $\theta^2$. Similarly, $Q^2(s, b) := Q(s, b;\phi^2)$ denotes the action-value function of the opponent, a neural network with parameters $\phi^2$. LOQA relies on a key assumption about the opponent's policy. Similar to Soft Actor-Critic (SAC) (Haarnoja et al., 2018), the assumption is that the opponent's actions are sampled from a distribution proportional to the exponential of its action-value function $Q^2(\cdot)$. More formally, at time $t$ we write this assumption as
$$\pi^2(b_t|s_t) \approx \frac{\exp(Q^2(s_t, b_t))}{\sum_{b'} \exp(Q^2(s_t, b'))}.$$
More specifically, we approximate $Q^2$ with Monte Carlo rollouts $\mathcal{T}$ of length $T$, where every trajectory $\tau \in \mathcal{T}$, $\tau \sim \Pr^{\pi^1,\pi^2}_{\mu}$, starts at state $s_t$ with the opponent taking action $b_t$:
$$\hat{Q}^2(s_t, b_t) = \mathbb{E}_{\tau \sim \mathcal{T}}\left[R^2(\tau) \mid s = s_t, b = b_t\right] = \frac{1}{|\mathcal{T}|} \sum_{\tau \in \mathcal{T}} \sum_{k=t}^{T} \gamma^{k-t}\, r^2(s_k, a_k, b_k),$$
where $r^2(s, a, b)$ denotes the opponent's reward at state $s$ after taking action $b$, with the agent taking action $a$.
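A minimal sketch of this Monte Carlo estimate (ours; `toy_rollout` is a stand-in for rolling out both policies in the real environment from s_t with the opponent forced to play b_t):

```python
import random

def mc_q_estimate(rollout_fn, s_t, b_t, num_rollouts=32, horizon=16, gamma=0.96):
    """Monte Carlo estimate of the opponent's Q(s_t, b_t): average discounted
    opponent rewards over rollouts starting at s_t with opponent action b_t.
    `rollout_fn(s, b, horizon)` is assumed to return [r2_t, ..., r2_T]."""
    total = 0.0
    for _ in range(num_rollouts):
        rewards = rollout_fn(s_t, b_t, horizon)
        total += sum((gamma ** k) * r for k, r in enumerate(rewards))
    return total / num_rollouts

# Toy stand-in: reward is noisy and slightly higher when b = 1.
def toy_rollout(s, b, horizon):
    return [0.1 * b + random.gauss(0, 0.05) for _ in range(horizon)]

print(mc_q_estimate(toy_rollout, s_t=0, b_t=1))
```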
This empirical expectation of the Q-function is controllable by the agent via the REINFORCE estimator:
$$\nabla_{\theta^1} \hat{Q}^2(s_t, b_t) = \mathbb{E}_{\tau \sim \mathcal{T}}\left[\sum_{k=t+1}^{T} \gamma^{k-t}\, r^2(s_k, a_k, b_k) \sum_{t<j<k} \nabla_{\theta^1} \log \pi^1(a_j|s_j)\right].$$
The opponent's policy evaluated at state $s_t$ can now be approximated using the Monte Carlo rollout estimate $\hat{Q}^2$ and the action-value function $Q^2$ as follows:
$$\hat{\pi}^2(b_t|s_t) := \frac{\exp(\hat{Q}^2(s_t, b_t))}{\exp(\hat{Q}^2(s_t, b_t)) + \sum_{b' \neq b_t} \exp(Q^2(s_t, b'))}. \quad (2)$$
Notice that we assume access to the opponent's real action-value function $Q^2$. To obtain a fully decentralized algorithm, we can simply replace $Q^2$ with the agent's own estimate of the opponent's action-value function. We now integrate these ideas into the actor-critic formulation.

Algorithm 1 (LOQA):
  Initialize: discount factor γ; agent action-value parameters φ¹, target action-value parameters φ¹_target, actor parameters θ¹; opponent action-value parameters φ², target action-value parameters φ²_target, actor parameters θ².
  for iteration = 1, 2, ... do
    Run policies π¹ and π² for T timesteps in the environment and collect trajectory τ
    L¹_Q ← 0, L²_Q ← 0
    for t = 1, 2, ..., T−1 do
      L¹_Q ← L¹_Q + HuberLoss(r¹_t + γ Q¹_target(s_{t+1}, a_{t+1}) − Q¹(s_t, a_t))
      L²_Q ← L²_Q + HuberLoss(r²_t + γ Q²_target(s_{t+1}, b_{t+1}) − Q²(s_t, b_t))
    end for
    Optimize L¹_Q w.r.t. φ¹ and L²_Q w.r.t. φ² with the optimizer of choice
    Compute advantage estimates {A¹_1, ..., A¹_T} and {A²_1, ..., A²_T}
    L¹_a ← LOQA-ACTOR-LOSS(τ, π¹, γ, {A¹_1, ..., A¹_T})
    L²_a ← LOQA-ACTOR-LOSS(τ, π², γ, {A²_1, ..., A²_T})
    Optimize L¹_a w.r.t. θ¹ and L²_a w.r.t. θ² with the optimizer of choice
  end for

Algorithm 2 (LOQA-ACTOR-LOSS):
  Input: trajectory τ of length T, actor policy π^i, opponent action-value function Q^{−i}, discount factor γ, advantages {A^i_1, ..., A^i_T}
  L_a ← 0
  for t = 1, 2, ..., T−1 do
    Q̂^{−i}(s_t, b_t) ← Σ_{k=t}^{T} γ^{k−t} r^{−i}(s_k, a_k, b_k)   (r^{−i} made differentiable using DiCE)
    Compute π̂^{−i} from Q̂^{−i}(s_t, b_t) and Q^{−i}(s_t, b_t) according to Eq. (2)
    L_a ← L_a + A^i_t [log π^i(a_t|s_t) + log π̂^{−i}(b_t|s_t)]
  end for
  return L_a
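Equation (2) can be written down directly; in this sketch (ours) the gradient flows only through the Monte Carlo estimate of the taken action, while the critic values of the remaining actions act as fixed logits:

```python
import torch

def opponent_policy_approx(q_hat_bt: torch.Tensor, q_all: torch.Tensor, b_t: int) -> torch.Tensor:
    """Eq. (2): softmax over action values where the taken action b_t uses
    the differentiable rollout estimate q_hat_bt and the other actions use
    the (fixed) critic values q_all."""
    logits = q_all.clone()
    logits[b_t] = q_hat_bt            # substitute the Monte Carlo estimate
    return torch.softmax(logits, dim=-1)[b_t]

q_all = torch.tensor([0.2, -0.4])     # opponent critic Q2(s_t, .)
q_hat = torch.tensor(0.35, requires_grad=True)  # stand-in for Q2_hat(s_t, b_t)
p = opponent_policy_approx(q_hat, q_all, b_t=0)
p.backward()                          # gradient flows into q_hat only
print(float(p), float(q_hat.grad))
```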
5.2 OPPONENT SHAPING In order to shape the opponent's behavior, we factor the opponent's policy approximation $\hat{\pi}^2$ as well as the agent's policy $\pi^1$ into the probability measure over trajectories. Adapting the original actor-critic formulation (Konda & Tsitsiklis, 2000) to the joint agent-opponent policy space, we have
$$\nabla_{\theta^1} V^1(\mu) = \mathbb{E}_{\tau \sim \Pr^{\pi^1,\pi^2}_{\mu}}\left[\sum_{t=0}^{T} A^1(s_t, a_t, b_t)\, \nabla_{\theta^1}\left(\log \pi^1(a_t|s_t) + \log \pi^2(b_t|s_t)\right)\right],$$
where $A^1(s_t, a_t, b_t)$ is the advantage of the first agent, and the second log term vanishes because $\pi^2$ is constant with respect to $\theta^1$. LOQA approximates the opponent's policy using Equation (2). This approximated policy is differentiable with respect to the agent's parameters, since it is computed from the opponent's action-value, which is itself differentiable (see Equation (1)). Consequently, a second term emerges in LOQA's update:
$$\nabla_{\theta^1} V^1(\mu) = \mathbb{E}_{\tau \sim \Pr^{\pi^1,\pi^2}_{\mu}}\left[\sum_{t=0}^{T} A^1(s_t, a_t, b_t)\, \nabla_{\theta^1}\left(\log \pi^1(a_t|s_t) + \log \hat{\pi}^2(b_t|s_t)\right)\right]. \quad (3)$$
The first log term comes from the actor-critic update, and the second log term is a shaping component that pushes the opponent's return in a direction (in policy space) that is beneficial for the agent. This second term comes from the assumption that the opponent's policy can be influenced by the agent's parameters; for a derivation, refer to Section F in the appendix. In practice we use DiCE (Foerster et al., 2018a) and loaded-DiCE (Farquhar et al., 2019) on the action-value estimate $\hat{Q}^2$ to compute the gradient $\nabla_{\theta^1} \log \hat{\pi}^2$ and reduce its variance. Also, the current trajectory $\tau$ itself is used for the $\hat{Q}^2$ estimation (see Appendix B).
[Figure 1: Probability of cooperation of a sigmoid LOQA agent at each possible state (START, CC, CD, DC, DD) in the one-step-history IPD after 7000 training iterations. The resulting policy is similar to tit-for-tat, which cooperates at the first step and copies the previous action of the opponent at subsequent time steps.]
5.3 SELF-PLAY AND REPLAY BUFFER OF PREVIOUS AGENTS Because the environments are symmetric, we can use self-play to train a single LOQA agent against itself. We also maintain a replay buffer and, for each optimization step (which requires generating environment rollouts), we sample uniformly from previously encountered agents. This increases the diversity of the opponents the agent faces during training. The replay buffer has a fixed capacity and receives a new agent every nth iteration, where n is a hyperparameter.
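Putting the two log terms together, the per-trajectory actor objective of Algorithm 2 looks roughly as follows (our sketch; advantage estimation and the DiCE machinery are omitted):

```python
import torch

def loqa_actor_loss(logp_agent, logp_opp_shaped, advantages):
    """Sketch of Eq. (3) / Algorithm 2: the usual actor-critic term plus the
    shaping term log pi_hat of the opponent.
    logp_agent:      log pi^1(a_t | s_t), differentiable w.r.t. theta^1
    logp_opp_shaped: log pi_hat^2(b_t | s_t), built from the Monte Carlo Q
                     estimate and therefore also differentiable w.r.t. theta^1
    advantages:      A^1_t, treated as fixed targets."""
    adv = advantages.detach()
    # Minimizing this loss ascends the objective in Eq. (3).
    return -(adv * (logp_agent + logp_opp_shaped)).sum()

T = 4
loss = loqa_actor_loss(
    logp_agent=torch.randn(T, requires_grad=True),
    logp_opp_shaped=torch.randn(T, requires_grad=True),
    advantages=torch.randn(T),
)
loss.backward()
```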
6 EXPERIMENTS We consider two general-sum environments to evaluate LOQA against the current state of the art, namely the Iterated Prisoner's Dilemma (IPD) and the Coin Game. We compare with POLA and M-FOS, the only methods, to the best of our knowledge, that generate reciprocity-based cooperative policies in the Coin Game. 6.1 ITERATED PRISONER'S DILEMMA We train an agent consisting of a sigmoid over logits for each possible state in the one-step-history IPD. There are 5 possible states in this configuration, namely START (the starting state), CC, CD, DC and DD, where C stands for cooperation and D for defection. Training runs for 4500 iterations (approximately 15 minutes on an NVIDIA A100 GPU) with a batch size of 2048. We empirically observe that a LOQA agent reaches a tit-for-tat-like policy, as shown by the probability of cooperation at each state (Figure 1). We believe the probabilities are not fully saturated for two reasons. First, the normalization over the action-values in the opponent's policy approximation makes it numerically impossible to reach a probability of one for either action. Second, we observed that after some time the trajectories become homogeneous 'always cooperate' trajectories that ultimately degrade the quality of the resulting policy by making it less likely to retaliate after a defection by the opponent. 6.2 COIN GAME Like Zhao et al. (2022), we use a GRU policy that has access to the current observation of the game plus both agents' actions in the previous turn to train a LOQA agent in the Coin Game. We run trajectories of length 50 with a discount factor γ = 0.7, in parallel with a batch size of 8192. For evaluation we run 10 seeds of fully trained agents for 50 episodes in a league that involves other agents, and plot the time-averaged reward. The results are shown in Figure 2.
[Figure 2: Average rewards after evaluating 10 fully trained LOQA, POLA, and M-FOS seeds against different agents in a 3x3 Coin Game lasting 50 episodes (AC = always cooperate, AD = always defect). A fully cooperative agent achieves an average reward of 0.35 against itself; LOQA generates a policy that demonstrates reciprocity-based cooperation.]
We also experimented with ablations of LOQA that removed either self-play during training or the replay buffer of past policy weights. These ablations showed that the two elements, although not essential, improve the performance of LOQA agents in the Coin Game by making them more cooperative with themselves and less exploitable by always-defect agents; for details refer to Appendix section C. In Figure 2, we observe that LOQA agents cooperate with themselves, as indicated by the high average reward of 0.3, close to the 0.35 of an always-cooperate agent against itself. LOQA agents achieve high social welfare without being exploited by an always-defect agent: they obtain an average reward of -0.05, comparable to POLA's -0.03. More importantly, our agents are fully trained after only 2 hours of compute on an NVIDIA A100 GPU, compared to the 8 hours of training it takes POLA to achieve the results shown in Figure 2. It should be noted that, compared to the IPD, cooperation and defection in the Coin Game consist of sequences of actions, so an agent must first learn to take coins before learning whether to cooperate with its opponent. We also consider full histories, as opposed to one-step histories, making the state space significantly larger. 6.3 SCALABILITY EXPERIMENTS In this section, we test LOQA, POLA, and M-FOS on larger grid sizes in the Coin Game to evaluate their scalability. As the grid size increases, the average distance between the agents and the coin also grows. This added complexity challenges the learning of reciprocal cooperative behavior. For example, when the opponent takes the agent's coin, the agent must learn to take multiple steps to retaliate; this behavior is less likely to be discovered by random actions on larger grids. Our experiments with different grid sizes illustrate LOQA's scalability properties compared to POLA and M-FOS. In assessing the performance of the agents, we consider two metrics: the achievement of a predetermined performance threshold and the time taken to reach it. For the latter, it is critical to acknowledge that the conceptualization of a 'step' differs between POLA and LOQA.
The steps in POLA encompass numerous inner optimization steps, rendering a direct comparison of performance per step inconclusive. To facilitate an equitable comparison, we employ the wall-clock time of both algorithms, executed under identical computational configurations (GPU, memory, etc.).

Table 2: Thresholds based on two main criteria of a reciprocity-based cooperative policy. The weak and medium thresholds are designed such that all agents pass them, while the strong threshold represents good performance.

  Threshold   Normalized return vs. each other   Normalized return vs. always defect
  Weak        >= 0.05                            >= -1.2
  Medium      >= 0.1                             >= -0.5
  Strong      >= 0.2                             >= -0.2

[Figure 3: Wall-clock time (seconds) vs. grid size for three seeds of LOQA, POLA, and M-FOS on reaching the weak, medium, and strong thresholds. Each data point indicates the first time its corresponding seed passed a given threshold; dashed lines pass through the average time of the runs that passed for each algorithm.]

The thresholds are defined based on two principal criteria pertaining to a reciprocity-based cooperative policy. Firstly, we evaluate the agent's return against an 'always defect' opponent; this tests the cessation of cooperation when the opponent does not reciprocate. Secondly, we consider the agents' return against each other, which serves as a measure of cooperative behavior. This dual-threshold approach is pragmatic, as it discerns the distinctive behaviors: an 'always defect' agent meets the first threshold but fails the second, whereas an 'always cooperate' agent satisfies the second but not the first. A policy resembling tit-for-tat satisfies both thresholds. Furthermore, as the grid size grows, the average return of agents per step decreases, since it takes longer to reach the coin. It is therefore crucial to ensure our thresholds are consistent across grid sizes. We normalize the returns to make the thresholds standard over all grid sizes. Specifically, we multiply the return by the maximum distance possible between the agent and the coin for a given grid size. Since the grid is wrapped, the normalization value N is given by the Manhattan distance between the center of the grid and one of the corners. We call this the normalized return and calculate our thresholds based on it. We use three thresholds for our evaluation. The weak and medium thresholds are designed so that all the algorithms are able to reach them; however, as LOQA reaches much higher returns on large grid sizes compared to POLA and M-FOS, we set the strong threshold at a high performance level. The threshold values are specified in Table 2. The results of our experiments on larger grid sizes are illustrated in Figure 3. All LOQA runs meet the strong threshold for grid sizes up to 6, but at a grid size of 7, one run falls short of the strong threshold. In contrast, every POLA and M-FOS run fails to reach the strong threshold for grid sizes above 3.
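One plausible reading of this normalization (ours; the paper gives no code, and the 'center' of an even-sized wrapped grid is ambiguous, so treat the formula as an assumption):

```python
def normalization_constant(grid_size: int) -> int:
    # On a wrapped (toroidal) grid, the per-axis distance is at most
    # grid_size // 2; the center-to-corner Manhattan distance attains this
    # on both axes, e.g. N = 2 for a 3x3 grid and N = 6 for 7x7.
    return 2 * (grid_size // 2)

def normalized_return(return_per_step: float, grid_size: int) -> float:
    return return_per_step * normalization_constant(grid_size)

# A per-step return of 0.05 on a 7x7 grid gives a normalized return of 0.3,
# clearing the strong 'vs. each other' threshold of 0.2 in Table 2.
print(normalized_return(0.05, 7))
```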
For further details on the training curves from these experiments, please see Appendix section D. Figure 6 also provides the evaluation metrics for each algorithm, detailing their behaviors. Additionally, LOQA consistently achieves each threshold substantially faster, by at least one order of magnitude, because of its lower time and memory complexity.
[Figure 4: Training curves for 3 seeds of POLA, LOQA, and M-FOS on the two evaluation metrics for a 7x7 grid size: normalized return vs. themselves ('each other') and vs. always defect, plotted against wall-clock time in seconds, with the weak, medium, and strong thresholds marked. The range of the x-axis differs per algorithm, as POLA and M-FOS need more time.]
The complexity of LOQA is equivalent to the calculation of a REINFORCE estimator, as is standard in RL. Unlike POLA, LOQA does not involve calculations for opponent optimization steps, nor does it differentiate through a computational graph of said optimizations during each training step, giving LOQA greater computational efficiency than POLA. Additionally, the absence of a second-order gradient in LOQA reduces the variance of its gradient estimators. The model of opponent learning in POLA is restricted to a limited number of optimization steps. In scenarios with increased complexity and varying opponent policies, additional steps may be necessary to accurately represent the opponent's learning. This increase necessitates extended runtimes and increased memory allocation for storing the computational graphs required for differentiation, positioning LOQA as more efficient and economical in memory usage. 7 LIMITATIONS LOQA is primarily limited by the assumption that the other player acts according to an inner action-value function. As such, it is unable to shape opponents that do not follow this assumption. Similarly, LOQA agents are designed for environments with discrete action spaces. Future work could explore relaxations that allow LOQA agents to shape other types of agents and to learn in continuous action spaces. 8 CONCLUSION In this paper we have introduced LOQA, a decentralized reinforcement learning algorithm that learns reciprocity-based cooperation in general-sum environments at a lower computational cost than its predecessors. To do so, LOQA agents rely on the observation that their opponent's action-value function is controllable, and on the assumption that their opponent's policy closely mirrors it.
As a result, LOQA agents are able to shape other LOQA agents by performing REINFORCE updates that can be efficiently computed in hindsight after collecting environment trajectories. This is especially advantageous, as demonstrated in the experimental setup, where LOQA agents confidently outperform POLA and M-FOS agents in terms of optimality and efficiency in the Coin Game. LOQA therefore stands out as a promising algorithm for tackling more complex and empirically grounded social dilemmas. 9 ACKNOWLEDGMENTS The authors would like to thank Mila and Compute Canada for providing the computational resources used for this paper. We would like to thank Olexa Bilaniuk for his invaluable technical support throughout the project. We acknowledge the financial support of Hitachi Ltd, Aaron's CIFAR Canadian AI chair, and the Canada Research Chair in Learning Representations that Generalize Systematically. Special thanks to Shunichi Akatsuka for his insightful discussions. We would also like to thank the JAX ecosystem (Bradbury et al., 2018)."
+}
title_10K/test_title_short_2405.01051v1.json ADDED
@@ -0,0 +1,17 @@
| 1 |
+
{
|
| 2 |
+
"url": "http://arxiv.org/abs/2405.01051v1",
|
| 3 |
+
"title": "Generating User Experience Based on Personas with AI Assistants",
|
| 4 |
+
"abstract": "Traditional UX development methodologies focus on developing ``one size fits\nall\" solutions and lack the flexibility to cater to diverse user needs. In\nresponse, a growing interest has arisen in developing more dynamic UX\nframeworks. However, existing approaches often cannot personalise user\nexperiences and adapt to user feedback in real-time. Therefore, my research\nintroduces a novel approach of combining Large Language Models and personas, to\naddress these limitations. The research is structured around three areas: (1) a\ncritical review of existing adaptive UX practices and the potential for their\nautomation; (2) an investigation into the role and effectiveness of personas in\nenhancing UX adaptability; and (3) the proposal of a theoretical framework that\nleverages LLM capabilities to create more dynamic and responsive UX designs and\nguidelines.",
"authors": "Yutan Huang",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.SE",
"cats": [
"cs.SE",
"cs.HC"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Generating User Experience Based on Personas with AI Assistants",
"main_content": "Introduction User Interface (UI) and User Experience (UX) are integral components in software engineering (SE) that serve to bridge the gap between human requirements and system functionalities. UI and UX aim to optimise the interaction between the computer and the human via the interface to ensure ease of use and intuitiveness. A well-implemented UI/UX not only diminishes the cognitive load on the user but also reduces the time and e\ufb00ort required for users to understand and navigate through a system [8]. Hence, properly designed UI/UX signi\ufb01cantly a\ufb00ects system e\ufb03ciency, user satisfaction, and overall performance [20]. In the rapidly advancing technological landscape, users\u2019 desire for customised options and personalised experiences has surged, emphasising the importance of customisable and adaptive UX [23]. In addition, there is a growing recognition of the necessity for human-centric requirements that cater to individuals with speci\ufb01c needs, such as those with disabilities or diverse backgrounds [5]. Customizable UX allows users to control and tailor the design based on their preferences. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro\ufb01t or commercialadvantage and that copies bearthis notice and the full citation on the \ufb01rstpage. Copyrights for third-party components of this work mustbe honored. For all other uses, contact the owner/author(s). ICSE-Companion \u201924, April 14\u201320, 2024, Lisbon, Portugal \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0502-1/24/04. https://doi.org/10.1145/3639478.3639810 It represents an important step toward user-centric interfaces but often fails to deliver a truly personalised experience [6, 12]. Adaptive UX goes beyond customisation, employing the ability to understand user behaviours, preferences and context [14]. Consequently, the system proactively alters the elements of UI to serve users better, e.g., visual appearance, typography, colour schemes, iconography and interactive elements like buttons, forms, and navigation menus [9]. While the idea of a truly adaptive system seems appealing, its practical implementation is challenging due to the diverse needs of users. Additionally, manually designing such a system is laborious, compounded by the need to maintain consistency due to business requirements, e.g., branding and aesthetics. Personas are often used in the \ufb01eld of UX as archetypical user pro\ufb01les to inform designers about speci\ufb01c user behaviours, needs and goals from the system [18]. Their strength lies in providing a clear, focused understanding of end-users, especially when direct access to human bene\ufb01ciaries is limited, enabling designers to make informed decisions. The recent advances in arti\ufb01cial intelligence (AI) techniques o\ufb00er great potential for adaptive UI and addressing the challenges mentioned above via automation. Large Language Models (LLMs) are the recent successors in the area of AI techniques that have shown considerable promise in automating di\ufb00erent SE tasks, e.g., code generation [10], requirements management [2], test generation [16], and persona generation [25, 26]. LLMs, trained on vast amounts of data, are excellent candidates for generating adaptive designs due to their ability to understand context, infer user intentions, and generate coherent responses [4]. 
This PhD research intends to explore the potential of LLMs combined with rich personas, which are more comprehensive and detailed than standard personas, to develop adaptive UX for diverse users. Speci\ufb01cally, I aim to create an adaptive UX framework that tailorsuser interfaces according to individual preferences and needs, focusing on the design, adapting and leveraging personas (and user requirements). Next, I discuss the related work on adaptive UX and the use of personas (Section 2), and the research plan with research questions (RQs) (Section 3). This PhD project is in the early stages; hence, in Section 4, I discuss the proposed approach and research directions. 2 Related work Adaptive UI/UX design uses a model-based approach as well as an AI-based approach [19, 21]. The model-based approach involves the creation of adaptive designs using architectural models. These models consist of one or multiple layers of architecture that \fICSE-Companion \u201924, April 14\u201320, 2024, Lisbon, Portugal Yutan Huang process multimodal data to generate adaptive UXs [11]. This approach primarily focuses on enhancing UX features such as layout, content, and modality, however, while it achieves diversi\ufb01cation by leveraging di\ufb00erent models, it often lacks the invaluable input of user feedback and iterative re\ufb01nement derived from legacy systems [1]. Additionally, the methodology for runtime feature selection is often underdeveloped in this approach, which limits its ability to adapt to changing user needs and preferences [7]. This model-based approach seeks to create variations in UX but may fall short in addressing real-time user interactions and feedback [7]. In contrast, the AI-based approach has gained prominence in recent years, capitalizing on the capabilities of AI to generate both text and graphics. Researchers have employed AI tools such as Sketch2Code, MetaMorph, and ChatGPT to dynamically generate UIs based on user interactions and requirements [17, 22]. The use of AI in adaptive UX design introduces a range of possibilities. Yang et al. identi\ufb01ed four key channels through which AI augments the value of adaptive UX: self-inferences, world inferences, optimal inferences, and utility inferences. These channels represent AI\u2019s ability to provide users with self-understanding, contextual understanding, optimal solutions, and utility-based responses, signi\ufb01cantly enriching the user experience [24]. These four channels serve as foundational concepts for adaptive UX generation with AI and are essential for guiding designers to create more personalized and user-centric interfaces [3]. Despite the potential of AI-based approaches, it\u2019s becoming increasingly evident that solutions utilizing Large Language Models (LLMs) are at the forefront of this technology\u2019s application. These LLMs, which are now among the most commonly implemented forms of AI, heavily rely on the quality of prompts provided to them [15]. In the context of user experience (UX) design, these prompts\u2019 precision and relevance directly impact the outcomes\u2019 quality, as demonstrated in recent studies [13]. E\ufb00ective prompt engineering is a critical aspect of AI-driven adaptive UX requirements, and it is an area that requires careful consideration and re\ufb01nement [2]. The model-based and AI-based approaches in adaptive UX design have illustrated diverse possibilities. 
However, it\u2019s important to note that these approaches commonly lack rigorous evaluation and iterative feedback from users and designers, forming a signi\ufb01cant gap in the existing research landscape. This review provides the context for understanding the need for our research, which aims to address these limitations and enhance the \ufb01eld of adaptive UX design by constructing an intelligent User interface that uses ML techniques with a framework to guide experts through the process of creating adaptive UI with user experience. 3 Research Plan The main research aim of this PhD research is to develop a framework for generating adaptive UX using LLMs and personas structured in the following steps (guided by the research questions mentioned under each step). Foundational Understanding: How is adaptive UX de\ufb01ned and understood in the current literature? Which UX fragments can be adapted and generated automatically? Role of Personas in Adaptive UX: What are the critical elements within personas that lend themselves to the creation of adaptive UX? Are there gaps or limitations in current persona models that could hinder the development of adaptive UX designs? Role of LLMs in Adaptive UX: To what degree can LLMs contribute to the development of adaptive UX? How do LLMs interpret and utilise persona information to generate UX designs? Which prompting techniques in LLMs yield the best adaptive UX results? Framework Development and Evaluation: Do users and practitioners \ufb01nd the adaptive UX generated by our framework useful? What are the challenges when leveraging LLMs for adaptive UX? 4 Solution Approach Foundational Understanding Systematic Literature Review and UX experiment: My foundational understanding begins with a systematic literature review on adaptive UI/UX, exploring de\ufb01nitions, methods, and applications in academic and professional contexts to identify aspects of UX that have been automated previously. Concurrently, I will conduct experiments to create UI automatically using LLMs, with insights from the literature, to validate my \ufb01ndings and identify potential UI fragments that can be adapted easily (e.g., interface designs, colours, buttons). This will establish a foundation for developing an informed adaptive UI/UX framework. Role of Personas in Adaptive UI/UX Expert Insight and Model Comparison: To \ufb01gure out the important parts of personas that help create adaptive UI and \ufb01nd any shortcomings in current persona representations, I will \ufb01nd key persona elements related to adaptive UI in practice by interviewing experienced UX designers. I will then compare di\ufb00erent representations of persona contents and prioritise what is important to include in a persona for adaptive UX generation. The comparative analysis and interviews in parallel will help re\ufb01ne persona representations and triangulate our \ufb01ndings. Role of LLMs in Adaptive UI/UX Exploring LLM\u2019s Capability in Adaptive UI Creation: I plan to carry out a set of experiments revolving around prompt engineering, an example would be using GPT-model-based LLMs and feeding them user preference and background information with personas. These experiments can examine the e\ufb00ectiveness of LLMs in generating user-tailored designs. Framework Development and Evaluation Evolving UI/UX Framework through User and Practitioner Feedback: I aim to develop a UX framework based on LLMs to guide adaptive UX creation. 
This framework will be dynamic, evolving through iterative enhancements for robustness and e\ufb00ective adaptive UX design. Leveraging LLM capabilities, I seek to establish a foundational, adaptable tool for UX development. Assessment and Re\ufb01nementof the UI/UX Framework through UserCentric Feedback: The evaluation of adaptive UI design and UI/UX framework will involve engaging users and experts to interact with and test the developed UIs by using them as a daily routine and provide users with tasks to complete.Their feedback will inform the integration of prompt engineering into our framework, and enhancing a smooth transition from design-time to run-time approach. \fGenerating User Experience Based on Personas with AI Assistants ICSE-Companion \u201924, April 14\u201320, 2024, Lisbon, Portugal 5 Conclusion In conclusion, the research aims to address a gap in adaptive UX design by integrating LLMs and personas, striking a balance between technical capabilities and a human-centric approach. The challenge lies in aligning LLMs\u2019 technical prowess with the nuanced insights of personas. The expectedimpact includes enhanced adaptability and personalization in UX designs, and setting new standards in UX methodology. Acknowledgement Yutan is supported by ARC Laureate Fellowship FL190100035."
}
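As a concrete (hypothetical) illustration of the prompt-engineering experiments sketched in the research plan above, the following Python snippet assembles a persona-conditioned prompt for an LLM to propose adaptive UI adjustments. The persona fields, prompt wording, and fragment list are assumptions invented for illustration, not an API or protocol from the paper.

```python
# Hypothetical sketch: turning a rich persona into an LLM prompt for
# adaptive UI suggestions. All field names and wording are illustrative.
persona = {
    "name": "Alex",
    "age": 67,
    "accessibility_needs": ["low vision", "reduced fine motor control"],
    "goals": ["pay utility bills quickly", "avoid unfamiliar jargon"],
    "context": "uses a tablet, mostly in the evening",
}

ui_fragments = ["font size", "colour scheme", "button size", "navigation menu depth"]

def build_prompt(persona: dict, fragments: list[str]) -> str:
    """Compose a single prompt string from persona details and UI fragments."""
    needs = ", ".join(persona["accessibility_needs"])
    goals = "; ".join(persona["goals"])
    return (
        f"You are a UX assistant. Persona: {persona['name']}, age {persona['age']}, "
        f"accessibility needs: {needs}. Goals: {goals}. Context: {persona['context']}.\n"
        f"For each UI fragment in {fragments}, propose one concrete adaptation "
        "and a one-line rationale grounded in the persona."
    )

prompt = build_prompt(persona, ui_fragments)
print(prompt)  # send `prompt` to the LLM of choice via its chat API
```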
title_10K/test_title_short_2405.01063v1.json
ADDED
@@ -0,0 +1,18 @@
{
"url": "http://arxiv.org/abs/2405.01063v1",
"title": "Fair Recommendations with Limited Sensitive Attributes: A Distributionally Robust Optimization Approach",
"abstract": "As recommender systems are indispensable in various domains such as job\nsearching and e-commerce, providing equitable recommendations to users with\ndifferent sensitive attributes becomes an imperative requirement. Prior\napproaches for enhancing fairness in recommender systems presume the\navailability of all sensitive attributes, which can be difficult to obtain due\nto privacy concerns or inadequate means of capturing these attributes. In\npractice, the efficacy of these approaches is limited, pushing us to\ninvestigate ways of promoting fairness with limited sensitive attribute\ninformation.\n Toward this goal, it is important to reconstruct missing sensitive\nattributes. Nevertheless, reconstruction errors are inevitable due to the\ncomplexity of real-world sensitive attribute reconstruction problems and legal\nregulations. Thus, we pursue fair learning methods that are robust to\nreconstruction errors. To this end, we propose Distributionally Robust Fair\nOptimization (DRFO), which minimizes the worst-case unfairness over all\npotential probability distributions of missing sensitive attributes instead of\nthe reconstructed one to account for the impact of the reconstruction errors.\nWe provide theoretical and empirical evidence to demonstrate that our method\ncan effectively ensure fairness in recommender systems when only limited\nsensitive attributes are accessible.",
"authors": "Tianhao Shi, Yang Zhang, Jizhi Zhang, Fuli Feng, Xiangnan He",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.IR",
"cats": [
"cs.IR",
"cs.CY",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Fair Recommendations with Limited Sensitive Attributes: A Distributionally Robust Optimization Approach",
"main_content": "INTRODUCTION Recommender system has become a vital technology in various Web applications, including job searching [20], online advertising [39], and e-commerce [33]. By providing personalized information filtering based on user interests, recommender systems have significant social influence [44]. Unfair recommendations that discriminate against specific user groups [35] would negatively impact the Web and society. For instance, previous research [20] on job recommendations found that women were exposed to fewer advertisements for high-paying jobs and career coaching services compared to men, perpetuating gender pay gap in the workforce. Clearly, it is of paramount importance to maintain fairness in recommender systems to ensure their trustworthiness and regulatory compliance [32]. The existing research to optimize the fairness of recommendations can be further classified into four main categories: \u2022 Data-oriented methods [6, 30], which adjusts the training data according to sensitive attributes by resampling [6] and adding antidote data [30], etc. \u2022 Adversarial learning-based methods [39, 45], which learn fair representations with a min-max game to prevent encoding of sensitive attributes in user embeddings or graph structures. \u2022 Regularization-based methods [16, 24, 41, 47], which incorporate fairness metrics such as absolute difference in utility between groups with different sensitive attributes into the objective function [16, 41, 47] or the reward function [24]. \u2022 Re-ranking methods [22, 40], which directly adjusts recommendation results to enhance fairness across user groups with different sensitive attributes. We contend that the current methods are significantly constrained by their reliance on full access to sensitive attributes [35], a condition that is often difficult to satisfy in real-world scenarios. Firstly, users have the entitlement, as per regulatory frameworks arXiv:2405.01063v1 [cs.IR] 2 May 2024 \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Tianhao Shi, et al. like the General Data Protection Regulation (GDPR) [32] of the European Union, to decline the disclosure of their sensitive data at any moment. For instance, only 17% of participants reported willing of sharing their income information in electronic commerce [1]. Moreover, many online retail and advertising recommendations can be accessed without registering or submitting personal information [33]. Consequently, a conundrum emerges: how can fair recommendation be obtained when only limited sensitive attributes are available [35]? To address this issue, a default choice is reconstructing the missing sensitive attributes of users from the available personal information such as historical interactions [25]. However, reconstruction errors are inevitable due to the inherent complexity and noise of user information in recommender systems. Moreover, striving for perfectly reconstructed attributes can potentially violate user privacy, raising ethical concerns and being constrained by legal limitations. For instance, GDPR [32] mandates obtaining user consent for modeling user profiles (including sensitive attributes). Consequently, a subset of users may not have their sensitive attributes reconstructed1. The aforementioned challenges underscore the significance of devising alternative approaches that are robust to the reconstruction errors of sensitive attributes. 
To this end, we propose a new framework to pursue fair recommendations under reconstructed sensitive attributes with errors, called Distributionally Robust Fair Optimization (DRFO). Instead of optimizing fairness over the reconstructed sensitive attributes, DRFO minimizes the worst-case unfairness over an ambiguity set of all potential distributions of missing sensitive attributes to account for the impact of reconstruction errors. Furthermore, for users who forbid the reconstruction of their sensitive attributes, DRFO can provide fair recommendations for them by considering a larger ambiguity set of distributions. We theoretically demonstrate that DRFO ensures recommendation fairness in the face of sensitive attribute reconstruction errors, even in the absence of such reconstruction. Extensive experiments on two datasets verify the effectiveness of our approaches. The main contributions are summarized as follows: • We propose a new framework for optimizing user-side fairness in recommender systems with limited sensitive attributes. • We provide theoretical evidence that our method can ensure fair recommendations in the face of sensitive attribute reconstruction errors, even in the absence of such reconstruction. • We conduct extensive experiments on two datasets, validating the rationality and effectiveness of the proposed framework. 2 PRELIMINARIES In this study, we aim to achieve fairness in recommender systems with limited sensitive attributes. We consider the widely used Demographic Parity (DP) as an exemplary fairness criterion to investigate this problem. In the following, we first introduce the notation used in this paper and then provide a brief overview of DP fairness. 2.1 Notation In this study, we use uppercase letters (e.g., $R$) to represent random variables, lowercase letters (e.g., $r$) for specific values of these variables, and calligraphic fonts (e.g., $\mathcal{R}$) to represent the sample space of the variable. Let $U$, $V$, $S$, and $R$ denote the user, item, user sensitive attribute, and rating, respectively. Let $\mathcal{D}$ denote the historical data. Each sample within $\mathcal{D}$ is denoted as $(u, v, s, r)$, where $u \in \mathcal{U}$, $v \in \mathcal{V}$, $r \in \mathcal{R}$, and $s \in \mathcal{S}$. In this work, we consider binary ratings, i.e., $\mathcal{R} = \{0, 1\}$. Additionally, we assume the sensitive feature to be binary but potentially unknown for a subset of users. We split $\mathcal{D}$ into two parts: $\mathcal{D}_k$ with known sensitive features and $\mathcal{D}_m$ with missing sensitive features, where $\mathcal{D} = \mathcal{D}_k \cup \mathcal{D}_m$ and $\mathcal{D}_k \cap \mathcal{D}_m = \emptyset$. For convenience, we denote the samples in $\mathcal{D}$ with the sensitive feature $S = s$ as $\mathcal{D}^{(s)}$, and similarly for $\mathcal{D}_k^{(s)}$ and $\mathcal{D}_m^{(s)}$. 2.2 Fairness Demographic Parity [5] is a widely studied fairness criterion in recommendation [16, 47]. DP requires that the model's rating prediction $\hat{R}$ be independent of the sensitive attribute $S$. A model achieving DP fairness would generate recommendations without relying on $S$, thereby satisfying legal requirements or user demands against discrimination in model output [16].
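The passage below quantifies DP with the MAD metric (Equation (1)) and folds it into the training objective (Equations (2) and (3)). For concreteness, here is a hedged NumPy sketch of both quantities; the toy predictions and loss value are invented for illustration and are not the paper's implementation:

```python
import numpy as np

def mad(preds, s):
    """Eq. (1): |E[R_hat | S=0] - E[R_hat | S=1]| over a batch of predictions."""
    preds, s = np.asarray(preds, float), np.asarray(s)
    return abs(preds[s == 0].mean() - preds[s == 1].mean())

def regularized_objective(rec_loss, preds, s, lam):
    """Eq. (2): recommendation loss plus a lambda-weighted MAD regularizer."""
    return rec_loss + lam * mad(preds, s)

# Toy batch: predicted ratings and binary sensitive attributes.
preds = np.array([0.9, 0.7, 0.4, 0.3, 0.8, 0.2])
s     = np.array([0,   0,   1,   1,   0,   1 ])
print(mad(preds, s))                                   # 0.5 -> far from DP
print(regularized_objective(0.35, preds, s, lam=1.0))  # 0.85
```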
We follow previous work [47] and quantify DP with the mean absolute difference (MAD) between the ratings of different groups: $\big| E[\hat{R} \mid S=0] - E[\hat{R} \mid S=1] \big|$, (1) where $E[\hat{R} \mid S=0]$ and $E[\hat{R} \mid S=1]$ denote the expectation of the prediction $\hat{R}$ over the groups with $S=0$ and $S=1$, respectively. A lower MAD indicates a higher degree of DP fairness. Fair learning. To achieve DP fairness, we could take the regularization-based method [41], which directly incorporates the MAD metric into the training objective. Formally, $\min_{\theta} L(\theta) + \lambda \big| E_{\mathcal{D}^{(0)}}[\hat{R}] - E_{\mathcal{D}^{(1)}}[\hat{R}] \big|$, (2) where $\lambda$ is a hyper-parameter controlling the strength of the fairness regularization term, $E_{\mathcal{D}^{(s)}}[\hat{R}]$ is the average predicted rating over $\mathcal{D}^{(s)}$, i.e., $E_{\mathcal{D}^{(s)}}[\hat{R}] = \frac{1}{|\mathcal{D}^{(s)}|} \sum_{(u,v) \in \mathcal{D}^{(s)}} \hat{r}_{u,v}$, and $L(\theta)$ is a recommendation loss (e.g., the binary cross-entropy loss [12]). Typically, the regularization can be transformed into a set of constraints that minimize the discrepancy between the average predictions of a specific group and the overall predictions: $\min_{\theta} L(\theta)$ s.t. $E_{\mathcal{D}^{(s)}}[\hat{R}] - E_{\mathcal{D}}[\hat{R}] = 0$, $s = 0, 1$, (3) where the constraint ensures that the expected predicted rating in $\mathcal{D}^{(s)}$ equals the expected predicted rating in the entire dataset $\mathcal{D}$. Preserving any single constraint in Equation (3) is sufficient to promote fairness in the binary-sensitive-attribute scenario, while preserving multiple constraints is intended for non-binary cases. 3 METHODOLOGY To build a fair recommender system that addresses the challenge of missing sensitive attributes among some users, a seemingly workable solution is to directly apply Fair Learning with Reconstructed Sensitive Attributes (FLrSA). In this section, we outline this solution and highlight its limitations. Subsequently, we introduce the proposed Distributionally Robust Fair Optimization to overcome the impact of reconstruction errors. Finally, we discuss the extension of DRFO to situations where certain users are reluctant to have their sensitive attributes reconstructed. 3.1 FLrSA To achieve fair recommendations in the presence of missing sensitive attributes, a seemingly workable solution involves reconstructing the missing attributes and subsequently applying fair learning methods based on the reconstructed sensitive attributes. Specifically, we can follow the two steps below: Step 1: Sensitive attribute reconstruction. Previous research has shown that user sensitive attributes can be reconstructed from available user information, such as historical interactions [33, 36] and zip codes [7]. Therefore, we can train a classifier over $\mathcal{D}_k$ to predict the missing user sensitive attributes in $\mathcal{D}_m$, as sketched below.
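As a concrete (hypothetical) instance of Step 1, one could fit any off-the-shelf classifier on interaction histories. The sketch below uses scikit-learn logistic regression on a purely synthetic interaction matrix; it is an illustrative stand-in, not the specific reconstruction model of [36], and with random toy data its accuracy is near chance by construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows = users, columns = items (1 = interacted).
n_users, n_items = 200, 50
X = (rng.random((n_users, n_items)) < 0.2).astype(float)
s = rng.integers(0, 2, size=n_users)     # true binary sensitive attribute
known = rng.random(n_users) < 0.5        # users who disclosed it (D_k)

# Step 1: train on users with known attributes, predict the rest (D_m).
clf = LogisticRegression(max_iter=1000).fit(X[known], s[known])
s_hat = clf.predict(X[~known])           # reconstructed attributes for D_m
err = (s_hat != s[~known]).mean()        # reconstruction error (unknown in practice)
print(f"reconstruction error on masked users: {err:.2f}")
```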
Let $\hat{S} \in \{0, 1\}$ denote the reconstructed sensitive attribute. Subsequently, we can incorporate the reconstructed sensitive attributes into $\mathcal{D}_m$ and generate a new dataset $\hat{\mathcal{D}}_m$. Step 2: Fair learning. Next, we perform fair learning over $\mathcal{D}_k$ and $\hat{\mathcal{D}}_m$ based on Equation (3), which is reformulated as: $\min_{\theta} L(\theta)$ s.t. $\eta_k^{(s)} E_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} E_{\hat{\mathcal{D}}_m^{(s)}}[\hat{R}] - E_{\mathcal{D}}[\hat{R}] = 0$, $s = 0, 1$, (4) where $\hat{\mathcal{D}}_m^{(s)}$ is the subset of $\hat{\mathcal{D}}_m$ with the reconstructed attribute $\hat{S} = s$, $E_{\mathcal{D}_k^{(s)}}[\hat{R}]$ is the average predicted rating over $\mathcal{D}_k^{(s)}$, and: $\eta_k^{(s)} = \frac{|\mathcal{D}_k^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_m^{(s)}|}$, $\eta_m^{(s)} = \frac{|\hat{\mathcal{D}}_m^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_m^{(s)}|}$. (5) Limitations. This method relies on the accuracy of sensitive attribute reconstruction. However, achieving an entirely accurate classifier is challenging due to the inherent complexity and noise of user information in recommender systems. This leads to reconstruction errors, which can compromise the fairness of recommendations for some users. Furthermore, this method relies on users' permission for sensitive attribute reconstruction, which may not be obtainable from all users due to legal restrictions. 3.2 DRFO Reconstruction errors significantly constrain the vanilla FLrSA, as they introduce a discrepancy between the reconstructed distribution and the unknown true distribution of sensitive attributes. Consequently, relying solely on the reconstructed distribution may compromise fairness performance. Nevertheless, the unknown true distribution lies within the proximity of the reconstructed distribution. By ensuring fairness in the vicinity of the reconstructed distribution, the model can achieve robust fairness for the unknown true distribution [34]. This inspires the development of DRFO, a novel approach to fairness with limited sensitive attributes. DRFO has two main parts: 1) building an ambiguity set which encompasses the unknown true distribution based on the reconstructed sensitive attributes, and 2) ensuring fairness within the entire ambiguity set using DRO. Figure 1 provides an overview of DRFO. [Figure 1: Illustration of FLrSA and DRFO for providing fair recommendations with limited sensitive attributes. After the reconstruction of unknown sensitive attributes, FLrSA directly applies fair learning with the reconstructed distribution. Conversely, DRFO builds an ambiguity set that encompasses the unknown true distribution and guarantees fairness across the entire ambiguity set.] For convenience of discussion, we assume in this subsection that all users grant permission for the reconstruction of sensitive attributes. The scenario where some users do not permit reconstruction due to privacy concerns will be discussed in Section 3.3. Building the ambiguity set. An ambiguity set is a set of distributions centered around the reconstructed distribution. We denote the ambiguity set as $B(\rho_s; \hat{Q}^{(s)}) = \{\tilde{Q}^{(s)} \mid dist(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s\}$, where $\hat{Q}^{(s)}$ denotes the reconstructed distribution, $dist(\cdot)$ denotes a distance metric, and $\rho_s$ denotes the robust radius. By setting an appropriate value of $\rho_s$, we can ensure that this set encompasses the unknown true distribution $Q^{(s)}$. In the context of facilitating fair learning, as expressed in Equation (4) involving $E_{\hat{\mathcal{D}}_m^{(s)}}[\hat{R}(U, V)]$, our method focuses on building an ambiguity set of joint distributions of $(U, V)$ conditioned on a specific sensitive attribute value $s$. Specifically, we assume that the distribution of $(U, V)$ conditioned on $S = s$ follows $Q^{(s)}$, and we represent the joint distribution of $(U, V)$ conditioned on the reconstructed sensitive attribute $\hat{S} = s$ as $\hat{Q}^{(s)}$. Robust fair learning. Robust learning solves an optimization problem that satisfies the fairness constraints over $B(\rho_s; \hat{Q}^{(s)})$: $\min_{\theta} L(\theta)$ s.t. $\eta_k^{(s)} E_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} E_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}] - E_{\mathcal{D}}[\hat{R}] = 0$, $\forall \tilde{Q}^{(s)} \in B(\rho_s; \hat{Q}^{(s)})$, $s = 0, 1$, (6) where $E_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}]$ denotes the expectation of $\hat{R}$ under the distribution $\tilde{Q}^{(s)}$. Here, we transform the fairness constraints in Equation (4) from relying solely on the reconstructed distribution to guaranteeing fairness across the entire ambiguity set (we do not directly solve the optimization problem under the equality constraints of Equation (6); instead, we convert it into the worst-case optimization problem expressed in Equation (11)). In this way, as long as the ambiguity set encompasses the unknown true distribution, we can achieve robust fairness. Apparently, the key of DRFO lies in 1) building an appropriate ambiguity set which encompasses the unknown true distribution, and 2) solving the optimization problem in Equation (6). Next, we elaborate on how to achieve both. 3.2.1 Building the Ambiguity Set. We now consider establishing an appropriate ambiguity set $B(\rho_s; \hat{Q}^{(s)})$ that encompasses the unknown true distribution $Q^{(s)}$.
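The treatment that follows instantiates $dist(\cdot,\cdot)$ as the Total Variation distance. As a quick hedged reference (not the authors' code), the empirical TV distance between two probability vectors and a membership check for the ambiguity set can be sketched as:

```python
import numpy as np

def tv_distance(p, q):
    """Empirical TV distance between probability vectors: 0.5 * ||p - q||_1."""
    return 0.5 * np.abs(np.asarray(p, float) - np.asarray(q, float)).sum()

def in_ambiguity_set(q_tilde, q_hat, rho, tol=1e-9):
    """Check q_tilde lies in B(rho; q_hat): a distribution within TV radius rho."""
    q_tilde = np.asarray(q_tilde, float)
    return ((q_tilde >= -tol).all()
            and abs(q_tilde.sum() - 1.0) < 1e-6
            and tv_distance(q_tilde, q_hat) <= rho + tol)

q_hat   = np.array([0.25, 0.25, 0.25, 0.25])      # reconstructed empirical weights
q_tilde = np.array([0.35, 0.25, 0.25, 0.15])      # a candidate distribution
print(tv_distance(q_tilde, q_hat))                # 0.1
print(in_ambiguity_set(q_tilde, q_hat, rho=0.1))  # True
```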
However, directly estimating the exact distance between $Q^{(s)}$ and the reconstructed distribution $\hat{Q}^{(s)}$ is impossible, as the true sensitive attributes are unavailable. Instead, we estimate an upper bound on the distance between them. Similar to previous work [34], using the Total Variation (TV) distance [21] ($TV \in [0, 1]$; Appendix A.1) as the distance metric, we can estimate an upper bound based on the error rate of sensitive attribute reconstruction. This is described in the following theorem. Theorem 1. Assuming that the reconstructed sensitive attributes $\hat{S}$ have the same prior distribution as the true sensitive attributes $S$, i.e., $P(\hat{S}) = P(S)$, the TV distance between $Q^{(s)}$ and $\hat{Q}^{(s)}$ is upper-bounded by the probability of incorrectly reconstructing the sensitive attributes, i.e., $TV(Q^{(s)}, \hat{Q}^{(s)}) \le P(S \ne \hat{S} \mid S = s)$. The proof is provided in Appendix A.1. This theorem suggests that, assuming the distribution of the reconstructed sensitive attributes $\hat{S}$ is identical to the true distribution of $S$ (if this assumption is violated, a more lenient TV distance upper bound is needed, as further discussed in Appendix A.1; in our experiments, the commendable accuracy in reconstructing sensitive attributes leads to a modest disparity between $P(S)$ and $P(\hat{S})$, so we treat the assumption as approximately holding), we can create the ambiguity set as follows: $B(\rho_s; \hat{Q}^{(s)}) = \{\tilde{Q}^{(s)} \mid TV(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s\}$, $\rho_s = P(S \ne \hat{S} \mid S = s)$, (7) where $\rho_s$ can be approximately estimated using the validation set (in practice, by assessing the difference between the validation and test sets and combining it with the error rate on the validation set, we can set an upper bound on the sensitive attribute reconstruction error, enabling the estimation of $\rho_s$), following existing works [34]. This ambiguity set encompasses the true distribution $Q^{(s)}$, meaning that we have successfully built the target ambiguity set from $\hat{Q}^{(s)}$ and $\rho_s$. 3.2.2 Robust Fair Learning. We next consider solving the optimization problem in Equation (6). Following [13, 28], we convert it into an empirical form (i.e., represent it using the empirical distribution), enabling us to optimize it in a data-driven manner. Meanwhile, to tackle the challenges posed by the complexity of solving the constrained optimization problem, we transform it into a solvable Lagrangian problem with the algorithm proposed in [27]. Empirical form. To solve Equation (6) with learning methods, the key step involves using $\hat{\mathcal{D}}_m$ to estimate $E_{(U,V) \sim \tilde{Q}^{(s)}}[\hat{R}]$. Denoting the empirical estimate by $E_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}]$, we obtain the empirical form of the optimization problem: $\min_{\theta} L(\theta)$ s.t. $\eta_k^{(s)} E_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} E_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] - E_{\mathcal{D}}[\hat{R}] = 0$, $\forall \tilde{Q}^{(s)} \in B(\rho_s; \hat{Q}^{(s)})$, $s = 0, 1$, (8) where $\hat{Q}^{(s)}$ and $\tilde{Q}^{(s)}$ are also converted into empirical distributions [4]: $\hat{Q}^{(s)} = \{\hat{q}_{u,v}^{(s)} \mid (u, v, \hat{s}) \in \hat{\mathcal{D}}_m\}$, where $\hat{q}_{u,v}^{(s)}$ denotes the probability weight of the sample $(u, v, \hat{s}) \in \hat{\mathcal{D}}_m$, with $\hat{q}_{u,v}^{(s)} = 1 / |\hat{\mathcal{D}}_m^{(s)}|$ if $\hat{s} = s$ and $\hat{q}_{u,v}^{(s)} = 0$ otherwise; similarly, $\tilde{Q}^{(s)} = \{\tilde{q}_{u,v}^{(s)} \mid (u, v, \hat{s}) \in \hat{\mathcal{D}}_m\}$. Formally, we have: • The empirical expectation of $\hat{R}$: $E_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] = \sum_{(u,v) \in \hat{\mathcal{D}}_m} \tilde{q}_{u,v}^{(s)} \cdot \hat{r}_{u,v}$. (9) • The empirical form of the ambiguity set: $B(\rho_s; \hat{Q}^{(s)}) = \{\tilde{Q}^{(s)} \in \mathbb{R}^{|\hat{\mathcal{D}}_m|} : \frac{1}{2} \sum_{(u,v) \in \hat{\mathcal{D}}_m} |\tilde{q}_{u,v}^{(s)} - \hat{q}_{u,v}^{(s)}| \le \rho_s, \ \sum_{(u,v) \in \hat{\mathcal{D}}_m} \tilde{q}_{u,v}^{(s)} = 1, \ \tilde{q}_{u,v}^{(s)} \ge 0\}$, (10) where $\frac{1}{2} \sum_{(u,v) \in \hat{\mathcal{D}}_m} |\tilde{q}_{u,v}^{(s)} - \hat{q}_{u,v}^{(s)}| \le \rho_s$ is the empirical implementation of the constraint $TV(\tilde{Q}^{(s)}, \hat{Q}^{(s)}) \le \rho_s$ (Equation (14) in Appendix A.1), and the remaining constraints ensure that $\tilde{Q}^{(s)}$ is a valid probability distribution. Algorithm 1: DRFO. Input: dataset $\mathcal{D}_k$ with known sensitive attributes, dataset $\mathcal{D}_m$ with unknown sensitive attributes, hyper-parameters $\lambda_s$, learning rate $\alpha_\theta$ for $\theta$, and learning rate $\alpha_q$ for $\tilde{Q}^{(s)}$ ($s = 0, 1$). Steps: (1) Randomly split $\mathcal{D}_k$ into training and validation sets, and train a sensitive attribute classifier. (2) Reconstruct the sensitive attributes $\hat{S}$ for $\mathcal{D}_m$ to obtain $\hat{\mathcal{D}}_m$; for each sample $(u, v) \in \hat{\mathcal{D}}_m$, compute $\hat{q}_{u,v}^{(s)}$ ($= 1 / |\hat{\mathcal{D}}_m^{(s)}|$ if $\hat{S} = s$, else $0$), obtaining $\hat{Q}^{(s)} = \{\hat{q}_{u,v}^{(s)} \mid (u, v) \in \hat{\mathcal{D}}_m\}$. (3) Estimate $\rho_s \approx P(S \ne \hat{S} \mid S = s)$ on the validation set held out from $\mathcal{D}_k$. (4) For $t = 1, \dots, T$: update $\theta$ by gradient descent, $\theta \leftarrow \theta - \alpha_\theta \nabla_\theta L(\theta) - \sum_{s=0}^{1} \lambda_s \alpha_\theta \nabla_\theta L_s(\theta, \tilde{Q}^{(s)})$; then, for $s = 0, 1$: update $\tilde{Q}^{(s)}$ by gradient ascent, $\tilde{Q}^{(s)} \leftarrow \tilde{Q}^{(s)} + \lambda_s \alpha_q \nabla_{\tilde{Q}^{(s)}} L_s(\theta, \tilde{Q}^{(s)})$, and project $\tilde{Q}^{(s)}$ onto $B(\rho_s; \hat{Q}^{(s)})$ so that it stays inside the set.
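The min-max reformulation that follows optimizes this inner worst case with projected gradient ascent (Algorithm 1, step 4). For intuition, note that for a fixed model the objective is linear in $\tilde{q}$, so the worst case over the TV ball has a simple closed form: move up to $\rho$ probability mass from the least extreme samples onto the most extreme one. The NumPy sketch below is an illustrative check under that linearity observation, not the paper's implementation:

```python
import numpy as np

def shifted_mean(r_hat, q_hat, rho, direction=+1):
    """max (direction=+1) or min (direction=-1) of sum_i q_i * r_hat_i over
    { q : q >= 0, sum(q) = 1, 0.5 * ||q - q_hat||_1 <= rho }.
    The linear objective is optimized by moving up to rho mass onto the best
    (or worst) coordinate, taken from the opposite end of the ranking first."""
    r = direction * np.asarray(r_hat, float)
    q = np.asarray(q_hat, float).copy()
    best = int(np.argmax(r))
    budget = rho
    for i in np.argsort(r):              # donate mass from worst coordinates first
        if i == best or budget <= 0:
            continue
        take = min(q[i], budget)
        q[i] -= take
        q[best] += take
        budget -= take
    return float(np.dot(q, np.asarray(r_hat, float)))

r_hat = np.array([0.2, 0.8, 0.5, 0.9])   # predicted ratings on D_m (toy values)
q_hat = np.full(4, 0.25)                 # reconstructed empirical weights
anchor = 0.55                            # stand-in for the overall mean E_D[R_hat]
worst = max(abs(shifted_mean(r_hat, q_hat, 0.1, +1) - anchor),
            abs(shifted_mean(r_hat, q_hat, 0.1, -1) - anchor))
print(worst)                             # worst-case |group mean - anchor| in the ball
```

With $\rho = 1$ the ball is the whole simplex, which is exactly the broad ambiguity set used in Section 3.3 for users who refuse reconstruction.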
Empirical Lagrangian form. The optimization problem in Equation (8) is difficult to solve because it involves fairness constraints for all potential distributions $\tilde{Q}^{(s)}$ in $B(\rho_s; \hat{Q}^{(s)})$. To overcome this challenge, we convert the fairness constraints into regularizers and optimize the worst-case unfairness within $B(\rho_s; \hat{Q}^{(s)})$ to ensure fairness for the entire set [11]. Finally, we reformulate Equation (8) as the min-max optimization problem $\min_{\theta} \max_{\tilde{Q}^{(s)} \in B(\rho_s; \hat{Q}^{(s)}), \, s = 0, 1} \ L(\theta) + \sum_{s} \lambda_s L_s(\theta, \tilde{Q}^{(s)})$, (11) where $\lambda_s$ is a hyper-parameter controlling the strength of the fairness regularizer $L_s(\theta, \tilde{Q}^{(s)})$, obtained via the Lagrangian trick: $L_s(\theta, \tilde{Q}^{(s)}) = \big| \eta_k^{(s)} E_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_m^{(s)} E_{\tilde{Q}^{(s)}, \hat{\mathcal{D}}_m}[\hat{R}] - E_{\mathcal{D}}[\hat{R}] \big|$. (12) Here, $\tilde{Q}^{(s)}$ becomes a set of learnable parameters; we update it within $B(\rho_s; \hat{Q}^{(s)})$ via the 'max' optimization to find the worst case for fairness. Learning algorithm. To solve the optimization problem in Equation (11), we use the algorithm in [27] for solving DRO, summarized in Algorithm 1. At each iteration, we first update the model parameters $\theta$ to minimize the total loss in Equation (11). Then, for each sensitive attribute value $s \in \{0, 1\}$, we update $\tilde{Q}^{(s)}$ to maximize the fairness-related loss $L_s(\theta, \tilde{Q}^{(s)})$ in Equation (11) and project the updated result onto $B(\rho_s; \hat{Q}^{(s)})$ (preventing $\tilde{Q}^{(s)}$ from leaving the set) to obtain the final $\tilde{Q}^{(s)}$ for that iteration. 3.3 Discussion We now discuss the extension of DRFO to situations where certain users are not willing to have their sensitive attributes reconstructed. The main challenge lies in the inability to reconstruct the sensitive attributes that DRFO needs for building its ambiguity set. However, even starting from a random distribution, we can build a broad ambiguity set that covers the true distribution of these users by using a large robust radius. Thereafter, by optimizing the worst-case unfairness over this broad ambiguity set, we can still achieve fairness. Let $\mathcal{D}_b$ denote the interaction data of such users. To implement this approach, we first randomly assign sensitive attributes to the users in $\mathcal{D}_b$, serving as the reconstructed sensitive attribute $\hat{S}$ in DRFO and yielding $\hat{\mathcal{D}}_b$. Then, we define the empirical $\hat{Q}_b^{(s)}$, $\tilde{Q}_b^{(s)}$, and $B(\rho'_s; \hat{Q}_b^{(s)})$ for $\hat{\mathcal{D}}_b$, analogously to those of $\hat{\mathcal{D}}_m$.
To build the broad ambiguity set, we set the robust radius $\rho'_s$ to 1. For users with missing sensitive attributes who permit reconstruction, we denote their interactions by $\mathcal{D}_r$ ($\mathcal{D}_r \cup \mathcal{D}_b = \mathcal{D}_m$, $\mathcal{D}_r \cap \mathcal{D}_b = \emptyset$). We then define $\hat{\mathcal{D}}_r$, $\hat{Q}_r^{(s)}$, $\tilde{Q}_r^{(s)}$, and $B(\rho_s; \hat{Q}_r^{(s)})$ for $\mathcal{D}_r$ in the same way as for $\mathcal{D}_m$ in Section 3.2. Finally, we can optimize fairness for $\hat{\mathcal{D}}_b$ and $\hat{\mathcal{D}}_r$ with DRO, analogously to $\hat{\mathcal{D}}_m$. Adding these optimization parts to Equation (8), we obtain the final optimization problem: $\min_{\theta} \max_{\tilde{Q}_r^{(s)} \in B(\rho_s; \hat{Q}_r^{(s)}), \ \tilde{Q}_b^{(s)} \in B(\rho'_s; \hat{Q}_b^{(s)}), \ s = 0, 1} \ L(\theta) + \sum_{s=0}^{1} \lambda_s \big| \eta_k^{(s)} E_{\mathcal{D}_k^{(s)}}[\hat{R}] + \eta_r^{(s)} E_{\tilde{Q}_r^{(s)}, \hat{\mathcal{D}}_r}[\hat{R}] + \eta_b^{(s)} E_{\tilde{Q}_b^{(s)}, \hat{\mathcal{D}}_b}[\hat{R}] - E_{\mathcal{D}}[\hat{R}] \big|$, (13) where $E_{\tilde{Q}_r^{(s)}, \hat{\mathcal{D}}_r}[\hat{R}]$ and $E_{\tilde{Q}_b^{(s)}, \hat{\mathcal{D}}_b}[\hat{R}]$ represent the empirical expectations of $\hat{R}$ under $\tilde{Q}_r^{(s)}$ and $\tilde{Q}_b^{(s)}$, respectively, computed as in Equation (9), and $\eta_k^{(s)} = \frac{|\mathcal{D}_k^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$, $\eta_r^{(s)} = \frac{|\hat{\mathcal{D}}_r^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$, $\eta_b^{(s)} = \frac{|\hat{\mathcal{D}}_b^{(s)}|}{|\mathcal{D}_k^{(s)}| + |\hat{\mathcal{D}}_r^{(s)}| + |\hat{\mathcal{D}}_b^{(s)}|}$. The learning algorithm can be formulated following the structure outlined in Algorithm 1. 4 EXPERIMENTS In this section, we conduct extensive experiments to answer the following research questions: RQ1: Can DRFO provide fair recommendations with limited sensitive attributes? RQ2: How do reconstruction errors affect the effectiveness of DRFO in ensuring fairness? What is the performance of DRFO in terms of fairness on both users with known sensitive attributes and those with unknown sensitive attributes? RQ3: Can DRFO ensure fairness when some users prohibit the reconstruction of their sensitive attributes? 4.1 Experimental Settings 4.1.1 Datasets. We conduct experiments on two open benchmark datasets: MovieLens 1M (https://grouplens.org/datasets/movielens/) [10] and Tenrec (https://static.qblv.qq.com/qblv/h5/algo-frontend/tenrec_dataset.html) [42]. • MovieLens 1M (ML-1M): This is a movie recommendation dataset including user ratings of movies on a scale of 1-5 and sensitive user attributes such as 'gender'. We select 'gender' as the sensitive attribute and transform the rating values into binary labels using a threshold of 3, where ratings greater than three are labeled as positive ('1') and the rest as negative ('0'). • Tenrec: This dataset contains user feedback on articles or videos alongside anonymized user information from Tencent's recommendation platforms. We focus on the 'QB-video' sub-dataset for our experiments, using the 'like' feedback as the rating label and considering 'gender' as the sensitive attribute. In this work, we adopt the approach proposed in [36] to reconstruct sensitive attributes using only user historical interactions. To ensure the effectiveness of the reconstruction, it is crucial for users to have a sufficient number of historical interactions in the two datasets. Therefore, we apply 50-core filtering [26, 38] to select users with more than 50 interactions, while performing 10-core filtering for items. Table 1 presents the statistics of the processed datasets after filtering. We partition the datasets into training, validation, and testing sets with a ratio of 0.7:0.15:0.15. Table 1: Statistics of the evaluation datasets. ML-1M: 3,244 items; 1,153 users (S=0) and 3,144 users (S=1); 228,191 interactions (S=0) and 713,590 interactions (S=1); mean ratings 0.5866 (S=0) and 0.5661 (S=1). Tenrec: 14,539 items; 3,108 users (S=0) and 2,299 users (S=1); 308,217 interactions (S=0) and 337,958 interactions (S=1); mean ratings 0.4849 (S=0) and 0.4676 (S=1). 4.1.2 Compared Methods. Achieving fair recommendations with limited sensitive attributes is a novel research problem, and to our knowledge no existing method is specifically designed to address it in recommender systems. To establish baselines, we select two intuitive methods (RegK and FLrSA) and a method proposed in another field (CGL) for solving this problem. – RegK [41] is a regularization-based approach that applies fair regularization only to users with known sensitive attributes. – FLrSA [41] first reconstructs missing sensitive attributes and then applies the fairness regularization method using both the reconstructed and the known sensitive attributes. – CGL [14] is a representative method for fairness with limited sensitive attributes in computer vision. It reconstructs sensitive attributes and addresses low-confidence reconstructions via random replacement. Subsequently, it employs a regularization-based approach using the obtained attributes. We adapt it to recommender systems. Although RegK and FLrSA are intuitively designed, similar methods have been employed in other fields to study fairness with limited sensitive attributes [14]. Additionally, we include two fairness-unaware methods as reference points. – Basic MF [18] refers to the basic Matrix Factorization (MF) model, trained without addressing fairness issues. – Oracle [41] assumes knowledge of all users' sensitive attributes and employs the regularization-based method to achieve fair recommendations. Its results serve as an upper bound on fairness. To ensure a fair comparison, we implement the fairness regularization term based on Equation (2) to pursue demographic parity fairness, using the classical Matrix Factorization model as the backbone recommendation model for all methods. 4.1.3 Evaluation Protocol. In our experiments, we simulate scenarios involving unknown sensitive attributes by randomly retaining the sensitive attribute for a subset of users while masking it for the others (a small sketch of this protocol appears below). The retention ratio, varying in {0.1, 0.3, 0.5, 0.7, 0.9}, determines the proportion of users whose sensitive attributes are preserved during both the training and validation phases. During testing, however, the sensitive attributes of all users are accessible in order to evaluate fairness. We use the MAD metric in Equation (1) to measure fairness (DP) and the root mean squared error (RMSE) to measure recommendation performance. Smaller values of both metrics indicate better performance in terms of fairness or recommendation. 4.1.4 Implementation Details. For a fair comparison, we optimize all models using the Adam optimizer [17] with a default embedding size of 32. Before applying the fairness methods, we pre-train the MF model using grid search to determine the optimal learning rate from {1e-2, 1e-3} and the best weight decay from {1e-1, 1e-2, ..., 1e-7}. For the fairness models, we initialize their backbone MF model with the pre-trained one and then fine-tune them with a fixed learning rate of 1e-3. We tune the fairness regularization coefficient in the range {0.01, 0.05, 0.1, 0.5, 1, 5, 10} for the baselines. For DRFO, we set the hyper-parameter that controls the strength of the fairness constraints to 10 for ML-1M and 1 for Tenrec, and set the learning rate $\alpha_q$ for updating $\tilde{Q}^{(s)}$ to 1e-3 for ML-1M and 1e-2 for Tenrec. Table 2: Comparison between baselines and DRFO on ML-1M and Tenrec w.r.t. the fairness metric DP and the recommendation performance metric RMSE under varying known-sensitive-attribute ratios (10%/30%/50%/70%/90%). Lower DP and RMSE indicate better results. ML-1M, DP: Basic MF 0.0225 at every ratio; Oracle 0.0009 at every ratio; RegK 0.0159/0.0132/0.0058/0.0014/0.0013; FLrSA 0.0095/0.0119/0.0038/0.0035/0.0003; CGL 0.0082/0.0145/0.0056/0.0035/0.0005; DRFO 0.0034/0.0042/0.0011/0.0013/0.0006. ML-1M, RMSE: Basic MF 0.4147 at every ratio; Oracle 0.4148 at every ratio; RegK 0.4144/0.4147/0.4147/0.4150/0.4149; FLrSA 0.4147/0.4147/0.4148/0.4151/0.4147; CGL 0.4147/0.4144/0.4148/0.4150/0.4149; DRFO 0.4201/0.4201/0.4191/0.4223/0.4198. Tenrec, DP: Basic MF 0.0068 at every ratio; Oracle 0.0001 at every ratio; RegK 0.0053/0.0052/0.0053/0.0040/0.0051; FLrSA 0.0073/0.0073/0.0073/0.0032/0.0013; CGL 0.0073/0.0073/0.0074/0.0029/0.0009; DRFO 0.0019/0.0018/0.0013/0.0018/0.0009. Tenrec, RMSE: Basic MF 0.4503 at every ratio; Oracle 0.4504 at every ratio; RegK 0.4558/0.4520/0.4530/0.4500/0.4500; FLrSA 0.4503/0.4503/0.4503/0.4597/0.4594; CGL 0.4503/0.4503/0.4503/0.4518/0.4596; DRFO 0.4590/0.4575/0.4565/0.4595/0.4596.
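Before the implementation details continue, here is a hedged sketch of the masking protocol from Section 4.1.3, combined with the attribute flipping later used in Section 4.3.1 to simulate reconstruction errors; the function and variable names are illustrative and not from the released code:

```python
import numpy as np

rng = np.random.default_rng(42)

def mask_and_flip(s_true, keep_ratio, flip_ratio=0.0):
    """Return (s_obs, recon, known): keep `keep_ratio` of attributes as known,
    mark the rest unknown, and flip `flip_ratio` of the masked ones to mimic
    reconstruction errors (binary attributes)."""
    n = len(s_true)
    known = rng.random(n) < keep_ratio
    s_obs = np.where(known, s_true, -1)      # -1 marks "unknown" during training
    recon = s_true.copy()                    # a perfect reconstructor ...
    flip = (~known) & (rng.random(n) < flip_ratio)
    recon[flip] ^= 1                         # ... corrupted at rate flip_ratio
    return s_obs, recon, known

s_true = rng.integers(0, 2, size=10)
s_obs, recon, known = mask_and_flip(s_true, keep_ratio=0.3, flip_ratio=0.2)
print(s_true, s_obs, recon, sep="\n")
```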
Additionally, for methods that involve reconstructing unknown sensitive attributes, we use the same classifier proposed in [36], trained to fit the known sensitive attributes from historical user interactions. To ensure a fair comparison, we select hyper-parameters that achieve the best fairness (measured by DP) while maintaining at least 98% of the best recommendation performance (measured by RMSE) achieved by the basic MF on the validation set. This approach is commonly used in previous studies to balance the trade-off between fairness and recommendation performance [15, 35]. We release our code at: https://github.com/TianhaoShi2001/DRFO. 4.2 Overall Performance (RQ1) We first evaluate the methods when all users' sensitive attributes are allowed to be reconstructed. We summarize the results in Table 2 for varying proportions of users with known sensitive attributes. The corresponding fairness curves for each method are shown in Figure 2 to illustrate the impact of the retention ratio. [Figure 2: Fairness comparison between baselines and DRFO on ML-1M and Tenrec for varying known sensitive attribute ratios. Lower DP values indicate better fairness.] From the table and figure, we have drawn the following observations: • The Basic MF method exhibits notably inferior fairness compared to regularization-based methods (e.g., FLrSA) with a substantial portion (\u226590%) of known sensitive attributes, particularly when compared to the Oracle model. This emphasizes the effectiveness of regularization-based approaches in enhancing recommendation fairness. However, as the proportion of known sensitive attributes decreases (\u226450%), the fairness performance of regularization-based methods declines rapidly, highlighting the importance of addressing the limited-sensitive-attribute issue for improving fairness. • When the proportion of known sensitive attributes is small (\u226450%), FLrSA successfully outperforms RegK in fairness on ML-1M but fails to do so on Tenrec. This can be attributed to the fact that reconstructing sensitive attributes for Tenrec is more difficult (as supported by the lower reconstruction accuracy (AUC) observed on Tenrec), and FLrSA thus suffers more reconstruction errors, invalidating the vanilla reconstruction-based method. [Figure 3: Fairness performance under different levels of reconstruction errors for sensitive attributes, at 30% and 50% known sensitive attributes, on ML-1M and Tenrec.]
[Figure 2: Fairness comparison between baselines and DRFO on ML-1M and Tenrec for varying known sensitive attribute ratios. Lower DP values indicate better fairness.]

4.2 Overall Performance (RQ1)
We first evaluate the methods when all users' sensitive attributes are allowed to be reconstructed. We summarize the results in Table 2 for varying proportions of users with known sensitive attributes. The corresponding fairness curves for each method are shown in Figure 2 to illustrate the impact of the retention ratio. From the table and figure, we draw the following observations:
• The basic MF method exhibits notably inferior fairness compared to the regularization-based methods (e.g., FLrSA) when a substantial portion (≥90%) of sensitive attributes is known, particularly when compared to the Oracle model. This emphasizes the effectiveness of regularization-based approaches in enhancing recommendation fairness. However, as the proportion of known sensitive attributes decreases (≤50%), the fairness performance of the regularization-based methods declines rapidly, highlighting the importance of addressing limited sensitive attributes.
• When the proportion of known sensitive attributes is small (≤50%), FLrSA outperforms RegK in fairness on ML-1M but fails to do so on Tenrec. This can be attributed to the fact that reconstructing sensitive attributes for Tenrec is more difficult [fn 7]; FLrSA thus suffers more reconstruction errors, which invalidate the vanilla reconstruction-based approach. These findings affirm the significant impact of reconstruction errors on the efficacy of reconstruction-based approaches to improving fairness.
Footnote 7: This is supported by the lower reconstruction accuracy (AUC) observed on Tenrec.
• Despite taking reconstruction errors into consideration, CGL performs similarly to FLrSA in fairness. This may be because CGL randomly assigns sensitive attributes to samples with low reconstruction confidence, which does not ensure that the reconstruction error problem is resolved.
• DRFO consistently achieves a higher level of fairness than all baselines except Oracle, even when the proportion of users with known sensitive attributes is low. This confirms its effectiveness in handling reconstruction errors to achieve fairer recommendations. The superiority of the approach can be attributed to its DRO-based fair optimization, which minimizes the worst-case unfairness over a distribution set so as to achieve fairness for the true sensitive attributes contained in the set.
• DRFO achieves slightly lower recommendation performance than the baselines due to its aggressive pursuit of fairness through minimizing the worst-case unfairness across potential distributions. Nevertheless, our early-stopping strategy selects the fairest model within a maximum 2% drop in validation RMSE, ensuring a similarly low drop (around 2%) in testing RMSE for all methods. Despite this modest sacrifice in recommendation performance, DRFO improves fairness by over 100% in many cases, particularly when many users lack sensitive attributes.

4.3 In-depth Analyses (RQ2)
We conduct further experiments to analyze the influence of reconstruction errors on the different methods and to study the fairness performance for different groups of users.

4.3.1 The Effect of Reconstruction Errors. In this subsection, we directly investigate the impact of reconstruction errors on the fairness achieved by DRFO and the baselines. We set the ratio of users with known sensitive attributes to either 30% or 50%, and for the remaining users we randomly flip the true sensitive attributes of some users to form the reconstructed sensitive attributes, thereby introducing reconstruction errors. The flipping ratio varies from 10% to 50%, allowing us to simulate different levels of reconstruction error [fn 8]. We compare the fairness of RegK, FLrSA, and DRFO under different reconstruction-error levels; CGL is excluded because it requires reconstruction confidence scores, which this flip-based simulation does not provide.
Footnote 8: A 50% reconstruction error is significant, akin to random guessing. We flip attributes for males and females at a 1:1 ratio to keep the gender distribution unchanged.
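A minimal sketch of this error-injection protocol (ours, not the released code):

```python
import numpy as np

def inject_reconstruction_errors(true_attrs: np.ndarray, flip_ratio: float,
                                 rng: np.random.Generator) -> np.ndarray:
    """Simulate reconstructed sensitive attributes by flipping `flip_ratio` of
    the true binary attributes. Equal counts are flipped in each group (1:1),
    keeping the overall attribute distribution unchanged; assumes both groups
    are large enough for the requested number of flips."""
    recon = true_attrs.copy()
    flips_per_group = int(flip_ratio * len(true_attrs)) // 2
    for s in (0, 1):
        idx = rng.choice(np.flatnonzero(true_attrs == s),
                         size=flips_per_group, replace=False)
        recon[idx] = 1 - s
    return recon
```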
The resulting fairness comparison is presented in Figure 3; we omit the recommendation performance, as the observed drop is limited to approximately 2% (we also omit these results in the following experiments for the same reason).

[Figure 3: Fairness performance under different levels of reconstruction errors for sensitive attributes (ML-1M and Tenrec, with 30% and 50% known sensitive attributes).]

Based on the figure, we make the following observations:
• As the reconstruction error increases, both DRFO and FLrSA encounter a decline in fairness. Nevertheless, DRFO's smoother curve and consistent superiority over RegK highlight its robustness to reconstruction errors. The deterioration in DRFO's fairness can be explained by Equation (7): a larger reconstruction error causes DRFO to select a larger ambiguity set, which intensifies the optimization challenge of achieving fairness.
• Upon surpassing a 20% reconstruction error, FLrSA no longer surpasses RegK in fairness. Even with just a 10% reconstruction error, FLrSA fails to match Oracle in fairness. These results emphasize the necessity of addressing reconstruction errors to uphold the validity of the reconstruction-based method.

4.3.2 Case Study. We next investigate whether our DRFO ensures fairness both for users with known sensitive attributes and for users with unknown sensitive attributes. To this end, we propose a metric that measures the absolute difference between a specific group's average predicted rating and the overall average rating, similar to MAD in Equation (1); larger values of this metric indicate more unfairness. We compute this metric for four groups: users with $S = 1$ whose attribute is unknown, users with $S = 0$ whose attribute is unknown, users with known $S = 1$, and users with known $S = 0$.

[Figure 4: Absolute difference of the average predicted scores of different groups from the global average prediction. A higher difference means more unfairness. 'K' stands for 'known', 'U' stands for 'unknown'; 'S=0 (K)' denotes the users with a known sensitive attribute of 0, and similarly for the others.]

We summarize the results of the compared methods in Figure 4, where 30% of users have known sensitive attributes. In Figure 4, the baselines reveal significant absolute differences between group averages and the global average, particularly noticeable for users with unknown sensitive attributes, implying higher unfairness. These results highlight the baselines' inefficacy in achieving fair recommendations, especially for users with unknown attributes. In contrast, our proposed DRFO approach exhibits small unfairness for all groups. Furthermore, the unfairness of the groups with known and with unknown sensitive attributes is comparable, indicating that our method successfully overcomes the challenge of limited sensitive attributes and provides fair recommendations for users both with known and with unknown sensitive attributes.
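The per-group metric used in this case study can be sketched as follows (our code; names are illustrative):

```python
import numpy as np

def group_deviations(pred: np.ndarray, attrs: np.ndarray,
                     known: np.ndarray) -> dict:
    """For the four groups of Figure 4 -- (S=0/1) x (known/unknown) -- return
    the absolute difference between the group's average predicted rating and
    the overall average rating; larger values indicate more unfairness."""
    overall = pred.mean()
    return {
        (f"S={s}", "K" if k else "U"):
            abs(float(pred[(attrs == s) & (known == k)].mean() - overall))
        for s in (0, 1) for k in (True, False)
    }
```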
4.4 Fairness for Users Refusing to Reconstruct Sensitive Attributes (RQ3)
To delve deeper into DRFO's ability to ensure fairness for users who refuse sensitive-attribute reconstruction due to privacy concerns, we perform supplementary experiments. We explore scenarios where a fraction of users (30% or 50%) possess known sensitive attributes. Among the remaining users, we randomly designate a subset, varying from 0% to 100%, as individuals permitting sensitive-attribute reconstruction, while the others opt out of it. To accommodate non-reconstructable sensitive attributes, we adapt the baselines FLrSA and CGL: for FLrSA, the fairness constraints are omitted for users not permitting sensitive-attribute reconstruction; in the case of CGL, sensitive attributes are randomly assigned to users disallowing reconstruction, mimicking its low-confidence strategy for reconstructed sensitive attributes. A summary of the fairness performance of all compared methods is presented in Figure 5.

[Figure 5: Fairness results in scenarios where a portion of the users with unknown sensitive attributes does not allow reconstruction of their attributes.]

Figure 5 shows the baselines' failure to ensure fairness when some users do not grant sensitive-attribute reconstruction. Conversely, DRFO maintains relatively fair results. This validates that DRFO improves fairness without full sensitive-attribute reconstruction, by employing a sufficiently large ambiguity set to encompass the potential attributes and minimizing the worst-case unfairness within that set.

5 RELATED WORK

5.1 Fairness in Recommendation
Fairness research in recommender systems employs two primary paradigms: individual fairness [39, 45], which treats similar individuals similarly, and group fairness [16, 43], which ensures equitable treatment of different groups. Our study specifically focuses on user-side fairness in recommendation at the group level, using Demographic Parity [2, 5, 16] as a representative criterion. Existing approaches to promoting fairness in recommendations can be broadly categorized into three types [23, 35]: fair pre-processing, fair learning, and fair adjustment. Fair pre-processing methods mitigate bias in the data by resampling [6] or adding antidote data [30]. Fair learning methods modify optimization targets to learn fair recommendation models, either through fairness criteria as regularizers [16, 41, 47] or through adversarial learning [39, 45]. Fair adjustment methods [22, 40] reorder the recommendation results to attain a fairer outcome. However, these methods typically require full access to sensitive attributes, which may not be feasible in real-world situations. In contrast, our work tackles the challenge of addressing unfairness when only limited sensitive attributes are available.

5.2 Fairness with Missing Sensitive Attributes
Fair recommendation research has paid less attention to fairness when some sensitive attributes are missing, but progress has been made in other fairness-focused domains. Some methods are specifically designed to enhance Rawlsian Fairness [31] using DRO [11] or Adversarial Reweighting Learning [19], but they cannot be extended to other fairness metrics like DP. Another direction utilizes observable attributes (e.g., zip code) as proxies for sensitive attributes [9, 46], which differs from our method of reconstructing sensitive attributes; the effectiveness of these approaches relies on the strength of the correlation between the sensitive attribute and the observable non-sensitive one. Furthermore, some works enhance fairness by reconstructing missing sensitive attributes [3, 14] or sensitive representations [8] and leveraging this reconstructed information. However, they may have limited effectiveness due to reconstruction errors. In contrast, our DRO-based fair learning method ensures fairness in the presence of reconstruction errors. Within recommender systems, FairLISA [45] first studied the problem of missing sensitive attributes. It utilizes a discriminator trained with the known attributes to remove sensitive information from the embeddings of users whose sensitive attributes are unavailable.
Unlike FairLISA, our method employs the DRO paradigm to offer robustly fair recommendations for users with missing sensitive attributes. Additionally, FairLISA and our work explore the fairness of limited sensitive attributes via different fairness metrics [35]: while FairLISA focuses on removing sensitive information from user embeddings, our focus is on inter-group metric differences.

5.3 Distributionally Robust Optimization
Distributionally Robust Optimization (DRO) [27, 29] is a well-known framework for handling uncertainty. It seeks a solution that performs well under all possible distributions within a defined set by optimizing for the worst-case expected loss [27, 29]. Previous studies have utilized DRO to investigate fairness. Hashimoto et al. [11] employ DRO to improve the performance of groups with high classification errors, and Wen et al. [37] use DRO to improve the worst-case user experience in recommendations. These approaches focus on Rawlsian Fairness [31], which cannot be extended to other fairness criteria (e.g., DP). RobFair [37] utilizes DRO to provide robust fair recommendations, overcoming the challenge of user preference shift, which may compromise the fairness achieved during the training phase. Unlike RobFair, our work investigates fair recommendations when some sensitive attributes are missing and uses DRO to handle the uncertainty over the missing sensitive attributes. Additionally, Wang et al. [34] introduce two methods that utilize DRO to learn fair classifiers with noisy sensitive attributes, which is the work most closely related to ours. In contrast, our work considers scenarios where sensitive attributes cannot be reconstructed and discusses how to estimate the upper bound of the TV distance when the assumption in Theorem 1 is slightly violated.

6 CONCLUSION
In this study, we aim to enhance fairness in recommender systems with limited sensitive attributes. We point out that directly reconstructing sensitive attributes may suffer from reconstruction errors and raise privacy concerns. To overcome these challenges, we propose a DRO-based fair learning framework. It builds an ambiguity set based on the reconstructed sensitive attributes of users who grant reconstruction, and subsequently optimizes the worst-case unfairness over the entire set. We provide theoretical evidence that our method builds an appropriate ambiguity set that encompasses the unknown true distribution, thus providing robustly fair recommendations with limited sensitive attributes. Additionally, extensive experiments on two real-world datasets confirm the efficacy of our methods. In the future, we plan to extend our approach to address fairness for non-binary sensitive attributes. Furthermore, our current work only considers DP as the fairness objective, and we aim to extend the approach to other group fairness metrics.

A APPENDIX

A.1 Total Variation Distance
In this section, we first present an introduction to the Total Variation (TV) distance, including its definition and some basic properties. Subsequently, we prove Theorem 2, from which Theorem 1 can be deduced. Afterward, we discuss the situation in which the assumptions of Theorem 2 are not satisfied.
Definition A.1 (Total Variation Distance). The total variation distance between probability measures $P$ and $Q$ on a measurable space $(\Omega, \mathcal{F})$ is defined as the supremum of the absolute difference between the probabilities assigned to a measurable event $A \in \mathcal{F}$ by $P$ and $Q$. If $\Omega$ is countable, the total variation distance can also be expressed in terms of the $L_1$-norm [21] as follows:

$$TV(P, Q) = \sup_{A \in \mathcal{F}} |P(A) - Q(A)| = \tfrac{1}{2}\,\|P - Q\|_1. \tag{14}$$

Theorem 2. Given a measurable event $A$, and assuming $P(S = s) = P(\hat{S} = s)$ for a given $s \in \{0, 1\}$, the total variation distance between the two conditional probabilities $P(A \mid S = s)$ and $P(A \mid \hat{S} = s)$ is bounded by the probability that the sensitive attribute is incorrectly reconstructed, i.e., $TV\big(P(A \mid S = s), P(A \mid \hat{S} = s)\big) \le P(S \ne \hat{S} \mid S = s)$.

Proof. Assuming $P(S = s) = P(\hat{S} = s)$, the following equation holds:

$$P(\hat{S} = s \mid S = s) = \frac{P(\hat{S} = s, S = s)}{P(S = s)} = \frac{P(S = s, \hat{S} = s)}{P(\hat{S} = s)} = P(S = s \mid \hat{S} = s),$$

and we can likewise deduce $P(\hat{S} \ne s \mid S = s) = P(S \ne s \mid \hat{S} = s)$. Then, given a measurable event $A$ and the two conditional probabilities $P(A \mid S = s)$ and $P(A \mid \hat{S} = s)$, we have:

$$\begin{aligned}
&TV\big(P(A \mid S = s), P(A \mid \hat{S} = s)\big) = \sup_A \big| P(A \mid S = s) - P(A \mid \hat{S} = s) \big| \\
&= \sup_A \big| P(A \mid S = s, \hat{S} = s)\,P(\hat{S} = s \mid S = s) + P(A \mid S = s, \hat{S} \ne s)\,P(\hat{S} \ne s \mid S = s) \\
&\qquad - P(A \mid S = s, \hat{S} = s)\,P(S = s \mid \hat{S} = s) - P(A \mid S \ne s, \hat{S} = s)\,P(S \ne s \mid \hat{S} = s) \big| \\
&= \sup_A \big| P(A \mid S = s, \hat{S} = s)\big(P(\hat{S} = s \mid S = s) - P(S = s \mid \hat{S} = s)\big) \\
&\qquad - P(\hat{S} \ne S \mid S = s)\big(P(A \mid S = s, \hat{S} \ne s) - P(A \mid \hat{S} = s, S \ne s)\big) \big| \\
&= \sup_A \big| 0 - P(\hat{S} \ne S \mid S = s)\big(P(A \mid S = s, \hat{S} \ne s) - P(A \mid \hat{S} = s, S \ne s)\big) \big| \\
&\le P(\hat{S} \ne S \mid S = s). \qquad \square
\end{aligned}$$

The above derivation completes the proof of Theorem 2, from which Theorem 1 follows. Note that we assume $P(S) = P(\hat{S})$ in proving Theorem 2.
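Before turning to the case where this assumption is violated, a small Monte Carlo sanity check of the bound (our sketch, not part of the paper) can be run with simulated attributes and a finite outcome space:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
S = rng.integers(0, 2, n)                      # true binary sensitive attribute
flip = rng.random(n) < 0.2                     # ~20% reconstruction errors
S_hat = np.where(flip, 1 - S, S)               # symmetric flips keep P(S) ~= P(S_hat)
X = rng.integers(0, 10, n) + 3 * S             # an outcome correlated with S

def cond_dist(x: np.ndarray, mask: np.ndarray, k: int = 13) -> np.ndarray:
    return np.bincount(x[mask], minlength=k) / mask.sum()

s = 1
# For countable outcome spaces, TV = 0.5 * L1 distance (Equation (14)).
tv = 0.5 * np.abs(cond_dist(X, S == s) - cond_dist(X, S_hat == s)).sum()
bound = (S_hat[S == s] != s).mean()            # P(S_hat != S | S = s)
print(f"TV = {tv:.4f} <= bound = {bound:.4f}") # Theorem 2 predicts tv <= bound
```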
If the assumption is violated, then, given the general ability of reconstruction to approximately restore the true sensitive attributes, we presume only a slight deviation between $P(S)$ and $P(\hat{S})$, i.e., $P(\hat{S}) - P(S) = \delta_p$ with $|\delta_p| \ll P(S)$. Then:

$$\begin{aligned}
&TV\big(P(A \mid S = s), P(A \mid \hat{S} = s)\big) \\
&= \sup_A \big| P(A \mid S = s, \hat{S} = s)\big(P(\hat{S} = s \mid S = s) - P(S = s \mid \hat{S} = s)\big) \\
&\qquad - \big(P(A \mid S \ne s, \hat{S} = s)\,P(S \ne s \mid \hat{S} = s) - P(A \mid S = s, \hat{S} \ne s)\,P(\hat{S} \ne s \mid S = s)\big) \big| \\
&\approx \sup_A \big| \big(k\,P(A \mid S = s, \hat{S} = s)\,\delta_p - k\,P(A \mid S = s, \hat{S} \ne s)\,\delta_p\big) \\
&\qquad - P(\hat{S} \ne S \mid S = s)\big(P(A \mid S = s, \hat{S} \ne s) - P(A \mid \hat{S} = s, S \ne s)\big) \big| \\
&\le P(\hat{S} \ne S \mid S = s) + k\,|\delta_p|,
\end{aligned}$$

where $k = \frac{P(S = s, \hat{S} = s)}{P(S = s)\,P(\hat{S} = s)}$, and the approximate equality is obtained through the law of total probability and a first-order approximation (details omitted for simplicity). From this derivation, we know that when there is a marginal discrepancy between $P(S)$ and $P(\hat{S})$, the estimated upper bound on the TV distance becomes somewhat looser. However, the impact on the upper bound is modest, rendering its omission reasonable. Moreover, in our experiments, using only the first term (i.e., $P(\hat{S} \ne S \mid S = s)$) to build ambiguity sets has proven sufficient to achieve robust fairness, which supports directly using the conclusion of Theorem 2 to estimate the upper bound of the TV distance.

ACKNOWLEDGMENTS
This work is supported by the National Key Research and Development Program of China (2022YFB3104701), the National Natural Science Foundation of China (62272437), and the CCCD Key Lab of Ministry of Culture and Tourism.
}
title_10K/test_title_short_2405.01097v1.json
ADDED
@@ -0,0 +1,21 @@
{
"url": "http://arxiv.org/abs/2405.01097v1",
"title": "Silencing the Risk, Not the Whistle: A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification",
"abstract": "Whistleblowing is essential for ensuring transparency and accountability in\nboth public and private sectors. However, (potential) whistleblowers often fear\nor face retaliation, even when reporting anonymously. The specific content of\ntheir disclosures and their distinct writing style may re-identify them as the\nsource. Legal measures, such as the EU WBD, are limited in their scope and\neffectiveness. Therefore, computational methods to prevent re-identification\nare important complementary tools for encouraging whistleblowers to come\nforward. However, current text sanitization tools follow a one-size-fits-all\napproach and take an overly limited view of anonymity. They aim to mitigate\nidentification risk by replacing typical high-risk words (such as person names\nand other NE labels) and combinations thereof with placeholders. Such an\napproach, however, is inadequate for the whistleblowing scenario since it\nneglects further re-identification potential in textual features, including\nwriting style. Therefore, we propose, implement, and evaluate a novel\nclassification and mitigation strategy for rewriting texts that involves the\nwhistleblower in the assessment of the risk and utility. Our prototypical tool\nsemi-automatically evaluates risk at the word/term level and applies\nrisk-adapted anonymization techniques to produce a grammatically disjointed yet\nappropriately sanitized text. We then use a LLM that we fine-tuned for\nparaphrasing to render this text coherent and style-neutral. We evaluate our\ntool's effectiveness using court cases from the ECHR and excerpts from a\nreal-world whistleblower testimony and measure the protection against\nauthorship attribution (AA) attacks and utility loss statistically using the\npopular IMDb62 movie reviews dataset. Our method can significantly reduce AA\naccuracy from 98.81% to 31.22%, while preserving up to 73.1% of the original\ncontent's semantics.",
"authors": "Dimitri Staufer, Frank Pallas, Bettina Berendt",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CY",
"cats": [
"cs.CY",
"cs.CL",
"cs.HC",
"cs.IR",
"cs.SE",
"H.3; K.4; H.5; K.5; D.2; J.4"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Silencing the Risk, Not the Whistle: A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification",
"main_content": "INTRODUCTION In recent years, whistleblowers have become \u201ca powerful force\u201d for transparency and accountability, not just in the field of AI [9], but also in other technological domains and across both privateand public-sector organizations. Institutions such as the AI Now Institute [9] or the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems [22] have emphasized the key role of whistleblower protection for societal well-being and often also the organizations\u2019 own interests [21]. However, whistleblowing may be a threat for the organizations whose malfeasance is being revealed; thus (potential) whistleblowers often fear or face retaliation. Computationally-supported anonymous reporting seems to be a way forward, but even if reporting frameworks are sufficiently secure systemand network-wise, the report itself may allow inferences towards the whistleblower\u2019s identity due to its content and the whistleblower\u2019s writing style. Non-partisan organizations such as Whistleblower-Netzwerk e.V. (WBN) provide guidance on concise writing. Our interactions with WBN confirm that whistleblower testimonies often include unnecessary personal details. Existing approaches modifying the texts of such reports appear promising, but they take an overly limited view of anonymity and \u2013 like whistleblower protection laws \u2013 address only parts of the problem. This is detailed in Section 2. To improve on these approaches, we propose, implement, and evaluate a novel classification and mitigation strategy for rewriting texts that puts the whistleblower into the loop of assessing risk and utility. Our contributions are threefold. First (Section 3), we analyse the interleaved contributions of different types of identifiers in arXiv:2405.01097v1 [cs.CY] 2 May 2024 \fStaufer, et al. texts to derive a description of the problem for anonymous whistleblowing in terms of a trade-off between risk (identifiability of the whistleblower) and utility (of the rewritten text retaining sufficient information on the specific event details). We derive a strategy for assigning re-identification risk levels of concern to textual features composed of an automated mapping and an interactive adjustment of concern levels. Second (Section 4), we describe our toolwhich implements this strategy. It applies (i) the word/term-to-concern mapping using natural language processing to produce a sanitized but possibly ungrammatical intermediate text version, (ii) a Large Language Model (LLM) that we fine-tuned for paraphrasing to render this text coherent and style-neutral, and (iii) interactivity to draw on the user\u2019s context knowledge. Third (Section 5), we evaluate the resulting risk-utility trade-off. We measure the protection against authorship attribution attacks and utility loss statistically using an established benchmark dataset and show that it can significantly reduce authorship attribution accuracy while retaining utility. We also evaluate our our tool\u2019s effectiveness in masking direct and quasi-identifiers using the Text Anonymization Benchmark [48] and demonstrate its effectiveness on excerpts from a real-world whistleblower testimony. Section 6 sketches current limitations and future work. Section 7 describes ethical considerations and researchers\u2019 positionality, and it discusses possible adverse impacts. 
2 BACKGROUND AND RELATED WORK
This section describes the importance of, and threats to, whistleblowing (Section 2.1) and the promises as well as conceptual and practical challenges of "anonymity" in reporting (Section 2.2). We survey related work on the anonymization/de-identification of text and argue why it falls short in supporting whistleblowing (Section 2.3).

2.1 Challenges of Safeguarding Whistleblowers
Whistleblowers play a crucial role in exposing wrongdoings like injustice, corruption, and discrimination in organizations [6, 41]. However, their courageous acts often lead to negative consequences, such as subtle harassment and rumors, job loss and blacklisting, and, in extreme cases, even death threats [34, 37, 58]. In Western nations, whistleblowing is largely viewed as beneficial to society [66], leading to protective laws like the US Sarbanes-Oxley Act of 2002 and the European Union's "Whistleblowing Directive" (Directive 2019/1937). The latter, for example, mandates the establishment of safe reporting channels and protection against retaliation. It also requires EU member states to provide whistleblowers with legal, financial, and psychological support. However, the directive faces criticism for its limitations. Notably, it does not cover all public-sector entities [63, p. 3] and leaves key decisions to member states' discretion [1, p. 652]. This discretion extends to the absence of mandatory anonymous reporting channels and permits states to disregard cases they consider "clearly minor", leaving whistleblowers without comprehensive protection for non-material harms like workplace bullying [63, p. 3]. Furthermore, according to White [70], the directive's sectoral approach and reliance on a list of specific EU laws cause a patchwork of provisions, creating a complex and possibly confusing legal environment, particularly for those sectors impacting human rights and life-and-death situations. Last but not least, organizations often react negatively to whistleblowing due to the stigma of errors, even though recognizing these mistakes would be key to building a culture of responsibility [5, p. 12] and improving organizations and society [69]. The reality for whistleblowers is thus fraught with challenges, from navigating legal uncertainties to dealing with public perception [26, 51, 52], leaving many whistleblowers with no option but to report their findings anonymously [50]. However, "anonymous" reporting channels alone do not guarantee anonymity [5].

2.2 Anonymity, (De-)anonymization, and (De-/Re-)Identification
Anonymity is not an alternative between being identified uniquely or not at all, but "the state of being not identifiable within a set of subjects [with potentially the same attributes], the anonymity set" [46, p. 9]. Of the manifold possible approaches towards this goal, state-of-the-art whistleblowing-support software as well as legal protections (where existing) focus on anonymous communications [5]. This, however, does not guarantee anonymous reports. Instead, a whistleblower's anonymity may still be at risk due to several factors, including: (i) surveillance technology, such as browser cookies, security mechanisms otherwise useful to prevent unauthenticated uses, cameras, or access logs, (ii) the author's unique writing style, and (iii) the specific content of the message [33].
Berendt and Schiffner [5] refer to the latter as "epistemic non-anonymizability", i.e., the risk of being identified based on the unique information in a report, particularly when the information is known to only a few individuals. In some cases, this may identify the whistleblower uniquely. Terms and their understanding in the domain of anonymity vary. We use the following nomenclature: anonymization is a modification of data that increases the size of the anonymity set of the person (or other entity) of interest; conversely, de-anonymization decreases it (to some number $k \ge 1$). De-anonymization to $k = 1$, which includes the provision of an identifier (e.g., a proper name), is called re-identification. The removal of some identifying information (e.g., proper names), called de-identification, often but not necessarily leads to anonymization [4, 68]. In structured data, direct identifiers (e.g., names or social security numbers) are unique to an individual, whereas quasi-identifiers like age, gender, or zip code, though not unique on their own, can be combined to form unique patterns. Established mathematical frameworks for quantifying anonymity, such as Differential Privacy (DP) [16], and metrics such as k-anonymity [53], along with their refinements [27, 31], can be used when anonymizing datasets. Unstructured data such as text, which constitutes a vast majority of the world's data, requires its own safeguarding methods, which fall into two broader categories [28]. The first, NLP-based text sanitization, focuses on linguistic patterns to reduce (re-)identification risk. The second, privacy-preserving data publishing (PPDP), involves methods like noise addition or generalization to comply with pre-defined privacy requirements [15].

2.3 Related Work: Text De-Identification and Anonymization, Privacy Models, and Adversarial Stylometry
De-identification methods in text sanitization mask identifiers, primarily using named entity recognition (NER) techniques. These methods, largely domain-specific, have been particularly influential in clinical data de-identification, as evidenced, for instance, by the 2014 i2b2/UTHealth shared task [62]. However, they do not, or only partially, address the risk of indirect re-identification [4, 38]. For example, Sánchez et al. [55, 56, 57] make the simplifying assumption that replacing noun phrases which are rare in domain-specific corpora or on the web with more general ones offers sufficient protection. Others use recurrent neural networks [12, 30], reinforcement learning [71], support vector machines [65], or pre-trained language models [23] to identify and remove entities that fall into pre-defined categories. However, all of these approaches ignore or significantly underestimate the actual risks of context-based re-identification. More advanced anonymization methods, in turn, also aim to detect and remove identifiers that do not fit into the usual categories of named entities or are hidden within context. For example, Reddy and Knight [49] detect and obfuscate gender, and Adams et al. [2] introduce a human-annotated multilingual corpus containing 24 entity types and a pipeline consisting of NER and co-reference resolution to mask these entities. In a more nuanced approach, Papadopoulou et al.
[44] developed a "privacy-enhanced entity recognizer" that identifies 240 Wikidata properties linked to personal identification. Their approach includes three key measures to evaluate whether a noun phrase needs to be masked or replaced by a more general one [43]. The first measure uses RoBERTa [29] to assess how "surprising" an entity is in its context, assuming that more unique entities carry higher privacy risks. The second measure checks whether web search results for entity combinations mention the individual in question, indicating potential re-identification risk. Lastly, they use a classifier trained on the Text Anonymization Benchmark (TAB) corpus [48] to predict masking needs based on human annotations. Kleinberg et al.'s [24] "Textwash" employs the BERT model, fine-tuned on a dataset of 3717 articles from the British National Corpus, Enron emails, and Wikipedia. The dataset was annotated with entity tags such as "PERSON_FIRSTNAME" and "LOCATION", an "OTHER_IDENTIFYING_ATTRIBUTE" category for indirect re-identification risks, and a "NONE" category for non-re-identifying tokens. A quantitative evaluation (0.93 F1 score for detection accuracy, minimal utility loss in sentiment analysis and part-of-speech tagging) and a qualitative assessment (82% / 98% success in anonymizing famous / semi-famous individuals) show promise. However, the more recent gpt-3.5-turbo can re-identify 72.6% of the celebrities from Textwash's qualitative study on the first attempt, highlighting the evolving complexity of mitigating re-identification risk in texts [45]. In PPDP, several privacy models for structured data have been adapted for privacy guarantees in text. While most are theoretical [28], "C-sanitise" [54] determines the disclosure risk of a certain term $t$ for a set of entities to protect ($C$), given background knowledge $K$, which by default is the probability of an entity co-occurring with the term $t$ on the web. Additionally, DP techniques have been adapted to text, either for generating synthetic texts [20] or for obscuring authorship in text documents [68]. This involves converting text into word embeddings, altering these vectors with DP techniques, and then realigning them to the nearest words in the embedding model [73, 74]. However, "word-level differential privacy" [35] faces challenges: it maintains the original sentence length, limiting variation, and can cause grammatical errors, such as replacing nouns with unrelated adjectives, because it does not consider word types. Authorship attribution (AA) systems use stylistic features such as vocabulary, syntax, and grammar to identify an author. State-of-the-art approaches involve Support Vector Machines [64, 72] and, more recently, fine-tuned LLMs like BertAA [3, 18, 64]. The "Valla" benchmark and software package standardizes evaluation methods and includes fifteen diverse datasets [64]. Contrasting this, adversarial stylometry modifies an author's writing style to reduce the effectiveness of AA systems [61]. Advancements in machine translation [67] have also introduced new methods based on adversarial training [60], though these sometimes struggle with preserving the original text's meaning. Semi-automated tools, such as "Anonymouth" [36], propose modifications for anonymity in a user's writing, requiring a significant corpus of the user's own texts.
Moreover, recent advances in automatic paraphrasing using fine-tuned LLMs have demonstrated a notable reduction in authorship attribution, but primarily for shorter texts [35]. To the best of our knowledge, there is no – and maybe there can be no – complete list of textual features contributing to the re-identification of individuals in text. As Narayanan and Shmatikov [40] highlight, "any attribute can be identifying in combination with others" [p. 3]. In text, we encounter elements like characters, words, and phrases, each carrying varying levels of meaning [19]. Single words convey explicit lexical meaning as defined by a vocabulary (e.g. "employee"), while multiple words are bound by syntactic rules to express more complex thoughts implicitly in phrases ("youngest employee") and sentences ("She is the youngest employee"). In addition, the European Data Protection Supervisor (EDPS) and the Spanish Data Protection Agency (AEPD) [17] state that anonymization can never be fully automated and needs to be "tailored to the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons" [p. 7]. To take these insights and limitations into account, our semi-automated text sanitization tool leverages insights on the removal of identifying information but involves the whistleblower (the user) in the decision-making process.

3 RISK MODELLING AND RISK MITIGATION APPROACH
In this section, we derive the problem statement (Section 3.2) from an analysis of different identifier types (Section 3.1). Following an overview of our approach (Section 3.3), we detail the anonymization operations for textual features (Section 3.4) and the automatic assignment of default concern levels (Section 3.5).

3.1 Identifier Types, Author Identifiability, and Event Details in the Whistleblowing Setting
Whistleblowing reports convey information about persons, locations, and other entities. At least some of them need to be identified in order for the report to make any sense. The following fictitious example consists of three possible versions of a report, illustrating how different types of identifiers may contribute to the re-identification of the anonymously reporting employee Jane Doe, a member of the Colours and Lacquer group in the company COLOURIFICS.

V1 On 24 January 2023, John Smith poured polyurethane resin into the clover-leaf-shaped sink of room R23.
V2 After our group meeting on the fourth Tuesday of January 2023, the head of the Colours and Lacquer Group poured a toxin into the sink of room R23.
V3 Somebody poured a liquid into a recepticle on some date in a room of the company.

In V1, "John Smith" is the lexical identifier [fn 1] of the COLOURIFICS manager John Smith, as is "24 January 2023" of that date. Like John Smith, room R23 is a unique named entity in the context of the company and is also identified lexically. "Polyurethane resin" is the lexical identifier of a toxin (both the toxin and the sink are referred to by common nouns rather than names of individual instances of their category). The modifier "clover-leaf-shaped" serves as a descriptive identifier of the sink. In V2, John Smith is still identifiable via the descriptive identifier "head of the Colours and Lacquer Group", at least on 24 January 2023 (reconstructed with the help of a calendar and COLOURIFICS' personnel files).
"Our" group meeting is an indexical identifier that signals that the whistleblower is one of the, say, five employees in the Colours and Lacquer Group. The indexical information is explicit in V2, given the background knowledge that only employees of this group were co-present (recorded, for example, in the company's key-card logfiles). The same information may be implicit in V1 (if it can be seen from the company's organigram who John Smith is and who works in his group). Both versions provide for the inference that Jane Doe or any of her four colleagues must have been the whistleblower. If, in addition, only Jane Doe stayed behind "after the meeting", that detail in V2 descriptively identifies her uniquely [fn 2]. V3 contains only identifiers of very general categories. Many other variants are possible (for example, referencing, in a V4, "the head of our group", which would enlarge the search space to all groups that had a meeting in R23 that day). The example illustrates the threats (i)–(iii) of Section 2.2. It also shows that the whistleblower's "anonymity" (or lack thereof) is only one aspect of a more general and graded picture of who and what can be identified directly, indirectly, or not at all – and what this implies for the whistleblower's safety as well as for the report's effectiveness.

Footnote 1: The classification of identifiers is due to Phillips [47]. Note that all types of identifiers can give rise to personal data in the sense of the EU's General Data Protection Regulation (GDPR), Article 4(1): "any information which is related to an identified or identifiable natural person", or personally identifiable data in the senses used in different US regulations. See [11] for legal aspects in the context of whistleblowing.
Footnote 2: If John Smith knows that only she observed him, she is also uniquely identified in V1, but for the sake of the analysis, we assume that only recorded data/text constitute the available knowledge.

Inspired by Domingo-Ferrer's [14] three types of (data) privacy, we distinguish between the identifiability of the whistleblower Jane Doe (author [fn 3] identifiability, $A_{id}$) and descriptions of the event or other wrongdoing, including other actors (event details, $E_{dt}$). Given the stated context knowledge, we obtain an anonymity set of size $k = 1$ for John Smith in V1 and V2. Jane Doe is in an anonymity set of size $k = 5$ or even $k = 1$ in V2. In V1, that set may be of size $k = 5$ (if people routinely work only within their group) or larger (if they may also join other groups). Thus, the presence of a name does not necessarily entail a larger risk. Both are in an anonymity set containing all the company's employees at the reported date in V3 (assuming no outsiders have access to company premises). The toxin and the sink may be in a smaller anonymity set in V1 than in V2 or V3, and this could increase the whistleblower's identifiability further (for example, if only certain employees have access to certain substances). Importantly, the identifiability of people and other entities in $E_{dt}$ can increase the identifiability of the whistleblower. V3 illustrates a further challenge: the misspelled "recepticle" may be a typical error of a specific employee, and the incorrect placement of the temporal before the spatial information suggests that the writer may be a German or Dutch native speaker.
In addition to errors, correct variants also carry information that stylometry can use for authorship attribution, which obviously can have a large effect on $A_{id}$. The whistleblower would, on the one hand, want to reduce all such identifiabilities as much as possible. On the other hand, the extreme generalization of V3 creates a meaningless report that neither the company nor a court would follow up on. This general problem can be framed in terms of risk and utility, which will be described next.

3.2 The Whistleblowing Text-Writing Problem: Risk, Utility, and Many Unknowns
A potential whistleblower faces the following problem: "make $A_{id}$ as small as possible while retaining as much $E_{dt}$ as necessary". We propose to address this problem by examining the text and possibly rewriting it. In principle, this is an instance of the oft-claimed trade-off between privacy (or other risk) and utility. In a simple world of known repositories of structured data, one could aim at determining the identification problem (e.g., by database joins to identify the whistleblower due to some attributive information they reveal about themselves, and by multiple joins for dependencies such as managers and teams) and compute how large the resulting anonymity set (or $A_{id}$ as its inverse) is. Given a well-defined measure of information utility, different points on the trade-off curve would then be well-defined and automatically derivable solutions to a mathematical optimization problem. However, texts offer a myriad of ways to express a given relational information. The space of information that could be cross-referenced, sometimes in multiple steps, is huge and often unknown to the individual. Consequently, in many cases, it is not possible to determine the anonymity set size with any mathematical certainty. In addition, setting a threshold could be dangerous: even if the anonymity set is $k > 1$, protection is not guaranteed – for example, the whole department of five people could be fired in retaliation. At the same time, exactly how specific a re-written text needs to be about $A_{id}$ and $E_{dt}$ in order to make the report legally viable [fn 4] cannot be decided without much more context knowledge. For example, the shape of the sink into which a toxic substance is poured probably makes no difference to the illegality, whereas the identity of the substance may affect it. These unknowns have repercussions both for tool design (Section 3.3) and for evaluation design (Section 5.1.1).

Footnote 3: We assume that the potential whistleblower is also the author of the report. This is the standard setting. Modifications for the situation in which a trusted third party writes the report on their behalf are the subject of future work.

3.3 Risk Mitigation Approach and Tool Design: Overview
Potential whistleblowers would be ill-served by any fully automated tool that claims to be able to deliver a certain mathematically guaranteed anonymization. Instead, we propose to provide them with a semi-automated tool that does have some "anonymity-enhancing defaults" that illustrate, with the concrete material, how textual elements can be identifying and how they can be rendered less identifying.
Our tool starts with the heuristic default assumption that identifiability is potentially always problematic and then lets the user steer the tool by specifying how "concerning" specific individual elements are and by choosing, interactively, the treatment of each of them that appears to give the best combination of $A_{id}$ and $E_{dt}$. By letting the author/user assign these final risk scores in the situated context of the evolving text, we enable them to draw on a maximum of implicit context knowledge. Our approach and tool proceed through several steps. We first determine typical textual elements that can constitute or be part of the different types of identifiers. As can be seen in Table 1, most of them can affect $A_{id}$ and $E_{dt}$. Since identification by name (or, by extension, by pronouns that co-reference names) does not even need additional background knowledge, and since individuals are more at risk than generics, we classify some textual features as "highly concerning", others as having "medium concern", and the remainder as "potentially concerning". We differentiate between two types of proper nouns. Some names refer to typical "named entities", which include, in particular, specific people, places, and organizations, as well as individual dates and currency amounts. These pose a particular person-identification risk in whistleblowing scenarios [fn 5]. "Other proper nouns", such as titles of music pieces, books, and artworks, generally only pose medium risk. For stylometric features, we explicitly categorize out-of-vocabulary words, misspelled words, and words that are surprising given the overall topic of the text. Other low-level stylometric features, such as punctuation patterns, average word and sentence length, or word and phrase repetition, are not (and in many cases, such as with character n-gram patterns, cannot be [25]) explicitly identified. Instead, we implicitly/indirectly account for them as a byproduct of the LLM-based rephrasing. For all other parts of speech, we propose to use replacement strategies based on data-anonymization operations that are proportional to the risk (Table 2). Given the complexities of natural language and potential context information, the latter two operations are necessarily heuristic; thus, our tool applies the classification and the risk mitigation strategy as a default which can then be adapted by the user.

Footnote 4: "a situation in which a plan, contract, or proposal is able to be legally enforced", https://ludwig.guru/s/legally+viable, retrieved 2024-01-02.
Footnote 5: PERSON, GPE (region), LOC (location), EVENT, LAW, LANGUAGE, DATE, TIME, PERCENT, MONEY, QUANTITY, and ORDINAL.
Table 1: Overview of the approach from identifier types to default risk.

| Identifier Type | Textual Feature | Aid/Edt | Default Risk |
|---|---|---|---|
| Lexical | Names of named entities | $A_{id}$, $E_{dt}$ | High |
| Lexical | Other proper nouns | $E_{dt}$ | Medium |
| Indexical | Pronouns | $A_{id}$, $E_{dt}$ | High |
| Descriptive | Common nouns | $E_{dt}$, ($A_{id}$) | Potential |
| Descriptive | Modifiers | $E_{dt}$, ($A_{id}$) | Potential |
| Descriptive (via pragmatic inferences) | Out-of-vocabulary words (a) | $A_{id}$, ($E_{dt}$) | Medium |
| Descriptive (via pragmatic inferences) | Misspelled words (a) | $A_{id}$ | Medium |
| Descriptive (via pragmatic inferences) | Surprising words (b) | $A_{id}$ | Medium |
| Descriptive (via pragmatic inferences) | Other stylometric features | $A_{id}$ | N/A (c) |

(a) Treated as noun. (b) Nouns or proper nouns. (c) Not explicitly specified; indirectly accounted for through rephrasing.

Table 2: Mitigation strategies based on assigned risk (LvC = level of concern, NaNEs = names of named entities, OPNs = other proper nouns, CNs = common nouns, Mods = modifiers, PNs = pronouns, OSFs = other stylometric features).

| LvC | NaNEs | OPNs | CNs | Mods | PNs | OSFs |
|---|---|---|---|---|---|---|
| High | Suppr. | Suppr. | Suppr. | Suppr. | Suppr. | Pert. |
| Medium | Pert. | Generl. | Generl. | Pert. | Suppr. | Pert. |

3.4 Anonymization Operations for Words and Phrases
In our sanitization pipeline, we conduct various token removal and replacement operations based on each token's POS tag and its assigned level of concern (LvC), which can be "potentially concerning", "medium concerning", or "highly concerning". Initially, we consider all common nouns, proper nouns, adjectives, adverbs, pronouns, and named entities [fn 6] as potentially concerning. Should the user or our automatic LvC estimation (see Section 3.5) elevate the concern to either medium or high, we apply anonymization operations that are categorized into generalization, perturbation, and suppression. Specific implementation details are elaborated on in Section 4.

Footnote 6: By this, we mean names of named entities, e.g. "Berlin" for GPE, but we use "named entities" instead for consistency with other literature.

3.4.1 Generalization. The least severe type of operation targets common nouns and other proper nouns marked as medium concerning. We assume their specificity (not necessarily their general meaning) poses re-identification risks. Thus, more general terms can be used to preserve meaning while mitigating the risk of re-identification.
• Common nouns like "car" are replaced with hypernyms from WordNet, such as "vehicle".
• Other proper nouns become broader Wikidata terms, e.g. "political slogan" for "Make America Great Again".

3.4.2 Perturbation. This applies to modifiers [fn 7] and named entities annotated as medium concerning. In this process, the original words are retained but are assigned zero weight in the paraphrase generation, along with their synonyms and inflections. This approach relies on the LLM to either (a) find similar but non-synonymous replacement words or (b) completely rephrase the sentence to exclude these words. For example, "Microsoft, the giant tech company, ..." could be paraphrased as "A leading corporation in the technology sector...".

Footnote 7: The current version of our tool considers only adjectives and adverbs as modifiers.
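The policy of Table 2, together with the suppression rules described next, amounts to a simple lookup from (token category, LvC) to an operation. Our illustrative sketch (not the tool's actual code):

```python
# (category, level_of_concern) -> operation, following Table 2.
# NaNE = name of named entity, OPN = other proper noun, CN = common noun,
# Mod = modifier, PN = pronoun, OSF = other stylometric feature.
MITIGATION = {
    ("NaNE", "high"): "suppress",  ("NaNE", "medium"): "perturb",
    ("OPN",  "high"): "suppress",  ("OPN",  "medium"): "generalize",
    ("CN",   "high"): "suppress",  ("CN",   "medium"): "generalize",
    ("Mod",  "high"): "suppress",  ("Mod",  "medium"): "perturb",
    ("PN",   "high"): "suppress",  ("PN",   "medium"): "suppress",
    ("OSF",  "high"): "perturb",   ("OSF",  "medium"): "perturb",
}

def operation(category: str, lvc: str) -> str:
    """'Potentially concerning' tokens are left untouched unless the user (or
    the automatic estimation of Section 3.5) escalates their level of concern."""
    return MITIGATION.get((category, lvc), "keep")
```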
3.4.3 Suppression. The most severe type of operation is applied to common nouns, other proper nouns, modifiers, and named entities annotated as highly concerning, and to pronouns that are either medium concerning or highly concerning. We assume these words are either too unique or cannot be generalized.
• For common nouns and other proper nouns, dependent phrases are omitted (e.g., "We traveled to the London Bridge in a bus." becomes "We traveled in a bus.").
• Modifiers are removed (e.g., "He used to be the principal dancer" becomes "He used to be a dancer").
• Named entities are replaced with nondescript phrases (e.g., "Barack Obama" becomes "certain person").
• Pronouns are replaced with "somebody" (e.g., "He drove the bus." becomes "Somebody drove the bus.").

3.5 Automatic Level of Concern (LvC) Estimation
In our whistleblowing context, we deem the detection of outside-document LvC via search engine queries, as proposed by Papadopoulou et al. [44] (see the related work in Section 2.3), impractical. This is because whistleblowers are typically not well-known, and the information they disclose is often novel and not commonly found on the internet. Therefore, instead of relying on external data, we focus on inner-document LvC, setting up a rule-based system and allowing users to adjust the LvC based on their contextual knowledge. Further, we assume that this pre-annotation of default concern levels raises awareness of potential sources of re-identification.
• Common nouns and modifiers, by default, are potentially concerning. As fundamental elements in constructing a text's semantic understanding, they could inadvertently reveal re-identifying details like profession or location. However, without additional context, their LvC is not definitive.
• Other proper nouns, unexpected words, misspelled words, and out-of-vocabulary words default to medium concerning. Unlike categorized named entities, other proper nouns only indirectly link to individuals, places, or organizations. Unexpected words may diminish anonymity, according to Papadopoulou et al. [44], while misspelled or out-of-vocabulary words can be strong stylometric indicators.
• Named entities are considered highly concerning by default, as they directly refer to specific entities in the world, like people, organizations, or locations, posing a significant re-identification risk.

4 IMPLEMENTATION
Our semi-automated text sanitization tool consists of a sanitization pipeline (Sections 4.1 and 4.2) and a user interface (Section 4.3). The pipeline uses off-the-shelf Python NLP libraries (spaCy, nltk, lemminflect, constituent_treelib, sentence-transformers) and our paraphrasing-tuned FLAN T5 language model. FLAN T5's error-correcting capabilities [39, 42] aid in reconstructing sentence fragments after words or phrases with elevated levels of concern have been removed. The user interface is built with standard HTML, CSS, and JavaScript. Both components are open source and on GitHub (https://github.com/dimitristaufer/Semi-Automated-Text-Sanitization).
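As a rough illustration of the rule-based defaults of Section 3.5, implemented with spaCy (our sketch; the released tool's rules may differ, and the model name is an assumption):

```python
import spacy

# A model with word vectors (e.g. en_core_web_md) makes `is_oov` meaningful.
nlp = spacy.load("en_core_web_md")

def default_lvc(doc) -> dict:
    """Map token index -> default level of concern: named entities -> 'high';
    other proper nouns and out-of-vocabulary words -> 'medium'; common nouns,
    modifiers, and pronouns -> 'potential' (cf. Sections 3.4 and 3.5). The
    actual tool restricts 'high' to the NE labels listed in footnote 5."""
    ent_tokens = {tok.i for ent in doc.ents for tok in ent}
    levels = {}
    for tok in doc:
        if tok.i in ent_tokens:
            levels[tok.i] = "high"
        elif tok.pos_ == "PROPN" or tok.is_oov:
            levels[tok.i] = "medium"
        elif tok.pos_ in ("NOUN", "ADJ", "ADV", "PRON"):
            levels[tok.i] = "potential"
    return levels

doc = nlp("On 24 January 2023, John Smith poured polyurethane resin into the sink of room R23.")
print({doc[i].text: lvl for i, lvl in default_lvc(doc).items()})
```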
4 IMPLEMENTATION. Our semi-automated text sanitization tool consists of a sanitization pipeline (Sections 4.1 and 4.2) and a user interface (Section 4.3). The pipeline uses off-the-shelf Python NLP libraries (spaCy, nltk, lemminflect, constituent_treelib, sentence-transformers) and our paraphrasing-tuned FLAN T5 language model. FLAN T5's error-correcting capabilities [39, 42] aid in reconstructing sentence fragments after words or phrases with elevated levels of concern have been removed. The user interface is built with standard HTML, CSS, and JavaScript. Both components are open source and on GitHub (8). [Footnote 8: https://github.com/dimitristaufer/Semi-Automated-Text-Sanitization]

4.1 Anonymization Operations for Words and Phrases. 4.1.1 Generalization. Common nouns undergo generalization by first retrieving their synsets and hypernyms from WordNet, followed by calculating the cosine similarity of their sentence embeddings with those of the hypernyms. This calculation ranks the hypernyms by semantic similarity to the original word, enabling the selection of the most suitable replacement; by default, we select the closest hypernym. Other proper nouns are generalized as follows: we first query Wikipedia to identify the term, using the all-mpnet-base-v2 sentence transformer to disambiguate its meaning through cosine similarity. Next, we find the most relevant Wikidata QID and its associated hierarchy. We then flatten these relationships and replace the entity with the next higher-level term in the hierarchy.

4.1.2 Perturbation. We add randomness to modifiers and named entities through LLM-based paraphrasing, specifically by using the FLAN-T5 language model, which we fine-tuned for paraphrase generation (Section 4.2). To achieve perturbation (9), we give the tokens in question and their synonyms and inflections zero weight during next-token prediction. This forces the model to either use a less probable word (controlled by the temperature hyperparameter) or rephrase the sentence to omit the token. Using an LLM for paraphrase generation has the added benefit that it mends fragmented sentences caused by token suppression and yields a neutral writing style, adjustable through the no_repeat_ngram_size hyperparameter. [Footnote 9: The strategies "suppression" and "generalization" are straightforward adaptations of the classical methods for structured data. Perturbation "replaces original values with new ones by interchanging, adding noise or creating synthetic data" [7]. Interchanging would create ungrammatical texts, and noise can only be added to certain data. We therefore generate synthetic data via LLM rephrasing, disallowing the highly specific words/terms and their synonyms while producing a new but grammatical text.]

4.1.3 Suppression. Common nouns and other proper nouns are suppressed by removing the longest phrase containing them with the constituent_treelib library. Sentences with just one noun or proper noun are removed entirely. Otherwise, the longest phrase, be it a main clause, verb phrase, prepositional phrase, or noun phrase, is identified, removed, and replaced with an empty string. Modifiers are removed (e.g., "He is their principal dancer" → "He is their · dancer"). Pronouns are replaced with the static string "somebody". For example, "His apple" → "Somebody apple" (after replacement) → "Somebody's apple" (after paraphrase generation). Named entities are replaced with static phrases based on their type. For example, "John Smith sent her 2 Million Euros from his account in Switzerland" → "certain person sent somebody certain money from somebody account in certain location" (after suppressing pronouns and named entities) → "A certain individual sent a specific amount of money to whoever's account in some particular place" (after paraphrase generation).
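The zero-weighting described in 4.1.2 can be approximated with the bad_words_ids argument of Hugging Face's generate(); this sketch uses the public flan-t5-base checkpoint as a stand-in for the authors' paraphrasing-tuned model and bans only the flagged surface forms, whereas the tool also bans synonyms and inflections.

```python
# Perturbation sketch: banned tokens receive zero probability at decoding time.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

banned = ["Microsoft", "giant"]          # flagged tokens (synonyms omitted here)
bad_words_ids = tok(banned, add_special_tokens=False).input_ids

inputs = tok("Paraphrase: Microsoft, the giant tech company, hired her.",
             return_tensors="pt")
out = model.generate(**inputs, do_sample=True, temperature=0.8,
                     no_repeat_ngram_size=2, bad_words_ids=bad_words_ids)
print(tok.decode(out[0], skip_special_tokens=True))
```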
4.2 Paraphrase Generation. We fine-tuned two variants of the FLAN T5 language model, FLAN T5-Base and FLAN T5-XL, using the "chatgpt-paraphrases" dataset, which uniquely combines three large paraphrasing datasets covering varied topics and sentence types. It includes question paraphrasing from the "Quora Question Pairs" dataset, context-based paraphrasing from "SQuAD2.0", and summarization-based paraphrases from the "CNN-DailyMail News Text Summarization" dataset. Furthermore, it was enriched with five diverse paraphrase variants for each sentence pair generated by the gpt-3.5-turbo model, resulting in 6.3 million unique pairs. This diversity enhances our model's paraphrasing capabilities and reduces overfitting. For training, we employed Parameter-Efficient Fine-Tuning (PEFT) using LoRA (Low-Rank Adaptation), which adapts the model to new data without the need for complete retraining, and we quantized the model weights to enhance memory efficiency using bitsandbytes. We trained FLAN T5-Base on an NVIDIA A10G Tensor Core GPU for one epoch (35.63 hours) on 1 million paraphrase pairs, using an initial learning rate of 1e-3, and achieved a minimum cross-entropy loss of 1.195. FLAN T5-XL was trained for one epoch (22.38 hours) on 100,000 pairs and achieved a loss of 0.88. For inference, we configure max_length to 512 tokens to cap the output at T5's tokenization limit. do_sample is set to True, allowing for randomized token selection from the model's probability distribution, which enhances the variety of paraphrasing. Additionally, parameters like temperature, no_repeat_ngram_size, and length_penalty are adjustable via the user interface, providing control over randomness, repetition avoidance, and text length.
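A minimal LoRA setup in the spirit of this section might look as follows; apart from the 1e-3 learning rate stated above, all hyperparameter values shown are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the PEFT/LoRA setup described above (8-bit quantization via
# bitsandbytes); r, alpha and dropout are illustrative, not the paper's values.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-base", load_in_8bit=True, device_map="auto")

lora = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM,
                  r=16, lora_alpha=32, lora_dropout=0.05)
model = get_peft_model(base, lora)
model.print_trainable_parameters()
# Training would then run one epoch over paraphrase pairs with an initial
# learning rate of 1e-3, as reported above (e.g., via transformers.Trainer).
```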
4.3 User Interface. Our web-based user interface communicates with the sanitization pipeline via Flask endpoints. It visualizes token LvCs (gray, yellow, red), allows dynamic adjustments of these levels, and starts the sanitization process. Moreover, a responsive side menu allows users to select the model size and tune hyperparameters for paraphrasing. The main window (Figure 1) shows the original and the sanitized texts, with options for editing and annotating.

[Figure 1: The UI's main window showing the input text (left) and the sanitized text (right). We made up the input and converted it to "Internet Slang" (https://www.noslang.com/reverse) to showcase how an extremely obvious writing style is neutralized.]

5 EVALUATION. We evaluate our tool quantitatively (Sections 5.1 and 5.2) and demonstrate its workings and usefulness with an example from a real-world whistleblower testimony (Section 5.3). These evaluations complement each other in that the first focuses on identification via writing style and the latter two on identification via content.

5.1 Re-Identification Through Writing Style: IMDb62 Movie Reviews Dataset. 5.1.1 Evaluation metrics. The large unknowns of context knowledge imply that evaluations cannot rely on straightforward measurement methods for A_id and E_dt. We therefore work with the following proxies. Text-surface similarities: To understand the effect of language model size and hyperparameter settings on lexical and syntactic variations from the original texts, we utilize two ROUGE scores: ROUGE-L (Longest Common Subsequence) to determine to what extent the overall structure and sequence of information in the text changes, and ROUGE-S (Skip-Bigram) to measure word pair changes and changes in phrasing. Risk: Without further assumptions about the (real-world, case-specific) background knowledge, it is impossible to exactly quantify the ultimate risk of re-identification (see Section 3.1). We therefore only measure the part of A_id where (a) the context knowledge is more easily circumscribed (texts from the same author) and (b) benchmarks are likely to generalize across case studies: the risk of re-identification based on stylometric features, measured as authorship attribution accuracy (AAA). Utility: It is also to be expected that the rewriting reduces E_dt; yet, again, it is impossible to exactly determine (without real-world, case-specific background knowledge and legal assessment) whether the detail supplied is sufficient to allow for legal follow-up of the report, or even only to create alarm that could then be followed up. We therefore measure E_dt utility through two proxies: a semantic similarity measure and a sentiment classifier. To estimate semantic similarity (SSim), we calculate the cosine similarity of both texts' sentence embeddings using the SentenceTransformer Python framework (10). To determine the absolute sentiment score difference (SSD), we classify the texts' sentiment using an off-the-shelf BERT-based classifier (11) from the Hugging Face Hub. All measures are normalized to take on values between 0 and 1, and although the absolute values of the scores between these endpoints (except for authorship attribution) cannot be interpreted directly, the comparison of relative orders and changes gives us a first indication of the impacts of different rewriting strategies on A_id and E_dt. [Footnote 10: all-mpnet-base-v2. Footnote 11: bert-base-multilingual-uncased.]
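For concreteness, here is a sketch of the two utility proxies. The embedder follows footnote 10 (all-mpnet-base-v2); the sentiment checkpoint is an assumption on our part, since footnote 11 names only bert-base-multilingual-uncased, for which nlptown's fine-tuned sentiment variant is a common off-the-shelf choice on the Hub.

```python
# Sketch of the SSim and SSD proxies from Section 5.1.1.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

embedder = SentenceTransformer("all-mpnet-base-v2")
sentiment = pipeline("sentiment-analysis",
                     model="nlptown/bert-base-multilingual-uncased-sentiment")

def ssim(original: str, sanitized: str) -> float:
    a, b = embedder.encode([original, sanitized], convert_to_tensor=True)
    return float(util.cos_sim(a, b))

def ssd(original: str, sanitized: str) -> float:
    stars = lambda t: int(sentiment(t)[0]["label"][0])    # "4 stars" -> 4
    return abs(stars(original) - stars(sanitized)) / 4.0  # normalized to [0, 1]
```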
5.1.2 Data, language models, and settings. We investigate protection against authorship attribution attacks with the popular IMDb62 movie reviews dataset [59], which contains 62,000 movie reviews by 62 distinct authors. We assess AAA using the "Valla" software package [64], specifically its two most effective models: one based on character n-grams and the other on BERT. This approach covers both ends of the authorship attribution spectrum [3], from low-level, largely topic-independent character n-grams to the context-rich features of the pre-trained BERT model. The evaluation was conducted on AWS EC2 "g4dn.xlarge" instances with NVIDIA T4 GPUs. We processed 130 movie reviews for each of the 62 authors across twelve FLAN T5 configurations, totaling 96,720 texts with character counts spanning from 184 to 5,248. Each review was sanitized with its textual elements assigned their default LvCs (see 3.5). Both model sizes, "Base" (250M parameters) and "XL" (3B parameters), were tested with temperature values T of 0.2, 0.5, and 0.8, as well as with no_repeat_ngram_size (NRNgS) set to 0 or 2. The former, temperature, controls the randomness of next-word predictions by scaling the logits before applying softmax, making the predictions more or less deterministic; for our scenario, this causes smaller or greater perturbation of the original text's meaning. The latter, NRNgS, disallows n consecutive tokens from being repeated in the generated text, which for our scenario means deviating more or less from the original writing style. The risk-utility trade-offs of all configurations are compared to three baselines: Baseline 1 is the original text. In Baseline 2, similar to state-of-the-art related work [24, 44], we only redact named entities by replacing them with placeholders, such as "[PERSON]", and do not utilize our language model. Similarly, in Baseline 3 we only remove named entities but rephrase the texts using our best-performing model configuration regarding AA protection.

5.1.3 Results. The n-gram-based and BERT-based "Valla" classifiers achieved AAA baselines of 98.81% and 98.80%, respectively. As expected, the AAA and text-surface similarities varied significantly depending on the model configuration. The XL model generated texts with much smaller ROUGE-L and ROUGE-S scores, i.e., more lexical and syntactic deviation from the original texts. Using NRNgS = 2 slightly decreased AAA in all configurations while not significantly affecting semantic similarity, which is why we use this setting for all the following results.

[Figure 2: Risk-utility trade-offs. (a) Risk-utility trade-off between AAA and SSim. (b) Risk-utility trade-off between AAA and SSD.]

Figure 2(a) shows the risk-utility trade-off between AAA and SSim; "top left" (0, 1) would be the fictitious best result. For each model configuration, increasing T caused AAA to drop but also decreased utility by ~8%/4% (Base/XL) for SSim and ~12%/3% (Base/XL) for SSD. The figure shows that the investigated settings create a trade-off curve, with XL (T = 0.8, NRNgS = 2) allowing for a large reduction in AAA (to 31.22%, as opposed to 98.81% for the original texts, Baseline 1), while Base (T = 0.2, NRNgS = 0) retains the most SSim (0.731, as opposed to the original texts, which have SSim = 1 to themselves). Figure 2(b) shows the risk-utility trade-off between AAA and SSD (the plot shows 1 − SSD to retain "top left" as the optimal point). The results mirror those of AAA-SSim, except for Baseline 2: because only named entities (not considered sentiment-carrying) are removed, the sentiment score changes only minimally.

5.1.4 Discussion. In summary, all our models offer a good compromise between baselines representing state-of-the-art approaches. They have lower risk and higher or comparable utility compared to Baseline 2, where only named entities are removed. This indicates the effectiveness of LLM-based rephrasing in defending against authorship attribution. Baseline 3, which involves suppressing named entities and rephrasing, shows the lowest risk due to the limited content left for the LLM to reconstruct, resulting in mostly short, arbitrary sentences, as reflected by low SSim scores.
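For reference, Baseline 2's placeholder redaction can be reproduced in a few lines with spaCy; the snippet below is our illustrative reconstruction, not the exact evaluation code.

```python
# Baseline 2 sketch: replace every named entity with a typed placeholder.
import spacy

nlp = spacy.load("en_core_web_sm")

def redact_entities(text: str) -> str:
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        out.append(text[last:ent.start_char])
        out.append(f"[{ent.label_}]")
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(redact_entities("John Smith sent her 2 million euros from Switzerland."))
# e.g. "[PERSON] sent her [MONEY] from [GPE]."
```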
5.2 Re-Identification Through Content: European Court of Human Rights Cases. Pilán et al.'s [48] Text Anonymization Benchmark (TAB) includes a corpus of 1,268 English-language court cases from the European Court of Human Rights, in which directly- and quasi-identifying nominal and adjectival phrases were manually annotated. It solves several issues of previous datasets, such as being "pseudo-anonymized", including only a few categories of named entities, not differentiating between identifier types, containing only famous individuals, or being small. TAB's annotation focuses on protecting the identity of the plaintiff (also referred to as the "applicant").

5.2.1 Evaluation Metrics. TAB introduces two metrics: entity-level recall (ER_di/qi) to measure privacy protection and token-level weighted precision (WP_di+qi) for utility preservation. Entity-level means that an entity is only considered safely removed if all of its mentions are. WP_di+qi uses BERT to determine the information content of a token t by estimating the probability of t being predicted at position i; thus, precision is low if many tokens with high information content are removed. Both metrics use micro-averaging over all annotators to account for multiple valid annotations. Because our tool automatically rephrases the anonymized texts, we make two changes. First, since we cannot reliably measure WP_di+qi, we fall back to our previously introduced proxies for measuring E_dt utility. Secondly, we categorize newly introduced entities from LLM hallucination that may change the meaning of the sanitized text. The legal texts, which must prefer direct and commonly known identifiers, are likely to present none or far fewer of the background-knowledge-specific re-identification challenges of our domain; thus, again, the metrics used here should be regarded as proxies. Risk: We measure A_id using ER_di/qi and count slightly rephrased names of entities as "not removed" using the Levenshtein distance. For example, rephrasing "USA" as "U.S.A" has the same influence on ER_di/qi as failing to remove "USA". Utility: We estimate E_dt through SSim. In addition, we determine all entities in the sanitized text that are not in the original text (again using the Levenshtein distance). We categorize them into (1) rephrased harmful entities (semantically identical to at least one entity that should have been masked), (2) rephrased harmless entities, and (3) newly introduced entities. We measure semantic similarity by calculating the cosine similarity of each named entity phrase's sentence embedding to those in the original text.
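A toy version of the entity-level recall check reads as follows; the distance threshold is an illustrative assumption, and the real evaluation operates on TAB's annotated entity spans rather than whitespace tokens.

```python
# Toy entity-level recall: an annotated entity counts as "not removed" if a
# near-match (Levenshtein distance <= 2 here) survives in the sanitized text.
# Requires: pip install python-Levenshtein
import Levenshtein

def entity_recall(annotated_entities, sanitized_text, max_dist=2):
    tokens = sanitized_text.split()
    removed = sum(
        not any(Levenshtein.distance(e.lower(), t.lower()) <= max_dist
                for t in tokens)
        for e in annotated_entities)
    return removed / len(annotated_entities)

# "U.S.A" is within distance 2 of "USA", so recall is 0.0 here:
print(entity_recall(["USA"], "He moved to the U.S.A in 2003."))
```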
5.2.2 Data, language models, and settings. The TAB corpus comprises the first two sections (introduction and statement of facts) of each court case. For our evaluation, we use the test split, which contains 127 cases, each with an average of 2,174 characters (356 words) and 13.62 annotated phrases. We perform all experiments using the "XL" (3B parameter) model with temperature values T of 0.2, 0.5, and 0.8, and with NRNgS set to 2.

5.2.3 Results and Discussion. ER_di/qi and SSim vary slightly, but not significantly, across different T values. For T = 0.2, we get an entity-level recall on quasi-identifiers (ER_qi) of 0.93, which is slightly better than Pilán et al.'s [48] best-performing model trained directly on the TAB corpus (0.92). However, our result for direct identifiers, ER_di, is 0.53, while theirs achieves 1.0, i.e., it does not miss a single high-risk entity. Closer inspection reveals that our low results for direct identifiers come mainly from (i) the spaCy NER failing to detect the entity type CODE (e.g., "10424/05") and (ii) the LLM re-introducing names of named entities that are spelled slightly differently (e.g., "Mr Abdisamad Adow Sufi" instead of "Mr Abdisamad Adow Sufy"). Regarding utility, all three model configurations achieve similar SSim scores, ranging from 0.67 (T = 0.8) to 0.69 (T = 0.2). These results fall into the same range achieved on the IMDb62 movie reviews dataset. However, in addition to re-introducing entities that should have been masked, we found that, on average, the LLM introduces 5.24 new entities (28.49%) per court case. While some of these, depending on the context, can be considered harmless noise (e.g., "European Supreme Tribunal"), manual inspection revealed that many change the meaning and legitimacy of the sanitized texts. For example, 4.7% contain names of people that do not appear in the original text, 43.3% contain new article numbers, 20.5% contain new dates, and 11.8% include names of potentially unrelated countries. The frequency of such hallucinations could also be a consequence of the specific text genre of court cases; future work should examine to what extent this also occurs in whistleblower testimonies and how it affects the manual post-processing of the generated text that is previewed in our semi-automated tool.

5.3 Re-Identification Through Content: Whistleblower Testimony Excerpts. We further investigated our tool's rewritings of two excerpts (Tables 3 and 4) from a whistleblower's hearing in the Hunter Biden tax evasion case, as released by the United States House Committee on Ways and Means (12). This qualitative view of our results provides a detailed understanding of which identifiers were rewritten and how (13). [Footnote 12: https://waysandmeans.house.gov/?p=39854458 [Accessed 29-April-2024], "#2". Footnote 13: To answer these questions, it is immaterial whether the text sample describes a concrete act of wrongdoing (as in our fictitious Ex. 1) or not (as here).]

5.3.1 Approach. First, we compiled the essential E_dt on which we based our analysis. Next, we assessed the textual features in both excerpts to enhance our tool's automatic Level of Concern (LvC) estimations, aiming for the lowest author identifiability (A_id). Finally, we input these annotations into the user interface to produce the rewritings.
5.3.2 E_dt and A_id. Based on the information from the original texts in Tables 3 and 4 alone, we define E_dt as follows, with E_dt1 and E_dt2 being a subset of excerpt 1 and E_dt3 a subset of excerpt 2:

• E_dt1: "The Tax Division approved charges but for no apparent reason changed their decision to a declination."
• E_dt2: "The declination occurred after significant effort was put into the investigation by the whistleblower."
• E_dt3: "In their effort in doing what is right, the whistleblower suffered on a professional and personal level."

In exc1 (Table 3), we classified "joining the case" (a first-person indexical) and implications of a nationwide investigation as highly concerning. Additionally, we marked all mentions of "case" as highly concerning to evaluate consistent suppression. "DOJ Tax", being a stylometric identifier because it is not an official abbreviation, received a medium LvC, and "thousands of hours" was categorized similarly, as it potentially indicates the author's role as lead in the case. In exc2 (Table 4), we classified the lexical identifier "2018", which could be cross-referenced relatively easily, as well as all descriptive identifiers concerning the author's sexual orientation and outing, as highly concerning. Furthermore, emotional descriptors ("sleep, vacations, gray hairs, et cetera") were given a medium LvC, as were references to the author's investment in the case ("thousands of hours" and "95 percent"), mirroring the approach from exc1.

5.3.3 Results and Discussion. Exc1_sanitized retains E_dt2, but not E_dt1, as "DOJ Tax" is replaced with "proper noun" due to the non-existence of a corresponding entity in Wikidata; consequently, the replacement defaults to the token's POS tag. For A_id, all identified risks were addressed (e.g., "considerable time" replaces "thousands of hours"). However, the generalization of "case" led to inconsistent terms like "matter", "situation", and "issue" due to the NRNgS = 2 setting. This is beneficial for reducing authorship attribution accuracy but may confuse readers not familiar with the original context. Exc2_sanitized maintains parts of E_dt3, though terms like "X amount of time" and "Y amount of the investigation" add little value due to their lack of specificity.
Notably, "amount o of" represents a rare LLM-induced spelling error, underscoring the need for human editing for real-world use. The broad generalization of the emotional state to "physical health, leisure, grey body covering" is odd and less suitable than a singular term would be. Despite this, Exc2_sanitized effectively minimizes A_id by addressing all other identified risks.

Table 3: LvC-annotated whistleblower testimony exc1 (excerpt 1) with identifiers (top) and exc1_sanitized (bottom).
Original: "Prior to joining the case, DOJ Tax had approved tax charges for the case and the case was in the process of progressing towards indictment [...] After working thousands of hours on that captive case, poring over evidence, interviewing witnesses all over the U.S., the decision was made by DOJ Tax to change the approval to a declination and not charge the case."
Lexical IDs: DOJ Tax; U.S.
Indexical IDs: [implicit: me] joining the case (first person)
Descriptive IDs: interviewing witnesses all over the U.S. (nationwide investigation); thousands of hours (author involvement)
Sanitized: "The proper noun had approved tax charges for the matter and the situation was moving towards indictment, but after spending considerable time on that captive matter, poring over evidence, the decision was made by proper noun to defer the approval and not charge the issue."

Table 4: LvC-annotated whistleblower testimony exc2 (excerpt 2) with identifiers (top) and exc2_sanitized (bottom).
Original: "I had opened this investigation in 2018, have spent thousands of hours on the case, worked to complete 95 percent of the investigation, have sacrificed sleep, vacations, gray hairs, et cetera. My husband and I, in identifying me as the case agent, were both publicly outed and ridiculed on social media due to our sexual orientation."
Lexical IDs: 2018; thousands of hours; 95 percent
Indexical IDs: me as the case agent (role of author); My husband (author's marital status)
Descriptive IDs: I had opened this investigation in 2018 (can be cross-referenced); My husband and I + publicly outed and ridiculed [...] due to our sexual orientation (author's sexual orientation and a public event); sacrificed sleep, [...], gray hairs (emotional state)
Sanitized: "I had opened this investigation on a certain date, had spent X amount of time on the case, worked to complete Y amount of the investigation, sacrificing my physical health, leisure, grey body covering, etc."

6 CONCLUSIONS, LIMITATIONS AND FUTURE WORK. We evaluated our tool's effectiveness using ECHR court cases and excerpts from a real-world whistleblower testimony, and measured the protection against authorship attribution attacks and the information loss statistically using the popular IMDb62 movie reviews dataset.
Our method can significantly reduce authorship attribution accuracy, from 98.81% to 31.22%, while preserving up to 73.1% of the original content's semantics, as measured by the cosine similarity of sentence embeddings. Our qualitative analysis revealed that minor wording changes can significantly impact A_id and E_dt, and highlighted our tool's strengths in reducing A_id through generalization, perturbation, and suppression. Our tool's usefulness in real-world whistleblowing scenarios remains to be tested, particularly with human users. Challenges arise from the possibility of the tool introducing unrelated entities through model hallucination and from its limitations in addressing complex syntactic structures and co-references. Still, our LLM-based approach has proven promising in counteracting the limitations of state-of-the-art approaches. The fine-tuned model effectively reduces authorship attribution accuracy and improves text coherence, two of the main shortcomings of previous works. At the same time, it introduces novel challenges, such as limited control over the accuracy and consistency of the rephrased content. Future work will focus on refining our tool through evaluations involving human participants and domain experts. Given the crucial importance of context knowledge for re-identification risks, and the challenges in identifying all textual features that contribute to re-identification, future work will also pay increasing attention to enhancing anonymization awareness. This applies not only to the whistleblowing use case but extends to the protection of free speech in other areas, including journalism, political activism, and social media. We envision an interactive awareness tool as a more dynamic alternative to the conventional static writing guides on whistleblowing platforms. This tool would incorporate insights from our research as well as from practitioners, aiming to educate users about subtle textual nuances that could pose re-identification risks, thereby creating a deeper understanding and more effective use of anonymization practices in high-risk disclosures. At the same time, we need to draw on practitioners' and legal experts' knowledge to better understand which textual changes are detrimental (or conducive) to utility, and incorporate these insights into the guidance provided by the awareness tool.

7 ETHICAL CONSIDERATIONS, RESEARCHERS' POSITIONALITY, AND POSSIBLE ADVERSE IMPACTS. In the following paragraphs, we discuss five key challenges, interweaving a potential adverse impacts statement, an ethical considerations statement (what we have done or can do), and positionalities. We are computer scientists (some of us also with a background in social and legal sciences) who have programming expertise (instrumental for mitigating challenges C1-C4), an understanding of data protection law (C1), research expertise in bias and fairness, including methods for risk mitigation when working with LLMs (C2), and collaborators with human-subjects studies expertise (C3). None of us has been a whistleblower. We outline below how future collaborators and/or deployers with other positionalities can contribute relevant complementary expertise on C1-C5.

C1 – Data Protection: Our tool does not collect or store any user data.
Original as well as rewritten texts are discarded after each run, and they are not used to further train the model. Our tool does not require an internet connection beyond the initial download of pre-trained language models and optional queries to Wikidata servers. While querying Wikidata enhances the efficacy of our tool by enabling the generalization of certain words, users should be aware that these queries might expose confidential information to external servers. To mitigate this risk, our implementation remains functional when offline, albeit with slightly reduced efficacy due to the lack of real-time Wikidata look-ups. In a real-life deployment, technical and organizational measures would need to be implemented to safeguard the confidential personal or organizational data that remain in the reports; this will also require security and legal expertise.

C2 – Bias and (Un-)fairness: Our tool may inadvertently introduce or perpetuate biases present in the training data. FLAN T5 was trained on C4, which is generated from the April 2019 Common Crawl dataset. Dodge et al. [13] discovered that C4 has a "negative sentiment bias against Arab identities" and excludes "documents associated with Black and Hispanic authors" as well as documents "mentioning sexual orientations" [p. 8] through its blocklist filter. Therefore, similar to other pre-trained models [32], FLAN T5 is "potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases" [8, p. 52]. This may bias our level-of-concern measures. For example, certain names, professions, or locations may be classified as "medium concerning" or "highly concerning" more often because they are considered "surprising", which may unfairly impact the narratives involving them. Future work should therefore include evaluating and mitigating these biases, and possibly experimenting with other datasets and pre-trained models.

C3 – Over-Reliance and Retaliation: The results of our quantitative evaluation are promising, but an extensive qualitative evaluation is necessary to determine whether our approach translates to real-world situations. Therefore, users of our tool must remain aware of its potential to significantly alter the original intent of their text and, depending on the context, to possibly offer limited protection against retaliation. Over-reliance on our tool may lead to a false sense of security, resulting in increased vulnerability to retaliation. We intend to assess the extent of this form of automation bias [10] in a subsequent user study, discuss with people working in the field (e.g., whistleblower protection activists) how best to reduce it, and also evaluate these future mitigation measures.

C4 – Resource Consumption: Training LLMs is resource-intensive. By re-using the existing model and enlisting distilled LLM learning, this impact could be reduced in future work.

C5 – Tool Misuse: Even though our tool aims to mitigate the risk of whistleblower re-identification, malicious actors might misuse it to obfuscate dangerous information or illegally convert copyrighted material. By providing our source code and fine-tuned models publicly, we open avenues for ethical use and misuse alike. We therefore emphasize that our sole aim in developing the tool is to facilitate legal, ethical whistleblowing.
Future refinements and real-world evaluations will require collaboration with legal and social experts to better understand the practical implications and potential misuse scenarios.

ACKNOWLEDGMENTS. We acknowledge funding from the German Federal Ministry of Education and Research (BMBF) – Nr 16DII134. This publication has been partially supported by the EXDIGIT (Excellence in Digital Sciences and Interdisciplinary Technologies) project, funded by Land Salzburg under grant number 20204-WISS/263/6-6022."
}
title_10K/test_title_short_2405.01102v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.01102v1",
"title": "Less is More: on the Over-Globalizing Problem in Graph Transformers",
"abstract": "Graph Transformer, due to its global attention mechanism, has emerged as a\nnew tool in dealing with graph-structured data. It is well recognized that the\nglobal attention mechanism considers a wider receptive field in a fully\nconnected graph, leading many to believe that useful information can be\nextracted from all the nodes. In this paper, we challenge this belief: does the\nglobalizing property always benefit Graph Transformers? We reveal the\nover-globalizing problem in Graph Transformer by presenting both empirical\nevidence and theoretical analysis, i.e., the current attention mechanism overly\nfocuses on those distant nodes, while the near nodes, which actually contain\nmost of the useful information, are relatively weakened. Then we propose a\nnovel Bi-Level Global Graph Transformer with Collaborative Training\n(CoBFormer), including the inter-cluster and intra-cluster Transformers, to\nprevent the over-globalizing problem while keeping the ability to extract\nvaluable information from distant nodes. Moreover, the collaborative training\nis proposed to improve the model's generalization ability with a theoretical\nguarantee. Extensive experiments on various graphs well validate the\neffectiveness of our proposed CoBFormer.",
"authors": "Yujie Xing, Xiao Wang, Yibo Li, Hai Huang, Chuan Shi",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Graph AND Structure AND Learning",
"gt": "Less is More: on the Over-Globalizing Problem in Graph Transformers",
"main_content": "Introduction Graph-structured data, an essential and prevalent form in the real world, plays a vital role in modeling object interactions, such as social networks, transportation networks, and protein-protein interaction networks. Graph Neural Net1School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China 2School of Software, Beihang University, Beijing, China. Correspondence to: Xiao Wang <xiao [email protected]>, Chuan Shi <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). works (GNNs) (Kipf & Welling, 2017; Veli\u02c7 ckovi\u00b4 c et al., 2018; Hamilton et al., 2017), as representative graph machine learning methods, effectively utilize their messagepassing mechanism to extract useful information and learn high-quality representations from graph data. However, GNNs face challenges with layer stacking due to oversmoothing (Li et al., 2018; Nt & Maehara, 2019; Oono & Suzuki, 2020) and over-squashing (Topping et al., 2021; Deac et al., 2022) problems, which limit their receptive fields to near neighbors. In contrast, Transformers (Vaswani et al., 2017), with their global attention mechanism, have shown exceptional expressive capability, which makes significant strides in various fields, including natural language processing (Devlin et al., 2018) and computer vision (Dosovitskiy et al., 2021). Incorporating Transformers into graph data presents an excellent solution to these challenges since they naturally construct a fully connected graph and adaptively learn interaction relationships with the powerful global attention mechanism. In graph-level tasks like molecular property prediction, numerous Graph Transformers leveraging global attention have achieved remarkable success (Ying et al., 2021; Kreuzer et al., 2021; Wu et al., 2021; Ramp\u00b4 a\u02c7 sek et al., 2022). This success is largely attributed to their global perception capability. Inspired by the successful applications in graphlevel tasks, researchers have attempted to solve the scalability challenge posed by the O(N 2) complexity of the global attention mechanism and make efforts to adapt this mechanism for node-level task, aiming at expanding the receptive field and enhancing the model\u2019s expressive ability (Kuang et al., 2022; Zhang et al., 2022; Zhu et al., 2023; Liu et al., 2023a; Wu et al., 2022; Kong et al., 2023; Wu et al., 2023). Although the global attention module has been recognized as the fundamental unit of Graph Transformer, the following question remains largely unknown: Does the globalizing property always benefit Graph Transformers? Understanding the attention mechanism in Graph Transformers, particularly its globalizing property, can provide valuable guidelines and insights for the development of advanced Graph Transformers. In this study, we reveal the over-globalizing problem in Graph Transformers by presenting both empirical evidence and theoretical analysis. 1 arXiv:2405.01102v1 [cs.LG] 2 May 2024 \fLess is More: on the Over-Globalizing Problem in Graph Transformers In particular, we empirically find that there is an inconsistency between the distribution of learned attention scores across all node pairs and the distribution of nodes that are actually informative, i.e., the global attention mechanism tends to focus on higher-order nodes, while the useful information often appears in lower-order nodes. 
Although higher-order nodes may provide additional information, the current attention mechanism overly focuses on those nodes. Theoretically, we demonstrate that an excessively expanded receptive field can diminish the effectiveness of the global attention mechanism, further implying the existence of the over-globalizing problem. Once this weakness of the global attention mechanism in Graph Transformers is identified, another question naturally emerges: how can we improve the current global attention mechanism to prevent the over-globalizing problem in Graph Transformers, while still keeping the ability to extract valuable information from high-order nodes? Usually, one can alleviate this problem by implicitly or explicitly integrating a local module (e.g., GNNs) to complement Graph Transformers (Zhao et al., 2021; Zhang et al., 2022; Kuang et al., 2022; Kong et al., 2023; Liu et al., 2023a; Wu et al., 2023). However, the differing properties of local smoothing in GNNs and over-globalizing in Graph Transformers raise a fundamental question about which information will predominantly influence the node representation. Moreover, the prevalent approach of fusing local and global information through linear combination is inadequate and can potentially lead to incorrect predictions, even in situations where using either local or global information alone could have achieved accurate predictions. In this paper, we propose a novel Bi-Level Global Graph Transformer with Collaborative Training (CoBFormer). Specifically, we first partition the graph into distinct clusters with the METIS algorithm. Subsequently, we propose the bi-level global attention (BGA) module, which consists of an intra-cluster Transformer and an inter-cluster Transformer. This module effectively mitigates the over-globalizing problem while keeping a global receptive ability by decoupling the information within intra-clusters and between inter-clusters. To capture the graph structure information neglected by the BGA module, a Graph Convolutional Network (GCN) is adopted as the local module. Finally, we propose collaborative training to integrate the information learned by the GCN and BGA modules and boost their performance. We summarize our contributions as follows: • We demonstrate a crucial phenomenon: Graph Transformers typically suffer from the over-globalizing problem of the attention mechanism in node classification. Both theoretical analysis and empirical evidence are provided to show that this problem fundamentally affects Graph Transformers. Our discoveries provide a perspective that offers valuable insights into the improvement of Graph Transformers. • We propose CoBFormer, a Bi-Level Global Graph Transformer with Collaborative Training, which effectively addresses the over-globalizing problem. Theoretical analysis implies that our proposed collaborative training will improve the model's generalization ability. • Extensive experiments demonstrate that CoBFormer outperforms state-of-the-art Graph Transformers and effectively solves the over-globalizing problem.

2. Preliminaries. We denote a graph as G = (V, E), where the node set V contains N nodes and the edge set E contains E edges. All edges formulate an adjacency matrix A = [a_uv] ∈ {0, 1}^{N×N}, where a_uv = 1 if there exists an edge from node u to v, and 0 otherwise. Graph G is often associated with a node feature matrix X = [x_u] ∈ R^{N×d}, where x_u is a d-dimensional feature vector of node u. The label set is denoted as Y.
Labels of nodes are represented with a label matrix Y = [y_u] ∈ R^{N×|Y|}, where y_u is the one-hot label of node u. We use bold uppercase letters to represent matrices and bold lowercase letters to represent vectors.

Graph Transformers. Graph Transformers allow each node in a graph to attend to any other node via the powerful global attention mechanism:

Attn(H) = Softmax(QK^T / √h) V,   Q = HW_Q, K = HW_K, V = HW_V,   (1)

where H ∈ R^{N×h} denotes the hidden representation matrix and h is the hidden representation dimension. W_Q, W_K, W_V ∈ R^{h×h} are trainable weights of linear projection layers. The attention score matrix is Â = Softmax(QK^T / √h) ∈ R^{N×N}, containing the attention scores of all node pairs; α_uv, the element of Â, represents the attention score between nodes u and v. It can be seen that Graph Transformers globally update the node representations by multiplying the attention score matrix Â with the node representation matrix V.

3. Over-Globalizing Problem. Empirical observations. In this section, we closely examine the distribution of the attention scores α_uv to investigate what information the attention mechanism captures. Specifically, we define the proportion of the k-th hop neighbors sharing the same label with node u as follows:

C_u^k = |{v ∈ N^k(u) : y_u = y_v}| / |N^k(u)|,   (2)

where N^k(u) denotes the k-th hop neighbors of node u. A larger C_u^k indicates a higher proportion of useful nodes in the k-th hop neighbors. We then denote the average attention score allocated to the k-th hop neighbors as Attn-k, formulated as:

Attn-k = E_{u∈V} Σ_{v∈N^k(u)} α_uv.   (3)

A larger Attn-k implies that the model pays more attention to the k-th hop information.

[Figure 1. (a) The average C_u^k with different k-hop on five real-world datasets (Cora, CiteSeer, PubMed, Actor, Deezer). (b) The Attn-k of Vanilla Transformer. (c) The Attn-k of NodeFormer.]

We present the changes of the average C_u^k across three homophilic graphs (Cora, CiteSeer and PubMed) and two heterophilic graphs (Actor and Deezer) in Figure 1(a). We can observe that: (1) for homophilic graphs, C_u^k gradually decreases as k increases; (2) for heterophilic graphs, C_u^k rapidly decreases at k = 1 and then remains nearly unchanged. This demonstrates that homophilic graphs benefit more from the local structure for node classification, while heterophilic graphs gain more information from the global receptive field. We then visualize the Attn-k of Vanilla Transformer (VT) and NodeFormer (Wu et al., 2022) to check whether the trend of Attn-k is consistent with Figure 1(a). As can be seen in Figures 1(b) and 1(c), surprisingly, we find that the majority of the attention scores are actually allocated to distant higher-order neighbors, regardless of whether the graphs are homophilic or heterophilic. We identify this phenomenon as the over-globalizing problem in Graph Transformers, underscoring the limitations of relying solely on a global attention module.
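For concreteness, the following sketch computes the Attn-k diagnostic (Eq. 3) for a dense attention matrix; it is our illustration, not the authors' code, and derives hop sets from unweighted shortest-path distances.

```python
# Sketch of the Attn-k diagnostic (Eq. 3) for a dense attention matrix.
import numpy as np
from scipy.sparse.csgraph import shortest_path

def attn_k(attn: np.ndarray, adj: np.ndarray, k: int) -> float:
    hops = shortest_path(adj, unweighted=True)   # hops[u, v] = distance(u, v)
    mask = (hops == k)                           # k-th hop neighbor indicator
    return float((attn * mask).sum(axis=1).mean())

# 3-node path graph with uniform attention: Attn-0, Attn-1, Attn-2.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
attn = np.full((3, 3), 1 / 3)
print([round(attn_k(attn, A, k), 2) for k in range(3)])  # [0.33, 0.44, 0.22]
```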
Theoretical analysis. Here we further theoretically explore the impact of the over-globalizing problem in Graph Transformers. Ideally, Graph Transformers would allocate higher attention scores to nodes with similar embeddings, thereby implicitly learning a graph structure that ensures the smoothness of embeddings among adjacent nodes. Consequently, ‖Z − ÂZ‖_F would be relatively small (Shuman et al., 2013; Kalofolias, 2016), where Z symbolizes the node embeddings. We therefore employ ‖Z − ÂZ‖_F to evaluate the smoothness of the embeddings among adjacent nodes learned by Graph Transformers. A smaller ‖Z − ÂZ‖_F indicates better smoothness, suggesting that Graph Transformers can effectively recognize useful nodes and aggregate information from them, achieving better node classification performance. We then investigate the factors influencing ‖Z − ÂZ‖_F. Before that, we denote by C_u the proportion of nodes belonging to the same class in the reachable set of node u. If the reachable set of node u is the K-hop neighborhood, then C_u can be formulated as follows:

C_u = (Σ_{k=0}^{K} C_u^k |N^k(u)|) / (Σ_{k=0}^{K} |N^k(u)|).   (4)

Now we can establish the connection between ‖Z − ÂZ‖_F, α_uv and C_u as follows:

Theorem 3.1. For a given node u and a well-trained Graph Transformer, let η_u = E_{v∈V, y_u=y_v} exp(q_u k_v^T / √d) and γ_u = E_{v∈V, y_u≠y_v} exp(q_u k_v^T / √d). Then, we have:

‖Z − ÂZ‖_F ≤ √2 L Σ_{u∈V} Σ_{v∈V, y_u≠y_v} α_uv = √2 L Σ_{u∈V} 1 / (1 + (C_u / (1 − C_u)) · (η_u / γ_u)),   (5)

where L is a Lipschitz constant. The proofs are given in Appendix A.1. Theorem 3.1 indicates that ‖Z − ÂZ‖_F is bounded by the sum of the attention scores of node pairs with different labels and is negatively correlated with C_u, since η_u and γ_u are constants for a given Graph Transformer. We then further study the variations of C_u^k in Theorem 3.2.

Theorem 3.2. To analyze the impact of k on C_u^k, we assume that each node has an equal probability 1/|Y| of belonging to any given class. Given the edge homophily ρ = |{(u, v) ∈ E : y_u = y_v}| / |E|, C_u^k can be recursively defined as:

C_u^k = 1 if k = 0;   C_u^k = ρ if k = 1;   C_u^k = (1 + |Y| ρ C_u^{k−1} − ρ − C_u^{k−1}) / (|Y| − 1) if k = 2, 3, ...   (6)

And C_u^k possesses the following properties:

C_u^∞ = 1/|Y|;   C_u^k ≥ C_u^{k+1} (k = 0, 1, ...) if ρ ≥ 1/|Y|;   C_u^{2k} > C_u^{2(k+1)} and C_u^{2k+1} < C_u^{2(k+1)+1} (k = 0, 1, ...) if ρ < 1/|Y|.   (7)

We provide the proof in Appendix A.2. Theorem 3.2 indicates that in homophilic graphs, where ρ is relatively large, C_u^k will gradually decrease and converge to 1/|Y| as k increases. However, in heterophilic graphs, where ρ is relatively small, it will fluctuate around 1/|Y| and eventually converge to 1/|Y|. Combining this with Theorem 3.1, we find that in homophilic graphs, as the receptive field expands, the gradually decreasing C_u^k leads to a reduced C_u and a larger ‖Z − ÂZ‖_F, implying that an over-expanded receptive field adversely affects the global attention. Conversely, in heterophilic graphs, global attention brings in additional information that cannot be captured within the local neighborhood.
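The recursion in Eq. (6) is easy to reproduce; the sketch below (our illustration) generates the behaviour summarized in Eq. (7) for a two-class setting.

```python
# C_u^k recursion from Theorem 3.2 (Eq. 6): rho is the edge homophily,
# n_classes is |Y|. Applying the general step from C^0 = 1 also yields
# C^1 = rho, so a single loop suffices.
def c_k(rho: float, n_classes: int, k: int) -> float:
    c = 1.0
    for _ in range(k):
        c = (1 + n_classes * rho * c - rho - c) / (n_classes - 1)
    return c

print([round(c_k(0.9, 2, k), 3) for k in range(5)])
# [1.0, 0.9, 0.82, 0.756, 0.705]  -> monotone decrease toward 1/|Y| = 0.5
print([round(c_k(0.1, 2, k), 3) for k in range(5)])
# [1.0, 0.1, 0.82, 0.244, 0.705]  -> oscillates around 0.5
```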
Based on Theorem 3.2, we visualize the theoretical variations of C_u^k in Figure 2. Compared with Figure 1(a), our theory aligns well with real-world scenarios. More visualizations of theoretical scenarios are provided in Appendix D.2.

[Figure 2. The variations of C_u^k under various ρ settings for a scenario involving only two classes with uniformly distributed labels.]

Experimental analysis. Inspired by Theorem 3.1, we define the Attention Signal/Noise Ratio (Attn-SNR) as a metric to quantify the ability of Graph Transformers to distinguish useful nodes:

Definition 3.3. The Attention Signal/Noise Ratio (Attn-SNR) is

$\text{Attn-SNR} = 10 \lg\left(\frac{\sum_{y_u = y_v} \alpha_{uv}}{\sum_{y_u \ne y_v} \alpha_{uv}}\right).$  (8)

For a given Graph Transformer, a smaller Attn-SNR usually implies that the attention mechanism pays more attention to nodes with different labels, which may be caused by the over-globalizing problem. We evaluate the Vanilla Transformer and NodeFormer with Attn-SNR and accuracy on Cora and CiteSeer. Furthermore, we deliberately improve the Attn-SNR of the Vanilla Transformer by doubling the attention scores between nodes sharing the same label, and report its performance. The results, presented in Table 1, indicate that: (1) the Vanilla Transformer (VT) typically shows the lowest Attn-SNR, resulting in the poorest performance, while NodeFormer (NF) exhibits a higher Attn-SNR and achieves superior performance; (2) remarkably, the Denoised Vanilla Transformer (VT-D), artificially directed to achieve a higher Attn-SNR, performs better than the Vanilla Transformer. This is because the over-globalizing problem can be alleviated by doubling the attention scores between nodes with the same label, which are more likely to appear in the local neighborhood, thereby enhancing the model's classification capability.

Table 1. The Attn-SNR and testing accuracy of different models.

| Dataset  | Metric   | VT    | NF    | VT-D  |
|----------|----------|-------|-------|-------|
| Cora     | Attn-SNR | -6.97 | 0.43  | 12.05 |
| Cora     | Accuracy | 55.18 | 80.20 | 82.12 |
| CiteSeer | Attn-SNR | -7.19 | -5.09 | 8.72  |
| CiteSeer | Accuracy | 50.72 | 71.50 | 61.80 |
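Definition 3.3 translates directly into code; a minimal sketch of ours (with a hypothetical dense attention matrix `attn` and label vector `labels`):

```python
import numpy as np

def attn_snr(attn, labels):
    """Eq. (8): 10*log10 of attention mass on same-label pairs vs. different-label pairs."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return 10.0 * np.log10(attn[same].sum() / attn[~same].sum())
```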
4. The Proposed Method

In this section, we introduce the Bi-Level Global Graph Transformer with Collaborative Training (CoBFormer); an overview is shown in Figure 3. Specifically, we first use the METIS algorithm (Karypis & Kumar, 1998) to partition the graph into different clusters. We then propose a novel bi-level global attention (BGA) module, which decouples the information within and between clusters through an intra-cluster Transformer and an inter-cluster Transformer. Additionally, we incorporate a GCN as the local module to learn the graph structural information. Finally, we propose collaborative training to integrate the information obtained by the GCN and BGA modules, and theoretically prove that it improves the generalization ability.

[Figure 3. The overall framework of our proposed CoBFormer.]

4.1. Bi-Level Global Attention Module

Traditional Graph Transformers utilize the global attention mechanism to capture information between any node pairs, causing the over-globalizing problem. We therefore need to guarantee that local information can still be captured, so as to alleviate the problem. To achieve this goal, we first partition the graph into P non-overlapping clusters using METIS (Karypis & Kumar, 1998). We denote the set of clusters as P = {G_i}, where G_i = {V_i, E_i} is a subgraph of G, satisfying ∪G_i = G and ∩G_i = ∅. Since local information mostly resides within each cluster, we employ an intra-cluster Transformer. The node features in cluster p are represented as X_p ∈ R^{(N/P)×d}. We apply an MLP to project the original node features into a latent space, H^1_p = MLP(X_p). The hidden representations H^k_p ∈ R^{(N/P)×h} are then fed into the intra-cluster Transformer to obtain the updated hidden representations Ĥ^k_p:

$\hat{\mathbf{H}}_p^k = \mathrm{FFN}\left(\mathrm{Softmax}\left(\frac{\mathbf{Q}_p\mathbf{K}_p^\top}{\sqrt{h}}\right)\mathbf{V}_p\right), \quad \mathbf{Q}_p = \mathbf{H}_p^k\mathbf{W}_Q^k, \; \mathbf{K}_p = \mathbf{H}_p^k\mathbf{W}_K^k, \; \mathbf{V}_p = \mathbf{H}_p^k\mathbf{W}_V^k,$  (9)

where W^k_Q, W^k_K and W^k_V ∈ R^{h×h} are trainable weights of the linear projection layers in the k-th intra-cluster Transformer, and FFN denotes a feed-forward neural network. A residual connection (He et al., 2016) is applied in each attention block and FFN block. Subsequently, we apply mean pooling to Ĥ^k_p to obtain the cluster representations P^k ∈ R^{P×h}; the p-th row of P^k is p^k_p = MEAN(Ĥ^k_p) ∈ R^{1×h}. The cluster representations P^k are then fed into the inter-cluster Transformer:

$\hat{\mathbf{P}}^k = \mathrm{FFN}\left(\mathrm{Softmax}\left(\frac{\mathbf{Q}'\mathbf{K}'^\top}{\sqrt{h}}\right)\mathbf{V}'\right), \quad \mathbf{Q}' = \mathbf{P}^k\mathbf{W}_{Q'}^k, \; \mathbf{K}' = \mathbf{P}^k\mathbf{W}_{K'}^k, \; \mathbf{V}' = \mathbf{P}^k\mathbf{W}_{V'}^k,$  (10)

where W^k_{Q'}, W^k_{K'} and W^k_{V'} ∈ R^{h×h} are trainable weights of the linear projection layers in the k-th inter-cluster Transformer. Although the inter-cluster Transformer learns attention between different clusters, it can approximate the global attention mechanism in Equation (1) and effectively learn global information, as shown in Proposition 4.1.

Proposition 4.1. Given u ∈ V_p, v ∈ V_q, along with a well-trained inter-cluster attention score matrix Ȧ ∈ R^{P×P}, let α̇_pq denote the attention score between clusters p and q. Then the approximate attention score between nodes u and v can be expressed as α̂_uv = α̇_pq / |V_q|.

The proof can be found in Appendix A.4. The attention score α_uv of the global attention mechanism can thus be approximately represented as α̇_pq / |V_q|, which is computable by our inter-cluster Transformer. Now, with both Ĥ^k_p and P̂^k, we concatenate each node representation with its corresponding cluster representation and compute the output node representations H^{k+1}_p with a fusion linear layer parameterized by W_f:

$\mathbf{H}_p^{k+1} = \left(\hat{\mathbf{H}}_p^k \,\|\, \mathbf{1}_{N/P}\,\hat{\mathbf{p}}_p^\top\right)\mathbf{W}_f,$  (11)

where ‖ denotes the concatenation operation, 1_{N/P} is an all-one column vector of dimension N/P, and p̂_p is the hidden representation of cluster p. By decoupling intra-cluster and inter-cluster information, our BGA module alleviates the over-globalizing problem while preserving the expressive ability of global attention.
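A condensed PyTorch sketch may help fix ideas; it is our own reading of Eqs. (9)–(11) under simplifying assumptions (single attention head, one layer, equal-size clusters; `part` is a precomputed METIS partition given as a [P, N/P] index tensor), not the paper's implementation:

```python
import torch
import torch.nn as nn

class BGALayer(nn.Module):
    """Sketch of one bi-level global attention layer: intra-cluster, then inter-cluster."""
    def __init__(self, h: int):
        super().__init__()
        self.intra = nn.MultiheadAttention(h, num_heads=1, batch_first=True)
        self.inter = nn.MultiheadAttention(h, num_heads=1, batch_first=True)
        self.ffn_intra = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, h))
        self.ffn_inter = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, h))
        self.fuse = nn.Linear(2 * h, h)  # W_f in Eq. (11)

    def forward(self, H: torch.Tensor, part: torch.Tensor) -> torch.Tensor:
        # H: [N, h] node states; part: [P, N//P] node indices per cluster.
        Hp = H[part]                                   # [P, N/P, h] per-cluster batches
        Z, _ = self.intra(Hp, Hp, Hp)                  # Eq. (9): intra-cluster attention
        Z = Hp + Z                                     # residual connection
        Z = Z + self.ffn_intra(Z)
        Pk = Z.mean(dim=1).unsqueeze(0)                # mean-pool cluster tokens: [1, P, h]
        Ck, _ = self.inter(Pk, Pk, Pk)                 # Eq. (10): inter-cluster attention
        Ck = (Pk + Ck + self.ffn_inter(Pk + Ck)).squeeze(0)   # [P, h]
        Ck = Ck.unsqueeze(1).expand_as(Z)              # broadcast each cluster state to its nodes
        out = self.fuse(torch.cat([Z, Ck], dim=-1))    # Eq. (11): fuse node + cluster views
        H_new = torch.empty_like(H)
        H_new[part.reshape(-1)] = out.reshape(-1, out.size(-1))
        return H_new
```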
Concurrently, our method achieves significant computational and memory efficiency by restricting attention to the intra-cluster and inter-cluster levels. The time and space complexity of our BGA module is O(N²/P + P²), reaching O(N^{4/3}) under the optimal condition. Note that the efficiency of our BGA module can be further improved by linear attention techniques (Choromanski et al., 2020; Wang et al., 2020; Wu et al., 2023).

4.2. Collaborative Training

With the proposed BGA module capturing intra-cluster and inter-cluster information, we further introduce a GCN as the local module to supplement the graph structure information ignored by the BGA module. Instead of directly employing a linear combination of information from the local module and the global attention module for node classification, we propose a collaborative training approach for the GCN and BGA modules. We denote the labeled node set as V_L and the unlabeled node set as V_U, with L and U representing their respective label matrices. In previous works, a model is trained by predicting the label distribution of the labeled nodes with a cross-entropy loss; however, this does not guarantee satisfactory performance on unlabeled nodes. Here, we employ two linear layers, Lin-G and Lin-T, to map the outputs of the GCN and BGA modules onto the label space:

$\hat{\mathbf{Z}}^G = \text{Lin-G}(\mathrm{GCN}(\mathbf{A}, \mathbf{X})), \quad \hat{\mathbf{Z}}^T = \text{Lin-T}(\mathrm{BGA}(\mathbf{X}, \mathcal{P})).$  (12)

Then we use the Softmax function to calculate the predicted labels and the soft labels (Hinton et al., 2015):

$\hat{\mathbf{Y}}^G = \mathrm{Softmax}(\hat{\mathbf{Z}}^G), \quad \hat{\mathbf{Y}}^T = \mathrm{Softmax}(\hat{\mathbf{Z}}^T), \quad \hat{\mathbf{S}}^G = \mathrm{Softmax}(\hat{\mathbf{Z}}^G \cdot \tau), \quad \hat{\mathbf{S}}^T = \mathrm{Softmax}(\hat{\mathbf{Z}}^T \cdot \tau),$  (13)

where τ is a temperature coefficient used to control the smoothness of the soft labels. The objective function is

$\mathcal{L}_{ce} = -\left(\mathbb{E}_{y_u, u \in V_L} \log \hat{y}_u^G + \mathbb{E}_{y_u, u \in V_L} \log \hat{y}_u^T\right), \quad \mathcal{L}_{co} = -\left(\mathbb{E}_{\hat{s}_u^G, u \in V_U} \log \hat{s}_u^T + \mathbb{E}_{\hat{s}_u^T, u \in V_U} \log \hat{s}_u^G\right), \quad \mathcal{L} = \alpha\,\mathcal{L}_{ce} + (1 - \alpha)\,\mathcal{L}_{co},$  (14)

where y_u is the true label of node u; ŷ^G_u and ŷ^T_u are the labels of node u predicted by the GCN and BGA modules, respectively; and ŝ^G_u and ŝ^T_u denote the soft labels generated by each. L_ce is the cross-entropy loss, a standard choice for classification tasks, while L_co is designed to encourage mutual supervision between the GCN and BGA modules. The parameter α balances the contributions of L_ce and L_co.
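The objective in Eqs. (12)–(14) is a few lines of PyTorch; the sketch below is our paraphrase (logits `z_g` and `z_t` from the GCN and BGA heads; detaching the soft labels is our own design choice, not stated in the paper):

```python
import torch.nn.functional as F

def cobformer_loss(z_g, z_t, y, labeled, unlabeled, alpha, tau):
    """Sketch of Eq. (14): supervised CE on labeled nodes + mutual soft-label supervision."""
    log_sg = F.log_softmax(z_g * tau, dim=-1)   # Eq. (13)-style tempered soft labels
    log_st = F.log_softmax(z_t * tau, dim=-1)
    s_g, s_t = log_sg.exp().detach(), log_st.exp().detach()  # each branch teaches the other
    l_ce = F.cross_entropy(z_g[labeled], y[labeled]) + \
           F.cross_entropy(z_t[labeled], y[labeled])
    l_co = -(s_g[unlabeled] * log_st[unlabeled]).sum(-1).mean() \
           -(s_t[unlabeled] * log_sg[unlabeled]).sum(-1).mean()
    return alpha * l_ce + (1 - alpha) * l_co
```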
Now we prove that our proposed collaborative training improves the generalization ability of the GCN module and the BGA module, thereby achieving better classification performance.

Theorem 4.2. Consider P(L, U) as the true label distribution, P_G(L, U) as the label distribution predicted by the GCN, and P_T(L, U) as the label distribution predicted by the BGA module. The following relations hold:

$\mathbb{E}_{P(L,U)} \log P_G(L,U) = \mathbb{E}_{P(L)} \log P_G(L) + \mathbb{E}_{P_T(U|L)} \log P_G(U|L) - \mathrm{KL}\big(P_T(U|L)\,\|\,P(U|L)\big),$
$\mathbb{E}_{P(L,U)} \log P_T(L,U) = \mathbb{E}_{P(L)} \log P_T(L) + \mathbb{E}_{P_G(U|L)} \log P_T(U|L) - \mathrm{KL}\big(P_G(U|L)\,\|\,P(U|L)\big),$  (15)

where KL(·‖·) is the Kullback–Leibler divergence. The proof is given in Appendix A.3.

E_{P(L,U)} log P_G(L,U) is the cross entropy between P(L,U) and P_G(L,U). We aim to maximize it so that our model achieves the best performance on both labeled and unlabeled nodes. However, it cannot be maximized directly, since the label distribution of the unlabeled nodes is unknown. Theorem 4.2 shows that E_{P(L,U)} log P_G(L,U) decomposes into three terms. The first is the cross entropy between P(L) and P_G(L), which can be maximized by optimizing L_ce; it ensures good performance on the labeled nodes. The second is the cross entropy between P_T(U|L) and P_G(U|L), which can be maximized by optimizing L_co; it indicates that we can further improve the performance of the GCN module on unlabeled nodes by collaboratively training it with the BGA module. The third term is the Kullback–Leibler divergence between P_T(U|L) and P(U|L), which is a constant when optimizing the GCN module. Therefore, the performance of the GCN module can be improved by the loss in Equation (14); the performance of the BGA module can be improved analogously.

5. Experiments

Datasets. We select seven datasets for evaluation: the homophilic graphs Cora, CiteSeer, PubMed (Yang et al., 2016), Ogbn-Arxiv and Ogbn-Products (Hu et al., 2020), and the heterophilic graphs Actor and Deezer (Lim et al., 2021b). For Cora, CiteSeer and PubMed, we adopt the public split offered by PyG (Fey & Lenssen, 2019). For Ogbn-Arxiv and Ogbn-Products, we use the public splits in OGB (Hu et al., 2020). For Actor and Deezer, we perform five random splits of the nodes into train/valid/test sets with the ratio 50%:25%:25% (Lim et al., 2021b). Detailed statistics of the datasets can be found in Appendix B.

Baselines. We compare our method with five baselines, including two classic GNNs, GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018), and three state-of-the-art Graph Transformers: NodeFormer (Wu et al., 2022),
NAGphormer (Chen et al., 2022), and SGFormer (Wu et al., 2023). Note that in our proposed CoBFormer, the GCN module (CoB-G) and the BGA module (CoB-T) each predict node labels independently, so we report their performance separately. Experimental implementation details are given in Appendix C.

Node Classification Results. Table 2 reports the experimental results on node classification. We select Micro-F1 and Macro-F1 as metrics for a comprehensive performance comparison.

Table 2. Quantitative results (% ± σ) on node classification.

| Dataset  | Metric | GCN          | GAT          | NodeFormer   | NAGphormer   | SGFormer     | CoB-G        | CoB-T        |
|----------|--------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| Cora     | Mi-F1  | 81.44 ± 0.78 | 81.88 ± 0.99 | 80.30 ± 0.66 | 79.62 ± 0.25 | 81.48 ± 0.94 | 84.96 ± 0.34 | 85.28 ± 0.16 |
| Cora     | Ma-F1  | 80.65 ± 0.91 | 80.56 ± 0.55 | 79.12 ± 0.66 | 78.78 ± 0.57 | 79.28 ± 0.49 | 83.52 ± 0.15 | 84.10 ± 0.28 |
| CiteSeer | Mi-F1  | 71.84 ± 0.22 | 72.26 ± 0.97 | 71.58 ± 1.74 | 67.46 ± 1.33 | 71.96 ± 0.13 | 74.68 ± 0.33 | 74.52 ± 0.48 |
| CiteSeer | Ma-F1  | 68.69 ± 0.38 | 65.67 ± 2.28 | 67.28 ± 1.87 | 64.47 ± 1.58 | 68.49 ± 0.65 | 69.73 ± 0.45 | 69.82 ± 0.55 |
| PubMed   | Mi-F1  | 79.26 ± 0.23 | 78.46 ± 0.22 | 78.96 ± 2.71 | 77.36 ± 0.96 | 78.04 ± 0.41 | 80.52 ± 0.25 | 81.42 ± 0.53 |
| PubMed   | Ma-F1  | 79.02 ± 0.19 | 77.82 ± 0.22 | 78.14 ± 2.51 | 76.76 ± 0.91 | 77.86 ± 0.32 | 80.02 ± 0.28 | 81.04 ± 0.49 |
| Actor    | Mi-F1  | 30.97 ± 1.21 | 30.63 ± 0.68 | 35.42 ± 1.37 | 34.83 ± 0.95 | 37.72 ± 1.00 | 31.05 ± 1.02 | 37.41 ± 0.36 |
| Actor    | Ma-F1  | 26.66 ± 0.82 | 20.73 ± 1.58 | 32.37 ± 1.38 | 32.20 ± 1.11 | 34.11 ± 2.78 | 27.01 ± 1.77 | 34.96 ± 0.68 |
| Deezer   | Mi-F1  | 63.10 ± 0.40 | 62.20 ± 0.41 | 63.59 ± 2.24 | 63.71 ± 0.58 | 66.68 ± 0.47 | 63.76 ± 0.62 | 66.96 ± 0.37 |
| Deezer   | Ma-F1  | 62.07 ± 0.31 | 60.99 ± 0.56 | 62.70 ± 2.20 | 62.06 ± 1.28 | 65.22 ± 0.68 | 62.32 ± 0.94 | 65.63 ± 0.36 |
| Arxiv    | Mi-F1  | 71.99 ± 0.14 | 70.22 ± 0.19 | 67.98 ± 0.60 | 71.38 ± 0.20 | 72.50 ± 0.28 | 73.17 ± 0.18 | 72.76 ± 0.11 |
| Arxiv    | Ma-F1  | 51.89 ± 0.19 | 45.98 ± 0.26 | 46.24 ± 0.20 | 51.38 ± 0.47 | 52.83 ± 0.31 | 52.31 ± 0.40 | 51.64 ± 0.09 |
| Products | Mi-F1  | 73.32 ± 0.26 | 75.04 ± 0.18 | 70.71 ± 0.27 | 76.41 ± 0.53 | 71.57 ± 0.37 | 78.09 ± 0.16 | 78.15 ± 0.07 |
| Products | Ma-F1  | 35.66 ± 0.42 | 33.85 ± 0.30 | 30.09 ± 0.02 | 37.48 ± 0.38 | 33.19 ± 0.40 | 38.21 ± 0.22 | 37.91 ± 0.44 |

We observe that: (1) Both the GCN and BGA modules of CoBFormer outperform all baselines on homophilic graphs by a substantial margin, demonstrating the effectiveness of CoBFormer. (2) On heterophilic graphs, the performance of our BGA module is comparable to, and even surpasses, the best baseline, SGFormer, indicating that our BGA module successfully captures global information. (3) Traditional Graph Transformers exhibit superior performance on heterophilic graphs compared with GCN and GAT, but their advantage on homophilic graphs is relatively limited. This suggests that local information plays a more crucial role in homophilic graphs, whereas global information significantly enhances model performance in heterophilic graphs. These results are consistent with our analysis in Section 3.

6. Ablation Studies & Analysis

Ablation Study. We carry out ablation studies on Cora, PubMed, and Deezer to evaluate the two fundamental components of CoBFormer, the BGA module and the collaborative training approach; the results are shown in Table 3.
Key observations include: (1) The accuracy of our BGA module consistently outperforms vanilla global attention on all datasets, irrespective of the use of collaborative training, demonstrating the effectiveness of the BGA module. (2) Collaborative training leads to significant accuracy improvements in both the GCN and BGA modules, indicating that it enhances the model's generalization ability by encouraging mutual learning. (3) The BGA module significantly reduces GPU memory, addressing scalability concerns: memory usage drops by 94% on PubMed and 80% on Deezer.

Table 3. Test accuracy and GPU memory of various CoBFormer variants. 'V-A' denotes the vanilla global attention, 'B-A' the BGA module, and 'C-T' whether collaborative training is applied.

| Dataset | V-A | B-A | C-T | CoB-G | CoB-T | MEM    |
|---------|-----|-----|-----|-------|-------|--------|
| Cora    | √   | ×   | ×   | 81.44 | 54.86 | 0.85G  |
| Cora    | √   | ×   | √   | 83.78 | 83.82 | 0.85G  |
| Cora    | ×   | √   | ×   | 81.44 | 68.72 | 0.38G  |
| Cora    | ×   | √   | √   | 84.96 | 85.28 | 0.38G  |
| PubMed  | √   | ×   | ×   | 79.26 | 71.22 | 8.42G  |
| PubMed  | √   | ×   | √   | 80.38 | 80.36 | 8.42G  |
| PubMed  | ×   | √   | ×   | 79.26 | 74.52 | 0.50G  |
| PubMed  | ×   | √   | √   | 80.52 | 81.42 | 0.50G  |
| Deezer  | √   | ×   | ×   | 62.07 | 66.49 | 20.23G |
| Deezer  | √   | ×   | √   | 63.67 | 66.86 | 20.23G |
| Deezer  | ×   | √   | ×   | 62.07 | 66.56 | 3.97G  |
| Deezer  | ×   | √   | √   | 63.76 | 66.96 | 3.97G  |

Over-Globalizing Problem. To demonstrate CoBFormer's ability to alleviate the over-globalizing problem, we visualize the Attn-k of CoBFormer in Figure 4. Compared with Figures 1(b) and 1(c), CoBFormer allocates more attention to the local region than the Vanilla Transformer and NodeFormer, indicating that our BGA module effectively alleviates the over-globalizing problem by decoupling the intra-cluster and inter-cluster information.

[Figure 4. The Attn-k of CoBFormer.]

We further calculate the Attn-SNR and test accuracy to show our model's ability to distinguish useful nodes and extract valuable information; the results are shown in Figure 5. CoB-T significantly improves the Attn-SNR and substantially boosts performance on Cora, CiteSeer, and PubMed, underscoring that CoBFormer effectively mitigates the over-globalizing problem. On Actor and Deezer, CoB-T achieves performance comparable to that of VT, implying that CoBFormer still effectively captures global information.

[Figure 5. The Attn-SNR and Accuracy of VT and our CoBFormer.]
[Figure 6. The average test accuracy of CoBFormer for different α.]

Parameter Study.
We analyze the key parameters: the collaborative learning strength coefficient α and the temperature coefficient τ. We vary α in {1.0, 0.9, 0.8, 0.7} and report the performance in Figure 6. Our model achieves notable performance improvements for all values except α = 1, and exhibits consistent performance across the various α values, underscoring the effectiveness and robustness of our collaborative training approach. We then fix the best α and report the performance for τ in {0.9, 0.7, 0.5, 0.3}. Figure 7 suggests that the choice of τ significantly impacts performance, emphasizing the importance of selecting an appropriate τ for optimal results. More parameter studies are in Appendix D.1.

[Figure 7. The average test accuracy of CoBFormer for different τ.]

7. Related Work

Graph Neural Networks. Representative GNNs, such as GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018), leverage a message-passing mechanism to recursively aggregate neighbor information. However, due to the over-smoothing (Li et al., 2018; Nt & Maehara, 2019; Oono & Suzuki, 2020) and over-squashing (Topping et al., 2021; Deac et al., 2022) problems, GNNs typically cannot stack multiple layers to capture information from distant nodes. Moreover, early designs of GNNs largely rely on the homophily assumption (McPherson et al., 2001) that nodes of the same type are more likely to be connected. Although some GNNs are designed for heterophilic graphs (Pei et al., 2020; Lim et al., 2021a; Chien et al., 2021; Bo et al., 2021), they still suffer from the same issues of over-smoothing and over-squashing, resulting in a limited receptive field.

Graph Transformers. Transformers (Vaswani et al., 2017), benefiting from their global attention, naturally construct a fully connected graph with learnable edge weights, offering a novel approach to address the issues of over-smoothing and over-squashing in GNNs. Extensive work has achieved remarkable success on graph-level tasks (Ying et al., 2021; Kreuzer et al., 2021; Wu et al., 2021; Rampášek et al., 2022), a success primarily attributed to the global perception capability that is vital for graph-level tasks. Influenced by this success, researchers are now exploring the integration of the global attention mechanism into node-level tasks (Kuang et al., 2022; Zhang et al., 2022; Zhu et al., 2023; Liu et al., 2023a; Wu et al., 2022; Kong et al., 2023; Wu et al., 2023). These approaches aim to broaden the receptive field on large-scale graphs and amplify the expressive potential of the models.

8. Conclusion

In this paper, we identify the over-globalizing problem in Graph Transformers through theoretical insights and empirical results.
We then propose CoBFormer, a bi-level global graph transformer with collaborative training, which alleviates the over-globalizing problem and improves the generalization ability. Extensive experiments verify the effectiveness of CoBFormer.

Potential Impacts. This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
}
title_10K/test_title_short_2405.01103v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.01103v1",
"title": "LLM Security Guard for Code",
"abstract": "Many developers rely on Large Language Models (LLMs) to facilitate software\ndevelopment. Nevertheless, these models have exhibited limited capabilities in\nthe security domain. We introduce LLMSecGuard, an open-source framework that\noffers enhanced code security through the synergy between static code analyzers\nand LLMs. LLMSecGuard aims to equip practitioners with code solutions that are\nmore secure than the code initially generated by LLMs. It also benchmarks LLMs,\nproviding valuable insights into the evolving security properties of these\nmodels.",
"authors": "Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, Mohammad Ghafari",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.SE",
"cats": [
"cs.SE",
"cs.CR"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "LLM Security Guard for Code",
"main_content": "INTRODUCTION The use of Large Language Models (LLMs) such as ChatGPT and Copilot has become popular for software development activities such as coding, design, comprehension, etc. [7, 17]. Nevertheless, hallucination, i.e., \u201cpresenting incorrect information as if it is correct\u201d, poses serious challenges for LLM users [18]. This issue is more prevalent in domains where reliable training content is scarce, such as in the realm of code security. Indeed, recent studies have shown that code models are widely adopted for code generation [12], but they have limited capabilities in the software security domain [2, 25]. Therefore, a vulnerable code piece that an LLM mistakenly recommends as a secure solution could compromise the entire system\u2019s security if it is used without enough scrutiny. We introduce LLMSecGuard, a framework designed to extensively examine the security properties of LLMs and leverage their analytical capabilities for secure code development. It applies static security analysis on LLM-generated code to uncover potential security issues and guides LLMs in resolving such issues in the code. EASE 2024, June 18\u201321, 2024, Salerno, Italy 2024. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00 https://doi.org/10.1145/xxx Moreover, LLMSecGuard assesses the security properties of LLMs and benchmark them across different CWEs. In summary, LLMSecGuard contributes to more secure software development. Unlike existing code assistants, developers can integrate unlimited LLMs and code analysis engines into this framework through REST APIs. LLMSecGuard is open-source and publicly available on GitHub.1 The remainder of this paper is structured as follows. In Section 2, we motivate this work. In Section 3, we introduce LLMSecGuard, and in Section 4, we explain its two use cases. In Section 5, we present related work. In Section 6, we outline our plans, and in Section 7, we conclude this paper. 2 MOTIVATION Security issues are pervasive in multiple domains [4], from mobile applications [9, 11] and web servers [10], to critical software systems [27]. There are program analysis tools designed to uncover security issues, but studies indicate their limitations [5, 6], as well as their lack of popularity among developers [14]. Unfortunately, the security landscape may not improve as we witness the popularity of language models for code generation [12]. It is concerning that developers place undue trust in these models, which are known to generate insecure code examples [2, 25]. In the following, we explain how we could iteratively ask ChatGPT 3.5 to generate secure code examples for symmetric encryption. Encryption plays a crucial role in our digital world, but its correct adoption has proven to be challenging for developers [15]. For instance, the analysis of 489 open-source Java projects showed that 85% of cryptography APIs are misused [16]. Importantly, existing tools that identify and fix such issues do not fulfill developers\u2019 expectations [1, 28]. Consider the StackOverflow post ID 3451670, which suffers from security issues such as \u201cECB encryption mode\u201d, \u201cstatic or constant key\u201d, \u201cstatic salt for key derivation\u201d, and \u201chardcoded password\u201d. When we presented this inquiry to ChatGPT, requesting a \u201csecure Java code example\u201d, the resulting response still contained identical security issues. 
We then explicitly instructed ChatGPT that "I expect a secure encryption algorithm, a secure encryption mode, and a secure Initialization Vector (IV). Do not use hard-coded secrets, suggest best practices, and make sure that your example provides confidentiality and integrity". Listing 1 presents the response.²

¹ https://github.com/aryakvnust/LLMSecGuard
² https://chat.openai.com/share/8ce9eb6e-88b7-44dc-807a-51d04e7d446c

1  public class SecureEncryptionExample {
2      public static void main(String[] args) throws Exception {
3          String username = "[email protected]";
4          String password = "Password1";
5          String secretID = "BlahBlahBlah";
6          String SALT2 = "deliciously salty";
7          // Generate a secure random IV (Initialization Vector)
8          SecureRandom secureRandom = new SecureRandom();
9          byte[] iv = new byte[12];
10         secureRandom.nextBytes(iv);
11         // Derive a key from the password using a secure method
12         byte[] key = deriveKey(SALT2, username, password);
13         // Create a SecretKey object using the derived key
14         SecretKey secretKey = new SecretKeySpec(key, "AES");
15         // Instantiate the cipher with AES in GCM mode
16         Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
17         ...
18     }
19     private static byte[] deriveKey(String salt, String username, String password) throws Exception {
20         // Concatenate salt, username, and password
21         String combined = salt + username + password;
22         // Use a secure hash function (SHA-256) to derive a key
23         MessageDigest md = MessageDigest.getInstance("SHA-256");
24         byte[] hash = md.digest(combined.getBytes());
25         return Arrays.copyOf(hash, 16);
26     }
27 }

Listing 1: ChatGPT's improved response to Post ID 3451670

ChatGPT identified the constant key, and it correctly suggested the secure "GCM" encryption mode as a best practice. However, violations such as "static salt for key generation" and "hardcoded password" remained in the response without any proper warning. The suggested code, at line 23, used the "SHA-256" hash function for key generation, which is not a best practice. When we asked about the security risk at line 6 (i.e., the static salt), ChatGPT responded: "Using a hard-coded string as a salt is generally not considered a good practice for security. Ideally, salt should be unique for each user and should be randomly generated for each password. Using a constant salt for all users could make the system more vulnerable to certain types of attacks, such as rainbow table attacks". It then provided the code example in Listing 2 as a replacement.

1 // Generate a secure random salt
2 SecureRandom secureRandom = new SecureRandom();
3 byte[] salt = new byte[16];
4 secureRandom.nextBytes(salt);
5 // Convert the salt to a Base64-encoded string for storage
6 String SALT2 = Base64.getEncoder().encodeToString(salt);

Listing 2: ChatGPT's suggestion to replace a static salt

Subsequently, we asked, "Are there any other security risks in your provided code?". It failed to detect the hardcoded password, but at this stage it correctly suggested a key derivation function (KDF) instead of the hash function for generating a password-based key. Upon a direct inquiry about the line where the hardcoded password existed, it offered a secure suggestion.
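For reference, the best practice the model eventually converged on (a random per-user salt plus a real key-derivation function instead of a bare hash) takes only a few lines; the following Python sketch is our own illustration, not part of the paper:

```python
import os
import hashlib

def derive_key(password: str, salt: bytes = None):
    """Derive a 16-byte AES key with PBKDF2-HMAC-SHA256 and a random, per-user salt."""
    if salt is None:
        salt = os.urandom(16)  # never a hard-coded constant
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000, dklen=16)
    return key, salt  # persist the salt next to the ciphertext, not in source code
```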
Through proper interactions with ChatGPT, specifically by listing potential security issues and their locations within the code, it is feasible to harness ChatGPT's power to generate secure code examples. However, novice developers may not possess the necessary expertise to provide such inputs. Hence, we introduce LLMSecGuard, a framework that links LLMs and static analysis tools to overcome this limitation.

3 LLMSECGUARD We introduce LLMSecGuard, an open-source framework that offers enhanced code security through the synergy between code security analyzers and LLMs. The primary objective of LLMSecGuard is (i) to enhance the security of LLM-generated code, and (ii) to benchmark the security properties of LLMs. It adopts a RESTful architecture, implemented in Python using Django and Flask, and uses SQLite for data persistence. LLMSecGuard is publicly available on GitHub.³

³ https://github.com/aryakvnust/LLMSecGuard

[Figure 1: LLMSecGuard's components]

Figure 1 illustrates the schematic architecture of LLMSecGuard, comprising three main components: "Prompt Agent", "Security Agent", and "Benchmark Agent".

Prompt Agent. This component is responsible for receiving a prompt and providing the other components with the code model's response. Upon receipt of a prompt, "Prompt Agent" undertakes the task of engineering a response. In particular, it can reformulate a prompt, pass the prompt to code models, collect the response, and forward the result to other components.

Security Agent. This component plays a pivotal role in uncovering security issues in LLM-generated code. Precisely, it is entrusted with passing the code to static code analysis engines (such as Semgrep and Weggli) and collecting potential security vulnerabilities.

Benchmark Agent. This component puts different LLMs to the security test. In particular, it evaluates the security properties of LLMs based on a set of challenges, structured in JSON format. Each challenge includes a prompt and the expected outcome, and "Benchmark Agent" is responsible for flagging LLMs that pass the challenge.

4 USAGE SCENARIOS We describe LLMSecGuard's main configurations, including its connection with external entities such as LLMs and code analyzers. We then explain the interplay between the different components through two main usage scenarios, namely "benchmarking" and "code generation".

4.1 Setup There are several key configurations for running LLMSecGuard that users can adjust according to their preferences. LLMSecGuard requires a minimum of one LLM and one code analysis engine to operate effectively. This can easily be set up by providing the API information (e.g., API endpoint and API key) associated with each entity. There is no limitation, and users can add as many entities as they wish. Currently, we have instantiated LLMSecGuard with ANYSCALE Llama2 [19], and the Weggli and Semgrep static code security analysis tools [22, 26]. Weggli is a robust and efficient semantic code analysis tool for C and C++ programs, and Semgrep is designed for programs written in languages such as C#, Java, JavaScript, Python, PHP, Ruby, Scala, etc.
LLMSecGuard relies on CyberSecEval, a benchmark specifically designed to evaluate the cybersecurity aspects of LLMs functioning as coding assistants [3]. It comprises exhaustive test cases to assess the potential of LLMs to produce insecure code and facilitate cyberattacks. Nonetheless, users have the flexibility to update this benchmark or replace it with one of their preference. LLMSecGuard includes a few execution parameters as well. The primary one is the "termination condition", designed to prevent the system from looping indefinitely; it determines how many iterations a code piece should undergo improvement and analysis if potential vulnerabilities persist. The "benchmark interval" parameter determines how often the benchmark process is executed, with options such as biweekly or monthly intervals. Given that LLMs evolve and their security properties may change, regular updates are necessary to ensure that results remain current. Lastly, users can choose the LLMs of interest for benchmarking, select specific code analysis engines, and specify the challenges that should be considered for measurement.

4.2 Security Benchmarking In this scenario, a set of existing prompts, referred to as benchmark prompts, undergoes evaluation. In particular, the "Benchmark Agent" sends each prompt in the benchmark to the "Prompt Agent". This component dispatches the prompt to every LLM considered for benchmarking, collects the responses for each prompt, and forwards them to the "Benchmark Agent". Each prompt presents a challenge, and "Benchmark Agent" determines the extent to which an LLM succeeds in resolving it by comparing the response to the expected outcome. For instance, in terms of vulnerable code, it measures the quantity and severity of potential CWEs. Each LLM is assigned a performance score for each challenge, and ultimately the LLMs are ranked. At present, LLMSecGuard relies on the CyberSecEval benchmark, which incorporates extensive prompts and regular expressions to uncover potential issues.

4.3 Secure Code Generation Figure 2 illustrates the workflow in this scenario. It begins with the user providing a prompt of interest, anticipating that the output will be a piece of code. The "Prompt Agent" assumes control and forwards the user's prompt to the top LLM determined in the benchmarking scenario (unless the user chooses a different LLM). Upon receiving the LLM's response, this component transfers the produced code to the "Security Agent" for security inspection. The "Security Agent" collaborates with the external analysis engines (configured earlier) to uncover potential vulnerabilities and the respective lines in the code. If the termination condition is met (i.e., there is no vulnerability or the maximum number of analysis iterations is reached), the code, along with its vulnerability information, is immediately reported back to the user. In cases where the termination condition is not met, this information is relayed to the "Prompt Agent", which formulates a new prompt based on the collected vulnerability information and queries the LLM once again.

[Figure 2: LLMSecGuard's secure code generation workflow]
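In outline, the workflow of Figure 2 reduces to a bounded refine-and-analyze loop. The Python sketch below is our illustration of that loop, not LLMSecGuard's actual code; `llm.complete` and `analyzer.scan` are hypothetical stand-ins for the configured REST endpoints:

```python
def generate_secure_code(prompt, llm, analyzer, max_iters=3):
    """Iteratively ask the LLM to fix vulnerabilities reported by the static analyzer."""
    code = llm.complete(prompt)
    findings = analyzer.scan(code)                  # e.g. Semgrep/Weggli CWE hits
    for _ in range(max_iters):                      # the "termination condition"
        if not findings:
            break
        issues = "\n".join(f"{f.cwe} at line {f.line}: {f.message}" for f in findings)
        code = llm.complete("The following code has security issues:\n"
                            f"{issues}\nRewrite it to fix them:\n{code}")
        findings = analyzer.scan(code)
    return code, findings                           # code plus any residual findings
```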
5 RELATED WORK The popularity of AI-generated code has attracted the attention of security researchers. Pearce et al. assessed Copilot's performance in suggesting code across 89 scenarios aligned with MITRE's "Top 25" CWEs, revealing that approximately 40% of the generated programs contained vulnerabilities [21]. Nonetheless, in a later study, Asare et al. came to a different conclusion [2]: Copilot did not consistently reproduce past vulnerabilities introduced by humans, and in about 25% of cases it even proposed fixes for these vulnerabilities, suggesting a potentially lower susceptibility to security risks compared to human programmers. Fu et al. analyzed 435 code snippets generated by Copilot in public GitHub projects and found that 35% of the snippets exhibit CWEs [8]. Mousavi et al. designed 48 programming tasks for five common security APIs in Java and evaluated the performance of ChatGPT in developing these tasks, uncovering a concerning level of security: specifically, an average misuse rate of 70% across these tasks [20]. Researchers have also developed benchmarks for assessing the security of LLM-generated code. Bhatt et al. developed CYBERSECEVAL, a benchmark tailored to evaluate the cybersecurity risks posed by LLMs [3]. Hajipour et al. examined code models for generating high-risk security weaknesses and built a collection of diverse non-secure prompts for various vulnerability scenarios, which can be used as a benchmark to compare security weaknesses in LLMs [13].

6 FUTURE WORK We plan to investigate whether LLMSecGuard will effectively support developers in writing secure code in real-world scenarios. Two groups of developers, both utilizing LLMs for coding, will be recruited for our study. We will assign the same programming tasks to each group, instructing one to utilize LLMs freely and the other to exclusively use LLMSecGuard during the coding process. We will measure the time taken to complete each task, as well as the number and severity of vulnerabilities, and we will compare the results across the two groups and by participants' experience. Pending positive evaluations, our ultimate goal is to integrate LLMSecGuard into at least one popular IDE (Integrated Development Environment), as deemed essential by developers for a seamless user experience [24]. Furthermore, this integration would allow LLMSecGuard to gather development context, thereby enabling the formulation of better prompts for code generation. Future work could also investigate code changes in each iteration between "Prompt Agent" and "Security Agent", as well as examine how the engineering of prompts and vulnerability information affects the performance of LLMs. Finally, it is noteworthy that although a code snippet may not inherently contain vulnerabilities, its integration into a program or execution within a specific environment could still pose security risks and therefore warrants investigation [23].

7 CONCLUSION We introduced LLMSecGuard, an open-source framework developed to equip developers with code solutions that are more secure than the code initially generated by Large Language Models (LLMs). These code suggestions are obtained through the integration of LLMs and static security code analyzers. LLMSecGuard also measures the security properties of LLMs and provides researchers and practitioners with a current security comparison of different LLMs in the wild."
}
title_10K/test_title_short_2405.01116v1.json
ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.01116v1",
"title": "\"In-Context Learning\" or: How I learned to stop worrying and love \"Applied Information Retrieval\"",
"abstract": "With the increasing ability of large language models (LLMs), in-context\nlearning (ICL) has evolved as a new paradigm for natural language processing\n(NLP), where instead of fine-tuning the parameters of an LLM specific to a\ndownstream task with labeled examples, a small number of such examples is\nappended to a prompt instruction for controlling the decoder's generation\nprocess. ICL, thus, is conceptually similar to a non-parametric approach, such\nas $k$-NN, where the prediction for each instance essentially depends on the\nlocal topology, i.e., on a localised set of similar instances and their labels\n(called few-shot examples). This suggests that a test instance in ICL is\nanalogous to a query in IR, and similar examples in ICL retrieved from a\ntraining set relate to a set of documents retrieved from a collection in IR.\nWhile standard unsupervised ranking models can be used to retrieve these\nfew-shot examples from a training set, the effectiveness of the examples can\npotentially be improved by re-defining the notion of relevance specific to its\nutility for the downstream task, i.e., considering an example to be relevant if\nincluding it in the prompt instruction leads to a correct prediction. With this\ntask-specific notion of relevance, it is possible to train a supervised ranking\nmodel (e.g., a bi-encoder or cross-encoder), which potentially learns to\noptimally select the few-shot examples. We believe that the recent advances in\nneural rankers can potentially find a use case for this task of optimally\nchoosing examples for more effective downstream ICL predictions.",
"authors": "Andrew Parry, Debasis Ganguly, Manish Chandra",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.IR",
"cats": [
"cs.IR"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "\"In-Context Learning\" or: How I learned to stop worrying and love \"Applied Information Retrieval\"",
"main_content": "Introduction In-context learning (ICL), unlike supervised learning, does not involve training a set of parameters \ud835\udf03on labeled examples. Rather, the posteriors are now a function of the following: a) text of the input test instance, b) the decoder parameters of a pre-trained large language model (LLM), c) a prompt instruction, and d) optionally, a set of \ud835\udc58input examples (commonly called \ud835\udc58-shot learning). Formally, \ud835\udc43(\ud835\udc66|x) = \ud835\udc53(x, P\ud835\udc58(x);\ud835\udf19LLM), (1) where, different from a supervised setup, the function \ud835\udc53does not have a parameterized representation that can be learned using a training set with gradient descent. The function itself depends on the pre-trained parameters \ud835\udf19LLM of an LLM, the current inputs for which a label is to be predicted, and a prompt comprising a set of \ud835\udc58 text units denoted by P\ud835\udc58(x). Since the decoder of an LLM generates a sequence of words of the form of \ud835\udc641, . . . ,\ud835\udc64\ud835\udc41(\ud835\udc41being the maximum length of a sequence), the class posterior likelihoods are computed in the following way. A set of classes (say for a \ud835\udc5d-way classification problem) is mapped to \ud835\udc5ddifferent equivalent sets of words, say \ud835\udc49(\ud835\udc66), where \ud835\udc66\u2208Z\ud835\udc5d \u2013 these sets commonly being called verbalisers [32]. For instance, for a binary classification problem (e.g., that of a movie review as shown in Figure 2), \ud835\udc5d= 2 (i.e., \ud835\udc66\u2208{0, 1}), and a reasonable way to define the verbaliser sets could be via the following words: \ud835\udc49(0) = {\u2018false\u2019, \u2018negative\u2019}, and \ud835\udc49(1) = {\u2018true\u2019, \u2018positive\u2019}. Note that the word \u2018learning\u2019 in ICL is a misnomer because there are no updates to the decoder parameters of an LLM. For more details on ICL, please refer to these excellent surveys [20, 47]. 2.2 The role of IR One of the most important components of ICL (as shown in Figure 2) is the search component that outputs a top-\ud835\udc58candidate set of similar instances from the training set, i.e., P\ud835\udc58(x) of Equation 1. Although, in principle, it is possible to include random examples from the training set in the prompt, it has been shown that localised examples (i.e., examples that are topically similar to the current instance) yield better performance [44, 47]. The reason why this works can be traced to the fundamental principle of reproducing kernel Hilbert spaces (RKHS) machine learning \u2013 that a predictor function is an aggregation of parameterised kernel functions pivoted around training data instances [60]. It is thus crucial to retrieve as many relevant examples as possible from the training set while imposing a practical constraint on the number of such examples for efficiency reasons \u2013 a classic trade-off of recall and precision in IR ad-hoc retrieval; the only difference is that relevance for ICL needs to be defined in terms of the utility or usefulness of an example towards the correct prediction. A similar question explored in IR is where to stop reading a ranked list because there is little utility in retrieving documents due to the low probability of finding relevant documents beyond a certain rank cutoff [2, 4]. 
2.2 The role of IR. One of the most important components of ICL (as shown in Figure 2) is the search component that outputs a top-k candidate set of similar instances from the training set, i.e., P_k(x) of Equation 1. Although it is in principle possible to include random examples from the training set in the prompt, it has been shown that localised examples (i.e., examples that are topically similar to the current instance) yield better performance [44, 47]. The reason why this works can be traced to the fundamental principle of reproducing kernel Hilbert spaces (RKHS) in machine learning — that a predictor function is an aggregation of parameterised kernel functions pivoted around training data instances [60]. It is thus crucial to retrieve as many relevant examples as possible from the training set while imposing a practical constraint on the number of such examples for efficiency reasons — a classic recall-precision trade-off in ad-hoc IR; the only difference is that relevance for ICL needs to be defined in terms of the utility or usefulness of an example towards the correct prediction. A similar question explored in IR is where to stop reading a ranked list, because there is little utility in retrieving documents beyond a certain rank cutoff given the low probability of finding relevant documents there [2, 4]. What is more challenging is that this rank cut-off depends on the number of relevant documents occurring in the collection for a specific query; that is to say, while some queries with well-defined information needs are associated with a small number of relevant documents satisfying the specific relevance criterion, other queries with broader information needs are usually associated with a larger number of relevant documents [6]. In core IR research, this problem is usually addressed by estimating the retrieval quality of queries — the assumption being that well-specified queries yield better retrieval results (in terms of precision and recall), whereas ill-specified ones suffer from poor retrieval quality due to the apparent ambiguity of the information need. This motivation paves the way to the following section, where we discuss how query performance prediction (QPP) can also be beneficial to the related problem of retrieving similar examples in ICL.

3 ADAPTIVE ICL ↦ QPP? In this section, we describe an adaptive approach to the selection of examples for ICL. We outline analogous principles from the IR literature that can be applied to broader tasks.

3.1 A Variable Number of Examples. The observation in IR that different queries exhibit different levels of retrieval performance can be utilised for ICL, where we can draw the analogy that some test instances are associated with better candidates for training examples (i.e., examples which are useful in the sense that including them as part of the prompt leads to correct predictions), and hence including a small number of them should be adequate. On the other hand, the retrieval quality for some test instances (used as queries for ICL) does not yield good candidates; as a result, one needs to look further down the ranked list to collect useful examples. We call this methodology of using a variable number of demonstrations for ICL inference 'Adaptive In-Context Learning', or AICL for short. The idea of AICL centres around choosing the context P_k(x) in a data-driven manner, i.e., making k a function of the data (the current instance x) itself. This is somewhat analogous to choosing different values of k for k-NN based non-parametric modeling [87], as shown in Figure 3. The motivation is that classifying some instances will be more difficult than others, in which case they potentially benefit from a larger value of k (more context), whereas for relatively easy data instances using too much context may be detrimental to effective prediction. Formally speaking, the difference between AICL and ICL (Equation 1) is that the value k, indicating the size of the neighborhood, is no longer a constant.
Instead, we denote it by a parameterised function κ(x) such that

$P(y|\mathbf{x}) = f(\mathbf{x}, \mathcal{P}_{\kappa(\mathbf{x})}(\mathbf{x}); \phi_{\mathrm{LLM}}),$  (2)

where κ: x ↦ {0, ..., M}, M being an upper bound on the number of example instances. We now suggest how unsupervised or supervised approaches may be applied to choose the rank cutoff κ.

[Figure 2: Example workflow of In-Context Learning for sentiment classification. The illustrative example shows a sample test instance for which a single demonstration (as retrieved from the training set) does not result in the correct prediction (prediction shown at the top). The example also shows that increasing the number of demonstrations from one to two results in the correct prediction (shown at the bottom). Demonstrations included within the prompt are shown in blue.]

[Figure 3: Motivation behind using a variable-sized neighborhood for k-NN classification [87]: an instance close to a decision boundary is likely to have higher heterogeneity in its class distribution, thus indicating the necessity of a larger neighborhood for effective classification.]

3.2 Unsupervised Rank Cutoff. Among unsupervised approaches, two main ideas in IR research can be used to determine the number of examples in ICL.

Score Distribution-based Models. The first thread of work is based on the hypothesis that the scores of relevant and non-relevant documents follow different statistical distributions; e.g., Arampatzis et al. propose to use a mixture of Normal-Exponential distributions — Normal for relevant and Exponential for non-relevant documents — to model the score distribution of top-ranked documents. The work in [2] uses expectation maximisation (EM) to estimate the parameters of this mixture distribution and thereby predict the most likely cutoff rank beyond which the probability of finding a relevant document is considerably low. Such ideas of utilising the characteristic differences between the score distributions of relevant and non-relevant documents have also been used for query performance prediction (QPP) [12]. While an EM fit over retrieval scores allows provision for applying a variable number of examples, the following ICL-specific challenges need to be researched.
• With the notion of relevance changed to 'downstream utility', the score distributions of useful and non-useful examples may not follow the Normal-Exponential mixture reported in [2, 12]. It will be an interesting future research direction to investigate the latent relations between the similarity scores and the downstream utility of the examples in the context of ICL.
• With a threshold on the score distributions, it is difficult to restrict the cutoff to a maximum value, which is essential for ICL due to the limit on the input size of an LLM.
• A score distribution-based approach does not explicitly consider the information from the queries themselves (equivalently, the test instances in ICL).

We now describe another thread of work in IR research that may help alleviate the last two limitations.

QPP-based Models. Different from the rank cut-off strategies, query performance prediction (QPP) models seek to estimate the retrieval quality of a query. As a direct analogy, such methods can be applied to the top-similar examples retrieved in ICL, with the different objective of predicting the usefulness of the examples. Most classic works in QPP involve unsupervised approaches that use information from the set of top-retrieved documents to estimate how topically distinct the top-retrieved documents are from the rest of the collection — a large difference indicating potentially better retrieval quality [10]. Various evidences extracted from the top-retrieved documents have been shown to be useful for different post-retrieval QPP estimation methods, including i) the KL divergence between the language model of the top-retrieved documents and the collection model in Clarity [10], ii) the aggregated information gain of each top-retrieved document with respect to the collection in WIG (Weighted Information Gain) [88], iii) the skew of the RSVs (Retrieval Status Values) measured with variance in NQC (Normalized Query Commitment) [72], iv) ideas based on the clustering hypothesis applied to a pairwise document similarity matrix [19], and, more recently, v) the characteristics of the embedding space of documents and queries [21, 66].

A suitable adaptation of these existing techniques can be applied in a two-stage pipeline to determine the number of examples in ICL. As a first step, one can employ a QPP methodology to predict the retrieval quality (in terms of usefulness) of a set of ordered examples: a high value likely indicates that useful examples can be found at the very top ranks, in which case a small number of examples should work well; a low QPP estimate, on the other hand, likely indicates that the very top-ranked examples are not useful for the downstream prediction, in which case it is better to employ a larger number of examples. This approach of selecting rank cutoffs (with an upper bound) as a function of the QPP scores has been applied to determine a variable depth of relevance assessments required for a robust retrieval evaluation [25].
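As one concrete instance, an NQC-style predictor [72] needs only the retrieval scores of the top-ranked examples; the sketch below is our own adaptation of the idea to choosing κ(x), not a published recipe (the `threshold` is a free parameter that would need tuning):

```python
import numpy as np

def nqc(scores, collection_score):
    """NQC-style estimate: dispersion of top-ranked scores over a collection-level score."""
    return np.std(np.asarray(scores, dtype=float)) / abs(collection_score)

def choose_k(scores, collection_score, k_min=1, k_max=10, threshold=0.2):
    """High predicted quality -> trust the top few examples; low quality -> use more."""
    return k_min if nqc(scores, collection_score) >= threshold else k_max
```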
This approach of selecting rank cutoffs (with an upper bound) as a function of the QPP scores has been applied to determine a variable depth of relevance assessments required for a robust retrieval evaluation [25].

3.3 Supervised Rank Cutoff

Instead of devising a heuristic to predict the number of training examples to use for a test instance x, i.e., κ(x), a supervised approach can be applied to solve this as a classification problem, i.e., κ ≡ Softmax(x^T θ), where θ is a set of layer(s) of parameters. The underlying hypothesis is that if we provide enough training data constituting the optimal number of examples for a range of topical content, we should be able to learn to predict the likely number of examples to use for unseen text at inference time.

To train a classifier that maps a text to a number between 1 and M (the maximum number of examples), it is necessary to obtain the ground-truth labels, i.e., the optimal number of examples, for each instance in the training set. We propose to obtain this by the following methodology. Given a training set instance x, one can employ a similarity function (e.g., BM25) to retrieve a candidate set of M examples {z_1, . . . , z_M}. Since x is an instance from the training set, we can utilise its label to check if the k-shot predictions using an LLM are correct. It may happen that correct predictions are obtained for several values of k ∈ {1, . . . , M}. Several strategies can be adopted to define the ground-truth number of examples. For instance, one can stop early and simply select the smallest k that results in a correct prediction. Alternatively, a potentially more robust procedure would be to exhaustively check all possible values of k = 1, . . . , M and select the one that results in a correct prediction with the least uncertainty [67, 75]. The workflow of this least-uncertainty-based selection of the ground truth for the number of ICL examples is shown in Algorithm 2. Algorithm 1, which is invoked during the ground-truth construction, shows a sample prompt template for text classification.

Algorithm 2: Optimal number of examples
  Input: T – a training set of labelled instances
  Output: K = ∪_{x∈T} k*(x) – the number of examples yielding the most confident and correct prediction for each instance x ∈ T
  begin
    for x ∈ T do
      max_confidence ← 0; k* ← 1
      for j ← 0 to M do
        Δp ← LLM k-shot predictions(x, j)       // call Algorithm 1, i.e., try to predict with j examples
        ŷ(x) ← argmax Δp                        // get the predicted class
        confidence ← Δp_{ŷ(x)} I(ŷ(x) = y(x))   // record the prediction confidence if the predicted class is the correct one
        if confidence > max_confidence then
          max_confidence ← confidence           // keep track of the least uncertain correct prediction
          k* ← j
      K ← K ∪ {k*}
    return K

After executing Algorithm 2, we obtain a set of ground-truth labels K which can then be used to train a classifier, parameterised by θ, via optimising

argmin_θ Σ_{x ∈ T, k* ∈ K} L(x^T θ, k*),    (3)

where L is a standard loss function, e.g., the cross-entropy.
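A minimal sketch of the classifier of Equation 3 is given below, assuming a frozen text encoder (e.g., SBERT) produces d-dimensional embeddings and that the labels k* from Algorithm 2 are available; the single linear layer stands in for the 'set of layer(s) of parameters' θ:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RankCutoffClassifier(nn.Module):
        # kappa(x) = Softmax(x^T theta): maps a text embedding to a
        # distribution over the number of examples {1, ..., M}.
        def __init__(self, d: int, M: int):
            super().__init__()
            self.theta = nn.Linear(d, M)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.theta(x)   # logits; class c corresponds to k = c + 1

    def train_step(model, optimiser, x, k_star):
        # x: (batch, d) embeddings; k_star: (batch,) ground-truth labels in {1..M}
        loss = F.cross_entropy(model(x), k_star - 1)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        return loss.item()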
During inference, for each x ∈ E (E denoting an evaluation set), we propose to apply the classifier κ: x ↦ {1, . . . , M} – trained via Equation 3 – to predict the number of examples, and eventually conduct a κ(x)-shot prediction on x (Equation 2).

3.4 Open Research Questions and Challenges

So far in this section, we have described how unsupervised and supervised approaches can be applied to dynamically select the number of examples used for an ICL-based prediction. We now discuss some research directions that could be explored to adapt ICL in alternative ways to further improve its effectiveness.

First, we point to the existing work on generating query variants, as a part of a data augmentation strategy, to devise alternative formulations of the same or similar information needs. This has been shown to improve the effectiveness of rankers [28], query performance prediction [15, 86], and relevance feedback [7], and even to act as a tool to measure the consistency of IR models [70]. Given the recent success of the zero-shot query generation capabilities of LLMs [1, 83], we believe that augmenting a test instance with alternative text representations can be useful to eventually improve retrieval quality (and hence potentially improve the downstream ICL effectiveness). The unsupervised and supervised approaches for predicting the number of examples per query (test instance) may also lead to better ICL effectiveness, as per the existing findings that variants do actually help improve QPP [15, 86]. We thus formulate the following two research questions along this direction.

• RQ-3.1: Can query variants generated by LLMs (or otherwise) improve the prediction of the number of examples to use for each instance?
• RQ-3.2: Can relevance feedback based approaches, with or without the use of generated query variants, help reorder the top-k initially retrieved candidate set of examples towards a better prediction of the number of examples?

The other direction of work involves a dynamic selection of not just the neighborhood size but also other ICL parameters. For instance, the verbaliser sets [69] can be selected dynamically from a set of alternatives based on the input instance. Further, a prompt can also be selected dynamically based on the input instance; an unsupervised approach exploring this idea has already been studied in [75]. Generally speaking, the research question that can potentially be explored is the following.

• RQ-3.3: Can other ICL parameters also be chosen in a data-driven manner to lead to better effectiveness, e.g., the verbaliser, the prompt, or even an LLM itself (akin to a mixture of experts)?

4 RANK ICL EXAMPLES ↦ SUPERVISED IR?

In this section, we discuss another crucial aspect of ICL that can potentially be improved by developing ranking models specifically suited to a different notion of relevance: the ICL downstream task-specific usefulness of examples. The concept of an effective example is well-researched in core neural IR, particularly the notion of 'hard' negatives during fine-tuning [27, 36]. These negatives have improved downstream precision on ranking tasks [84] and, more generally, representation learning [29]. Specific to few-shot learning, Rubin et al.
[67] employed a noise contrastive estimation (NCE) loss [30] to train a bi-encoder-based pairwise ranker using SBERT [64] embeddings. For training the ranking model, pairs of instances (relevant and non-relevant examples) were collected in the following way. For each pivot instance x from a training set, the authors employed BM25 to constitute a top-k candidate set of examples. Each pair (x, z_i) was then tested to check whether a 1-shot prediction with z_i was correct, in which case z_i was classified as a relevant example for x; otherwise it was considered a non-relevant one. Batches comprising relevant and non-relevant pairs were then constituted to train with a standard NCE loss. While the work of Rubin et al. [67] is a definitive step towards leveraging a task-specific notion of relevance, the investigation should not be considered complete. Several potentially promising research directions should be explored to improve ICL effectiveness further. We now provide a survey of the neural ranking literature, introducing core paradigms which may be utilised in example selection.

Bi-Encoder architecture. A bi-encoder architecture encodes text into a latent representation that can be compared in a vector space; in the context of a retrieval task, these texts would be queries and documents. While a bi-encoder is implemented either with a Siamese network of shared parameters [64] or as a single encoder [48], the latter has become prevalent in recent years [36, 84]. The performance of neural models in search was significantly improved with the release of BERT [18]. Karpukhin et al. [36] first proposed the use of 'hard' negatives mined from BM25 to improve the precision of BERT-based rankers. Gao et al. [27] then proposed a variant of the NCE objective, 'Localised Contrastive Estimation', in which multiple negatives are sampled for each query to account for the variance in the notion of non-relevance. In doing so, they also showed the effectiveness of hard negatives mined from fine-tuned rankers. To further improve the quality of negative samples, Xiong et al. [85] proposed that a model could choose negatives during training, allowing negatives to become continuously 'harder' as fine-tuning progresses. At a conceptual level, bi-encoders generally represent a text as a single embedding by using the representation of the BERT [CLS] token as a proxy for the entire sequence. Other pooling methods are effective, including maximum sequence similarity [13] and late interaction, in which max pooling is performed over the token-level similarity of each query token to document tokens [37]. More recent works instead use a BERT-style encoder with a shallow decoder, which places greater emphasis on the ability of the encoder during pre-training. This architectural development has yielded not only state-of-the-art recall but also new pre-training styles, including lexical grounding [71] and text reconstruction [84]. The separate encoding of queries and documents allows for the offline encoding of documents, which can vastly improve online latency. This is often coupled with an approximate nearest neighbour search in a vector space [31, 37]. More specifically, after training a bi-encoder model, the trained model is used to pre-compute an 'embedding' for each document in the collection. During inference time, a query is first embedded into a vector.
Then an approximate nearest neighbour search, e.g., HNSW [50], is conducted on an indexed representation of these dense document vectors. Therefore, exploring the potential benefits gained from efficient, dense end-to-end retrieval of training examples for effective ICL can be an interesting research direction.

Cross-Encoder architecture. A cross-encoder instead jointly encodes a query and document at inference time [55], allowing deep interactions between texts that are impossible in a bi-encoder architecture. Empirically, these models are more precise than bi-encoders at the expense of latency, as representations cannot be pre-computed in a standard setting. Both BERT- and T5-based architectures have been proposed [55, 56]; in the case of a BERT model, a feed-forward classification head is used to output class probabilities of relevance [55], while in the case of a sequence-to-sequence model, token logits are taken as surrogates for class probabilities [56]. Recent developments in LLMs have prompted research into these large decoder-only models as text rankers. A list-wise approach is commonly taken, in which a model receives multiple documents for a given query and outputs a permutation of the original ranking [62, 77]. The development of these models is still in its infancy, but it offers opportunities to investigate highly precise ranking models in sample mining beyond simple ad-hoc search. Therefore, it is reasonable to assume that employing a cross-encoder to learn to rank examples by their downstream usefulness should yield better results than a bi-encoder-based approach. An interesting research direction would hence be to investigate the optimal architecture within an ICL pipeline, considering the efficiency-effectiveness trade-off.

Teacher Distillation. Moreover, a rich literature exists on distilling the more computationally expensive cross-encoder models into the simpler bi-encoder, the former acting as a teacher model and the latter as a student [31]. Distilling a teacher model into a bi-encoder allows for end-to-end dense retrieval without requiring any sparse index to retrieve a candidate top-k. Two core paradigms of distillation are homogeneous-architecture and heterogeneous-architecture distillation. The former generally distills one model into a newly initialised copy via minimisation of a divergence metric over either the final hidden state [43] or internal states such as attention layers [34]. The latter minimises the prediction error between teacher and student models, commonly via a mean squared error criterion over triplet residuals (the residual between positive and negative example scores), allowing for 'cross-architecture knowledge distillation' [31], as a scalar relevance score is not architecture-dependent. This approach has become a core component of many state-of-the-art dense retrieval models, frequently beginning with a cross-encoder teacher used to mine hard negatives and teacher scores before a second-stage distillation is performed using the previously distilled model as a teacher [71, 84]. A parallel area of work gaining traction is knowledge sharing between a retrieval system and a generative model [33, 34, 40].
This paradigm aligns directly with our perspective, with recent work finding success in directly optimising a retriever to maximise downstream QA performance [34]. However, these systems are currently brittle: Cuconasu et al. [11] find that, counter to intuition, adding irrelevant content before a gold-label answer in the context given to a QA system can improve performance, suggesting that much work can be done in this area to optimise how we present a model with ICL examples.

4.1 Combined Utility of ICL Examples

There exists a fundamental difference between the relevance of documents in IR and the usefulness of examples in ICL. In IR, a document's relevance is independent of the relevance of another document, and when combined, the information still remains relevant. The situation is more complex for ICL. More precisely, two labeled instances in ICL that are useful examples by themselves (i.e., each results in a correct prediction when used as a 1-shot demonstration) may not yield a correct prediction when combined for a 2-shot inference [46]. This is likely to happen because the decoder, on receiving too much context, can be biased towards a specific topical cluster of words corresponding to an incorrect class descriptor. While more investigation is required to analyse the empirical likelihood of this phenomenon of 'non-cooperation' between examples, it is worth exploring what adjustments may be needed at a methodology level to even define an ideal ranking of the training data examples for few-shot ICL. The objective in this case is not just to maximise the number of 'relevant documents' (as per the IR analogy) within the top-k, but rather to ensure the combined usefulness of the examples. A possible direction towards this could be to adapt a listwise ranking model with this modified notion of combined relevance (usefulness). A more computationally efficient approach would be to operate at the level of pairs, i.e., predict which pairs are concordant and which are discordant. An element of each pair takes on a Boolean value (either useful as a 1-shot example or not), which means that the number of different ways in which a pair can be concordant or discordant is the number of possible Boolean functions of 2 variables, which is 16 (two such sample functions are Boolean OR, where if one of the examples is useful then so is the combination, and XNOR, where a pair is discordant if either of the examples is useful as 1-shot). Since, in the general case, the number of Boolean functions of n variables is 2^(2^n), listwise training with n > 3 will likely be computationally prohibitive.

Open research questions. Before concluding this section, we now summarise the importance of the following IR-specific research questions for ranking examples in ICL.

• RQ-4.1: Is ICL sensitive to the choice of a neural retrieval model, i.e., can we get an improvement using a basic Siamese model over SBERT as envisaged in [67]?
• RQ-4.2: How faithful is the assumption that a combination of several 1-shot useful examples remains useful for ICL prediction?
• RQ-4.3: If the answer to RQ-4.2 is negative, then there is significant scope for improving over a standard learning-to-rank approach by explicitly modeling the concordance (or the lack of it) of the usefulness of examples in ICL. How can we adjust ranking models, and how much improvement can we achieve over a standard few-shot baseline?
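RQ-4.2 can be probed empirically with a few lines of code. The sketch below assumes a hypothetical predict(x, examples) wrapper (e.g., around Algorithm 1) that returns True iff the LLM predicts the correct label for x given the supplied demonstrations; it is an illustrative probe, not part of the methods proposed above:

    from itertools import combinations

    def concordance_table(pivot, candidates, predict):
        # Keep only the candidates that are individually useful as 1-shot
        # demonstrations for the pivot instance.
        useful = [z for z in candidates if predict(pivot, [z])]
        # For every pair of individually useful examples, record whether the
        # combined 2-shot prediction is still correct (concordant) or not.
        return {
            (i, j): predict(pivot, [useful[i], useful[j]])
            for i, j in combinations(range(len(useful)), 2)
        }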
5 INFORMATIVE EXAMPLES ↦ FACETED IR?

In this section, we discuss the last of our proposed verticals towards an effective ICL workflow as outlined in Figure 2, which is that of seeking to provide relevant but diverse contexts to an LLM. More precisely, the topical diversity of the examples should play an important role in preventing a decoder bias towards a single topic. This is especially true for text generation tasks, such as non-factoid question answering, where an LLM decoder needs to be aware of the different sub-topics to be able to construct a comprehensive answer. Even for classification tasks, diverse examples are likely to help a decoder consider a majority of likely topics (the verbalisers of which map to descriptors of closely related categories) during inference, thus minimising the risks of misclassification.

Faceted search has been well studied in IR. Explained simply, a faceted search system extracts the multiple different aspects of the information need from the top-retrieved set and maps each retrieved document to one of these aspects [9, 22–24]. Faceted search is particularly useful for queries with broader information needs, where it can assist a user in reformulating their information need to one of the more specific aspects of the broader one, e.g., transform a query 'dehumidifiers' to 'price range of dehumidifiers' where the intention (information need facet) is to buy one [6]. Faceted search is closely related to the concept of diversified ranking [8], where search systems seek to improve the retrieval effectiveness for all possible aspects of a broader information need; e.g., for the earlier example query on 'dehumidifiers', retrieve documents covering price ranges, technical specifications, product reviews, and general knowledge on dehumidifiers. Santos et al. [68] propose to leverage query variants (what the paper calls 'sub-queries') and their top-retrieved lists for constructing a list of documents potentially relevant to each facet of the original query. Close to diversity is the concept of fair search, which seeks to mitigate biases towards any particular aspect of an information need; recently, neural approaches have become common to balance relevance with fairness [57].

From a search user's perspective, it has been shown that diversified retrieval systems play an important role in improving the search experience, by providing greater coverage of a topic and mitigating potential bias in search results [51]. Similarly, a greater topical coverage and a lower topical bias can potentially lead an LLM decoder towards contexts more useful for a downstream task. In fact, Levy et al. [38] show that diversifying the few-shot examples on the basis of abstract syntax tree (AST) structures improves the downstream task of compositional generalisation. This indeed shows a positive direction of research where the considerable volume of work conducted on faceted search and diversification by the IR community can be useful for ICL. However, similar to relevance, the notion of diversity would also need suitable adjustments for ICL. A suitable notion of diversity should not just consider similarities between the input examples but also their class labels and, more importantly, similarities in the ways in which they affect an LLM decoder's generation path.
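As a concrete input-side starting point (which RQ-5.2 below asks to extend with such output-aware signals), a maximal marginal relevance (MMR) style greedy selection can trade off similarity to the test instance against redundancy among the chosen examples. This is a minimal sketch assuming L2-normalised embedding vectors, so that dot products are cosine similarities:

    import numpy as np

    def mmr_select(query_vec, cand_vecs, k, lam=0.7):
        # Greedily pick k examples, balancing relevance to the test instance
        # (dot product with query_vec) against similarity to already chosen
        # examples.
        relevance = cand_vecs @ query_vec
        chosen, pool = [], set(range(len(cand_vecs)))
        while pool and len(chosen) < k:
            def mmr(i):
                redundancy = max((cand_vecs[i] @ cand_vecs[j] for j in chosen),
                                 default=0.0)
                return lam * relevance[i] - (1 - lam) * redundancy
            best = max(pool, key=mmr)
            chosen.append(best)
            pool.remove(best)
        return chosen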
Under such an output-aware notion, two examples which both induce similar output trees should not be considered diverse. In principle, the classification methodology that we proposed for learning the optimal number of examples by minimising prediction uncertainty could be adapted to classify whether a given pair of examples is diverse or not. Furthermore, we posit that neural approaches that take into account both relevance and fairness or diversity (again, both in the context of downstream ICL) should find a use-case in ICL to help diversify the useful examples.

Open research questions. Based on the discussions in this section, we now outline the following research directions.

• RQ-5.1: How sensitive is ICL towards the topical diversity of the examples?
• RQ-5.2: How can the standard notion of diversity be extended to consider the latent dependence between the input and the output of an LLM decoder aligning towards a specific downstream task?
• RQ-5.3: How may existing IR metrics for diversity (e.g., α-nDCG [8]) be adapted to measure how effective the example retrieval is for downstream ICL?
• RQ-5.4: How can multi-objective neural ranking models be trained to jointly learn downstream-specific usefulness and diversity for ICL?

6 PRELIMINARY EVALUATION

In this section, we report the results of our initial investigation, which was conducted to answer a subset of the research questions of the first vertical, i.e., to develop an effective adaptive version of ICL that can dynamically select the number of examples.

6.1 Research Questions and Dataset

Research Questions Investigated. In Section 3.2, we discussed the possibilities of applying QPP-inspired unsupervised approaches for selecting a cutoff point in the ranked list of examples. On the other hand, in Section 3.3 we proposed a classifier-based approach to learn the optimal number of examples. In our experiments, we compare the supervised approach of Algorithm 2 and an NQC-based unsupervised approach for adaptive k-shot, and compare both with static k-shot on standard datasets for text classification. Explicitly stated, we investigate the following research questions.

• CRQ-1: Does adaptively selecting the number of examples in ICL lead to improved downstream effectiveness?
• CRQ-2: Does an unsupervised approach obtain a reasonable performance as compared to a supervised one?

Since our experiments answer the above questions, they are not open, unlike the ones we expounded on earlier in this paper. Therefore, we prefix these questions with a 'C' (closed).

Dataset. We conduct experiments on three text classification datasets, namely AGNews [16], Jigsaw Toxic Comment (https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) and SST2 [74]. Below, we provide more details on each dataset.

• AGNews: AGNews is a topic classification dataset constituting news articles from the Web. Each document in the dataset belongs to one of the following 4 classes: World, Sports, Business, and Sci/Tech. The total number of training instances is 120,000, while the test set size is 7,600. Each class contains 30,000 samples from the train set and 1,900 instances from the test set.
• Jigsaw Toxic Comments: Due to its societal impact, toxicity prediction is a problem of considerable practical interest.
This dataset, released by Jigsaw and Google as part of a Kaggle competition, comprises comments extracted from Wikipedia's talk pages, each annotated by human evaluators across six categories representing toxic behaviors: toxic, 'severe toxic', obscene, threat, insult, and 'identity hate'.
• SST2: The Stanford Sentiment Treebank (SST) is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus consists of 11,855 sentences extracted from movie reviews. Being parsed with the Stanford parser, it constitutes a total of 215,154 unique phrases from the parse trees, each annotated by 3 human judges. The SST2 (also called SST-binary) dataset is a subset of SST, specifically prepared for the task of binary classification. More precisely, neutral sentences from SST were discarded, and the remaining fine-grained labels were merged into two classes in total, one each for the negative and the positive sentiments.

6.2 Methods and Parameters

Our proposed methods for Adaptive ICL (AICL). As novel methods for adaptive ICL, we employ the following:

• The supervised strategy of Algorithm 2, which we call supervised adaptive ICL (SAICL).
• A QPP-based unsupervised strategy (as per the generic direction outlined in Section 3.2), where we compute the rank cutoff in a relatively simple way, stated as follows. First, given a top-M set of candidate examples, we compute a normalised value of the NQC estimator [72] (we employ max normalisation, the normalisation constant being the max NQC value from the training set). We then quantise the normalised values into M equi-spaced intervals ranging from 0 to the max NQC value. As per the hypothesis that a higher NQC value indicates better retrieval quality, we employ the inverse linear relation and end up selecting a cutoff close to 0 for higher NQC values, and close to M for smaller ones. We call this method QPP-AICL.

Baselines. As baselines to compare SAICL and QPP-AICL against, we employ the following:

• 0-shot: This approach simply inputs an instruction without supplying any examples.
• Static ICL (SICL): This refers to the standard method of supplying a fixed number of semantically similar examples as input, similar to [44]. This is different from AICL in that the number of examples in the prompt is always fixed; however, the examples themselves vary for different test inputs based on semantic similarity. For a fair comparison with the AICL methods, we report the results obtained with three different values of k: 1, ⌈M/2⌉ and M, representing the most conservative (in terms of the input size), average, and least conservative situations. In our case, M = 5, which means that our standard ICL experiments operate with the 1-shot, 3-shot and 5-shot settings.

Model and hyper-parameter settings. Among a relatively large number of available choices of LLMs – either open-source models or black-box cloud APIs – we conduct our experiments on GPT-J [76]. GPT-J is an open-source GPT-3-like model trained on the Pile dataset [26].
GPT-J-6B yields performance comparable to the 6.7 billion parameter GPT-3 (Curie) on a variety of tasks [81]. The maximum context length (in terms of the number of tokens) of GPT-J is 2048. In our experiments, we vary M, the maximum number of examples, from 1 to 5 (for static ICL this is denoted by k). For a fair comparison, we use the identical prompt template (as shown in Algorithm 1) and greedy decoding with the same verbalizers across all methods employed in our experiments.

6.3 Results

Table 1 shows the results (in terms of macro-averaged precision, recall and F1) obtained by the different ICL strategies. It can be seen that SAICL turns out to be the best among the competing approaches. The reason it outperforms the best baseline (static ICL) is that SAICL is able to effectively adapt the number of examples to use, thereby shielding itself from the degrading effects of non-relevant (not useful) examples. In effect, it learns a latent relationship between the topical content and the quantity of context required to effectively guide the decoder's output in the right direction. Moreover, SAICL is able to operate more effectively with smaller input sizes (see the average value of k and also the average size of the input in terms of the number of tokens), which means that it is computationally faster than static ICL (SICL). Our observations reveal that CRQ-1 is answered in the affirmative, i.e., an adaptive selection of the number of examples in ICL does improve downstream effectiveness and efficiency.

The results with the unsupervised QPP-based approach (QPP-AICL) turned out to be worse than the baseline of static ICL. From a broader perspective, this points to an important finding: off-the-shelf IR approaches, without modifications specifically suited to the underlying characteristics of the downstream tasks in ICL, may not directly yield improvements in the effectiveness of ICL. For instance, NQC seeks to estimate the relevance of documents, and, as we have argued before, relevance has a different interpretation for ICL examples. Although the observations with QPP-AICL answer CRQ-2 in the negative, i.e., an unsupervised approach for an adaptive selection of ICL examples is substantially worse than a supervised one, they do suggest that methodologies developed by researchers in the future for answering any of the open research questions discussed in this paper should be fundamentally grounded in modeling the notion of relevance (usefulness of examples) in a robust and effective manner.

Table 1: Macro-averaged precision, recall and F1-scores for different in-context learning (ICL) methodologies. The column k denotes the number of few-shot examples; for AICL approaches, this column denotes the average number of examples used by the respective method. 'AIS' denotes the average input size measured in terms of the number of tokens, rounded off to the nearest integer.
Dataset    Method     k     Precision  Recall   F-score  AIS
AGNews     0-shot     0     0.6569     0.5932   0.5849   60
AGNews     SICL       1     0.9015     0.9017   0.9016   125
AGNews     SICL       3     0.9008     0.8997   0.8989   252
AGNews     SICL       5     0.8963     0.8930   0.8917   380
AGNews     QPP-AICL   3     0.8545     0.8499   0.8486   220
AGNews     SAICL      1.87  0.9080     0.9096   0.9067   175
Toxicity   0-shot     0     0.5689     0.6238   0.5769   103
Toxicity   SICL       1     0.5760     0.6989   0.5505   195
Toxicity   SICL       3     0.6092     0.7180   0.6254   335
Toxicity   SICL       5     0.6078     0.7248   0.6217   431
Toxicity   QPP-AICL   3     0.5906     0.6942   0.5977   289
Toxicity   SAICL      3.46  0.6194     0.6983   0.6303   359
SST2       0-shot     0     0.7503     0.5022   0.3379   30
SST2       SICL       1     0.8703     0.8703   0.8703   61
SST2       SICL       3     0.9140     0.9137   0.9137   121
SST2       SICL       5     0.9245     0.9230   0.9230   181
SST2       QPP-AICL   3     0.8556     0.8479   0.8470   106
SST2       SAICL      4.12  0.9302     0.9304   0.9302   154

7 CONCLUSION

In this perspective paper, we discuss how some of the recent developments in generative AI (specifically in-context learning, or ICL) provide scope for IR/NLP researchers to revisit some well-researched IR topics in a new light, where the notion of the relevance of a document to an information need changes to that of the usefulness of a few-shot example for a downstream AI task, e.g., text classification, question answering, etc. More specifically, we suggest three main verticals in which this research can be structured, each offering a set of open questions related to core IR research. The first vertical aims at adaptively adjusting an ICL workflow, e.g., choosing the number of examples to be used in a data-driven manner; the initial empirical investigations reported in this perspective paper show that this direction is promising. The second vertical mainly covers devising novel ranking models to better distinguish (and thereby retrieve at better ranks) a useful few-shot context from a noisy one. Finally, the third vertical concerns an investigation of topical diversity in the few-shot examples for better downstream prediction. We believe that the research questions we have proposed in this paper will help the research community exploit this synergy between ICL and IR, and eventually guide the development of new algorithms and techniques."
| 16 | + }
title_10K/test_title_short_2405.01130v1.json ADDED
@@ -0,0 +1,16 @@
| 1 | + {
| 2 | + "url": "http://arxiv.org/abs/2405.01130v1",
| 3 | + "title": "Automated Virtual Product Placement and Assessment in Images using Diffusion Models",
| 4 | + "abstract": "In Virtual Product Placement (VPP) applications, the discrete integration of\nspecific brand products into images or videos has emerged as a challenging yet\nimportant task. This paper introduces a novel three-stage fully automated VPP\nsystem. In the first stage, a language-guided image segmentation model\nidentifies optimal regions within images for product inpainting. In the second\nstage, Stable Diffusion (SD), fine-tuned with a few example product images, is\nused to inpaint the product into the previously identified candidate regions.\nThe final stage introduces an \"Alignment Module\", which is designed to\neffectively sieve out low-quality images. Comprehensive experiments demonstrate\nthat the Alignment Module ensures the presence of the intended product in every\ngenerated image and enhances the average quality of images by 35%. The results\npresented in this paper demonstrate the effectiveness of the proposed VPP\nsystem, which holds significant potential for transforming the landscape of\nvirtual advertising and marketing strategies.",
| 5 | + "authors": "Mohammad Mahmudul Alam, Negin Sokhandan, Emmett Goodman",
| 6 | + "published": "2024-05-02",
| 7 | + "updated": "2024-05-02",
| 8 | + "primary_cat": "cs.CV",
| 9 | + "cats": [
| 10 | + "cs.CV"
| 11 | + ],
| 12 | + "label": "Original Paper",
| 13 | + "paper_cat": "Diffusion AND Model",
| 14 | + "gt": "Automated Virtual Product Placement and Assessment in Images using Diffusion Models",
| 15 | +
"main_content": "Introduction Virtual Product Placement (VPP) refers to the unobtrusive, digital integration of branded products into visual content, which is often employed as a stealth marketing strategy [15]. Advertising solutions utilizing VPP have significant appeal due to their high customizability, effectiveness across diverse customer bases, and quantifiable efficiency. *The author performed this work as an intern at Amazon Web Services (AWS). Accepted at 6th AI for Content Creation (AI4CC) workshop at CVPR 2024. (Preprint) (a) Background (b) Inpainting Figure 1. An illustration of the proposed VPP system with an Amazon Echo Dot device. The input background image is shown in (a), and the inpainted output image is shown in (b) where an Amazon Echo Dot device is placed on the kitchen countertop by automatic identification of optimal location. Previous research underscores the impact of product placement within realms such as virtual reality [22] and video games [5]. With the recent advancements in generative AI technologies, the potential for product placement has been further expanded through the utilization of diffusion models. Significant research has focused on the development of controlled inpainting via diffusion models, albeit largely without an explicit emphasis on advertising applications [1, 8, 11]. However, these methods can be fine-tuned with a small set of 4 to 5 product sample images to generate high-quality advertising visual content. In this paper, we propose a novel, three-stage, fully automated system that carries out semantic inpainting of products by fine-tuning a pre-trained Stable Diffusion (SD) model [18]. In the first stage, a suitable location is identified for product placement using visual question answering and text-conditioned instant segmentation. The output of this stage is a binary mask highlighting the identified location. Subsequently, this masked region undergoes inpainting using a fine-tuned SD model. This SD model is fine-tuned by 1 arXiv:2405.01130v1 [cs.CV] 2 May 2024 \fDreamBooth [19] approach utilizing a few sample images of the product along with a unique identifier text prompt. Finally, the quality of the inpainted image is evaluated by a proposed Alignment Module, a discriminative method that measures the image quality, or the alignment of the generated image with human expectations. An illustration of the proposed VPP system is presented in Figure 1 with an Amazon Echo Dot device. Controlled inpainting of a specific product is a challenging task. For example, the model may fail to inpaint the intended object at all. If a product is indeed introduced through inpainting, the product created may not be realistic and may display distortions of shape, size, or color. Similarly, the background surrounding the inpainted product may be altered in such a way that it either meaningfully obscures key background elements or even completely changes the background image. This becomes especially problematic when the background images contain human elements, as models can transform them into disturbing visuals. As a result, the proposed Alignment Module is designed to address these complications, with its primary focus being on the appearance, quality, and size of the generated product. To exert control over the size of the generated product, morphological transformations, specifically erosion, and dilation, are employed. 
By adjusting the size of the mask through dilation or erosion, the size of the inpainted product can be effectively increased or decreased, allowing the system to generate a product of an appropriate size. In summary, the main contributions of this paper are twofold. The first pertains to the design of a fully automated Virtual Product Placement (VPP) system capable of generating high-resolution, customer-quality visual content. The second involves the development of a discriminative method that automatically eliminates subpar images, premised on the content, quality, and size of the generated product. The remainder of this paper is organized as follows. In section 2 we delve into the related literature, with a specific emphasis on semantic inpainting methods utilizing diffusion models, and section 3 highlights the broad contributions of the paper. Next, the proposed end-to-end pipeline for automatic VPP is discussed in section 4. This includes a detailed examination of the three primary stages of the solution, along with the three sub-modules of the Alignment Module. Thereafter, we elucidate the experimental design and evaluation methodologies adopted and report the corresponding results in section 5. Subsequently, the deployment strategy and web application design are explained in section 6. Finally, the paper concludes with an outline of the identified limitations of our proposed methodology in section 7, complemented by a discussion on potential avenues for future research.

2. Related Works

Recently, there has been significant progress in developing semantic or localized image editing using diffusion models, largely without an explicit focus on digital marketing. Nevertheless, new generative AI approaches promise significant advances in VPP technology. For instance, in Blended Diffusion [1], the authors proposed a method of localized image editing using image masking and natural language. The area of interest is first masked and then modified using a text prompt. The authors employed a pre-trained CLIP model [17] along with pre-trained Denoising Diffusion Probabilistic Models (DDPM) [7] to generate natural images in the area of interest. Similar to Blended Diffusion, Couairon et al. [3] proposed a method of semantic editing with a mask using a diffusion model. However, instead of taking the mask from the user, the mask is generated automatically; nevertheless, a text query input from the user is utilized to generate the mask. The difference in noise estimates, as determined by the diffusion model based on the reference text and the query text, is calculated, and this difference is then used to infer the mask. The image is noised iteratively during the forward process, and in the reverse Denoising Diffusion Implicit Model (DDIM) [21] steps, the denoised image is interpolated, using masking, with the output of the corresponding forward-process step. Paint by Word, proposed by Bau et al. [2], is also similar; however, instead of a diffusion model they utilized a Generative Adversarial Network (GAN) [4] with a mask for semantic editing guided by text. On the other hand, Imagic [8] also performs text-based semantic editing on images using a diffusion model but without using any mask. Their approach consists of three steps. In the beginning, a text embedding for the given image is optimized. Then the generative diffusion model is optimized for the given image with the fixed, optimized text embedding.
Finally, the target and optimized embeddings are linearly interpolated to achieve input image and target text alignment. Likewise, a semantic editing method using a pre-trained text-conditioned diffusion model, focusing on the mixing of two concepts, is proposed by [12]. In this method, a given image is noised for several steps and then denoised conditioned on the text. During the denoising process, the output of a denoising stage is also linearly interpolated with the output of a forward noise-mixing stage. Hertz et al. [6] took a different approach to semantic image editing, where text and image embeddings are fused using cross-attention. The cross-attention maps are incorporated with the Imagen diffusion model [20]. However, instead of editing any given image, their approach edits a generated image using a text prompt, which is of limited interest where VPP is concerned. Alternatively, Stochastic Differential Edit (SDEdit) [16] synthesizes images from stroke paintings and can edit images based on stroke images. For image synthesis, coarse colored strokes are used, and for editing, colored strokes on real images or image patches on target images are used as a guide. It adds Gaussian noise of a specific standard deviation to an image guide and then solves the corresponding Stochastic Differential Equation (SDE) to produce the synthetic or edited image. To generate images from a prompt in a controlled fashion and to gain more control over the generated image, Li et al. proposed grounded text-to-image generation (GLIGEN) [11]. It feeds the model the embeddings of guiding elements such as bounding boxes, key points, or semantic maps. Using the same guiding components, inpainting can be performed in a target image. DreamBooth [19] fine-tunes a pre-trained diffusion model to expand the dictionary of the model for a specific subject. Given a few examples of the subject, a diffusion model such as Imagen [20] is fine-tuned using random samples generated by the model itself and new subject images, by optimizing a reconstruction loss. The new subject images are conditioned using a text prompt with a unique identifier. Fine-tuning a pre-trained diffusion model with a new subject is of great importance in the context of VPP. Therefore, in this paper the DreamBooth approach is utilized to expand the model's dictionary by learning from a few sample images of the product.

3. Contributions

In this paper, a method of automated virtual product placement and assessment in images using diffusion models is designed. Our broad contributions are as follows:
1. We introduce a novel, fully automated VPP system that carries out automatic semantic inpainting of the product at the optimal location using language-guided segmentation and fine-tuned stable diffusion models.
2. We propose a cascaded three-stage assessment module, named the 'Alignment Module', designed to sieve out low-quality images, which ensures the presence of the intended product in every generated output image.
3. Morphological transformations such as dilation and erosion are employed to adjust the size of the mask, and therefore to increase or decrease the size of the inpainted product, allowing a product of appropriate size to be generated.
4. Experiments are performed to validate the results by blind evaluation of the generated images with and without the Alignment Module, resulting in a 35% improvement in average quality.
5.
The inpainted product generated by the proposed system is not only qualitatively more realistic compared to the previous inpainting approach [23] but also shows a superior quantitative CLIP score.

4. Methodology

Figure 2. The block diagram of the proposed solution for the VPP system, where each of the three stages is distinguished by varied color blocks. In stage 1, a suitable placement for product inpainting is determined by creating a mask using the CLIPSeg and ViLT models. Next, in stage 2, semantic inpainting is performed in the masked area using the fine-tuned DreamBooth model. Finally, stage 3 contains the cascaded sub-modules of the Alignment Module to discard low-quality images.

4.1. Proposed Method

For semantic inpainting, we utilized the DreamBooth algorithm [19] to fine-tune stable diffusion using five representative images of the product and a text prompt with a unique identifier. Even with a limited set of five sample images, the fine-tuned DreamBooth model was capable of generating images of the product integrated with its background. Nevertheless, when inpainting was conducted with this fine-tuned model, the resulting quality of the inpainted product was significantly compromised. To enhance the quality of the product in the inpainted image, we augmented the sample images through random scaling and random cropping, consequently generating a total of 1,000 product images used to fine-tune SD.

4.2. Product Localization Module

The proposed VPP system operates in three stages. A core challenge in product placement lies in pinpointing a suitable location for the item within the background. In the first stage, this placement is indicated via the generation of a binary mask. To automate this masking process, we leveraged the capabilities of the Vision and Language Transformer (ViLT) Visual Question Answering (VQA) model [9] in conjunction with the Contrastive Language-Image Pretraining (CLIP) [17]-based semantic segmentation method named CLIPSeg [13]. Notably, each product tends to have a prototypical location for its placement. For example, an optimal location for an Amazon Echo Dot device is atop a flat surface, such as a desk or table. Thus, by posing a straightforward query to the VQA model, such as "Which object in the image has a flat surface area?", we can pinpoint an appropriate location for the product. Subsequently, the identified location's name is provided to the CLIPSeg model, along with the input image, resulting in the generation of a binary mask for the object.

4.3. Product Inpainting Module

In the second stage, the input image and the generated binary mask are fed to the fine-tuned DreamBooth model to perform inpainting on the masked region. Product inpainting presents several challenges: the product might not manifest in the inpainted region; if it does, its quality could be compromised or distorted, and its size might be disproportionate to the surrounding context. To systematically detect these issues, we introduce the third stage: the Alignment Module.

4.4. Product Alignment Module

The Alignment Module comprises three sub-modules: Content, Quality, and Volume. The Content sub-module serves as a binary classifier, determining the presence of the product in the generated image.
If the product's probability of existence surpasses a predefined threshold, then the Quality score is calculated for that image. This score evaluates the quality of the inpainted product in relation to the sample images originally used to train the SD model. Finally, if the image's quality score exceeds the set quality threshold, the Volume sub-module assesses the product's size in proportion to the background image. The generated image is accepted and presented to the user only if all three scores within the Alignment Module meet their respective thresholds.

Within the Content module, an image captioning model [14] is employed to generate a caption, which is then refined by incorporating the product's name. The super-class name of the product can also be utilized. Both the captions and the inpainted image are fed into the CLIP model to derive a CLIP score. If the modified caption scores above 70%, it is inferred that the product exists in the inpainted image. The Quality module contrasts the mean CLIP image features of the sample images with the CLIP image feature of the generated image: the greater the resemblance of the inpainted product to the sample images, the higher the quality score. A threshold of 70% has been established. The Volume module finally gauges the size of the inpainted product. The generated image is processed through the CLIP model, accompanied by three distinct textual size prompts. Given that size perception can be subjective and varies based on camera proximity, a milder threshold of 34% (slightly above a random guess) has been selected. The comprehensive block diagram of the proposed VPP system is illustrated in Figure 2, with the three stages distinguished by varied color blocks. The block diagrams for each sub-module can be found in Figure 3.

Figure 3. Block diagram of each of the components of the Alignment Module. The Content sub-module, shown in (a), is built using a pre-trained caption generator and a CLIP model; the generated caption is refined by adding the name of the intended product. The Quality sub-module, shown in (b), utilizes the image features of the same CLIP model. Finally, the Volume sub-module, shown in (c), uses the same CLIP model with three different size text prompts.

The Volume sub-module provides insights regarding the size of the inpainted product. To modify the product's size, the mask's dimensions must be adjusted. For this task, morphological transformations, including mask erosion and dilation, can be employed on the binary mask. These transformations can either reduce or augment the mask area, allowing the inpainting module to produce a product image of the desired size. The relationship between alterations in the mask area and the size of the inpainted product across various erosion iterations is depicted in Figure 4.
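A minimal sketch of this mask-resizing step, assuming a binary (0/255) mask from the localization stage and the (5 x 5) kernel of Figure 4 (variable names are illustrative, not from the paper):

    import cv2
    import numpy as np

    # Toy binary mask standing in for the CLIPSeg output (255 = inpaint region).
    binary_mask = np.zeros((512, 512), np.uint8)
    binary_mask[200:350, 180:330] = 255

    kernel = np.ones((5, 5), np.uint8)   # the (5 x 5) structuring element

    # Eroding shrinks the region the diffusion model may paint into (smaller
    # product); dilating grows it (larger product).
    smaller_mask = cv2.erode(binary_mask, kernel, iterations=25)
    larger_mask = cv2.dilate(binary_mask, kernel, iterations=10)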
Approximately 25 iterations of erosion consume around 3 milliseconds, making this step highly cost-effective.

Figure 4. Application of erosion to the mask, where a kernel of size (5 x 5) is used for 0, 10, 20, and 25 iterations, shown in the figure consecutively. The resulting output is presented at the bottom of the corresponding mask to show the size reduction of the generated product in the output image.

5. Experimental Results

Experiments were conducted to evaluate the performance of the proposed VPP system. For these experiments, five sample images of an "Amazon Echo Dot" were chosen. 1,000 augmented images of the product, created from these five sample images, were used to fine-tune the DreamBooth model using the text prompt "A photorealistic image of a sks Amazon Alexa device." The model was fine-tuned for 1,600 steps, employing a learning rate of 5 x 10^-6 and a batch size of 1. The fine-tuned model can inpaint products into the masked region. However, issues such as lack of product appearance, poor resolution, and disproportionate shape persist. The goal of the proposed Alignment Module is to automatically detect these issues. If identified, the problematic images are discarded, and a new image is generated from different random noise. Only if a generated image meets all the module's criteria is it presented to the user. Otherwise, a new image generation process is initiated. This loop continues for a maximum of 10 iterations.

5.1. Assessing the Alignment Module

To assess the effectiveness of the Alignment Module, images were generated both with and without it. For each sub-module, as well as for the overall Alignment Module, 200 images were generated: 100 with the filter activated and 100 without (referred to as the "Naive" case). To prevent bias, all images were given random names and were consolidated into a single folder. These images were also independently evaluated by a human, whose scores served as the ground truth. This ground-truth information was saved in a separate file for the final evaluation, which followed a blindfolded scoring method. All the experiments were also repeated for another product, named "Lupure Vitamin C".

5.2. Evaluation Metrics

The evaluation and scoring method for each of the sub-modules of the Alignment Module is described in the following segments.

• Content Score: For the image content score, images are categorized into two classes: 'success' if the product appears, and 'failure' otherwise. When the Content module is utilized, the Failure Rate (FR), defined as the ratio of Failure to Success, is below 10% for both of the products.
• Quality Score: For the quality score, images are rated on a scale from 0 to 10: 0 indicates the absence of a product, and 10 signifies a perfect-looking product. To evaluate in conjunction with the CLIP score, both the Mean Assigned Quality Score (MAQS) and the Mean Quality Score (MQS) are calculated. MAQS represents the average of the scores assigned between 0 and 10, while MQS is the output of the Quality module, essentially reflecting cosine similarity.
• Volume Score: For the Volume module, images are also rated on a scale from 0 to 10: 0 for a highly unrealistic size, and 10 for a perfect size representation. When evaluating the Volume module, the Content module is not utilized. Since the size score necessitates the presence of a product, images without any product are excluded from this evaluation.
To gauge performance, the Mean Assigned Size Score (MASS) is calculated in addition to the CLIP score.

5.2.1 Overall Results

The results of the individual evaluations are presented in Table 1. It can be observed from this table that using any of the sub-modules consistently produced better outcomes across various metrics compared to when no filtering was applied. The results of the comprehensive evaluation, encompassing all sub-modules, can be found in Table 2.

Table 1. Individual evaluation of the Content, Quality, and Volume sub-modules within the overall Alignment Module. "Naive" represents the outputs without any filtering sub-modules. Content classifies the presence of the product in the generated images; Quality measures the proximity of the generated product to the sample product images used to fine-tune the diffusion model; Volume identifies the size category of the product.

Amazon Echo Dot
  Content:  Success 72 (Naive) vs. 94 (Content); Failure 28 vs. 6; FR 38.89% vs. 6.38%
  Quality:  CLIP 32.49 ± 3.69 (Naive) vs. 33.80 ± 2.69 (Quality); MAQS 4.41 ± 3.23 vs. 6.41 ± 1.90; MQS 0.75 ± 0.14 vs. 0.83 ± 0.06
  Volume:   CLIP 32.58 ± 3.70 (Naive) vs. 33.42 ± 2.69 (Volume); MASS 3.01 ± 2.68 vs. 4.81 ± 2.31
Lupure Vitamin C
  Content:  Success 87 (Naive) vs. 100 (Content); Failure 13 vs. 0; FR 14.94% vs. 0.0%
  Quality:  CLIP 24.61 ± 2.4 (Naive) vs. 25.23 ± 2.66 (Quality); MAQS 5.65 ± 2.85 vs. 6.47 ± 1.09; MQS 0.81 ± 0.13 vs. 0.86 ± 0.04
  Volume:   CLIP 24.22 ± 3.01 (Naive) vs. 24.51 ± 2.89 (Volume); MASS 5.64 ± 3.05 vs. 7.14 ± 1.53

Table 2. Comparison of the proposed method with and without the Alignment Module, in addition to the Paint-By-Example (PBE) [23] inpainting model. "Naive" represents the generated output without applying the Alignment Module; "Alignment" represents the generated outputs where the three cascaded filtering sub-modules, i.e., the Alignment Module, are used.

       Amazon Echo Dot                            Lupure Vitamin C
       PBE           Naive         Alignment      PBE           Naive         Alignment
CLIP   31.44 ± 3.43  32.85 ± 3.19  33.85 ± 2.54   27.01 ± 2.10  24.71 ± 2.64  24.89 ± 2.90
MAQS   1.13 ± 1.30   4.65 ± 3.60   6.31 ± 2.39    1.75 ± 1.51   6.60 ± 3.01   7.81 ± 1.13
MASS   1.22 ± 1.60   3.05 ± 2.98   4.70 ± 2.81    2.43 ± 2.07   6.25 ± 3.08   7.30 ± 1.59
MQS    0.64 ± 0.08   0.75 ± 0.14   0.82 ± 0.05    0.67 ± 0.06   0.82 ± 0.12   0.86 ± 0.05
FR     78.57%        29.87%        0.00%          38.89%        17.64%        0.00%

Figure 5. Inpainted product images of Paint-by-Example (PBE). PBE generates high-quality images, which explains the higher CLIP score in the case of Lupure Vitamin C; however, the inpainted product does not look similar to the desired product at all, resulting in very poor mean assigned quality and size scores. Output images for the Amazon Echo Dot are shown in (a) and (b), and for Lupure Vitamin C in (c) and (d).

Figure 6. Empirical performance of the Alignment Module for the Amazon Echo Dot. Noticeably, no output without any product is generated when the Alignment Module is employed. Moreover, the mean quality score has increased from 4.65 to 6.31.

5.3. Comparison with Paint-By-Example

The proposed method is compared with the Paint-By-Example (PBE) [23] inpainting model, and Table 2 shows the performance comparison of the proposed method along with PBE. PBE can generate very high-quality images; however, the inpainted product in the generated image does not look like the desired product at all, as shown in Figure 5, resulting in very poor MAQS and MASS.
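For reference, the CLIP score reported in Tables 1 and 2 (100 times the cosine similarity between image and caption embeddings) can be computed along the following lines; this is a minimal sketch assuming the Hugging Face transformers API and the openai/clip-vit-base-patch32 checkpoint, since the exact CLIP variant used is not stated in the paper:

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def clip_score(image_path, caption):
        # Encode the image and caption jointly and return the scaled
        # image-text similarity (logit_scale is about 100 in CLIP).
        inputs = proc(text=[caption], images=Image.open(image_path),
                      return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        return out.logits_per_image.item()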
5.4. Frequency Distribution

The frequency distribution and density function of the assigned quality scores for the "Naive" and "Alignment" cases for the Amazon Echo Dot are presented in Figure 6. The density mean shifts from 4.65 to 6.31 when the Alignment Module is adopted, indicating the effectiveness of the proposed module.

6. Path to Production

6.1. Product API

The location identifier, fine-tuned model, and Alignment Module are combined into an easy-to-use VPP Streamlit web app.¹ The app is hosted on Amazon SageMaker using an "ml.p3.2xlarge" instance, i.e., a single V100 GPU with 16 GB of GPU memory. The demo app's interface is illustrated in Figure 8. In the top-left 'Image' section, users can either upload their own background image or choose from a selection of sample background images to generate an inpainted product image.

The web app provides extensive flexibility for tuning the parameters of the Alignment Module so that users can understand their effects. In the 'seed' text box, a value can be entered to control the system output. The segmentation threshold for CLIPSeg defaults to 0.7, but users can refine this value with a slider. Within the 'Mask Params' section, the number of dilation and erosion iterations can be set and visualized in real time. The filter, represented by the Alignment Module, can be toggled on or off. The 'Max Attempt' slider determines the number of regeneration attempts if the model does not produce a satisfactory output; however, if a seed value is specified, the model generates the output only once, regardless of this setting. Lastly, in the 'Filter Params' section, users can fine-tune the threshold values for each sub-module of the Alignment Module: content, quality, and volume.

The "show stats" button beneath the input image displays the mask alongside details of the model outputs, including the seed value, placement, generated and modified captions, and the content, quality, and volume/size scores. By visualizing the mask and its area, users can apply erosion or dilation to adjust the product's size. The default threshold values for content, quality, and volume are 0.7, 0.7, and 0.34, respectively. These values can be raised slightly, but it is then recommended to also set 'Max Attempt' to 10, since a higher threshold makes the generated output more likely to fail the criteria set by the Alignment Module.

¹ Streamlit: https://streamlit.io/
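A minimal sketch of how such a control panel could be wired up in Streamlit is shown below. The widget labels mirror the interface described above, but all names, ranges, and defaults beyond those stated in the text are illustrative assumptions, not the authors' code:

```python
import streamlit as st

# Illustrative control panel for the VPP demo (widget names/ranges are assumptions).
st.title("Virtual Product Placement demo")
background = st.file_uploader("Image: upload a background image")
seed = st.text_input("seed")  # fixing a seed makes the model generate only once

seg_threshold = st.slider("CLIPSeg segmentation threshold", 0.0, 1.0, 0.7)
dilation_iters = st.slider("Mask Params: dilation iterations", 0, 50, 0)
erosion_iters = st.slider("Mask Params: erosion iterations", 0, 50, 0)

use_filter = st.checkbox("Enable Alignment Module", value=True)
max_attempts = st.slider("Max Attempt", 1, 10, 10)

content_thr = st.slider("Filter Params: content threshold", 0.0, 1.0, 0.7)
quality_thr = st.slider("Filter Params: quality threshold", 0.0, 1.0, 0.7)
volume_thr = st.slider("Filter Params: volume threshold", 0.0, 1.0, 0.34)
```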
6.2. Future Considerations for Product Scalability

Fine-tuning Stable Diffusion with DreamBooth can take up to 30 minutes, depending on dataset size, image resolution, and the extent of training. For a customer with hundreds or thousands of products, training models for all products could take days. Our pipeline is deployed on Amazon SageMaker, a managed service that supports automatic scaling of deployed endpoints and can dynamically accommodate large computational needs by provisioning additional instances. As such, fine-tuning 100 SD models for 100 different products would still take only about 30 minutes if 100 instances were used in parallel.

The fine-tuned models are stored in an Amazon S3 (Simple Storage Service) bucket, with each model being 2.2 GB in size. Consequently, 100 fine-tuned models would occupy approximately 220 GB of storage. A pertinent question arises: can we strike a space-time trade-off by training a single model with a unique identifier for each product? If feasible, the space requirement would be reduced to a constant 2.2 GB. However, that one model would need more extensive training: the number of training steps would increase by a factor of 100 for 100 products, thereby lengthening the computation time. This approach remains untested and warrants future exploration [10].

7. Conclusion

In this paper, we present a novel, fully automated, end-to-end pipeline for Virtual Product Placement. The proposed method automatically determines a suitable location for placing a product into a background image, performs product inpainting, and finally evaluates image quality so that only high-quality images are presented for the downstream task. Using two example products, experiments were conducted to evaluate the effectiveness of the proposed pipeline, the performance of the individual sub-modules, and the overarching Alignment Module. Notably, upon employing the Alignment Module, the Failure Rate (FR) dropped to 0.0% for both investigated products. Additionally, images produced with the Alignment Module achieved superior CLIP, quality, and size scores. Qualitatively, the produced images present a clean and natural semantic inpainting of the product within the background image. The accompanying web application facilitates pipeline deployment by enabling image generation through a user-friendly interface with extensive image fine-tuning capabilities. The high-quality integration of products into images underscores the potential of the proposed VPP system in digital marketing and advertising.

[Figure 7. Qualitative results of the proposed VPP system for two products: Amazon Echo Dot (top) and Lupure Vitamin C (bottom). The original training images are shown on the left, followed by pairs of background and inpainted output images side by side.]

[Figure 8. The interface of the VPP web app demo, built with Streamlit and hosted on Amazon SageMaker. The uploaded background image is shown under "Input Image" and the inpainted image with an Amazon Echo Dot under "Output Image". The mask produced by the location identifier and other intermediate details of the VPP system are also presented in the interface.]"
}
title_10K/test_title_short_2405.01143v1.json
ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.01143v1",
"title": "Are We Really Achieving Better Beyond-Accuracy Performance in Next Basket Recommendation?",
"abstract": "Next basket recommendation (NBR) is a special type of sequential\nrecommendation that is increasingly receiving attention. So far, most NBR\nstudies have focused on optimizing the accuracy of the recommendation, whereas\noptimizing for beyond-accuracy metrics, e.g., item fairness and diversity\nremains largely unexplored. Recent studies into NBR have found a substantial\nperformance difference between recommending repeat items and explore items.\nRepeat items contribute most of the users' perceived accuracy compared with\nexplore items. Informed by these findings, we identify a potential \"short-cut\"\nto optimize for beyond-accuracy metrics while maintaining high accuracy. To\nleverage and verify the existence of such short-cuts, we propose a\nplug-and-play two-step repetition-exploration (TREx) framework that treats\nrepeat items and explores items separately, where we design a simple yet highly\neffective repetition module to ensure high accuracy, while two exploration\nmodules target optimizing only beyond-accuracy metrics. Experiments are\nperformed on two widely-used datasets w.r.t. a range of beyond-accuracy\nmetrics, viz. five fairness metrics and three diversity metrics. Our\nexperimental results verify the effectiveness of TREx. Prima facie, this\nappears to be good news: we can achieve high accuracy and improved\nbeyond-accuracy metrics at the same time. However, we argue that the real-world\nvalue of our algorithmic solution, TREx, is likely to be limited and reflect on\nthe reasonableness of the evaluation setup. We end up challenging existing\nevaluation paradigms, particularly in the context of beyond-accuracy metrics,\nand provide insights for researchers to navigate potential pitfalls and\ndetermine reasonable metrics to consider when optimizing for accuracy and\nbeyond-accuracy metrics.",
"authors": "Ming Li, Yuanna Liu, Sami Jullien, Mozhdeh Ariannezhad, Mohammad Aliannejadi, Andrew Yates, Maarten de Rijke",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.IR",
"cats": [
"cs.IR"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Are We Really Achieving Better Beyond-Accuracy Performance in Next Basket Recommendation?",
"main_content": "INTRODUCTION Recommender systems have become an essential instrument for connecting people to the content, services, and products they need. In e-commerce, more and more consumers purchase food and household products online instead of visiting physical retail stores [18]. The COVID-19 pandemic has only accelerated this shift [33]. In this scenario, consumers usually purchase a set of items at the same time, a so-called basket. Next basket recommendation (NBR) is a type of sequential recommendation that caters to this scenario: baskets are the target of recommendation and historical sequential data consists of users\u2019 interactions with baskets. NBR has increasingly been attracting attention in recent years [2]. Many methods, based on different machine learning techniques, have been proposed for accurate recommendations, e.g., Markov chain (MC)-based methods [37, 41], frequency and nearest neighbor-based methods [12, 15], RNN-based methods [14, 19, 34, 49], and self-attention methods [9, 39, 50]. Repetition vs. exploration in NBR. Recently, Li et al. [24] have assessed the performance of state-of-the-art NBR in terms of repeat and explore items: items that a user has interacted with before and items that they have never interacted with before, respectively. The authors distinguish between the task of repetition recommendation (recommending repeat items) and the task of exploration arXiv:2405.01143v1 [cs.IR] 2 May 2024 \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Li et al. Table 1: Comparison of the repetition and exploration tasks in NBR. Aspect Repetition Exploration Task difficulty Easy Difficult Number of items Dozens Thousands Item interactions Previous None Users\u2019 interest With feedback Without feedback Task type Re-consume Infer new recommendation (recommending explore items). Repetition and exploration recommendations have different levels of difficulty, where recommending items that are regularly present in a user\u2019s baskets is shown to be a far easier task [24]. Building on these findings, repetition-only [1, 17] and exploration-only [21] methods have been proposed to optimize the accuracy of next basket recommendation. Accuracy and beyond-accuracy metrics. Even though accuracy naturally serves as the most important objective of recommendations, it is widely recognized that it should not be the sole focus. Beyond-accuracy metrics such as item fairness [11, 13, 46, 47] and diversity [8, 52, 53] also play crucial roles in evaluating recommendation services. Such beyond-accuracy metrics have gained increasing attention and have been optimized in a range of recommendation scenarios [48, 53]. In the NBR scenario, however, beyond-accuracy metrics have been far less studied than accuracybased metrics. In this paper, we help to address this knowledge gap. Following the paradigm of multiple-objective recommender systems [16], it is widely recognized that there is a trade-off between accuracy and beyond-accuracy metrics. E.g., diversity goals are reckoned to stand in contrast with accuracy. Put differently, a method achieving a better beyond-accuracy performance while maintaining the same level of accuracy performance is considered to be a success [48, 53]. And how can we achieve a reasonable balance between accuracy and beyond-accuracy metrics in NBR? Potential \u201cshort-cuts\u201d to balancing accuracy and beyondaccuracy metrics. Besides the imbalance between repetition and exploration [22\u201325], Li et al. 
also found that repeat items contribute most of the accuracy, whereas the explore items in the recommended basket contribute very little to the user's perceived utility. As Table 1 summarizes, there are essential differences between the repetition and exploration tasks, which explain the substantial performance differences between the two. Inspired by these findings, we hypothesize that there may be a "short-cut" strategy for optimizing both accuracy and beyond-accuracy metrics, which has two aspects: (i) accuracy: predict repeat items to achieve good accuracy, since predicting repeat items is much easier than predicting explore items [24]; and (ii) beyond-accuracy: use explore items to improve beyond-accuracy metrics, since it is very difficult to recommend quality explore items; thus, exchange the low accuracy that is typically achieved on such items for beyond-accuracy gains, i.e., trade accuracy for diversity and item fairness. We call this NBR strategy a short-cut strategy because it avoids making the fundamental trade-off between accuracy and beyond-accuracy metrics.

TREx framework. To operationalize our short-cut idea, and to check whether the "short-cut" strategy can be made to work, we propose the two-step repetition-exploration (TREx) framework. TREx decouples the prediction of repeat items and explore items. Specifically, TREx uses separate models for predicting (a) repeat items and (b) explore items, and then combines the outcomes of the two prediction models to generate the next basket. In contrast, existing NBR methods usually output scores/probabilities for all items and then select the top-k items to fill the basket to be recommended, ignoring the differences between repeat and explore items. For TREx's repeat-item prediction, we propose a simple yet effective probability-based method that considers item characteristics and users' repurchase frequency. For exploration recommendation, we design two strategies that cater to different beyond-accuracy metrics. The flexibility of TREx allows us to design suitable models for repetition and exploration, with the possibility of controlling the proportions of repetition and exploration to investigate the relations between accuracy and various beyond-accuracy metrics.

Findings and reflections. We consider two types of widely used beyond-accuracy metrics, i.e., diversity and item fairness. Specifically, we investigate five fairness metrics (logEUR, logRUR, EEL, EED, and logDP) [28, 36] and three diversity metrics (ILD, Entropy, and DS) [48]. To provide an overall understanding of these metrics, we group them according to their level of connection with accuracy as follows: (i) strong connection: logRUR; (ii) weak connection: logEUR, EEL, EED; (iii) no connection: logDP, ILD, Entropy, DS. Briefly, the strong connection between logRUR and accuracy stems from the fact that logRUR uses ground-truth relevance to discount exposure, ensuring that only correctly predicted items contribute to effective exposure. The connection between logEUR, EEL, and accuracy is weak because these metrics only require the exposure distribution across groups of recommended results to be close to the group exposure distribution of the ground truth, without considering whether the exposure is contributed by correctly predicted items. Since the position weighting model of EED considers the ground truth, EED also shows a weak connection.
There is no connection between accuracy and logDP, ILD, Entropy, and DS because their exposure distributions across groups are designed to reflect a specific target distribution. The strength of the connection between a beyond-accuracy metric and accuracy determines whether there is a short-cut towards optimizing both accuracy and that beyond-accuracy metric. We perform experiments on two brick-and-mortar retailers' NBR datasets, considering six NBR baselines and eight metrics. The experimental results show that: (1) state-of-the-art accuracy can be achieved by recommending only repeat items via the proposed simple yet effective repetition model; (2) leveraging the "short-cut" via TREx achieves "better" beyond-accuracy performance w.r.t. seven out of eight beyond-accuracy metrics; and (3) for the item fairness metric with a strong connection to accuracy (i.e., logRUR), it is more difficult to achieve better beyond-accuracy performance via the proposed strategy.

Stepping back. Instead of blindly claiming TREx with the designed modules as a state-of-the-art method for optimizing both accuracy and various beyond-accuracy metrics, we reflect on and challenge our evaluation paradigm and its definition of success in this setting. The core question is: Are we really achieving better beyond-accuracy performance in next basket recommendation? Two perspectives offer different ways forward for researchers and practitioners to address this question: (1) If we are willing to sacrifice the accuracy of exploration, then superior beyond-accuracy performance can be achieved by leveraging the "short-cut" strategy via TREx, which is straightforward and efficient. This "short-cut" strategy must be considered before developing more sophisticated and elaborate approaches. (2) Conversely, if we believe it is unreasonable to sacrifice the accuracy of exploration [45], the existence of the "short-cut" strategy reveals flaws in our current evaluation paradigm for demonstrating an NBR method's superiority. A fine-grained analysis (i.e., distinguishing between repetition and exploration) needs to be performed to check whether "better" beyond-accuracy performance is achieved by triggering the "short-cut" strategy, which would hurt the exploration accuracy after all.

Our contributions. The main contributions of the paper are:

• We identify a "short-cut" strategy (i.e., sacrificing accuracy for exploration and using explore items to optimize for beyond-accuracy metrics), which can achieve "better" beyond-accuracy metrics without degrading accuracy.

• We propose a simple repetition recommendation model considering item features and users' repurchase frequency, which can achieve state-of-the-art NBR accuracy by only recommending repeat items.

• We propose TREx, a flexible two-step repetition-exploration framework for NBR, which allows us to control the trade-off between accuracy and beyond-accuracy metrics w.r.t. the recommended baskets.

• We conduct experiments on two datasets w.r.t. eight beyond-accuracy metrics, and find that leveraging "short-cuts" via TREx can achieve better performance on a wide range of metrics. We also find that the stronger a metric's connection with accuracy, the more challenging it becomes to utilize a "short-cut" strategy to enhance it.
• We reflect on, and challenge, existing evaluation paradigms, and find that a fine-grained analysis can provide a complementary view of a method's performance.

2 RELATED WORK

We summarize related research on next basket recommendation and beyond-accuracy metrics.

Next basket recommendation. The NBR problem has been studied for many years. Factorizing personalized Markov chains (FPMC) [37] leverages matrix factorization and Markov chains to model users' general interests and basket transition relations. HRM [41] applies aggregation operations to learn a hierarchical representation of baskets. RNNs have been adapted to the NBR task to learn long-term trends by modeling the whole basket sequence; e.g., Dream [49] uses max/avg pooling to encode baskets. Sets2Sets [14] adopts an attention mechanism and adds frequency information to improve performance. Some methods [19, 44] consider underlying item relations to obtain better representations. Yu et al. [50] argue that item-item relations between baskets are important and leverage GNNs to use these relations. Some methods [3, 20, 39, 42] exploit auxiliary information, including product categories, amounts, prices, and explicit timestamps. TIFUKNN [15] and UP-CF@r [12], frequency-neighbor-based methods, model temporal patterns and then combine these with neighbor information or user-wise collaborative filtering. Li et al. [24] provide several metrics to evaluate repetition and exploration performance in the NBR task and find that the repetition task is easier than the exploration task. Inspired by this analysis, repetition-only [1, 17] and exploration-only [21] models were proposed for next basket recommendation. Existing NBR work mainly focuses on optimizing accuracy, whereas this paper extends to various beyond-accuracy metrics for NBR.

Beyond-accuracy metrics. In addition to accuracy, there are various beyond-accuracy metrics (i.e., diversity, fairness, novelty, serendipity, coverage) that we need to consider when making recommendations [11]. Diversity is a crucial factor in meeting the diverse demands of users [7, 35, 43, 52]. Recently, empirical and revisitation studies [29, 48] have explored the trade-off between accuracy and diversity. The concepts of fairness and item exposure have emerged as crucial considerations, since items and producers play pivotal roles within a recommender system and its ecosystem. Related metrics measure whether items receive a fair share of exposure according to different definitions of fairness. Current research on fairness primarily focuses on individual or group fairness, either from the customer's perspective, adopting a user-centered approach [5], from the provider's viewpoint, adopting an item-centered approach [30, 51], or a two-sided approach [31, 46, 47]. Recently, Liu et al. [28] evaluated the item fairness of existing NBR methods to investigate the robustness of different fairness metrics. Unlike the work listed above, this paper is not limited to optimizing a specific type of metric: it examines the possibility of leveraging a "short-cut" strategy to seemingly optimize various beyond-accuracy metrics, and it provides insights w.r.t. evaluation paradigms when extending NBR optimization and evaluation to these beyond-accuracy metrics.

3 TASK FORMULATION AND DEFINITIONS

We describe the next basket recommendation problem and formalize the notions of repetition and exploration. Our notation is summarized in Table 2.

Next basket recommendation.
Given a set of users $U = \{u_1, u_2, \ldots, u_n\}$ and items $I = \{i_1, i_2, \ldots, i_m\}$, $S_u = \{B_u^1, B_u^2, \ldots, B_u^t\}$ represents the historical interaction sequence of user $u$, where $B_u^t$ is the user's basket at time step $t$. $B_u^t$ consists of a set of items $i \in I$, and the goal of the next basket recommendation task is to predict $B_u^{t+1}$, the following basket of items that the user would probably like, based on the user's past interactions $S_u$:

    $P_u = \hat{B}_u^{t+1} = f(S_u)$,   (1)

where $f$ is our basket generation algorithm. We assume that the user's attention and screen space are limited; hence, like previous studies [24, 28], we recommend fixed-size baskets of size 10 or 20.

Table 2: Notation used in the paper; fairness-related notation is adapted from [28, 36].

  $u \in U$          Users
  $i \in I$          Items
  $S_u$              Sequence of historical baskets for $u$
  $B_u^t$            $t$-th basket in $S_u$, a set of items $i \in I$
  $I_{u,t}^{rep}$    Set of repeat items for $u$ up to timestamp $t$
  $I_{u,t}^{expl}$   Set of explore items for $u$ up to timestamp $t$
  $T_u$              Ground-truth basket for $u$ that we aim to predict
  $T_u^{rep}$        Set of repeat items in the ground-truth basket $T_u$ for $u$
  $T_u^{expl}$       Set of explore items in the ground-truth basket $T_u$ for $u$
  $P_u$              Predicted basket for $u$
  $P_u^{rep}$        Set of repeat items in the predicted basket $P_u$ for $u$
  $P_u^{expl}$       Set of explore items in the predicted basket $P_u$ for $u$
  $G(P)$             Group alignment matrix for items in $P$
  $G^+$              Popular group
  $G^-$              Unpopular group
  $a_P$              Exposure vector for items in $P$
  $\epsilon_P$       The exposure of groups in $P$ ($G(P)^T a_P$)
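As a reading aid, the task formulation in Eq. (1) can be summarized as a type signature; this is a sketch under our own naming assumptions, not code from the paper:

```python
from typing import Callable, List, Set

Basket = Set[str]        # a basket: a set of item ids
History = List[Basket]   # S_u: the user's historical baskets in temporal order

# Eq. (1): an NBR method is a function f mapping a history S_u to a predicted
# next basket P_u of fixed size k (k = 10 or 20 in the paper's setting).
NBRModel = Callable[[History, int], Basket]
```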
Repetition and exploration. We assume that the set of items is fixed. Although this might not be the case in real-world settings, modeling the addition and deletion of items is out of the scope of this paper. With this assumption in mind, the addition of every new basket to a user's history may translate into fewer items left to explore. To differentiate between items coming from exploration and from repeat consumption behavior, for a user $u$ and timestamp $t$, a set of items $I_{u,t}^{rep} \subset I$ is considered to be the "repeat items". The set of explore items $I_{u,t}^{expl}$ is simply its complement within the overall item set $I$. We define $I_{u,t}^{rep}$ as:

    $I_{u,t}^{rep} = I_{u,t-1}^{rep} \cup B_u^t$.   (2)

This also means that $I_{u,1}^{rep} \subset \cdots \subset I_{u,t-1}^{rep} \subset I_{u,t}^{rep}$. Conversely, we have $I_{u,t}^{expl} \subset I_{u,t-1}^{expl} \subset \cdots \subset I_{u,1}^{expl}$. The task of predicting the next basket for a user $u$ is equivalent to predicting which items from $I_{u,t}^{rep}$ and $I_{u,t}^{expl}$ will appear in $B_u^{t+1}$. One way to solve this problem is to decouple it into two subtasks: the repetition subtask, which aims to predict which items from $I_{u,t}^{rep}$ to recommend, and the exploration subtask, which recommends items from $I_{u,t}^{expl}$. Table 1 shows the different characteristics of the repetition and exploration tasks.
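A minimal sketch of the repeat/explore split implied by Eq. (2) follows; the function name is an illustrative assumption:

```python
def repeat_explore_split(history, catalog):
    """Eq. (2): the repeat set I_rep grows monotonically as baskets are added;
    the explore set is its complement within the (fixed) item catalog.
    history: the user's baskets B_u^1..B_u^t in temporal order, each a set of item ids."""
    repeat_items = set()
    for basket in history:
        repeat_items |= basket
    explore_items = set(catalog) - repeat_items
    return repeat_items, explore_items
```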
4 EVALUATION METRICS

Next, we describe the accuracy and beyond-accuracy metrics (i.e., fairness and diversity) considered in the paper.¹

Accuracy. In terms of accuracy, we use three metrics that are widely used for the NBR task: Recall@k, NDCG@k, and PHR@k. Recall measures the ability to find all items that the user will purchase in the next basket; NDCG is a ranking metric that also considers the order of the items; PHR is a user-level measure representing the ratio of users whose recommended basket contains an item from the ground truth.

Fairness. Assume $\pi(P \mid u)$ is a user-dependent distribution and $\rho(u)$ is a distribution over users; overall, the recommended item rankings among all users follow the distribution $\rho(u)\pi(P \mid u)$. $\epsilon_P = G(P)^T a_P$ is the group exposure within a recommended basket.² Its expected value $\epsilon_\pi = E_{\pi\rho}[\epsilon_P]$ is the group exposure over all recommended baskets. Following [28, 36], we select a set of well-known fairness metrics covering two types of fairness considerations:³

(1) Equal opportunity. Promote equal treatment based on merit or utility, regardless of group membership [28, 36]. (i) Exposed Utility Ratio (EUR) [38] quantifies the deviation from the objective that the exposure of each group is proportional to its utility $Y(G)$. (ii) Realized Utility Ratio (RUR) [38] models actual user engagement: the click-through rates for the groups $\Gamma(G)$ should be proportional to their utility. (iii) Expected Exposure Loss (EEL) [10] is the distance between the expected exposure and the target exposure $\epsilon^*$, i.e., the exposure under the ideal policy.

(2) Statistical parity. Ensure comparable exposure among groups. (i) Expected Exposure Disparity (EED) [10] measures the inequality in the exposure distribution across groups. (ii) Demographic Parity (DP) [38] measures the ratio of average exposure given to the two groups. Following [36], we reformulate DP as logDP to address empty-group scenarios and improve interpretability; Exposed Utility Ratio (logEUR) and Realized Utility Ratio (logRUR) are defined in a similar manner.

Diversity. Following [48], we consider the following widely used diversity metrics, which cater to users' diversified demands. (i) Intra-List Distance (ILD) [6, 7] measures the average distance between every pair of items in the recommendation list $P_u$, where $d_{ij}$ is the Euclidean distance between the respective category embeddings. (ii) Entropy [43, 54] quantifies the dispersion of the item-category distribution in the recommendation list $P_u$; a higher degree of dispersion in the category distribution corresponds to increased diversity. (iii) Diversity Score (DS) [27] is calculated as the number of interacted/recommended categories divided by the number of interacted/recommended items.

As shown in Table 3, we can group the beyond-accuracy metrics according to their connection with accuracy.

Table 3: Summary of fairness and diversity metrics; fairness metrics are adapted from [36]. ↑ indicates that higher values are better; ↓ indicates that lower values are better; ◦ means that the closer the value is to 0, the better the performance.

  Category            Metric    Goal                                                              Better  Accuracy connection
  Equal opportunity   logRUR    Click-through rate proportional to relevance                      ◦       Strong
                      logEUR    Exposure proportional to relevance                                ◦       Weak
                      EEL       Exposure matches ideal (from relevance)                           ↓       Weak
  Statistical parity  EED       Exposure well-distributed                                         ↓       Weak
                      logDP     Exposure equal across groups                                      ◦       None
  Diversity           ILD       Average distance between categories for each pair of items       ↑       None
                      Entropy   Entropy of the item-category distribution in the list             ↑       None
                      DS        Number of categories divided by the number of items in the list   ↑       None

¹ Due to space limitations, we only provide brief introductions to each metric; more detailed information (e.g., function, responsibility) can be found in the original papers and relevant survey papers [28, 36, 53].
² The formula to compute the exposure vector $a_P$ using different position weighting models can be found in [28, 36].
³ The item fairness metric Inequity of Amortized Attention [4] is not used in this paper since some baselines do not produce predicted relevance for items.
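Two of the diversity metrics are simple enough to state directly in code. The sketch below computes Entropy and DS from the category labels of a recommended list (ILD additionally requires category embeddings and is omitted); the function names are our own:

```python
import math
from collections import Counter

def entropy_diversity(categories):
    """Entropy: dispersion of the item-category distribution in a recommended list.
    categories: one category label per recommended item (assumed non-empty)."""
    counts = Counter(categories)
    n = len(categories)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def diversity_score(categories):
    """DS: number of distinct recommended categories divided by the number of items."""
    return len(set(categories)) / len(categories)
```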
5 A TWO-STEP REPETITION-EXPLORATION FRAMEWORK

Given the differences depicted in Table 1, we propose a two-step repetition-exploration (TREx) framework for NBR. TREx assembles recommendations from a repetition module and an exploration module, and allows one to easily swap out the sub-algorithms used for repetition and exploration. In the first step, we model the repetition and exploration behavior separately to obtain candidates from both sources. Then, we generate the recommended basket from those candidates in the second step. The main architectural difference between previous approaches to the NBR problem, which typically consist of a single treatment of all items, and TREx is that TREx treats repeat and explore items differently. The pseudo-code for TREx is given in Algorithm 1. Next, we describe the three modules that make up TREx.⁴

Algorithm 1: TREx Framework
  Data: basket sequence $S$, basket size $k$, repetition confidence threshold $v$
  Result: recommended basket $B_u^{t+1}$ for each user $u$
   1: calculate the repetition feature $RepI(i)$ for each item
   2: for each user $u$ do
   3:   get repeat items $I_{u,t}^{rep}$ and explore items $I_{u,t}^{expl}$
   4:   calculate the repetition score $RepS_u(i)$ for each $i \in I_{u,t}^{rep}$
   5:   remove items $i$ from $I_{u,t}^{rep}$ when $RepS_u(i) < v$
   6:   rank $I_{u,t}^{rep}$ by $RepS_u(i)$ in descending order
   7:   initialize the next basket $B_u^{t+1}$
   8:   if $|I_{u,t}^{rep}| < k$ then
   9:     fill $B_u^{t+1}$ using $I_{u,t}^{rep}$
  10:     $m \leftarrow k - |I_{u,t}^{rep}|$
  11:     fill the $m$ empty slots of $B_u^{t+1}$ with explore items via the exploration module
  12:   else
  13:     fill $B_u^{t+1}$ using the top-$k$ items of $I_{u,t}^{rep}$
  14:   end
  15: end

5.1 Repetition module

As the repetition task is much simpler than exploration, we design a repetition module targeted at improving accuracy. Intuitively, if a user consumed an item several times in the past, they are likely to repurchase that item in the next basket; frequency information is thus a strong signal for repetition prediction [40]. The personal item frequency (PIF) introduced in TIFUKNN [15] and the recency window in UP-CF@r [12] both capture temporal dependencies by focusing more on recent behavior. However, they do not capture item characteristics w.r.t. repurchasing. For example, a purchase of a bottle of milk and a pan is more likely to be followed by a repurchase of the milk rather than the pan, even if both currently have the same purchase frequency. To consider both item features and user interest simultaneously, we use the repetition score $RepS_u(i)$ to represent the repurchase score of item $i$ for user $u$.

⁴ Theoretically, TREx allows us to choose or design suitable repetition and exploration modules that both target accuracy, to achieve state-of-the-art performance. However, our aim is to investigate the "short-cut" and the relationship between accuracy and various beyond-accuracy metrics.
This score is decomposed into two parts: the item-specific repurchase feature $RepI(i)$ and the user's interest $E_i^u$ in item $i$. Formally:

    $RepS_u(i) = E_i^u \cdot RepI(i)$.   (3)

This corresponds to line 4 in Algorithm 1. Given the items in the dataset $I = \{i_1, i_2, \ldots, i_m\}$, we need to derive the repurchase feature $RepI(i)$ for each item from the training set. First, the repurchase frequency $RepF(i)$ can be calculated by gathering statistical information across users. To mitigate the impact of abnormally high values for some users, we introduce a hyperparameter $\alpha$ that discounts the repurchase frequency of item $i$:

    $RepF(i) = \frac{\sum_U (\text{item } i \text{ repurchase frequency})^\alpha}{\#\text{users who bought item } i \text{ at least once}}$.   (4)

In addition, some items might only have a few samples, which leads to low confidence in their repetition feature estimates. We leverage the average estimate $\overline{RepF}$ across all items as supplementary information to help items with few samples. The final repetition feature is then given by:

    $RepI(i) = RepF(i) + \frac{\overline{RepF}}{N_i}$,   (5)

where $N_i$ is the number of users who bought item $i$. Thus, the average $\overline{RepF}$ has only a small effect on $RepI(i)$ when we have more samples to compute item-specific features. This corresponds to line 1 in Algorithm 1.

The item frequency in a user's historical baskets can partially reflect the user's interest, yet user interests can also be dynamic. To model temporal dependencies, we introduce a time-decay factor $\beta$, which gives recent interactions more impact on the interest $E_i^u$. Assume that a specific item $i$ was purchased by user $u$ several times in their historical baskets $\{B_u^{l_1}, B_u^{l_2}, \ldots, B_u^{l_m}\}$, with the corresponding position set denoted as $L_i = \{l_1, l_2, \ldots, l_m\}$; then $E_i^u$ is defined as:

    $E_i^u = \sum_{j=1}^{m} \beta^{T - l_j}$,   (6)

where $T$ is the length of the user's basket sequence. TREx's repeat recommendation model thus takes item features, user interests, and the temporal order of baskets into consideration. We treat the items in baskets independently and calculate the repetition score $RepS$ for all items that appeared in a user's previous baskets; these scores are used in the final basket generation process.
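A minimal sketch of Eqs. (3)-(6) follows; the function and variable names are our own illustrative assumptions, not the released code:

```python
def item_repeat_features(user_item_counts, alpha):
    """RepI(i) from Eqs. (4)-(5): a discounted per-item repurchase frequency RepF(i),
    smoothed with the catalog-wide average RepF for items with few buyers.
    user_item_counts[i]: one repurchase count per user who bought item i."""
    rep_f = {
        i: sum(c ** alpha for c in counts) / len(counts)
        for i, counts in user_item_counts.items()
    }
    mean_rep_f = sum(rep_f.values()) / len(rep_f)
    # N_i = number of users who bought item i; the smoothing term shrinks with N_i.
    return {i: rep_f[i] + mean_rep_f / len(user_item_counts[i]) for i in rep_f}

def repetition_score(positions, seq_len, beta, rep_i):
    """RepS_u(i) = E_i^u * RepI(i) (Eq. 3), with time-decayed interest
    E_i^u = sum_j beta^(T - l_j) (Eq. 6) over the basket positions l_j where
    user u bought item i; seq_len is T, the length of u's basket sequence."""
    interest = sum(beta ** (seq_len - l) for l in positions)
    return interest * rep_i
```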
5.2 Exploration module

Being more challenging than repetition, exploration is also an important aspect of NBR. To complement the repetition module, we design two exploration modules, targeting item fairness and diversity, respectively. For each user $u$, the exploration candidates $I_{u,t}^{expl}$ are the items that the user has never bought before.

Item fairness. Following [24], we find that NBR methods usually exhibit varying degrees of popularity bias, i.e., they recommend more popular items than the ground truth contains, which harms item fairness. We therefore recommend unpopular items $i \in G^-$ in the exploration module, to approach the distribution of the ground truth and decrease the exposure gap between the popular and unpopular groups. Specifically, we randomly sample explore items with a sampling probability calculated from the purchase frequency of the unpopular items.

Diversity. Diversity optimizes for more dispersed categories in the predicted basket. For each user, we record the categories of the repetition candidates, rank the exploration candidates according to their popularity, and select explore items to fill $B_u^{t+1}$ in turn, such that the category of each selected explore item differs from the categories already in $B_u^{t+1}$.

5.3 Basket generation module

To construct the final basket recommended by TREx under the accuracy objective, we adopt a repetition-greedy approach: we first consider the item candidates generated by the repetition module and then fill the remaining slots via the exploration module. TRExFairness and TRExDiversity denote TREx with the exploration module targeted at fairness and diversity, respectively. For a user $u$, we get the repetition scores $RepS_u(i)$ for $i \in I_{u,t}^{rep}$ (Algorithm 1, lines 3-4). First, we define a confidence threshold $v$ for the repetition score, and items are removed from $I_{u,t}^{rep}$ when the corresponding $RepS_u(i) < v$ (line 5).⁵ $I_{u,t}^{rep}$ can then be seen as the repetition candidate set. If the number of repetition candidates exceeds the basket size, the items with the highest scores have priority in filling the basket (line 13). If the number of repetition candidates is smaller than the basket size, the basket is first filled with all items in the repetition candidate set $I_{u,t}^{rep}$; we then fill up the basket with explore items via the exploration module, where $m$ denotes the number of empty slots (lines 9-11).

⁵ The confidence threshold $v$ controls the proportion of repeat and explore items in the recommendation, and hence the accuracy/beyond-accuracy trade-off in this paper. We sweep the repetition confidence bound $v$ to obtain TREx variants with different accuracy and beyond-accuracy performance.
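A compact sketch of the two exploration strategies and the basket generation step (Algorithm 1, lines 5-13) follows. All names are illustrative assumptions, and the fairness sampler is a simplification of the frequency-proportional sampling described above:

```python
import random

def fairness_explore(unpopular_freq, m, rng=random):
    """Fairness-oriented exploration: sample m never-bought unpopular items with
    probability proportional to their purchase frequency (Section 5.2)."""
    items = list(unpopular_freq)
    weights = [unpopular_freq[i] for i in items]
    picked = set()
    while len(picked) < m and len(picked) < len(items):
        picked.add(rng.choices(items, weights=weights, k=1)[0])
    return list(picked)

def diversity_explore(candidates_by_popularity, category_of, used_categories, m):
    """Diversity-oriented exploration: walk candidates in popularity order and keep
    items whose category is not yet represented in the basket (Section 5.2)."""
    picked = []
    for item in candidates_by_popularity:
        if category_of[item] not in used_categories:
            picked.append(item)
            used_categories.add(category_of[item])
            if len(picked) == m:
                break
    return picked

def generate_basket(rep_scores, explore_fill, k, v):
    """Basket generation (Section 5.3): keep repeat items with RepS_u(i) >= v in
    descending score order; fill any remaining slots via an exploration module."""
    repeat_ranked = [i for i, s in sorted(rep_scores.items(), key=lambda x: -x[1]) if s >= v]
    basket = repeat_ranked[:k]
    if len(basket) < k:
        basket += explore_fill(k - len(basket))
    return basket
```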
6 EXPERIMENTS

6.1 Experimental setup

Datasets. We conduct experiments on two widely used datasets: (i) Instacart,⁶ which includes a large number of grocery orders from users; following [28, 32], ~20,000 users are randomly selected for the experiments; and (ii) Dunnhumby,⁷ which contains two years of household-level transactions of 2,500 frequent shoppers at a retailer. Following [1, 28], we sample users who have at least three baskets and remove items that appeared fewer than five times. The two datasets vary in the repeat ratio, i.e., the proportion of repeat items in the ground-truth baskets [24]. We focus on the fixed-size (10 or 20) NBR problem. The statistics of the processed datasets are shown in Table 4.

Table 4: Statistics of the processed datasets.

  Dataset    #items  #users  Avg. basket size  Avg. #baskets per user  Repeat ratio  Explore ratio
  Instacart  29,399  19,210  10.06             15.91                   0.60          0.40
  Dunnhumby  37,162   2,482  10.07             43.17                   0.43          0.57

In our experiments, each dataset is partitioned according to [1, 12, 28, 32]. The training baskets encompass all user baskets except the last one; for users with over 50 baskets in the training data, only their last 50 baskets are included in the training set. The final baskets of all users are then divided equally into a 50% validation set and a 50% test set. Figure 1 shows the distribution of users across repeat ratios, i.e., the proportion of repeat items in the ground-truth basket.

[Figure 1: Distribution of users across different repeat ratios for Instacart and Dunnhumby.]

NBR baselines. We compare TREx with 8 representative baselines, which we select based on their characteristics in the analyses performed in [24, 28], divided into three groups:

6.1.1 Simple baselines. (i) G-TopFreq uses the $k$ most popular items in the dataset to form the recommended next basket. (ii) P-TopFreq is a personalized TopFreq method, which treats the $k$ most frequent items in the user's historical records as the next basket. (iii) GP-TopFreq [24] is a simple combination of P-TopFreq and G-TopFreq, which first uses P-TopFreq to fill the basket and then uses G-TopFreq to fill the remaining slots.

6.1.2 Nearest neighbor-based methods. (i) TIFUKNN [15] is a state-of-the-art method that models the temporal dynamics of the frequency information of users' past baskets to introduce Personalized Frequency Information (PIF), and then applies a KNN-based method to the PIF. (ii) UP-CF@r [12] is a combination of recency-aware user-wise popularity and user-wise collaborative filtering.

6.1.3 Neural network-based methods. (i) Dream [49] models users' global sequential basket behavior for NBR using a recurrent neural network (RNN). (ii) DNNTSP [50] is a state-of-the-art method that leverages a GNN and self-attention: it encodes item-item relations via a graph and employs a self-attention mechanism to capture the temporal dependencies of users' basket sequences. (iii) ReCANet [1] is a repeat-only model for NBR, which models user-item representations with historical consumption patterns via an RNN.

Configurations. To assess group fairness (Section 4), we follow the configurations of previous research [26, 28]: the group of an item is determined by its popularity (i.e., the number of purchases recorded in the historical baskets of the dataset). The top 20% of items with the highest purchase frequency form the popular group ($G^+$), while the remaining 80% of items are assigned to the unpopular group ($G^-$).

⁶ https://www.kaggle.com/c/instacart-market-basket-analysis/data
⁷ https://www.dunnhumby.com/source-files/
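A minimal sketch of this popularity-based group assignment follows; the function name and 20% default are taken from the text, while the interface is our own assumption:

```python
def popularity_groups(purchase_freq, top_ratio=0.2):
    """Split the catalog into the popular group G+ (top 20% of items by purchase
    frequency in the historical baskets) and the unpopular group G- (the rest).
    purchase_freq: mapping from item id to its purchase count."""
    ranked = sorted(purchase_freq, key=purchase_freq.get, reverse=True)
    cut = int(len(ranked) * top_ratio)
    return set(ranked[:cut]), set(ranked[cut:])
```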
For the baseline methods, a grid search is performed to find the optimal hyper-parameters on the validation set. For TIFUKNN, the number of neighbors $k$ is tuned on {100, 300, 500, 900, 1100, 1300}, the number of groups $m$ on {3, 7, 11, 15, 19, 23}, the within-basket time-decay ratio $r_b$ and the group time-decay ratio $r_g$ on {0.1, 0.2, ..., 0.9, 1}, and the fusion weight $\alpha$ on {0, 0.1, ..., 0.9, 1}. For UP-CF@r, the recency window $r$ is tuned on {1, 5, 10, 25, 100, ∞}, the locality $q$ on {1, 5, 10, 50, 100, 1000}, and the asymmetry $\alpha$ on {0, 0.25, 0.5, 0.75, 1}. For Dream, DNNTSP, and ReCANet, the item and user embedding sizes are tuned on {16, 32, 64, 128}. As to TREx, for the repetition module, $\alpha$ is selected from {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} and the time-decay factor $\beta$ from {0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0}. To facilitate reproducibility, we release the source code and all hyper-parameters in an online repository: https://github.com/lynEcho/TREX.

6.2 Overall accuracy performance

By decoupling the repetition and exploration tasks, TREx-Rep optimizes repeat-item prediction and accounts for the accuracy of the NBR performance. Table 5 shows the experimental results for TREx-Rep and the baselines. We observe that TREx-Rep surpasses two complex deep learning-based methods (Dream and DNNTSP) by a large margin on the Dunnhumby and Instacart datasets, and that TREx-Rep always achieves or matches the state-of-the-art accuracy on both datasets across the accuracy metrics. Note that TREx-Rep achieves this competitive accuracy while only using part of the available slots in the basket.⁸ Compared to deep learning methods with complex architectures that try to learn basket representations and model temporal relations, TREx-Rep is very efficient due to its simplicity.

To investigate the effect of the repetition features on repetition performance in NBR, we conduct experiments on TREx-Rep in which we gradually add the time-decay factor $\beta$ and the item-specific repetition feature $RepI(i)$. The results are shown in Figure 2. Accuracy increases as we integrate each factor into TREx-Rep, indicating that both the time-decay factor $\beta$ and the item-specific repetition feature $RepI(i)$ contribute to the accuracy of TREx-Rep.

[Figure 2: Performance of TREx-Rep when we add a time-decay factor $\beta$ (+T), and when we add both $\beta$ and the item-specific repetition feature $RepI(i)$ (+T+RF).]

[Figure 3: The recall improvement of (+T+RF) over (+T) when the training sample ratio changes from 0.2 to 1.]

⁸ As TREx-Rep only recommends repeat items, the basket cannot be filled completely when the number of a user's repeat (historical) items is smaller than the basket size. ReCANet also only recommends repeat items; however, it is a complex neural model, which is much slower than the proposed TREx-Rep module.
Significant improvements over using only the time-decay factor $\beta$ can be observed on the Dunnhumby dataset when the item-specific repetition feature $RepI(i)$ is also adopted to compute the repetition score $RepS_u(i)$. Note that the improvement from adding $RepI(i)$ to TREx-Rep on the Instacart dataset is relatively small; we conjecture that items in the Instacart dataset are more regular products that differ little from each other in their repetition features. Figure 3 shows the performance when using different amounts of training data: the improvement in recall from adding $RepI(i)$ grows as more training data is used, since we then have more samples for estimating the repetition feature $RepI(i)$.

6.3 Beyond-accuracy performance

We conduct experiments to verify whether TREx with the designed modules (i.e., TRExDiversity and TRExFairness) can achieve better performance on representative diversity and item fairness metrics. Note that the recommended basket remains fixed for a specific user in the existing baselines, resulting in a fixed performance on both accuracy and beyond-accuracy metrics for each dataset. In contrast, TREx provides the flexibility to adjust the trade-off between accuracy and beyond-accuracy metrics via the repetition confidence bound $v$, allowing more nuanced control over the recommendation process than traditional baselines.
\u2020 indicates that TREx-Rep results achieve the same level of performance as SOTA baselines (paired t-test). Dataset Metric G-TopFreq P-TopFreq GP-TopFreq UP-CF@r TIFUKNN Dream DNNTSP ReCANet TREx-Rep Instacart Recall@10 0.0704 0.3143 0.3150 0.3377 0.3456 0.0704 0.3295 0.3490 0.3476\u2020 NDCG@10 0.0817 0.3339 0.3343 0.3582 0.3657 0.0817 0.3434 0.3699 0.3661\u2020 PHR@10 0.4600 0.8447 0.8460 0.8586 0.8639 0.4600 0.8581 0.8668 0.8655\u2020 Recall@20 0.0973 0.4138 0.4168 0.4405 0.4559 0.0979 0.4339 0.4562 0.4557\u2020 NDCG@20 0.0962 0.3889 0.3902 0.4161 0.4271 0.0968 0.4018 0.4303 0.4269\u2020 PHR@20 0.5302 0.8921 0.8959 0.9045 0.9098 0.5346 0.9033 0.9097 0.9092\u2020 Dunnhumby Recall@10 0.0897 0.1628 0.1628 0.1699 0.1763 0.0896 0.0871 0.1730 0.1815\u2020 NDCG@10 0.0798 0.1562 0.1562 0.1639 0.1683 0.0759 0.0792 0.1625 0.1689\u2020 PHR@10 0.3795 0.5399 0.5399 0.5536 0.5729 0.3873 0.4303 0.5655 0.5761\u2020 Recall@20 0.1046 0.2075 0.2075 0.2168 0.2227 0.1081 0.1442 0.2252 0.2257\u2020 NDCG@20 0.0877 0.1787 0.1787 0.1885 0.1917 0.0853 0.1021 0.1879 0.1921\u2020 PHR@20 0.4392 0.6116 0.6116 0.6326 0.6342 0.4558 0.5378 0.6377 0.6390\u2020 Diversity. The experimental results w.r.t. the accuracy and different diversity metrics (i.e., ILD, Entropy, and DS) are shown in Figure 4.9 We have the following observations: (1) Compared to methods (i.e., TIFUKNN and ReCANet) with the best accuracy, TRExDiversity can achieve better performance in terms of all three diversity metrics while preserving the same level of accuracy on both datasets. (2) In contrast to other baseline methods (excluding TIFUKNN and ReCANet), TRExDiversity showcases the ability to recommend baskets with enhanced accuracy and diversity simultaneously. Item fairness. The experimental results regarding the accuracy and five fairness metrics (LogRUR, logEUR, logDP, EEL, and EED) are depicted in Figure 5. Based on our analysis, we make the following observations: (i) On the Dunnhumby dataset, TREx\ud835\udc39\ud835\udc4e\ud835\udc56\ud835\udc5f\ud835\udc5b\ud835\udc52\ud835\udc60\ud835\udc60 demonstrates superior fairness w.r.t. logDP and logEUR while maintaining the same level of accuracy performance as the best-performing baselines (i.e., TIFUKNN and ReCANet). Similarly, on Dunnhumby, TREx\ud835\udc39\ud835\udc4e\ud835\udc56\ud835\udc5f\ud835\udc5b\ud835\udc52\ud835\udc60\ud835\udc60showcases enhanced fairness across four fairness metrics (logDP, logEUR, EEL, and EED) while achieving accuracy performance comparable to the best-performing baselines. (ii) TRExFairness demonstrates its capability to recommend baskets with improved accuracy and fairness w.r.t. logDP and logEUR concurrently, when compared to complex baselines such as Dream, UP-CF@r, and DNNTSP. (iii) In terms of logRUR, TRExFairness exhibits inferior performance in fairness while maintaining similar accuracy levels compared to several existing baselines. Moreover, as both accuracy and fairness decrease simultaneously, a win-win and lose-lose scenario is evident rather than a conventional trade-off relationship in this fairness evaluation. Connections with accuracy. To get a better understanding of the possibility of leveraging the \u201cshort-cut\u201d via TREx to improve beyond-accuracy metrics, we conduct an analysis by categorizing these beyond-accuracy metrics into different groups based on their connections with accuracy (see Section 4 and Table 3). We can observe that TREx can easily achieve better performance w.r.t. 
beyond-accuracy metrics that have no connection with accuracy (i.e., ILD, Entropy, DS, and logDP) on the two datasets. When beyond-accuracy metrics (e.g., logEUR, EEL, and EED) exhibit weak associations with accuracy, TREx outperforms the alternative methods in some instances (4 out of 6). However, in cases where beyond-accuracy metrics are strongly correlated with accuracy (e.g., logRUR), TREx struggles to achieve superior performance. Since only accurate predictions contribute to improvements in logRUR fairness, leveraging the exploration module to optimize such beyond-accuracy metrics is very challenging.
6.4 Reflections and discussions
The above results verify our hypothesis and demonstrate the effectiveness of leveraging a "short-cut" strategy to achieve better beyond-accuracy performance under the current evaluation paradigms. Whether this "short-cut" strategy should be used in real-world scenarios when NBR practitioners consider beyond-accuracy metrics is debatable. In scenarios where the accuracy of exploration is not important to practitioners and only overall accuracy is of concern, the "short-cut" strategy proves to be a straightforward and efficient means of achieving better performance w.r.t. various beyond-accuracy metrics. TREx should then be considered, or at least serve as a baseline, before designing more sophisticated methods, such as multi-objective loss functions [7, 20] or integer programming [53]. However, in some scenarios it is unreasonable to sacrifice the exploration accuracy, despite it being low. The existence of the "short-cut" strategy therefore reveals a potential flaw of the existing evaluation paradigms (i.e., using overall metrics to define success). We look into the exploration accuracy [24] of TREx-Diversity when it outperforms several existing baselines in terms of both overall accuracy and diversity (i.e., success according to the existing evaluation paradigm). Table 6 shows a large decrease in the accuracy of the explored items in the recommended basket of TREx-Diversity compared to these baselines, since the designed module in TREx-Diversity mainly aims to improve diversity and does not consider accuracy. In this sense, we cannot claim the superiority of TREx-Diversity over these baselines based on the overall performance alone.
Note that the fundamental reason for the existence of this "short-cut" is that predicting accurate explore items is much more difficult than predicting repeat items, and exploration prediction accounts for only a limited share of a user's overall accuracy [22-25]. Given that exploration prediction contributes only minimally to the overall accuracy of users, it becomes feasible to allocate resources toward optimizing other beyond-accuracy metrics instead of accuracy itself. Therefore, beyond using overall performance to measure accuracy and beyond-accuracy metrics, a fine-grained evaluation could help to provide a more rigorous identification of success when considering beyond-accuracy metrics.
[Figure 5 omitted: line plots of Recall against logEUR, logRUR, EEL, EED, and logDP on the Instacart and Dunnhumby datasets for TREx-Fair, G-TopFreq, Dream, P-TopFreq, GP-TopFreq, UP-CF@r, TIFUKNN, DNNTSP, and ReCANet.] Figure 5: Performance of TREx-Fairness at different v values, compared with different NBR methods in terms of different fairness metrics. The red + marker indicates the direction with both high accuracy and fairness.
Table 6: Exploration accuracy [24] of TREx-Diversity compared with NBR methods that are inferior to it within existing evaluation paradigms.
Columns: TIFUKNN / Dream / DNNTSP / TREx-Div
Instacart, Recall_expl@10: 0.0014 / 0.0322 / 0.0014 / 0.0002
Instacart, PHR_expl@10: 0.0037 / 0.1431 / 0.0040 / 0.0009
Instacart, Recall_expl@20: 0.0077 / 0.0526 / 0.0072 / 0.0008
Instacart, PHR_expl@20: 0.0198 / 0.2120 / 0.0217 / 0.0031
Dunnhumby, Recall_expl@10: 0.0042 / 0.0111 / 0.0017 / 0.0000
Dunnhumby, PHR_expl@10: 0.0139 / 0.0521 / 0.0085 / 0.0019
Dunnhumby, Recall_expl@20: 0.0069 / 0.0214 / 0.0028 / 0.0016
Dunnhumby, PHR_expl@20: 0.0232 / 0.1045 / 0.0115 / 0.0065
7 CONCLUSION
We have expanded the research objectives of NBR beyond sole accuracy to encompass both accuracy and beyond-accuracy metrics. We have recognized a potential "short-cut" strategy to optimize beyond-accuracy metrics while preserving high accuracy levels. To capitalize on and validate the presence of such "short-cuts," we have introduced a plug-and-play framework called two-step repetition-exploration (TREx) that accounts for the differences between the repetition and exploration tasks. This framework treats repeat items and explore items as distinct entities, employing a straightforward yet highly effective repetition module to uphold accuracy standards.
Concurrently, two exploration modules have been devised to target the optimization of beyond-accuracy metrics. We have conducted experiments on two publicly available datasets w.r.t. eight representative beyond-accuracy metrics, covering item fairness (i.e., logEUR, logRUR, logDP, EEL, and EED) and diversity (i.e., ILD, Entropy, and DS). Our experimental results demonstrate the effectiveness of the proposed "short-cut" strategy, which can achieve better beyond-accuracy performance w.r.t. several fairness and diversity metrics on different datasets. Additionally, we group beyond-accuracy metrics according to the strength of their connection with accuracy. Our analysis reveals that the stronger the connection with accuracy, the more difficult it becomes to employ a "short-cut" strategy to optimize these beyond-accuracy metrics; metrics with a stronger connection to accuracy are therefore preferable for avoiding such short-cuts. As to the broader implications of our work, we have discussed the reasonableness of leveraging the "short-cut" strategy to trade the accuracy of exploration for beyond-accuracy metrics in various scenarios. The presence of this "short-cut" highlights a potential flaw in the definition of success within existing evaluation paradigms, particularly in scenarios where exploration accuracy is important despite being low [45]. A fine-grained evaluation should be performed in NBR to offer a more precise identification of "better" performance in such scenarios. Despite the simplicity of the "short-cut" strategy and TREx, our paper sheds light on the research direction of considering both accuracy and beyond-accuracy metrics in NBR. Rather than blindly embracing sophisticated methods in NBR, follow-up research should be aware of the existence of the "short-cut" and the potential flaws of existing evaluation paradigms in this research direction. Acknowledgements. This work is partially supported by the Dutch Research Council (NWO), under project numbers 024.004.022, NWA.1389.20.183, KICH3.LTP.20.006, and VI.Vidi.223.166. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors."
}
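The TREx repetition module above scores items by a time-decayed repeat frequency (factor beta), optionally combined with the item-specific repetition feature Rep_I(i). The following is a minimal Python sketch of that scoring step; the exponential decay form and the multiplicative combination with Rep_I(i) are assumptions for illustration, not the paper's exact formula, and `item_rep_feature` is a hypothetical input estimated from training data.

```python
from collections import defaultdict

def repetition_scores(user_baskets, beta=0.9, item_rep_feature=None):
    """Recency-weighted repeat frequency per item for one user.

    user_baskets: list of baskets (oldest to newest), each a set of item ids.
    beta: assumed time-decay factor; recent baskets weigh more.
    item_rep_feature: optional dict item -> Rep_I(i) multiplier.
    """
    scores = defaultdict(float)
    n = len(user_baskets)
    for t, basket in enumerate(user_baskets):
        decay = beta ** (n - 1 - t)  # newest basket gets weight beta**0 = 1
        for item in basket:
            scores[item] += decay
    if item_rep_feature is not None:
        for item in scores:
            scores[item] *= item_rep_feature.get(item, 1.0)
    return scores

# Usage: rank items by score and fill the repetition slots of the basket.
baskets = [{1, 2}, {2, 3}, {2, 4}]
scores = repetition_scores(baskets, beta=0.8)
top_repeats = sorted(scores, key=scores.get, reverse=True)[:10]
```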
title_10K/test_title_short_2405.01159v1.json
ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.01159v1",
"title": "TartuNLP at EvaLatin 2024: Emotion Polarity Detection",
"abstract": "This paper presents the TartuNLP team submission to EvaLatin 2024 shared task\nof the emotion polarity detection for historical Latin texts. Our system relies\non two distinct approaches to annotating training data for supervised learning:\n1) creating heuristics-based labels by adopting the polarity lexicon provided\nby the organizers and 2) generating labels with GPT4. We employed parameter\nefficient fine-tuning using the adapters framework and experimented with both\nmonolingual and cross-lingual knowledge transfer for training language and task\nadapters. Our submission with the LLM-generated labels achieved the overall\nfirst place in the emotion polarity detection task. Our results show that\nLLM-based annotations show promising results on texts in Latin.",
"authors": "Aleksei Dorkin, Kairit Sirts",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
"gt": "TartuNLP at EvaLatin 2024: Emotion Polarity Detection",
"main_content": "Introduction This short report describes the system developed the TartuNLP team for the Emotion Polarity Detection task of the EvaLatin 2024 Evaluation Campaign (Sprugnoli et al., 2024). The goal of the task was to label Latin texts from three historical authors with four emotion polarity labels as positive, negative, neutral or mixed. For this task, no training data was provided, but only a polarity lexicon and a small evaluation set with 44 annotated sentences. Our approach entails two steps. First, we annotated data for supervised model training a) via heuristic rules using the provided polarity lexicon and b) using GPT-4 (see Section 2). Secondly, we adopted knowledge transfer with parameterefficient training via adapters (Houlsby et al., 2019) followed by task-specific fine-tuning on the data annotated in the first step (see Section 3). The knowledge transfer was applied both cross-lingually via pretraining on an English sentiment analysis task, and monolingually by training on an unannotated Latin text corpus. We made two submissions to the shared task: one with heuristically annotated training data and another with the GPT-4 annotated labels. Both submissions obtained competitive results, with the submission with GPT-4 labels obtaining the first place overall. The code for the system is available on GitHub.1 2. Data Annotation For the Emotion Polarity Detection task, no training data was provided. However, the organizers provided two useful resources: a polarity lexicon and 1https://github.com/slowwavesleep/ ancient-lang-adapters/tree/lt4hala Label Heuristics LLM-based positive 6535 1334 negative 2243 1028 mixed 5884 221 neutral 735 4698 Total 15396 7281 Table 1: Statistics of the annotated training data. a small gold annotated sample. We employed two distinct approaches to annotate the training data based on these resources: a heuristics-based and an LLM-based. The annotated data from both approaches is available on HuggingFace Hub.2 The label distribution for the annotated data is presented in Table 1. 2.1. Heuristics-based annotation In this approach, we employed the provided polarity lexicon similarly to the lexicon-based classifier by Sprugnoli et al. (2023). First, data from all available Universal Dependencies (Zeman et al., 2023) sources (Version 2.13, the most recent one at the time of writing) in Latin was collected : 1) Index Thomisticus Treebank (ITTB); 2) Late Latin Charter Treebank (LLCT); 3) UDante; 4) Perseus; 5) PROIEL treebank. Then, the sentences containing no nouns or adjectives in the lexicon were removed. The filtered sentences were assigned labels based on the following rules: 2https://huggingface.co/datasets/ adorkin/evalatin2024 1 arXiv:2405.01159v1 [cs.CL] 2 May 2024 \f1) If all words in the sentence are neutral according to the polarity lexicon, the sentence was labeled as neutral; 2) If the mean polarity of the words in the sentence is in the range from -0.1 to 0.1, then the sentence was labeled as mixed; 3) If the mean polarity is larger than 0.1, then the sentence was labeled as positive; 4) If the mean polarity is less than 0.1, then the sentence was labeled as negative. Our expectation from this approach was that training a model on lexicon-annotated data would result in a model with better generalization capabilities than simply applying the lexicon classifier. The total amount of sentences annotated this way was 15396. 2.2. 
LLM-based annotation In this approach, we made use of the OpenAI\u2019s GPT-4 model via the API (gpt-4-turbopreview3). The sentences were again sampled from the Universal Dependencies sources. The model was given the description of the problem and one example per label from the gold annotations file. The model was tasked with assigning the given sentence a label and providing an explanation as to why it assigned that particular label. With this approach, we expected that GPT-4 could simulate the annotation process done by an expert in Latin. According to the first author\u2019s somewhat limited understanding of Latin and based on a small sample of annotations and explanations done by the model, the output seems reasonable. We set out to spend about 15 euros per data annotation, which after removing sentences with invalid labels resulted in 7281 annotated sentences. 3. Description of the system The system in our submission is based on the BERT architecture (Devlin et al., 2019). More specifically, we employed the multilingual version of RoBERTa (Zhuang et al., 2021)\u2014XLMRoBERTa (Conneau et al., 2020), which was trained on the data that included Latin. We treated Emotion Polarity Detection as a multiclass classification problem and fine-tuned the model accordingly. However, instead of full finetuning, we trained a stack of adapters: a language adapter and a task adapter. Training adapters involves adding a small number of trainable parameters to the model while freezing the rest of the parameters (Houlsby et al., 2019). In addition to making the training considerably faster, adapters mitigate overfitting and catastrophic forgetting, which 3https://platform.openai.com/docs/ models/gpt-4-and-gpt-4-turbo are common problems when dealing with small amounts of training data. We implemented our system by using the transformers4 and the adapters5 libraries. We expected the model to benefit from both mono-lingual and cross-lingual knowledge transfer; therefore, the training process comprised several stages. First, we fine-tuned a Latin language adapter on a publicly available Latin Corpus6 collected from the Latin Library7. In the next phase of training, we trained a task-specific classification adapter on the English IMDB movie reviews dataset8. The dataset contains only two labels: positive and negative. We created an adapter with a classification head with four classes, two of which remained unused during this stage. Finally, we stacked the task adapter previously trained on English on top of the language adapter, and continued training the task adapter on the annotated data in Latin. The language adapter was trained for ten epochs with a learning rate 1e-4. For further usage, we took the last checkpoint. The task adapter was trained on data in English for five epochs with a learning rate of 5e-4, and we also took the last checkpoint. Finally, for the submissions, we trained a model on both sets of annotated data for 50 epochs with a 5e-4 learning rate. We used the provided gold annotation example as the validation set for training and measured the F-score on it after each epoch. For submission, we selected the best checkpoint based on the validation F-score. 4. Results We made two submissions to the Emotion Polarity Detection task; the first one (TartuNLP_1) fine-tuned on the dataset with the heuristic labels, and the second one (TartuNLP_2) fine-tuned on the dataset with the LLM-generated labels. 
Both submissions obtained competitive results, with the model trained on the LLM-annotated labels (TartuNLP_2) taking the overall first place and the model trained on the heuristics-annotated data (TartuNLP_1) taking the second place on micro average F1-score and the third place on the macro average F1-score (see Table 2). While the scores obtained by the two models are quite close, there is frequent disagreement in their predictions: out of 294 test examples, the models 4https://github.com/huggingface/ transformers 5https://github.com/adapter-hub/ adapters 6https://github.com/mathisve/ LatinTextDataset 7https://www.thelatinlibrary.com/ 8https://huggingface.co/datasets/imdb 2 \fModel Micro Average F1 Macro Average F1 TartuNLP_2 0.34 0.29 TartuNLP_1 0.32 0.27 NostraDomina_1 0.22 0.28 NostraDomina_2 0.22 0.22 Table 2: The overall results of all teams. negative positive neutral mixed Predicted negative positive neutral mixed Actual 67 17 53 16 20 26 24 4 4 1 15 0 14 11 18 4 0 10 20 30 40 50 60 (a) TartuNLP_1 with lexicon-based heuristic labels. negative positive neutral mixed Predicted negative positive neutral mixed Actual 75 33 43 2 15 37 20 2 2 3 15 0 14 14 17 2 0 10 20 30 40 50 60 70 (b) TartuNLP_2 with GPT4-generated labels. Figure 1: Confusion matrices for both submissions. disagreed in 140 examples. In case of disagreement, the heuristicsand LLM-based models made correct predictions in 40 and 57 examples respectively. Meanwhile, in case of agreement, the models correctly predicted the labels of 72 examples out of 154. The confusion matrices for both models (see Figure 1) are similar. The models had the most trouble with the mixed class, while the negative class was the easiest to predict; this is in line with findings by Sprugnoli et al. (2023), who reported the lowest inter-annotator agreement for the mixed class, while the negative class had the highest agreement, assuming that the test data of the shared task was annotated in a similar manner. We performed a small ablation study on the labeled test data released by the organizers after evaluating the shared task results to measure the effect of the knowledge transfer methods used: 1) Monolingual knowledge transfer from the wider Latin corpus in training the language adapter; 2) Cross-lingual knowledge transfer from the English IMDB sentiment dataset in training the task adapter. The results of the study, shown in Table 3, were somewhat unexpected. First of all, we observe that the base model with no knowledge transfer is already as good or better than the submitted models adopting both types of knowledge transfer. Secondly, the monolingual knowledge transfer by training the language adapter improves the microaveraged F1-score with both types of labels. Finally, the model with the LLM-generated labels benefits more from the monolingual language adapter training resulting in a model that noticeably outperforms our initial submission. 5. Discussion The model with LLM-generated labels obtained better results than the model with lexicon-based heuristic labels, although the final results of both submitted systems are relatively close. However, the ablation study testing the effectiveness of both monolingual and cross-lingual knowledge transfer demonstrated that the model trained on the LLMannotated data can show even better results when omitting the cross-lingual transfer from English. 
This is despite the fact that the number of LLMannotated examples was nearly twice as small, suggesting that the LLM annotations are of higher quality than the labels based on lexicon-informed heuristics. Despite our model trained on the LLM-annotated data taking the overall first place, the absolute values are somewhat low and sometimes below the baseline. There might be several reasons related to the choice of the data source and the annotation scheme and procedures. First, many of the exam3 \fAblation Micro Avg F1 Macro Avg F1 Val F1 Heuristic labels without knowledge transfer 0.33 0.26 0.48 Heuristic labels + Monolingual language transfer 0.34 0.25 0.48 Heuristic labels + Cross-lingual task transfer 0.30 0.23 0.55 Heuristic labels + Both (TartuNLP_1) 0.32 0.27 0.47 LLM labels without knowledge transfer 0.37 0.30 0.55 LLM labels + Monolingual language transfer 0.38 0.30 0.61 LLM labels + Cross-lingual task transfer 0.37 0.29 0.53 LLM labels + Both (TartuNLP_2) 0.34 0.29 0.48 Table 3: The results of the ablation study. ples appear to be expository or narrative in nature. It is difficult to assign a particular emotive polarity to the texts of that kind. Furthermore, Sprugnoli et al. (2023) mention that the annotators were instructed to assign labels on the sentence level. However, they were also presented with the wider context of the sentence. This leads us to believe that some labels are actually contextual, especially when the annotated sentence contains only a single word (for example, the sentence \"Mentior?\" is labeled as mixed). Secondly, the manual analysis of the examples shows that it is quite difficult to distinguish between mixed and neutral texts. This appears to be true for the trained models, as well. One possibility of improvement is to reframe the task as a multi-label classification problem instead. The model would be expected to predict the probabilities for the negative and positive labels independently. If the probability of both labels is low, the assigned label can be \"neutral\"; if both probabilities are high, the label can be \"mixed\"; otherwise, the label corresponding to the highest probability would be assigned. 6. Conclusion This paper described our solution to the Emotion Polarity Detection task of the EvaLatin Evaluation Campaign. Our submission obtained with a model trained on a dataset with LLM-generated labels achieved the overall first place, showing that LLMbased annotations can be useful for processing texts in Latin. 7. Bibliographical"
}
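The four heuristic labelling rules described in the paper above translate directly into a small function. A minimal sketch, assuming the lexicon is a dict mapping lemmas to polarity scores in [-1, 1] (the exact lexicon format is an assumption), with neutral entries stored as 0.0:

```python
def label_sentence(tokens, lexicon):
    """Assign a polarity label following the four rules of Section 2.1.

    tokens: lemmas of the sentence; lexicon: dict lemma -> polarity in [-1, 1].
    Sentences with no lexicon noun/adjective are assumed to have been
    filtered out before this step, as described in the paper.
    """
    scores = [lexicon[t] for t in tokens if t in lexicon]
    if all(s == 0.0 for s in scores):   # rule 1: all words neutral
        return "neutral"
    mean = sum(scores) / len(scores)
    if -0.1 <= mean <= 0.1:             # rule 2: near-zero mean polarity
        return "mixed"
    return "positive" if mean > 0.1 else "negative"  # rules 3 and 4
```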
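The multi-label reframing suggested in the paper's discussion section can be sketched as follows; the threshold values are illustrative assumptions, not taken from the paper:

```python
def combine_polarity_probs(p_pos, p_neg, low=0.33, high=0.66):
    """Map independently predicted positive/negative probabilities to one
    of the four polarity classes, as proposed in the discussion."""
    if p_pos >= high and p_neg >= high:   # both strongly present
        return "mixed"
    if p_pos < low and p_neg < low:       # neither present
        return "neutral"
    return "positive" if p_pos >= p_neg else "negative"
```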
title_10K/test_title_short_2405.01175v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.01175v1",
"title": "Uncertainty-aware self-training with expectation maximization basis transformation",
"abstract": "Self-training is a powerful approach to deep learning. The key process is to\nfind a pseudo-label for modeling. However, previous self-training algorithms\nsuffer from the over-confidence issue brought by the hard labels, even some\nconfidence-related regularizers cannot comprehensively catch the uncertainty.\nTherefore, we propose a new self-training framework to combine uncertainty\ninformation of both model and dataset. Specifically, we propose to use\nExpectation-Maximization (EM) to smooth the labels and comprehensively estimate\nthe uncertainty information. We further design a basis extraction network to\nestimate the initial basis from the dataset. The obtained basis with\nuncertainty can be filtered based on uncertainty information. It can then be\ntransformed into the real hard label to iteratively update the model and basis\nin the retraining process. Experiments on image classification and semantic\nsegmentation show the advantages of our methods among confidence-aware\nself-training algorithms with 1-3 percentage improvement on different datasets.",
"authors": "Zijia Wang, Wenbin Yang, Zhisong Liu, Zhen Jia",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "Uncertainty-aware self-training with expectation maximization basis transformation",
"main_content": "Introduction Deep neural networks have been developed for many years and achieved great outcomes. However, its superiority relies on large-scale data labeling. In some real situations, like agriculture, it is difficult to obtain labeled data. To alleviate the burden of data labeling, many methods like domain adaption Chen et al. (2018, 2017b); Hoffman et al. (2018); Kim et al. (2019); Long et al. (2017a), and self-training Busto et al. (2018); Chen et al. (2019); Inoue et al. (2018); Lee et al. (2013); Saito et al. (2017a); Zou et al. (2018) have been proposed. For example, BERT Devlin et al. (2018) and GPT Radford et al. (2018, 2019); Brown et al. (2020), directly leverage a large amount of unlabeled data to pretrain the model. However, they cannot be generally applied in other areas. Among these methods, self training methodsScudder (1965); He et al. (2019) show promising results and it attracts much attention. Self training is a semi-supervised learning method Chapelle et al. (2009), which iteratively generates task specific pseudo-labels using a model trained on some labeled data. It then retrains the model using the labeled data. However, there are many issues in this bootstrap process, one of them is the noise in the pseudo-labeled data. Some researchers resolve this problem by learning from noisy labels Natarajan et al. (2013); Reed et al. (2014); Sukhbaatar et al. (2014); Yu et al. (2018). It can also be optimized by sample selection Mukherjee and Awadallah (2020a) or label smoothing Zou et al. (2019a). However, none of the previous works focused on data properties. Recently, a novel 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2405.01175v1 [cs.CV] 2 May 2024 \fFigure 1: Uncertainty-aware representations. In the right part of this figure, dashed curves represent the basis distributions while the blue curve represent the uncertainty-aware representation and uncertainty-aware labels of the data. The expectation of the labels could be used as the final label and the variance could be used to evaluate the uncertainty. Figure 2: One self training round. Pseudo-label generation (a) use EM algorithm to update the Gaussian basis and the classifier, then it generates some pseudo-labels with uncertainty information while the classifier is also trained in this stage. Then in model retraining stage (b), an uncertaintyaware training strategy is used to update the whole model (CNN and classifier). knowledge distillation Hinton et al. (2015) is proposed to distill the large dataset into a small one Sucholutsky and Schonlau (2019); Wang et al. (2018).The intuition of these methods is to find the key samples, like means in the feature spaces, to capture the data properties. These means could also be referred as basis of the data. They can be used to formulate the latent representations of the data in a probabilistic way using expectation maximization algorithm Li et al. (2019); Moon (1996). Therefore, as shown in figure 1, we propose a probabilistic model to extract uncertainty for selftraining. Concretely, expectation maximization algorithm is adapted to get the probabilistic latent representations of the data and their corresponding pseudo-label distributions can be obtained. Then the samples are selected based on the variance of the (pseudo-)label distribution where distributions with lower variance represent good (pseudo-)labels. 
Finally, an uncertainty-aware training process is used to retrain the model using the new dataset where the expectation of distributions becomes the final pseudo-labels. Overall, our contributions in this paper are: 2 \f\u2022 Adapt Expectation Maximization algorithm to perform basis transformation on data features. We use neural networks for expectation maximization process to generate the latent probabilistic representations of the data using base transformation. These representations are low-rank while keeping the uncertainty information and deprecating the noises. \u2022 A novel regularizer is used for pseudo-label generation. Variance and classification loss are combined in the pseudo-label generation process to get the best pseudo-label distributions which contain comprehensive uncertainty information. \u2022 A basis generation process with basis regularizer is proposed. An attention-like module (ATT block) is introduced here to extract basis from the dataset or feature space. To make the basis more robust, we propose a basis regularizer to make all basis orthogonal, which could lower the rank of final latent representations. 2 Related work Self-training: Self-training is a wide and meaningful research area in semi-supervised learning Amini and Gallinari (2002); Yarowsky (1995); Grandvalet et al. (2005), one basic direction in this area is to train a student net using a teacher net Laine and Aila (2016); Tarvainen and Valpola (2017); Luo et al. (2018), some other works use a pseudo-label-based method for self-training Zou et al. (2018). In this paper, we choose to use pseudo-label-based method while keeping the uncertainty information in the label, an iterative training framework is proposed according to the self-training paradigm and uncertainty information to improve the network performance. Expectation-Maximization and Gaussian Mixture Model: Expectation-maximization (EM) Dempster et al. (1977) is to find solutions for latent variables models using likelihood maximization algorithm while Gaussian mixture model (GMM) Richardson and Green (1997) is also one kind of EM algorithm with specific constraints. Latent variables models with GMM could naturally capture the uncertainty information considering the data properties. In GMM, the data could be represented in the distribution form: p( \u02c6 xn) = K X k=1 znkN(xn|\u00b5k, \u03a3k), (1) where the latent representation \u02c6 xn is viewed as a linear superposition of k Gaussian basis N(xn|\u00b5k, \u03a3k) and K is the basis number, znk represents the weight of this linear composition. In the GMM, znk could be updated in the E step: znew nk = N(\u00b5new k , \u03a3k) PK j=1 N(\u00b5new j , \u03a3j) , (2) Notably, the \u03a3k in the Gaussian basis is set to be identity matrix I in this paper, so the \u03a3 update process is ignored in our algorithm. 3 Problem definition In this part, we formally define the uncertainty-aware self-training problem. Given a set of labeled samples {XL, YL} and a set of unlabeled data XU where XU and XL belong to same domain. Then the goal is to find a latent representation \u02c6 X and uncertainty-aware pseudo-labels YU by using a CNN feature extractor and a simple classifier. As shown in Figure 2, our problem could be solved by alternating the following steps Zou et al. 
(2019a): a) Pseudo-label generation: Given all the data, EM algorithm is used to generate the pseudo-labels with uncertainty information while the classifier is also trained in this process based on a combined loss to reduce the variance of pseudo-labels and optimize the classification accuracy for labeled data. 3 \fFigure 3: Whole training process for basis initialization net. Concretely, we train the model like classical machine learning training process and add a small module (attention block) to extract the processed weights which then become the initialized basis of EM algorithm. b) Network retraining. Data are sampled from the pseudo-labeled data based on the label variance, then the sampled data, along with the original labeled data, are used to train the whole classification network. 4 Uncertainty-aware self training To generate the pseudo-label for unlabeled data XU, we first use a base extraction net trained on labeled data to get basis for XL, then these bases could be used as the initialized \u00b5(0) of EM stage to speed up the convergence. Notably, as mentioned in related work section, the \u03a3 is set to be identity matrix and not updated in our algorithm considering a good basis should have identical variance. After the initialization, the EM algorithm is adapted to update the \u00b5 while the prediction net is simultaneously updated in the EM stage. Concretely, the details of base extraction net is shown in section 4.1, then two losses which are used in the EM stage to update the pseudo label generator parameters (classifier in figure 2 a) are demonstrated in section 4.2. After the definition of losses, the whole EM stage is described in section 4.2.1. 4.1 Basis Extraction net As shown in figure 3, we demonstrate the generalized basis initialization net. In this paper, we use classification as an example where the model trained in this stage has 3 components: \u2022 Feature extractor. In fig 3, CNN functions as the feature extractor. The weights we extracted are from this part. \u2022 Classifier. The fully connected layer could be the classifier in our setting, this part is for the original machine learning tasks like classification. \u2022 Weight extractor. An additional ATT block is added to extract the informative basis from the feature space. Clearly in training process, there are 2 tasks: classification and weights extraction. For classification, we use classical classification loss negative log likelihood loss (Lnll). Then for weight extraction part, we want our weights to be basis with low rank, so they need to be orthogonal: L2 = W \u2217W T \u2212I (3) Where W is the weight and I is the unity matrix. Therefore, the loss becomes: Ls1 = Lnll + L2 (4) 4 \fIn Attention block (ATT block), given a matrix X \u2208RN\u00d7d which contains the features of all data samples, we try to extract the inherent low-rank properties of features by basis extraction. The basis extraction, says the problem to find the most informative projection of features, can be formally expressed as min\u00b5 \r \rX \u2212\u00b5Z \r \r F s.t.\u00b5T \u00b5 = I Z = \u00b5T X (5) where \u00b5 \u2208RK\u00d7d represents the basis matrix of the latent features. Through the process, the inherent data structure can be founded. However, as an unsupervised method, the problem is reported easily suffer from the model collapse problems. Considering the important label information in classification problems. 
then we can modify the problem above into a semi-supervised manner as min\u00b5 \r \rX \u2212\u00b5Z \r \r F + \r \rZZT \u2212Y Y T \r \r F + \r \r\u00b5T \u00b5 \u2212I \r \r F s.tZ = \u00b5T X (6) where Y donates all the labels. We can solve the problems above with standard gradient decent methods. Then, after stage I, we generated some basis which the latent space features of data samples effectively and precisely. 4.2 Pseudo-label generation Recall that the latent representation should be transformed into the pseudo label using a function f\u03b8. Given a latent representation \u02c6 xn will obey the fallowing distribution: p( \u02c6 xn) = K X k=1 znkN(xn|\u00b5k, \u03a3k), (7) where K is the number of basis, G(\u00b5, \u03a3) is the final distribution basis representation. Then the corresponding pseudo label for sample \u02c6 xn(m) is \u02c6 yn(m) = f\u03b8( \u02c6 xn(m)). With the will know reparameter trick, distribution p(yn) can be formally expressed as p(yn) = ZZ p(yn|xn)p(xn|\u03f5)dxnd\u03f5, \u03f5 \u223cN(0, I) (8) where p(xn|\u03f5) = K X k=1 znk\u00b5k + \u03a3k\u03f5 (9) Then, we could easily compute the variance V AR( \u02c6 yn) and expectation E( \u02c6 yn) using these sampled pseudo label. For latent representations in XL which have label yn, the loss function for f\u03b8 is: LossL = E( \u02c6 yn) \u2212yn (10) For latent representations in XU which don\u2019t have label, the loss is basically the variance, therefore the final loss for pseudo label prediction model is: L = \u03bbLossL + (1 \u2212\u03bb)V AR( \u02c6 yn), (11) where \u03bb = 1 if the latent representation is from XU and vice versa. 4.2.1 Expectation-Maximization Now we can get the ideally orthogonal base vectors from weights and use them as initialized \u00b5 in the base generation block and compute the loss. Then in this section, we formally define the adapted EM process. At first, we need to update znk: 5 \fznew nk = K(xn, \u00b5k) PK j=1 K(xn, \u00b5j) , (12) where K(a, b) is a kernel function to evaluate the similarity between a and b. Then in the algorithm, the t-th Z could be formulated as: z(t) = softmax(\u03bbX(\u00b5(t\u22121)) T ), (13) where \u03bb is manually set to control Z distribution. Then in the M step (likelihood maximization), we update the \u00b5 based on the weighted summation of X to make them in one space. Then the update process in t-th iteration could be formulated as: \u00b5(t) k = z(t) nkxn PN m=1 z(t) mk (14) After T iterations, we could get the final basis \u00b5k(T), \u03a3k(T) and the prediction model \u03b8k(T). The generated pseudo label for each sample is a distribution, which can be formulated as: yn = f\u03b8(xn), (15) where f\u03b8 is a linear transformation, so distribution of yn could be easily calculated. The whole process of pseudo-label generation is summarized in algorithm 1. Algorithm 1: Pseudo-label generation Input :XL, XU, YL, f\u03b8 Output :\u00b5k(T), \u03a3k(T), \u03b8k(T) Initialize \u00b5k(0), \u03a3k(0), \u03b8(0) for t \u21901 to T do update znk(t) (eq 13) compute \u02c6 xn(t) (eq 10) compute pseudo-label yn (eq 15) compute loss function (eq 11) update \u03b8(t) using back propagation update \u00b5k(t) (eq 14) return 4.3 Network retraining Because in section 4.1, we define the problem as a classification task, so in this part we simply use classification as our final task. Considering we have the distribution for pseudo-labels, there are mainly two steps in the retraining part sample selection and model retraining. 
Method A\u2192W D\u2192W W\u2192D A\u2192D D\u2192A W\u2192A Mean ResNet-50 He et al. (2016) 68.4\u00b10.2 96.7\u00b10.1 99.3\u00b10.1 68.9\u00b10.2 62.5\u00b10.3 60.7\u00b10.3 76.1 DAN Long et al. (2015) 80.5\u00b10.4 97.1\u00b10.2 99.6\u00b10.1 78.6\u00b10.2 63.6\u00b10.3 62.8\u00b10.2 80.4 RTN Long et al. (2016) 84.5\u00b10.2 96.8\u00b10.1 99.4\u00b10.1 77.5\u00b10.3 66.2\u00b10.2 64.8\u00b10.3 81.6 DANN Ganin et al. (2016) 82.0\u00b10.4 96.9\u00b10.2 99.1\u00b10.1 79.7\u00b10.4 68.2\u00b10.4 67.4\u00b10.5 82.2 ADDA Tzeng et al. (2017) 86.2\u00b10.5 96.2\u00b10.3 98.4\u00b10.3 77.8\u00b10.3 69.5\u00b10.4 68.9\u00b10.5 82.9 JAN Long et al. (2017b) 85.4\u00b10.3 97.4\u00b10.2 99.8\u00b10.2 84.7\u00b10.3 68.6\u00b10.3 70.0\u00b10.4 84.3 GTA Sankaranarayanan et al. (2018) 89.5\u00b10.5 97.9\u00b10.3 99.8\u00b10.4 87.7\u00b10.5 72.8\u00b10.3 71.4\u00b10.4 86.5 MRKLD+LRENT Zou et al. (2019b) 89.4\u00b10.7 98.9\u00b10.4 100\u00b10.0 88.7\u00b10.8 72.6\u00b10.7 70.9\u00b10.5 86.8 Ours 92.2\u00b10.5 98.2\u00b10.3 99.6\u00b10.4 87.2\u00b10.5 72.8\u00b10.3 72.4\u00b10.4 87.1 Table 1: Comparison on Office-31 experiments 6 \fMethod Aero Bike Bus Car Horse Knife Motor Person Plant Skateboard Train Truck Mean Source Saito et al. (2017b) 55.1 53.3 61.9 59.1 80.6 17.9 79.7 31.2 81 26.5 73.5 8.5 52.4 MMD Long et al. (2015) 87.1 63 76.5 42 90.3 42.9 85.9 53.1 49.7 36.3 85.8 20.7 61.1 DANN Ganin et al. (2016) 81.9 77.7 82.8 44.3 81.2 29.5 65.1 28.6 51.9 54.6 82.8 7.8 57.4 ENT Grandvalet et al. (2005) 80.3 75.5 75.8 48.3 77.9 27.3 69.7 40.2 46.5 46.6 79.3 16 57 MCD Saito et al. (2018) 87 60.9 83.7 64 88.9 79.6 84.7 76.9 88.6 40.3 83 25.8 71.9 ADR Saito et al. (2017b) 87.8 79.5 83.7 65.3 92.3 61.8 88.9 73.2 87.8 60 85.5 32.3 74.8 SimNet-Res152Pinheiro (2018) 94.3 82.3 73.5 47.2 87.9 49.2 75.1 79.7 85.3 68.5 81.1 50.3 72.9 GTA-Res152 Sankaranarayanan et al. (2018) 77.1 MRKLD+LRENT Zou et al. (2019b) 88.0 79.2 61.0 60.0 87.5 81.4 86.3 78.8 85.6 86.6 73.9 68.8 78.1 Ours 89.1 81.7 82.1 57.7 83.2 79.7 83.9 77.2 86.2 82.7 83.8 65.9 79.4 Table 2: Comparison on VisDA17 experiments Method Backbone Road SW Build Wall Fence Pole TL TS Veg. Terrain Sky PR Rider Car Truck Bus Train Motor Bike mIoU Source 42.7 26.3 51.7 5.5 6.8 13.8 23.6 6.9 75.5 11.5 36.8 49.3 0.9 46.7 3.4 5 0 5 1.4 21.7 CyCADA Hoffman et al. (2018) DRN-26 79.1 33.1 77.9 23.4 17.3 32.1 33.3 31.8 81.5 26.7 69 62.8 14.7 74.5 20.9 25.6 6.9 18.8 20.4 39.5 Source 36.4 14.2 67.4 16.4 12 20.1 8.7 0.7 69.8 13.3 56.9 37 0.4 53.6 10.6 3.2 0.2 0.9 0 22.2 MCD Saito et al. (2018) DRN-105 90.3 31 78.5 19.7 17.3 28.6 30.9 16.1 83.7 30 69.1 58.5 19.6 81.5 23.8 30 5.7 25.7 14.3 39.7 Source 75.8 16.8 77.2 12.5 21 25.5 30.1 20.1 81.3 24.6 70.3 53.8 26.4 49.9 17.2 25.9 6.5 25.3 36 36.6 AdaptSegNet Tsai et al. (2018) DeepLabv2 86.5 36 79.9 23.4 23.3 23.9 35.2 14.8 83.4 33.3 75.6 58.5 27.6 73.7 32.5 35.4 3.9 30.1 28.1 42.4 AdvEnt Vu et al. (2019) DeepLabv2 89.4 33.1 81 26.6 26.8 27.2 33.5 24.7 83.9 36.7 78.8 58.7 30.5 84.8 38.5 44.5 1.7 31.6 32.4 45.5 Source 29.2 FCAN Zhang et al. (2018) DeepLabv2 46.6 Ours DeepLabv2 87 47.7 80.3 25.9 26.3 47.9 34.7 29 80.9 45.7 80.3 60 29.2 81.7 37.9 47.5 37.2 29.8 47.7 50.4 Table 3: Adaptation results of experiments transferring from GTA5 to Cityscapes. 4.3.1 Sample selection After pseudo-label generation process, the generated pseudo-labels are formulated in a distribution format (Gaussian form) shown in equation 8 which contains variance and mean information. 
Then for classification task, a class-dependent selection Mukherjee and Awadallah (2020b) could be performed to construct a dataset with hard labels DS,U = {xu,s \u2208Su,c, yu}. Here, Su,c \u2208XU is constructed based on the score rank of each sample, if the sample\u2019s pseudo-label has higher variance, then it\u2019s more likely to be discarded. For yu, one can simply use its mean as its hard pseudo label, but here we want to accurately model the uncertainty information. Therefore, we randomly sample hard labels from the pseudo-label distribution to incorporate the uncertainty information encoded in the distribution. 4.3.2 Uncertainty aware retraining After the sample selection, a retraining dataset is derived as Dr = {XL, YL} S{xu,s, yu}, then for the retraining part, the final goal is to minimize following loss: minW LL + LU V ar(y) (16) Where W is the model parameter, LL and LU represent the task loss for labeled data and unlabeled data respectively, here in this classification example, they represent same classification loss like cross entropy. V ar(y) represents the sample uncertainty, for samples x \u2208XU, variance is same to the variance in the distribution to catch the uncertainty information of teacher model. In this setting, samples with higher variance, which basically means that the previous model is not confident on this sample, have lower weights in the back propagation process of training. After the retraining, one round shown in figure 2 is completed. Then we simply repeat the whole process until the ideal results are derived. Method Backbone Road SW Build Wall* Fence* Pole* TL TS Veg. Sky PR Rider Car Bus Motor Bike mIoU mIoU* Source DRN-105 14.9 11.4 58.7 1.9 0 24.1 1.2 6 68.8 76 54.3 7.1 34.2 15 0.8 0 23.4 26.8 MCD Saito et al. (2018) 84.8 43.6 79 3.9 0.2 29.1 7.2 5.5 83.8 83.1 51 11.7 79.9 27.2 6.2 0 37.3 43.5 Source DeepLabv2 55.6 23.8 74.6 6.1 12.1 74.8 79 55.3 19.1 39.6 23.3 13.7 25 38.6 AdaptSegNetTsai et al. (2018) 84.3 42.7 77.5 4.7 7 77.9 82.5 54.3 21 72.3 32.2 18.9 32.3 46.7 Source ResNet-38 32.6 21.5 46.5 4.8 0.1 26.5 14.8 13.1 70.8 60.3 56.6 3.5 74.1 20.4 8.9 13.1 29.2 33.6 CBST Zou et al. (2019b) 53.6 23.7 75 12.5 0.3 36.4 23.5 26.3 84.8 74.7 67.2 17.5 84.5 28.4 15.2 55.8 42.5 48.4 AdvEnt Vu et al. (2019) DeepLabv2 85.6 42.2 79.7 8.7 0.4 25.9 5.4 8.1 80.4 84.1 57.9 23.8 73.3 36.4 14.2 33 41.2 48 Source DeepLabv2 64.3 21.3 73.1 2.4 1.1 31.4 7 27.7 63.1 67.6 42.2 19.9 73.1 15.3 10.5 38.9 34.9 40.3 Ours 68 29.9 76.3 10.8 1.4 33.9 22.8 29.5 77.6 78.3 60.6 28.3 81.6 23.5 18.8 39.8 42.6 48.9 Table 4: Adaptation results of experiments transferring from SYNTHIA to Cityscapes. 7 \f5 Experiment In this section, we demonstrate the advantages of proposed methods by comparing the performance of proposed methods with the SOTA confidence-aware self-training strategy on 2 tasks image classification and image segmentation. To make the results comparative, we basically follow the settings in Zou et al. (2019b) which achieves SOTA results in confidence-aware self-training domain, details will be illustrated in following sections. 5.1 Dataset and evaluation metric 5.1.1 Image classification. For domain adaption in image classification task, VisDA17 Peng et al. (2018) and Office-31 Saenko et al. (2010) are used to evaluate the algorithm performance. In VisDA17, there are 12 classes with 152, 409 virtual images for training while 55, 400 real images from MS-COCO Lin et al. (2014) are target dataset. 
For Office-31, 31 classes collected from Amazon(A, 2817 images), Webcam(W, 795 images) and DSLR(D, 498 images) domains are included. We strictly follow the settings in Saenko et al. (2010); Sankaranarayanan et al. (2018); Zou et al. (2019b) which evaluate the domain adaption performance on A \u2192W, D \u2192W, W \u2192D, A \u2192D, D \u2192A, W \u2192A. For evaluation, we simply use the accuracy for each class and mean accuracy across all classes as the evaluation metric. 5.1.2 Semantic segmentation For domain adaption in image segmentation tasks, 2 virtual datasets GTA5 Richter et al. (2016), SYNTHIA Ros et al. (2016) and 1 real dataset Cityscapes Cordts et al. (2016) are used to evaluate the performance of proposed method. Concretely, GTA5 contains 24, 966 images based on the game GTA5, SYNTHIA-RAND-CITYSCAPES (subset of SYNTHIA) has 9400 images. For the experiment setup, we also strictly follow Hoffman et al. (2018); Tsai et al. (2018); Zou et al. (2019b) which use Cityscapes as target domain and view virtual datasets (GTA5 and CITYSCAPES) as training domain. For evaluation, the Intersection over Union (IoU) is used to measure the performance of models where. 5.2 Experiment setup To make our results comparable with current SOTA confidence-aware method, we adapt the settings in Zou et al. (2019b). Besides, all the training process is performed on 4 Tesla V100 GPUs which have 32GB memory. Image Classification: ResNet101/ ResNet-50 He et al. (2016) are used as backbones, which are pretrained based on ImageNet Deng et al. (2009). Then in source domain, we fine-tune the model using SGD while the learning rate is 1 \u00d7 10\u22124, weight decay is set to be 5 \u00d7 10\u22125, momentum is 0.8 and the batch size is 32. In the self-training round, the parameters are same except for the different learning rates which are 5 \u00d7 10\u22124. Image Segmentation: In image segmentation part, we mainly use the older DeepLab v2 Chen et al. (2017a) as backbone to align with previous results. DeepLab v2 is first pretrained on ImageNet and then finetuned on source domain using SGD. Here we set learning rate as 5 \u00d7 10\u22124, weight decay is set to be 1 \u00d7 10\u22125, momentum is 0.9, the batch size is 8 while the patch size is 512 \u00d7 1024. In self-training, we basically run 3 rounds which has 4 retraining epochs. 5.3 Experiment results Comparison on image classification. As shown in table 1 and table 2, compared with previous SOTA result in confidence-aware self-training and other self-training algorithms, although our algorithm does not achieve best performance in all sub-tasks, the mean results (87.1 and 79.4 for Office-31 and VisDA17 respectively) achieves SOTA while our results (derivations and means) are obtained from 5 runs of the experiment. Comparison on image segmentation.As shown in table 3 and 4, in semantic segmentation task, our results of average IoU (mIoU) achieves SOTA among confidence-aware self-training algorithms. 8 \f6 Conclusion and future work In this paper, we propose a new confidence-aware self-training framework and compare our algorithm with current SOTA results of confidence-aware self-training which proves that our pseudo-label could better catch the uncertainty information and thus alleviate the over-confident issue in self-training. Furthermore, the idea underlying our method could be used in many self-training related tasks while the over-confidence is a common question faced by most self-training algorithms."
}
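The adapted EM stage of the paper above (Eqs. 13-14) alternates a softmax-based E-step over feature/basis similarities with a weighted-average M-step over the features. A minimal NumPy sketch of one iteration, with array shapes assumed for illustration:

```python
import numpy as np

def softmax(a, axis=-1):
    # Numerically stable softmax.
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def em_step(X, mu, lam=1.0):
    """One adapted EM iteration.

    E-step (Eq. 13): soft assignments Z from feature/basis similarity.
    M-step (Eq. 14): basis re-estimated as a Z-weighted average of features.
    X: (N, d) features; mu: (K, d) basis; lam: temperature lambda.
    """
    Z = softmax(lam * X @ mu.T, axis=1)          # (N, K), Eq. (13)
    mu_new = (Z.T @ X) / Z.sum(axis=0)[:, None]  # (K, d), Eq. (14)
    return Z, mu_new

# Toy usage: iterate the two steps for T iterations.
X = np.random.randn(100, 16)
mu = np.random.randn(8, 16)
for _ in range(5):
    Z, mu = em_step(X, mu)
```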
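The variance-based sample selection (Section 4.3.1) and the uncertainty-weighted retraining objective (Section 4.3.2) of the same paper can be sketched as follows. The 1/(1+var) down-weighting is an assumed realization of the stated intent (higher variance, lower weight), not the literal form of Eq. (16), and the per-instance variance summary is likewise an assumption:

```python
import numpy as np

def select_pseudo_labeled(label_samples, keep_ratio=0.5):
    """Keep the pseudo-labeled instances with the lowest label variance and
    draw one hard label per retained distribution (Monte Carlo sampling).

    label_samples: (N, S, C) array of S sampled soft labels per instance.
    """
    var = label_samples.var(axis=1).sum(axis=-1)            # per-instance variance
    kept = np.argsort(var)[: int(keep_ratio * len(var))]    # lowest-variance first
    mean = label_samples[kept].mean(axis=1)                 # (n_kept, C)
    hard = np.array([np.random.choice(mean.shape[1], p=p / p.sum()) for p in mean])
    return kept, hard, var[kept]

def weighted_retrain_loss(loss_labeled, loss_unlabeled, var_unlabeled):
    """Uncertainty-aware objective in the spirit of Eq. (16): pseudo-labeled
    terms with higher label variance contribute less to the gradient."""
    w = 1.0 / (1.0 + var_unlabeled)   # assumed down-weighting scheme
    return loss_labeled.mean() + (w * loss_unlabeled).mean()
```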
title_10K/test_title_short_2405.01217v1.json
ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.01217v1",
"title": "CromSS: Cross-modal pre-training with noisy labels for remote sensing image segmentation",
"abstract": "We study the potential of noisy labels y to pretrain semantic segmentation\nmodels in a multi-modal learning framework for geospatial applications.\nSpecifically, we propose a novel Cross-modal Sample Selection method (CromSS)\nthat utilizes the class distributions P^{(d)}(x,c) over pixels x and classes c\nmodelled by multiple sensors/modalities d of a given geospatial scene.\nConsistency of predictions across sensors $d$ is jointly informed by the\nentropy of P^{(d)}(x,c). Noisy label sampling we determine by the confidence of\neach sensor d in the noisy class label, P^{(d)}(x,c=y(x)). To verify the\nperformance of our approach, we conduct experiments with Sentinel-1 (radar) and\nSentinel-2 (optical) satellite imagery from the globally-sampled SSL4EO-S12\ndataset. We pair those scenes with 9-class noisy labels sourced from the Google\nDynamic World project for pretraining. Transfer learning evaluations\n(downstream task) on the DFC2020 dataset confirm the effectiveness of the\nproposed method for remote sensing image segmentation.",
"authors": "Chenying Liu, Conrad Albrecht, Yi Wang, Xiao Xiang Zhu",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "CromSS: Cross-modal pre-training with noisy labels for remote sensing image segmentation",
"main_content": "INTRODUCTION In the realm of Big Geospatial Data, one critical challenge is the lack of labeled data for deep learning model training. Self-Supervised Learning (SSL) received significant attention for its ability to extract representative features from unlabeled data (Wang et al., 2022). Popular SSL algorithms include generative Masked Autoencoders (MAE) (He et al., 2022) and contrastive learning methods such as DINO (Caron et al., 2021) and MoCo (Chen et al., 2020). MAE is inspired by image reconstruction, as most works utilizing vision transformers (ViTs) (Dosovitskiy et al., 2021). Constrastive learning methods can make a difference for both, convolutional backbones and ViTs. Recent studies suggest that deep learning models exhibit a degree of robustness against label noise (Zhang et al., 2021; Liu et al., 2024). Promising results were observed in pretraining models with extensive volumes of noisy social-media labels for image classification (Mahajan et al., 2018) and video analysis (Ghadiyaram et al., 2019). In the realm of remote sensing (RS), pretraining on crowdsourced maps such as OpenStreetMap for building and road extraction has been surveyed (Kaiser et al., 2017; Maggiori et al., 2017). These results indicate that inherently noisy labels can significantly reduce the level of human supervision required to effectively train deep learning models. Moreover, as the number of launched satellites grows, we are increasingly exposed to a variety of satellite data types, including but not limited to multi-spectral, Light Detection And Ranging (LiDAR), and Synthetic Aperture Radar (SAR) data. Multi-modal learning has emerged as a prominent area of study, where the complementary information showcases efficacy in boosting the learning from different modalities, such as optical and LiDAR data (Xie et al., 2023), multi-spectral and SAR data (Chen & Bruzzone, 2022). However, the application of multi-modal learning to improve learning from noisy labels remains for detailed exploration. 1 arXiv:2405.01217v1 [cs.CV] 2 May 2024 \fICLR 2024 Machine Learning for Remote Sensing (ML4RS) Workshop Crops Grass Trees Water Bareland Built area Shrub & scrub Ice & snow Flooded vegetation Figure 1: An example of sentinel-1 (VV, right) and sentinel-2 (RGB, left) data paired with noisy labels (middle) from 4 seasons. In this work, we study the potential of noisy labels in multi-modal pretraining settings for RS image segmentation, where a novel Cross-modal Sample Selection method, referred to as CromSS, is introduced to further mitigate the adverse impact of label noise. In the pretraining stage, we first employ two U-Nets (Ronneberger et al., 2015) backboned with ResNet-50 (He et al., 2016) to separately extract features and generate confidence masks within each modality. After that, the sample selection is implemented for each modality on its enhanced confidence masks by fortifying the shared information across modalities. Given that radar and optical satellites are sensitive to distinct features on the ground1, such cross-modal enhancement bears potential to boost the mutual learning between modalities. We test middle and late fusion strategies to improve the architecture design for multi-modal learning. In our experiments, we utilize Sentinel-1 (S1) of radar and Sentinel-2 (S2) of multi-spectral data from the SSL4EO-S12 dataset (Wang et al., 2023) as two modalities. 
We pair those scenes with pixel-wise noisy labels of the Google Dynamic World (DW) project (Brown et al., 2022) for pretraining. Evaluation of the pretrained ResNet-50 encoders is based on the DFC2020 dataset (Yokoya, 2019), with the pretrained DINO and MoCo models presented in the SSL4EO-S12 work serving as baselines.
2 DATA
In the pretraining stage, we utilize the extended version of the SSL4EO-S12 dataset, a large-scale self-supervision dataset in Earth observation, plus 9-class noisy labels sourced from the DW project on Google Earth Engine, as illustrated in Figure 1. SSL4EO-S12 sampled data globally from 251,079 locations. Each location corresponds to 4 S1 and S2 image pairs of 264x264 pixels from 4 different seasons, among which 103,793 locations have noisy label masks matched for all seasons. We only utilize the image-label pairs of these 103,793 locations for pretraining with noisy labels. Notice that this dataset is a good reflection of real cases, where noisy labels are still harder to obtain than images and are thus of a smaller size than the unlabeled data. We utilize DFC2020 as the downstream segmentation task, where the 986 validation patches are used as the fine-tuning training data and the 5128 test patches for testing.
3 METHODOLOGY
Our methodology links the semantic segmentation maps of single-modal models by two principles: (a) consistent prediction of the physical ground truth (consistency loss Lc), and (b) tolerance to noisy supervision (segmentation loss Ls). For the latter, we extend the idea of Cao & Huang (2022), which works on a single modality, to multiple modalities with cross-modal interactions for estimating the uncertainty of a given pixel-level class label. Each modality-specific model predicts the probability $P^{(d)}$ of a given noisy label at a physical location. While one model d = 1 may be certain about the label y, another d = 2 may assign low probability: $P^{(1)}(y) \gg P^{(2)}(y)$. Section 3.2 details how we integrate this information to obtain a cross-modality score of a label being perceived as noisy. Similarly, we exploit the entropy of $P^{(d)}$ to introduce a criterion for a cross-modality consistency loss on label predictions between the single-modality models. The overall approach is summarized in Figure 2, where $Q^{(d)}$ represents an estimate of $P^{(d)}$.
[Footnote 1: e.g., persistent metal scatterers in SAR have little signature in optical sensors.]
[Figure 2 omitted: schematic of the two modality branches (S1 and S2 encoder-decoders) with noisy labels, element-wise products and additions of the class distributions $Q^{(1)}$ and $Q^{(2)}$, the label-based selection masks $W_l^{(1/2)}$ with confidence masks $F_l^{(1/2)}$ (cross-modal informed label selection), and the entity-based selection masks $W_e^{(1/2)}$ with confidence masks $F_e^{(1/2)}$ (cross-modal informed consistency), feeding the unweighted segmentation losses $L_s^{(1/2)}$ and consistency losses $L_c^{(1/2)}$.] Figure 2: Illustration of the proposed CromSS. The decoders in the middle share the weights when middle fusion is applied. In late fusion, they are separately optimized per modality. The shaded areas (green to the left, purple to the right) highlight the key components of cross-modal sample selection.
3.1 MULTI-MODAL FUSION
We employ middle and late multi-modal fusion (Chen & Bruzzone, 2022) to explore the complementary information across modalities to aid model training. Our fusion strategies do not concatenate the feature vectors of different modalities. While middle fusion shares a common decoder for all modalities, late fusion retains individual decoders.
3.2 CROSS-MODAL SAMPLE SELECTION
As depicted in Figure 2, the key difference of CromSS compared to naive multi-modal training is the introduction of the sample selection masks $W_{l/e}^{(d)}$ (the shaded areas in Figure 2). They serve as weights for calculating the segmentation and consistency losses, Ls and Lc, cf. the label-based masks $W_l^{(d)}$ and the entity-based masks $W_e^{(d)}$ for modality d. To compute $W_l^{(d)}$ and $W_e^{(d)}$, we first generate the corresponding confidence masks $F_l^{(d)}$ and $F_e^{(d)}$ from the softmax outputs, i.e., the estimated class distributions $Q^{(d)}$ for $P^{(d)}$. Let $q_{i,j,c}^{(d)} \in Q^{(d)}$ denote the softmax output at image pixel location (i, j) and class c, and let $y_{i,j}$ be its given noisy label. We take $q_{i,j,c}^{(d)}$ with $c = y_{i,j}$ as the estimated label-based confidence scores in $F_l^{(d)}$. For the entity-based confidence, we define $f_{(e)i,j}^{(d)} \in F_e^{(d)}$ using the entropy $h_{i,j}^{(d)}$ of the softmax vector as follows:
$f_{(e)i,j}^{(d)} = 1 - h_{i,j}^{(d)}/K = 1 + \frac{1}{K} \sum_{c=1}^{C} q_{i,j,c}^{(d)} \log q_{i,j,c}^{(d)}$, (1)
where C is the total number of classes and $K = \log C$ is the upper bound of $h_{i,j} \in [0, K]$, attained when $q_{i,j,c} = 1/C$ for $c = 1, \cdots, C$, i.e., the uniform distribution of maximum entropy.
For two modalities $d \in \{1, 2\}$, the final confidence masks are combined as $$F'^{(1/2)}_{l/e} = \frac{1}{2}\left(F^{(1/2)}_{l/e} + F^{(1)}_{l/e} F^{(2)}_{l/e}\right) = \frac{1}{2} F^{(1/2)}_{l/e}\left(1 + F^{(2/1)}_{l/e}\right), \quad (2)$$ where the factor $F^{(1/2)}_{l/e} F^{(2/1)}_{l/e}$ serves to magnify the selection probabilities of samples exhibiting high confidence while diminishing cases where both modalities $d=1$ and $d=2$ agree on a low confidence score. To generate the final sample selection masks $W^{(d)}_l$, we utilize a soft selection strategy rather than one-hot selection masks, in order to prevent the models from reinforcing their own prediction errors. Mathematically speaking, given the selection ratio $\alpha \in [0, 1]$, we define $w^{(d)}_{i,j} \in W^{(d)}_l$ as $$w^{(d)}_{i,j} = \min\left[1, f'^{(d)}_{i,j}/\bar{w}\right], \quad (3)$$ where $f'^{(d)}_{i,j} \in F'^{(d)}_l$ and $\bar{w}$ is the $(\alpha \cdot n)$-th highest value in $F'^{(d)}_l$, with $n$ denoting the size of $F'^{(d)}_l$. For the consistency loss, we utilize the weighting factor $\gamma \in [0, 1]$ to generate $W^{(d)}_e$ from $F'^{(d)}_e$ as $W^{(d)}_e = (1 - \gamma) + \gamma F'^{(d)}_e$, with $\gamma$ gradually ramping up from 0 to 1 during training. With the losses weighted by $W^{(d)}_l$ and $W^{(d)}_e$, samples of lower confidence contribute less to the optimization process. 4 EXPERIMENTS We pretrained ResNet-50 (He et al., 2016) encoders nested in U-Nets (Ronneberger et al., 2015) using the combined CrossEntropy and Dice segmentation losses (Jadon, 2020), with the Kullback-Leibler divergence (Kullback & Leibler, 1951) serving as the consistency loss. We set the selection proportion $\alpha$ to 50% after exponentially ramping it down from 100% over the first 80 epochs; in parallel, the weighting factor $\gamma$ ramps up from 0 to 1. We employed a seasonal data augmentation strategy, where the data from a randomly selected season were fed to the U-Nets in each iteration. An Adam optimizer (Kingma & Ba, 2017) was used with a learning rate of $0.5 \cdot 10^{-3}$. We employed the ReduceLROnPlateau scheduler to halve the learning rate when the validation loss had not decreased for 30 consecutive epochs. We randomly split off 1% of the entire training set as the validation set. The pretraining was implemented on 4 NVIDIA A100 GPUs, running approx. 13 hours for 100 epochs. When transferred to the DFC2020 dataset, the pretrained ResNet-50 encoders were embedded into PSPNets (Zhao et al., 2017) and fine-tuned with Adam at a learning rate of $0.5 \cdot 10^{-4}$ for 50 epochs. For reference, we also present the results of single-modal pretraining (S1/S2) as well as multi-modal pretraining without sample selection, where midF and lateF denote middle and late fusion, respectively. Pretrained weights for DINO and MoCo were provided by Wang et al. (2023). Results reported with error bars stem from 3 repeated runs of each setup. 
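Summarizing the mask construction of Eqs. (2)-(3) in Section 3.2 and the ramp schedules just described, a short sketch might look as follows (our illustration under the same tensor assumptions as above; the exact exponential ramp shape is our guess at one plausible reading of "exponentially ramping down"):

```python
import torch

def cross_modal_confidence(f1: torch.Tensor, f2: torch.Tensor):
    """Eq. (2): F'^{(d)} = 0.5 * (F^{(d)} + F^{(1)} * F^{(2)}) for d = 1, 2."""
    prod = f1 * f2
    return 0.5 * (f1 + prod), 0.5 * (f2 + prod)

def soft_selection_mask(f_prime: torch.Tensor, alpha: float) -> torch.Tensor:
    """Eq. (3): w = min(1, f'/w_bar), w_bar the (alpha*n)-th highest value,
    so roughly an alpha-fraction of the most confident pixels gets weight 1."""
    k = max(1, int(alpha * f_prime.numel()))
    w_bar = torch.topk(f_prime.flatten(), k).values[-1].clamp_min(1e-8)
    return (f_prime / w_bar).clamp(max=1.0)

def ramp_schedules(epoch: int, ramp_epochs: int = 80):
    """alpha decays 1.0 -> 0.5 over the first ramp_epochs; gamma ramps 0 -> 1."""
    t = min(1.0, epoch / ramp_epochs)
    return 0.5 ** t, t  # (alpha, gamma)

def consistency_weight(f_prime_e: torch.Tensor, gamma: float) -> torch.Tensor:
    """W_e = (1 - gamma) + gamma * F'_e."""
    return (1.0 - gamma) + gamma * f_prime_e
```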
Table 1: Transfer learning results on the DFC2020 dataset with S1 and S2 as inputs, respectively, where "Frozen" and "Fine-tuned" indicate whether the encoder weights are kept fixed or updated along with the decoder weights.

| Modality | Encoder | Frozen OA | Frozen AA | Frozen mIoU | Fine-tuned OA | Fine-tuned AA | Fine-tuned mIoU |
|---|---|---|---|---|---|---|---|
| S1 | Random | 54.41±0.35 | 40.68±0.23 | 29.16±0.06 | 52.65±0.42 | 42.17±0.29 | 28.36±0.22 |
| S1 | MoCo | 60.88±0.41 | 47.46±0.52 | 34.25±0.27 | 60.31±0.40 | 44.98±0.66 | 31.80±0.46 |
| S1 | single-modal (S1) | 61.73±0.58 | 46.13±0.34 | 34.77±0.30 | 61.07±0.19 | 45.78±0.48 | 34.13±0.19 |
| S1 | midF | 62.08±0.73 | 45.01±0.40 | 34.64±0.48 | 61.24±0.44 | 45.44±0.84 | 33.86±0.16 |
| S1 | lateF | 61.09±0.11 | 45.77±0.29 | 34.15±0.14 | 62.19±0.49 | 47.43±0.41 | 34.58±0.48 |
| S1 | CromSS-midF | 61.66±0.41 | 45.07±0.28 | 34.38±0.02 | 62.32±1.01 | 47.19±0.84 | 35.17±0.63 |
| S1 | CromSS-lateF | 62.58±0.36 | 46.37±0.53 | 34.80±0.37 | 60.92±0.76 | 46.13±0.60 | 33.94±0.55 |
| S2 | Random | 56.42±0.49 | 45.12±0.18 | 31.50±0.14 | 58.68±0.77 | 46.03±0.43 | 33.56±0.28 |
| S2 | DINO | 64.82±0.22 | 48.83±0.08 | 37.81±0.08 | 63.64±0.72 | 49.92±1.33 | 36.95±0.55 |
| S2 | MoCo | 63.25±0.47 | 51.00±0.28 | 37.67±0.57 | 61.19±0.39 | 47.29±0.36 | 34.86±0.63 |
| S2 | single-modal (S2) | 66.66±0.19 | 53.24±0.21 | 40.88±0.07 | 67.11±0.22 | 53.14±0.69 | 41.06±0.24 |
| S2 | midF | 68.36±0.65 | 53.23±0.42 | 41.52±0.35 | 68.07±0.64 | 52.60±0.52 | 41.17±0.28 |
| S2 | lateF | 67.61±0.91 | 54.08±0.92 | 41.59±0.75 | 68.43±1.18 | 53.72±0.76 | 41.76±0.76 |
| S2 | CromSS-midF | 69.41±0.68 | 55.97±0.31 | 42.89±0.35 | 69.20±0.66 | 54.86±0.59 | 42.58±0.34 |
| S2 | CromSS-lateF | 66.61±1.20 | 54.23±1.06 | 41.12±0.11 | 69.10±0.29 | 54.86±0.42 | 42.55±0.36 |

As shown in Table 1, the proposed CromSS improves the effectiveness of the pretrained encoders for remote sensing image segmentation, in particular for S2 multi-spectral data. The improvement for S1 radar data is less significant. We attribute this discrepancy to the different capabilities of the two modalities in the pretraining task, i.e., land cover classification in this work. The sample selection in CromSS is still fundamentally based on each modality's own confidence masks. S1, which can be regarded as the weaker modality in this case, could potentially benefit more from S2 with additional dedicated strategies. Furthermore, the middle fusion strategy shows a larger margin compared to late fusion, which indicates that the implicit data fusion via decoder weight sharing can boost learning across modalities to some extent. We also observe some improvements of single-modal pretraining with noisy labels over DINO and MoCo. These outcomes further demonstrate the potential of using noisy labels in task-specific pretraining for segmentation downstream tasks. 5 CONCLUSIONS With CromSS we introduce a pretraining strategy guided by noisy labels for large-scale remote sensing image segmentation. CromSS exploits a cross-modal sample selection strategy to reduce the adverse effects of label noise. We combine this approach with a consistency loss correlating models, each of which operates on a single modality only. Transfer learning results on the DFC2020 dataset demonstrate the effectiveness of the CromSS-pretrained ResNet-50 encoders. In future work, we will explore the potential of CromSS for ViT pretraining, such as in Masked Image Modeling, as well as on more kinds of noisy labels to test its robustness to different noise rates. ACKNOWLEDGMENTS The work of C. Liu, Y. Wang and C. 
Albrecht was funded by the Helmholtz Association through the Framework of HelmholtzAI, grant ID: ZT-I-PF-5-01 – Local Unit Munich Unit @Aeronautics, Space and Transport (MASTr). The compute related to this work was supported by the Helmholtz Association's Initiative and Networking Fund on the HAICORE@FZJ partition. C. Albrecht receives additional funding from the European Union's Horizon Europe research and innovation programme under grant agreement No. 101082130 (EvoLand). The work of X. X. Zhu is jointly supported by the Excellence Strategy of the Federal Government and the Länder through the TUM Innovation Network EarthCare, by the German Federal Ministry of Education and Research (BMBF) in the framework of the international future AI lab "AI4EO – Artificial Intelligence for Earth Observation: Reasoning, Uncertainties, Ethics and Beyond" (grant number: 01DD20001) and by the Munich Center for Machine Learning."
}
title_10K/test_title_short_2405.01229v1.json
ADDED
@@ -0,0 +1,20 @@
{
"url": "http://arxiv.org/abs/2405.01229v1",
"title": "Boosting Jailbreak Attack with Momentum",
"abstract": "Large Language Models (LLMs) have achieved remarkable success across diverse\ntasks, yet they remain vulnerable to adversarial attacks, notably the\nwell-documented \\textit{jailbreak} attack. Recently, the Greedy Coordinate\nGradient (GCG) attack has demonstrated efficacy in exploiting this\nvulnerability by optimizing adversarial prompts through a combination of\ngradient heuristics and greedy search. However, the efficiency of this attack\nhas become a bottleneck in the attacking process. To mitigate this limitation,\nin this paper we rethink the generation of adversarial prompts through an\noptimization lens, aiming to stabilize the optimization process and harness\nmore heuristic insights from previous iterations. Specifically, we introduce\nthe \\textbf{M}omentum \\textbf{A}ccelerated G\\textbf{C}G (\\textbf{MAC}) attack,\nwhich incorporates a momentum term into the gradient heuristic. Experimental\nresults showcase the notable enhancement achieved by MAP in gradient-based\nattacks on aligned language models. Our code is available at\nhttps://github.com/weizeming/momentum-attack-llm.",
"authors": "Yihao Zhang, Zeming Wei",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CR",
"math.OC"
],
"label": "Original Paper",
"paper_cat": "LLM AND Jailbreak",
"gt": "Boosting Jailbreak Attack with Momentum",
"main_content": "INTRODUCTION The discovery of adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2014) for modern deep learning models has raised continuous concern for their deployment, yet the defense for these adversarial attacks remains an open research problem (Carlini & Wagner, 2017; Athalye et al., 2018; Croce & Hein, 2020; Chen et al., 2023a; 2024a). Moreover, with the milestone success of Large Language Models (LLMs), they have also received serious safety concerns for this vulnerability against malicious usage, which is typically referred to as the jailbreak attack (Wei et al., 2023a; Shen et al., 2023; Dong et al., 2023). To tackle this issue, numerous efforts have been dedicated to \ufb01ne-tuning the pre-trained language models to reduce their generation toxicity, which is considered a part of the alignment process (Ouyang et al., 2022; Bai et al., 2022b). Nevertheless, as demonstrated by Zou et al. (2023b), LLMs remain susceptible to gradient-based attacks, wherein adversaries manipulate prompts to induce the LLM to generate harmful or undesirable content. Drawing inspiration from AutoPrompt (Shin et al., 2020), which employs gradient heuristics and search techniques to automatically generate prompts for eliciting knowledge from LLMs, the proposed Greedy Coordinate Gradient (GCG) attack (Zou et al., 2023b) optimizes an adversarial suf\ufb01x for a given malicious prompt (e.g., how to build a bomb) using gradient information and greedy search. By attaching the suf\ufb01x to the target harmful request, GCG can effectively circumvent various popular LLMs. Furthermore, one intriguing property of the GCG attack is the universality that the crafted adversarial suf\ufb01x can work across different prompts. Speci\ufb01cally, by crafting an adversarial suf\ufb01x s from optimizing over a batch of malicious prompts {p1, \u00b7 \u00b7 \u00b7 , pn} (referred to as training set), such suf\ufb01x s can also be used to jailbreak unseen prompts. Besides, several concurrent studies (Yong et al., 2023; Yuan et al., 2024) have introduced black-box attack methods that do not rely on access to input gradients or model parameters, underscoring the urgency of addressing this safety concern. We provide a more detailed review of adversarial machine learning and its recent advances on LLMs in Appendix A. However, such a gradient-based attack encounters ef\ufb01ciency bottlenecks, with the optimization process being notably time-consuming. The default optimization epochs for the GCG attack typically span 500 steps, each involving numerous forward passes, resulting in tens of minutes required to \u2217Equal contribution. \u2020Corresponding author: Zeming Wei ([email protected]). 1 \fPublished at ICLR 2024 Workshop on Reliable and Responsible Foundation Models attack a single prompt. To address this challenge, our paper proposes a rethinking of the attack process from an optimization standpoint. Each iteration of the GCG attack can be conceptualized as a step of gradient descent over the heuristic loss. Drawing inspiration from stochastic gradient descent (SGD) and momentum methods, we demonstrate that the optimization of the suf\ufb01x can be expedited by incorporating a momentum term, akin to widely employed techniques in modern deep learning. We introduce our novel gradient-based attack as the Momentum Accelerated GCG (MAC) attack, distinguished by the inclusion of a momentum term to enhance the optimization process. 
The MAC attack dynamically adjusts the adversarial suf\ufb01x following each forward-backward pass, ensuring stability across diverse prompts in the training set. We further conduct experiments for both the individual and multiple prompt attack cases to showcase the notable acceleration achieved by MAC, along with an improvement in the attack success rate. For example, MAC can achieve a higher multiple attack success rate (ASR) of 48.6% on vicuna7b Zheng et al. (2023) with only 20 steps, signi\ufb01cantly than vanilla GCG (38.1%). Overall, our work provides a novel technique to accelerate the jailbreak attacks on aligned language models, along with new insights into the safety evaluations of AI systems. 2 BACKGROUND AND RELATED WORK With the signi\ufb01cant success of the fast-paced development of large language models (LLMs), concerns regarding their potential for harmful generation and malicious usage have emerged (Bommasani et al., 2022; Chen & Shu, 2023; Liu et al., 2023a), among which the jailbreaking issue (Wei et al., 2023a; Dong et al., 2023; Wei et al., 2023b) has been identi\ufb01ed as one of the major concerns (Yao et al., 2023; Chen & Shu, 2023; Zhang et al., 2024a). Recent works have shown that it is possible to manipulate the LLMs to generate harmful outputs with discrete optimization over the prompt (Guo et al., 2021; Wen et al., 2023), even though these models are safely trained or \ufb01ne-tuned for improved alignment (Bai et al., 2022a;b; Ouyang et al., 2022). In addition, recent works also proposed black-box attacks on aligned language models without access to the gradient or model parameters (Wei et al., 2023b; Zhu et al., 2023; Li et al., 2023a; Xu et al., 2023), further underscoring the jailbreaking issue of LLMs. On the other hand, some preliminary defense methods are also proposed in this context (Wei et al., 2023b; Jain et al., 2023; Li et al., 2023b; Wang et al., 2024). As a representative attack, GCG (Zou et al., 2023b) optimizes an adversarial suf\ufb01x for each harmful prompt in the following manner: for each iteration, \ufb01rst calculate the cross-entropy loss of each token in the suf\ufb01x with respect to generating the target pre\ufb01x (e.g., Sure, here\u2019s). Then, randomly select a batch of substitute tokens in the suf\ufb01x based on the gradient of the loss for each token. Finally, calculate the loss of each substituted suf\ufb01x in the batch, then replace the current suf\ufb01x with the one with the lowest loss, as shown in Algorithm 1. This attacking process is referred to as the individual prompt attack and can be generalized to the multiple prompts scenario. Algorithm 1: One-step Greedy Coordinate Gradient (GCG) (Zou et al., 2023b) Input: A LLM f\u03b8, training prompt p with corresponding optimization loss \u2113, batch size B, top-k k, suf\ufb01x s with length l, suf\ufb01x gradient g Output: Updated jailbreak suf\ufb01x s\u2032 1 Initialize s = [s1, \u00b7 \u00b7 \u00b7 , sl]; 2 for i : 1 \u2192l do 3 Xi \u2190Top-k(\u2212g(i)); 4 end 5 for b : 1 \u2192B do 6 sb \u2190s; 7 s(i) b \u2190xj, where i \u223cUniform([1, \u00b7 \u00b7 \u00b7 , l]), xj \u223cUniform(Xi); 8 end 9 s\u2032 \u2190arg minb \u2113(sb); 10 return s\u2032; 2 \fPublished at ICLR 2024 Workshop on Reliable and Responsible Foundation Models 3 METHODOLOGY In this section, we present our Momentum Accelerated GCG (MAC) attack on aligned language models. We \ufb01rst show that the suf\ufb01x optimization procedure of GCG can be viewed as a stochastic gradient descent (SGD). 
However, this method may suffer from unstableness over different prompts, as the proper suf\ufb01xes for different prompts or epochs may vary. Therefore, inspired by boosting conventional adversarial attacks with momentum methods (Dong et al., 2018) which has achieved great success in attacking vision models, we also propose to introduce a momentum term to the gradient for the search process. Algorithm 2: MAC attack on individual prompt Input: A LLM f\u03b8, training prompt p with corresponding optimization loss \u2113, optimizing epochs T , top-k k, suf\ufb01x length l, momentum decay factor \u00b5 Output: A jailbreak suf\ufb01x s 1 Initialize s = [s1, \u00b7 \u00b7 \u00b7 , sl], momentum-gradient g \u2190\u2207s\u2113([p, s]); 2 for t : 1 \u2192T do 3 gt \u2190\u2207s\u2113([p, s]); 4 g \u2190\u00b5g + (1 \u2212\u00b5)gt; 5 for i : 1 \u2192l do 6 Xi \u2190Top-k(\u2212g(i)); 7 end 8 for b : 1 \u2192B do 9 sb \u2190s; 10 s(i) b \u2190xj, where i \u223cUniform([1, \u00b7 \u00b7 \u00b7 , l]), xj \u223cUniform(Xi); 11 end 12 s \u2190arg minb \u2113(sb); 13 end 14 return s; The overall process is outlined in Algorithm 2. We maintain a momentum gradient term g. During each epoch, we calculate the loss \u2113i for the current prompt xi. The loss is designed as the crossentropy loss for generating a target pre\ufb01x (e.g., Sure, here\u2019s) (Zou et al., 2023b). Then, we update the momentum gradient g with the gradient of each token in the current suf\ufb01x w.r.t. the current loss with a decay factor \u00b5. Finally, we apply the momentum gradient to the one-step-GCG to update the suf\ufb01x. Note that the case of \u00b5 = 0 is exactly the GCG attack, which we believe is not stable and ef\ufb01cient enough over different prompts. We provide an example of the optimized suf\ufb01x by MAC in Appendix C. Furthermore, we extend our MAC attack to accommodate multiple prompts, as depicted in Algorithm 3. Unlike updating the gradient over steps for a single harmful request, in this case, our MAC maintains the momentum term across different prompts during optimization, thereby enhancing the stability of the process across multiple prompts and achieving improved generalization ability. 4 EXPERIMENT Experiment set-up. Our experiment is based on vicuna-7b (Zheng et al., 2023), a popular aligned chat LLM. Following Zou et al. (2023b), we randomly select 100 adversarial prompts in their AdvBench dataset which contains hundreds of harmful request prompts. For individual prompt attacks, we run the attack 5 times with different randomly chosen seeds and calculate their average performance. For multiple prompt attacks, we split them into 5 subsets which contain 20 prompts each, and use each subset as the training set and attack all the 100 prompts to run 5 independent experiments. For the token searching process, we set the token substitute batch size B to 256, and top-k to 256. Since we focus on the ef\ufb01ciency of the attack, we only optimize the suf\ufb01x for 20 epochs (T = 20). The criterion of attack success is whether the response contains any of the defensive tokens (details in Appendix B). 
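To make the update concrete, below is a self-contained toy sketch of one MAC step (the GCG coordinate step of Algorithm 1 driven by the momentum gradient of Algorithm 2). It replaces the LLM's target-prefix cross-entropy with a random linear `toy_loss` over one-hot suffix tokens, so it illustrates only the mechanics of the momentum heuristic, top-k candidate generation, and greedy substitution, not an actual jailbreak; all constants are illustrative:

```python
import torch

V, L_SFX, TOPK, BATCH, MU = 100, 8, 8, 64, 0.6
W = torch.randn(V)  # fixed random token scores: toy stand-in for the model

def toy_loss(onehot: torch.Tensor) -> torch.Tensor:
    # differentiable w.r.t. the one-hot suffix representation
    return (onehot @ W).sum()

def mac_step(suffix: torch.Tensor, g: torch.Tensor, mu: float = MU):
    onehot = torch.nn.functional.one_hot(suffix, V).float().requires_grad_()
    loss = toy_loss(onehot)
    g_t, = torch.autograd.grad(loss, onehot)      # token-level gradient heuristic
    g = mu * g + (1.0 - mu) * g_t                 # momentum accumulation (MAC)
    cand = (-g).topk(TOPK, dim=1).indices         # top-k substitutions per position
    best, best_loss = suffix, loss.item()
    for _ in range(BATCH):                        # greedy search over random swaps
        s = suffix.clone()
        pos = torch.randint(L_SFX, (1,)).item()
        s[pos] = cand[pos, torch.randint(TOPK, (1,)).item()]
        l_val = toy_loss(torch.nn.functional.one_hot(s, V).float()).item()
        if l_val < best_loss:
            best, best_loss = s, l_val
    return best, g

suffix = torch.randint(V, (L_SFX,))
g = torch.zeros(L_SFX, V)
for _ in range(5):
    suffix, g = mac_step(suffix, g)
```

Setting `mu = 0` recovers plain GCG; larger `mu` keeps more heuristic signal from earlier steps (or, in the multi-prompt case, from earlier prompts).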
Algorithm 3: MAC attack on multiple prompts
Input: an LLM f_θ, training prompts P = {p_1, ..., p_n} with corresponding optimization losses {ℓ_1, ..., ℓ_n}, optimizing epochs T, batch size B, top-k value k, suffix length l, momentum decay factor μ
Output: a universal jailbreak suffix s
1: Initialize s = [s_1, ..., s_l] and the momentum gradient g ← ∇_s ℓ([p_1, s])
2: for t = 1 to T do
3:   for i = 1 to n do
4:     g_{t,i} ← ∇_s ℓ_i([p_i, s])
5:     g ← μ·g + (1 − μ)·g_{t,i}
6:     for each suffix position j = 1 to l: X_j ← Top-k(−g^{(j)})
7:     for b = 1 to B: s_b ← s; s_b^{(j)} ← x, where j ~ Uniform([1, ..., l]), x ~ Uniform(X_j)
8:     s ← argmin_b ℓ(s_b)
9: return s

Individual prompt attack. The effectiveness of the MAC attack in enhancing jailbreak performance is analyzed through the comparative data presented in Table 1. For each μ, we demonstrate MAC's effectiveness by conducting 5 individual experiments and calculating the average attack success rate (ASR) and the average number of steps needed to successfully attack a prompt, as well as the standard deviation of ASR and steps across these experiments, to show the robustness of the results.

Table 1: Evaluation of MAC attacks in individual-prompt jailbreak performance

| Attack | Momentum | Average ASR (↑) | Std. (↓) | Average Steps (↓) | Std. (↓) |
|---|---|---|---|---|---|
| GCG | μ = 0 | 75.0 | 1.22 | 12.62 | 0.27 |
| MAC (ours) | μ = 0.2 | 76.6 | 2.07 | 12.37 | 0.31 |
| MAC (ours) | μ = 0.4 | 76.2 | 4.82 | 12.46 | 0.36 |
| MAC (ours) | μ = 0.6 | 76.0 | 2.12 | 12.55 | 0.24 |
| MAC (ours) | μ = 0.8 | 72.4 | 3.58 | 13.05 | 0.57 |

The table sheds light on the nuanced dynamics of individual attacks within the MAC framework. Note that the case μ = 0 is essentially the original GCG attack, which serves as the baseline for our method. For μ ∈ {0.2, 0.4, 0.6}, our method outperforms the original GCG, achieving an average ASR increase of 1.3% and reducing the average attack steps from 12.62 to 12.46; while effectiveness improves in these experiments, the standard deviations of these indices remain relatively unchanged. For μ = 0.8, effectiveness begins to decrease because the emphasis on stability becomes excessive. This underscores the efficacy of incorporating momentum as a means to augment stability, a strategy that evidently enhances the attack success rate. The introduction of momentum, despite some instability of results across configurations, generally supersedes the performance of the momentum-less scenario. Notably, at μ = 0.6, the benefits of this approach are both significant and stable, demonstrating a clear advantage with minimal expenditure in terms of the number of steps required for a successful attack. Multiple prompt attack. In multi-prompt attacks, the importance of generalization grows, and the benefits of adding a momentum term become evident. We further evaluate the performance of the MAC attack and compare it with vanilla GCG in Table 2. Similarly, we use the average value and standard deviation of ASR across the 5-fold experiments as metrics to demonstrate our method's effectiveness. 
In addition, we introduce Maximum ASR, i.e., the highest ASR achieved on the test set throughout all attack steps, as a crucial metric, since the goal of multiple-prompt attacks is to craft an effective adversarial suffix and the suffixes crafted during any epoch are available.

Table 2: Evaluation of MAC attacks in multiple-prompt jailbreak performance

| Attack | Momentum | Average ASR (↑) | Std. (↓) | Maximum ASR (↑) | Std. (↓) |
|---|---|---|---|---|---|
| GCG | μ = 0 | 38.1 | 8.66 | 72.7 | 15.36 |
| MAC (ours) | μ = 0.2 | 35.9 | 5.95 | 74.9 | 15.30 |
| MAC (ours) | μ = 0.4 | 44.3 | 7.31 | 81.9 | 12.45 |
| MAC (ours) | μ = 0.6 | 48.6 | 14.97 | 86.1 | 9.00 |
| MAC (ours) | μ = 0.8 | 43.4 | 13.69 | 83.8 | 9.18 |

As illustrated in Table 2, experiments with μ ∈ {0.4, 0.6, 0.8} show our method's effectiveness, particularly in the improved Average ASR (increasing by 6.2%, 10.5%, and 5.3%, respectively), highlighting the critical role of dataset selection in improving attack efficacy and robustness. Furthermore, MAC achieves higher Maximum ASRs (increasing by 2.2%, 9.2%, 13.4%, and 11.1%, respectively) and lower standard deviations, indicating enhanced efficiency and robustness in crafting an effective adversarial suffix. Notably, when μ ≥ 0.6, the standard deviation of Maximum ASR decreases significantly from 15.36 to approximately 9, indicating that higher momentum values contribute to better stability of Maximum ASR. Detailed analysis shows that μ = 0.6 yields the optimal performance: at this momentum value, the MAC attack achieves its highest Average ASR (48.6%) and Maximum ASR (86.1%). This configuration also results in the lowest standard deviation of Maximum ASR (9.00), indicating that it optimizes both effectiveness and consistency. However, it is crucial to note that both lower and higher momentum values fail to balance stability and learning-rate adaptation effectively, potentially compromising performance. This indicates a clear sweet spot for the momentum value μ, underscoring the importance of nuanced parameter tuning in adversarial contexts. 5 DISCUSSION AND LIMITATIONS In this work, we rethink a currently popular gradient-based attack on LLMs from the optimization perspective. We propose a new jailbreak attack through the lens of optimization, named Momentum Accelerated GCG (MAC), and demonstrate that accelerated optimization methods like momentum SGD can effectively boost such attacks with notably higher attack success rates and fewer optimization steps, providing deeper insights into current jailbreaking research. While the recent research thread mainly focuses on black-box attacks (Wei et al., 2023b; Zhu et al., 2023; Li et al., 2023a; Chen et al., 2023b), we consider it still important to develop efficient white-box attacks for developers to perform efficient evaluation and red-teaming of LLMs. This is similar to conventional adversarial robustness in the vision domain, where white-box attack methods remain valuable for evaluation (Athalye et al., 2018; Croce & Hein, 2020). As a preliminary study, we acknowledge some limitations of this work, which we leave as future work. First, MAC only considers a batch size of 1 in the multiple-prompt attack case. It would be interesting to see whether a larger batch size can find a better trade-off between efficiency and stability. In addition, many optimization methods other than momentum, such as Adam, remain unexplored. 
Finally, our experiments focus on only one model; the method's effectiveness could be further substantiated by evaluating more models. ACKNOWLEDGEMENT This work was sponsored by the Beijing Natural Science Foundation's Undergraduate Initiating Research Program (Grant No. QY23041) and the National Natural Science Foundation of China (Grant No. 62172019)."
}
title_10K/test_title_short_2405.01248v1.json
ADDED
@@ -0,0 +1,16 @@
{
"url": "http://arxiv.org/abs/2405.01248v1",
"title": "DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines",
"abstract": "Diffusion models have emerged as dominant performers for image generation. To\nsupport training large diffusion models, this paper studies pipeline parallel\ntraining of diffusion models and proposes DiffusionPipe, a synchronous pipeline\ntraining system that advocates innovative pipeline bubble filling technique,\ncatering to structural characteristics of diffusion models. State-of-the-art\ndiffusion models typically include trainable (the backbone) and non-trainable\n(e.g., frozen input encoders) parts. We first unify optimal stage partitioning\nand pipeline scheduling of single and multiple backbones in representative\ndiffusion models with a dynamic programming approach. We then propose to fill\nthe computation of non-trainable model parts into idle periods of the pipeline\ntraining of the backbones by an efficient greedy algorithm, thus achieving high\ntraining throughput. Extensive experiments show that DiffusionPipe can achieve\nup to 1.41x speedup over pipeline parallel methods and 1.28x speedup over data\nparallel training on popular diffusion models.",
"authors": "Ye Tian, Zhen Jia, Ziyue Luo, Yida Wang, Chuan Wu",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.DC",
"cats": [
"cs.DC"
],
"label": "Original Paper",
"paper_cat": "Diffusion AND Model",
"gt": "DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines",
"main_content": "INTRODUCTION Diffusion models have become the dominant choice for content generation today, including text-image synthesis (Choi et al., 2021) and video generation (Ramesh et al., 2022). Large diffusion models such as Stable Diffusion (Rombach et al., 2022), ControlNet (Zhang & Agrawala, 2023), and Imagen (Saharia et al., 2022) achieve state-of-the-art performance in various scenarios. There is a continuing trend to develop larger diffusion models by increasing the backbone size (Rombach et al., 2022; Peebles & Xie, 2022a; Bao et al., 2023; Podell et al., 2023), cascading multiple backbones to enable higher resolution image generation (Nichol et al., 2021; Peebles & Xie, 2022a; Saharia et al., 2022; Ho et al., 2022; Podell et al., 2023), and combining different transformer architectures with diffusion models (Peebles & Xie, 2022a; Zhang & Agrawala, 2023; Wu et al., 2023). Data parallelism is adopted for distributed diffusion model training (Falcon & The PyTorch Lightning team, 2019; Bian et al., 2021; von Platen et al., 2022). For large diffusion models, this method duplicates parameters, which limits the training batch size (Rombach et al., 2022; Ho et al., 2022; Saharia et al., 2022) and device utilization, and causes significant synchronization overhead, especially when the *Work done during internship at AWS. 1The University of Hong Kong, Hong Kong 2Amazon Web Services, USA 3The Ohio State University, USA. Correspondence to: Ye Tian <[email protected]>. Proceedings of the 7 th MLSys Conference, Santa Clara, CA, USA, 2024. Copyright 2024 by the author(s). training scale is large (Narayanan et al., 2019). Pipeline parallelism (Huang et al., 2019; Narayanan et al., 2019; Luo et al., 2022) has been widely adopted to train large DNN models, which partitions networks across multiple devices and pipelines micro-batch processing across model partitions, substantially alleviating memory consumption on a single device and enabling larger training batch sizes. Although pipeline parallelism is potentially useful in enabling larger diffusion model training, it has not been well explored for diffusion models and its application faces several challenges as follows: First, the structural characteristics and special training procedures of diffusion models cannot be handled well by traditional pipelining methods. A diffusion model typically contains a trainable part with one or multiple backbone models (e.g., U-Net) (Rombach et al., 2022), and a nontrainable part with frozen text and image encoders, and they are usually trained with special techniques such as selfconditioning (Chen et al., 2022), which involves an additional forward computation pass on the backbone. Pipeline training involves only the trainable part, while the nontrainable part is not readily handled by existing pipeline training methods because it does not require pipelining. Self-conditioning is beyond the scope of existing pipeline systems, as they assume that there is only one forward pass. Second, pipeline bubbles are often significant in synchronous pipeline training (Huang et al., 2019; Fan et al., 2021; Luo et al., 2022), which is more widely used in practice due to not altering model performance but involves arXiv:2405.01248v1 [cs.DC] 2 May 2024 \fDiffusionPipe: Training Large Diffusion Models with Efficient Pipelines periodic pipeline flushing. 
We identify a unique opportunity to fill the pipeline bubbles using the computation of non-trainable model components, to substantially improve device utilization and expedite training. However, there are dependencies between the trainable and non-trainable parts that block pipeline bubble filling by overlapping their execution. In addition, how to partition the non-trainable part into sets of layers and insert them into pipeline bubbles has not been studied. Third, non-trainable layers with extra-long execution times are common in frozen encoders (Kingma & Welling, 2013). Such layers may not fit into any pipeline bubble and block the filling of pipeline bubbles with all subsequent layers in the non-trainable part, which cannot be solved by only partitioning the non-trainable part into sets of layers. In addition, as non-trainable layers' execution times are discrete, it is unlikely that the idle time in an individual pipeline bubble can be fully utilized, leading to performance degradation. Figure 1. Training process of Stable Diffusion v2.1 (Rombach et al., 2022) and additional feedback of self-conditioning (Chen et al., 2022). Non-trainable components are marked in grey boxes. Table 1. Ratio of the forward time of the non-trainable part to the forward and backward time of the trainable part on an A100 GPU: | Model / Batch size | 8 | 16 | 32 | 64 | |---| Stable Diffusion v2.1 | 38% | 41% | 43% | 44% | ControlNet v1.0 | 76% | 81% | 86% | 89% |. In this paper, we propose DiffusionPipe, an efficient pipeline training system designed specifically for large diffusion models. DiffusionPipe systematically determines optimized model partitioning, stages, and replication settings while applying pipeline bubble filling techniques. These optimizations are tailored for a variety of representative diffusion models and training methods. To the best of our knowledge, we are the first to enable efficient pipeline parallel training of diffusion models. Our contributions can be summarized as follows: ▷ We propose a unified dynamic programming-based algorithm for optimized model partitioning that can handle various training scenarios, e.g., models with different numbers of backbones and models trained with self-conditioning. The proposed partitioning algorithm optimizes the model partitioning scheme under various settings of the number of stages and the number of micro-batches, with performance comparable to state-of-the-art pipeline paradigms under traditional pipelining, and effectively handles scenarios beyond traditional pipelining and specific to diffusion models. ▷ We design a novel pipeline bubble filling strategy that fills the non-trainable part computation into the bubble time of the pipeline training of the backbone(s), effectively eliminating pipeline bubbles. It efficiently partitions the non-trainable components and the input data for bubble filling, and effectively addresses dependencies between the non-trainable and trainable parts by allowing cross-iteration overlapping of backbone training of an iteration with non-trainable part computation of the next iteration, filling pipeline bubbles of the former with the latter. ▷ We effectively handle extra-long non-trainable layers which do not fit into individual pipeline bubbles by a partial-batch processing design, where a non-trainable layer processes only a portion of a training batch. 
Partial-batch layer\u2019s execution time can be precisely controlled by its input batch size, enabling it to be inserted into bubbles. In addition, partial-batch layers help eliminate the remaining idle time in pipeline bubbles after inserting non-trainable layers (processing a complete batch). We implement DiffusionPipe and compare it to state-ofthe-art data parallel training systems (Rasley et al., 2020) and ZeRO-3 (Rajbhandari et al., 2021), together with synchronous pipeline training paradigms, including SPP (Luo et al., 2022) and GPipe (Huang et al., 2019). Experimental results show that DiffusionPipe achieves up to 1.28x speedup over data parallel training and up to 1.41x speedup over existing pipeline parallel methods on representative diffusion models. We observe that DiffusionPipe achieves almost complete elimination of pipeline bubbles and effectively handles multiple training scenarios of diffusion models. 2 BACKGROUND AND MOTIVATION 2.1 Diffusion models and training Diffusion models (Ho et al., 2020; Song et al., 2020; Chen et al., 2022; Rombach et al., 2022; Ho et al., 2022; Saharia et al., 2022; Podell et al., 2023) are generative models that learn to reverse the diffusion process that gradually turns data into noise. They typically comprise a backbone model that performs image generation and multiple frozen encoders that encode image and conditional information, e.g., class information (Yu et al., 2015), text description (Deng et al., 2009), canny edge (Canny, 1986) and human pose (Kreiss et al., 2021), and provide it as input to the backbone. During diffusion model training, the encoders are typically fixed and executed in advance in the forward computation pass (referred to as the non-trainable part), while the backbone (the trainable part) is trained with both forward computation and backward propagation (Fig. 1). Table 1 compares the execution time of the non-trainable part and the training time (forward and backward) of the trainable part. \fDiffusionPipe: Training Large Diffusion Models with Efficient Pipelines 0 0 0 0 1 1 1 2 2 3 0 1 1 2 2 0 2 3 3 3 0 0 1 1 1 2 2 2 3 3 3 3 Warm-up Stable Cool-down Device 0 Device 1 Device 2 Device 3 Figure 2. FIFO-1F1B schedule of a DNN. Gray blocks without numbers indicate pipeline bubbles. Potential critical paths are marked with a dashed line. Numbers indicate micro-batch index in both forward (blue) and backward (pink) steps. Device 0 Device 1 Device 2 Device 3 0 0 0 0 4 4 4 4 1 1 1 1 5 5 5 5 6 6 6 6 2 2 2 2 3 3 3 3 7 7 7 7 0 1 2 3 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7 4 5 6 7 4 5 6 7 4 5 6 7 Warm-up Stable Cool-down Figure 3. Bidirectional pipeline schedule of a DNN. Communication omitted. The same meaning of number and color with Fig. 2. Micro-batch 0 to 3 pipeline from device 0 to 3 (down direction), while micro-batch 4 to 7 pipeline from device 3 to device 0 (up direction). Some diffusion models, e.g., Cascaded Diffusion Models (CDM) (Ho et al., 2022; Ramesh et al., 2022; Podell et al., 2023), involve multiple backbones of different capacities for high-resolution image generation. Multiple backbones accept the same encoder outputs, and each backbone also takes the output of the preceding backbone as input. The training of backbones in a CDM are typically independent, and each is trained on a different set of devices using the same procedure, as shown in Fig. 1. In the current mainstream diffusion models, U-Net (Ho et al., 2020; Rombach et al., 2022) is widely used as the backbone model. 
Transformer models can also serve as the backbone (Peebles & Xie, 2022a; Bao et al., 2023). T5xxl (Raffel et al., 2020), BERT (Devlin et al., 2018) and CLIP (Radford et al., 2021) text encoders are popular text encoders, while the image encoders are often variational auto-encoders (Kingma & Welling, 2013), ViT (Dosovitskiy et al., 2020) and CLIP image encoders. There are corresponding encoders (Zhang & Agrawala, 2023) for other modalities, such as canny edge and human pose. Self-conditioning (Chen et al., 2022) has become a very popular technique for training diffusion models (Rombach et al., 2022; Saharia et al., 2022; Yuan et al., 2022), which improves the sampling quality by introducing an additional forward computation pass of the backbone (Fig. 1). The output of this forward pass is fed back to the backbone and serves as a conditional input. The fidelity of the image is then improved because each step is conditioned on the previously generated samples. Table 2. Proportion of synchronization in training iteration time at local batch size 8 on A100 GPUs Model / GPU count 8 16 32 64 Stable Diffusion v2.1 5.2% 19.3% 36.1% 38.1% ControlNet v1.0 6.9% 22.7% 39.1% 40.1% 2.2 Pipeline parallel training, schedule and pipeline bubble Pipeline parallel training partitions the model into stages, and each stage is deployed on a single device; the input data batch in each training iteration is divided into multiple micro-batches, which are processed through the model stages in a pipelined manner. The micro-batch execution pipelines are typically scheduled by a First-In-First-Out (FIFO) heuristic (Chen et al., 2015; Abadi et al., 2016; Sergeev & Del Balso, 2018), which executes micro-batches on model stages according to their ready order. The OneForward-One-Backward (1F1B) schedule is widely adopted with FIFO, that alternatively executes forward computation and back propagation of micro-batches on each model stage in the stable phase of the pipeline execution (when multiple micro-batches are available to run on a model stage at the same time). As illustrated in Fig. 2, this schedule allows releasing intermediate activations and reduces peak memory usage by launching the backward computation when forward computation of the micro-batch is complete. Chimera (Li & Hoefler, 2021) proposes bidirectional pipelining to reduce pipeline bubbles while retaining training synchronous. Chimera maintains two pipelines of micro-batch execution in different device rank orders (i.e., pipeline directions) on the same set of model stages, with the two pipeline execution schedules being symmetric along the device dimension. An example of bidirectional pipelining is shown in Fig. 3. Each micro-batch\u2019s execution can fit perfectly into pipeline bubbles of its counterpart in the pipeline of the other direction (when the number of stages is even). In synchronous pipeline training, pipeline bubbles generally exist in the pipeline schedule (Fig. 2). There is a barrier that gradient synchronization imposes between pipeline stages of the trainable part of diffusion models at different iterations, disabling pipeline bubbles be filled by the trainable part at different iterations. 
Therefore, although pipeline bubbles can be partially reduced by applying a better model partitioning and pipeline schedule, e.g., SPP (Luo et al., 2022) and Chimera (Li & Hoefler, 2021), such approaches cannot fundamentally eliminate pipeline bubbles, as they only manipulate the trainable part of the model and do not take the non-trainable part into consideration. 2.3 Synchronization overhead and memory consumption of data parallel training Diffusion models are largely trained using data parallelism nowadays (Rombach et al., 2022; Ho et al., 2022; Saharia \fDiffusionPipe: Training Large Diffusion Models with Efficient Pipelines 1 2 3 4 Number of micro-batches 4 3 2 Number of stages 67.6% 684.3% 51.0% 342.2% 41.0% 228.1% 34.3% 171.1% 58.2% 456.2% 41.0% 228.1% 31.7% 152.1% 25.8% 114.1% 41.0% 228.1% 25.8% 114.1% 18.8% 76.0% 14.8% 57.0% (a) Stable Diffusion v2.1 1 2 3 4 Number of micro-batches 4 3 2 Number of stages 61.3% 335.4% 44.2% 167.7% 34.5% 111.8% 28.4% 83.9% 51.4% 223.6% 34.5% 111.8% 26.0% 74.5% 20.9% 55.9% 34.5% 111.8% 20.9% 55.9% 15.0% 37.3% 11.7% 28.0% (b) ControlNet v1.0 Figure 4. Ratio of pipeline bubble time to iteration time (upper) and ratio of pipeline bubble time to non-trainable part execution time (lower) at batch size 64 using FIFO-1F1B scheduling. et al., 2022; Podell et al., 2023), which involves significant parameter synchronization overhead among devices and large memory consumption on each device that restricts the maximum feasible local batch size and the device utilization. For example, Stable Diffusion is trained at a local batch size of only 8 on each TPU-v3 (32GB) in (Rombach et al., 2022) consuming about 24.3 GB memory, which results in limited device utilization and exacerbates the synchronization portion of the training time. The synchronization overhead in Table 2 is computed as the ratio of parameter synchronization time to the end-to-end time of a training iteration. As the number of devices increases, parameter synchronization soon takes up a significant portion of the iteration time. In summary, the data parallel style of diffusion model training limits the training batch size and imposes high synchronization overhead. 2.4 Efficient pipeline bubble filling with non-trainable components We profile the iteration training time of two popular diffusion models (without self-conditioning) by pipelining their backbones under different model stage and micro-batch number settings, and executing the non-trainable part using data parallelism before backbone training. Fig. 4 shows the pipeline bubble ratios, where the iteration time is the sum of pipeline training time of the backbone and the execution time of the non-trainable part in each training iteration. Pipeline bubbles can take up to 68% of the overall training time, which is quite significant, according to the upper line in Fig. 4. In the lower line, a ratio close to 1 indicates that the pipeline bubble time can be almost completely filled by scheduling the non-trainable part in pipeline bubbles, under the respective model stage and micro-batch numbers. This observation motivates us to advocate pipeline bubble filling with the non-trainable part, and to study the detailed bubble filling strategies. Fig. 5 shows that many non-trainable layers (indexed 0 to 21) in both models have short execution times, which belong to the frozen text encoder. Most layers (indexed 22 to 41) from the frozen image encoder take a moderate amount of time to compute (less than 30 ms). 
Such a distribution of 0 10 20 30 40 Non-trainable layer index 10 1 10 0 10 1 10 2 Execution time (ms) (a) Stable Diffusion v2.1 0 20 40 60 Non-trainable layer index 10 1 10 0 10 1 10 2 Execution time (ms) (b) ControlNet v1.0 Figure 5. Execution time of non-trainable layers at batch size 64. 0 20 40 60 Batch size 0 100 200 300 400 Execution time (ms) Top-1 Top-2 Top-3 2 stages 3 stages 4 stages (a) Stable Diffusion v2.1 0 20 40 60 Batch size 0 100 200 300 400 Execution time (ms) Top-1 Top-2 Top-3 2 stages 3 stages 4 stages (b) ControlNet v1.0 Figure 6. Execution time of top-3 non-trainable layers with longest execution time under different batch sizes, compared to longest pipeline bubble time when there are 4 micro-batches and different number of stages at batch size 64 using FIFO-1F1B scheduling. non-trainable layers with a large proportion of short and moderately long layer execution times provides excellent opportunities for executing individual layers in pipeline bubbles ranging from 10 to 100 ms. There are also some non-trainable layers with extra-long execution times (more than 400 ms), as shown in Fig. 5. Such layers may not fit into any pipeline bubble. Nevertheless, we observe that the layer execution time can be precisely controlled by adjusting the input batch size. Fig. 6 shows the execution times of the layers with the longest execution times at different batch sizes. When the batch size is reduced to 16, most of these non-trainable layers can fit into the longest pipeline bubble obtained by the way we identify bubbles in Fig. 2, implying that we can run such layers in pipeline bubbles by partitioning their input. We seek to design an efficient algorithm to schedule the execution of non-trainable layers into pipeline bubbles. 3 SYSTEM DESIGN Fig. 7 presents an overview of DiffusionPipe, which comprises of two modules: (1) The front-end carries out our workflow of generating an optimized pipeline training schedule for an input diffusion model, including pipeline training configurations of the backbone(s) and bubble-filling strategies of the non-trainable part; (2) The back-end is an execution engine that performs pipeline training according to the optimized pipeline schedule. \fDiffusionPipe: Training Large Diffusion Models with Efficient Pipelines Profile records Parallel profile Trainable part partition schemes Schedule Overall partition schemes Front-end Dynamic programming Greedy filling Back-end Instruction generation Invoke Schedule 1 2 4 5 3 6 Model stages Execution engine Pipeline instructions of current device Optimal overall pipeline schedule Trainable part pipeline schedules Pipeline instructions of all devices Pipeline instruction implementations Synchronization Trainable stage forward Trainable stage backward Non-trainable stage forward Load micro-batch data Send or receive data Input model Figure 7. The architecture of DiffusionPipe Table 3. Pipeline training hyper-parameters Symbol Description S Number of model stages M Number of micro-batches D Pipeline parallel group size1 3.1 Workflow DiffusionPipe takes the diffusion model configuration, the training batch size, and the cluster configuration (i.e., number of machines and number of devices per machine) as inputs. DiffusionPipe first performs parallel profiling on the entire cluster to obtain the model layer execution time at different batch sizes (step 1 in Fig. 7), which is used in steps 2 to 5. 
Based on the input specifications, DiffusionPipe searches for pipeline training hyper-parameters as listed in Table 3. Note that DiffusionPipe supports mixed pipeline and data parallelism, as shown in Fig. 8. For each feasible hyper-parameter combination (S, M, and D), DiffusionPipe generates a near-optimal partitioning scheme for the trainable backbone(s) (\u00a74, step 2), including the number of layers in each model stage and the number of devices on which each stage replicates. According to the corresponding pipeline schedule generated in step 3, DiffusionPipe further partitions the non-trainable part and fills it into pipeline bubbles (\u00a75, step 4). Then DiffusionPipe generates the overall pipeline training schedules, and selects the optimal one with minimum iteration time (step 5). Finally, DiffusionPipe generates pipeline instructions for the back-end module according to the overall pipeline schedule (step 6). 3.2 Cross-iteration pipelining For effective pipeline bubble filling that respects data dependencies between the non-trainable part and the backbone(s), DiffusionPipe advocates cross-iteration pipeline bubble fill1Pipeline parallel group is a minimum group of devices on which a complete set of pipeline communications is performed. In DiffusionPipe, pipeline parallel group size (i.e., D) = world size (i.e., number of devices in the cluster) / data parallel degree. 0 1 2 3 4 5 6 7 Stage 0 Stage 1 Stage 2 Machine 0 Machine 1 Device mapping Data dependency Figure 8. DiffusionPipe\u2019s data and pipeline parallelism. Devices in the same color run the same model stage. 0 0 1 1 0 1 0 1 N' N' N' N' (F) 0 0 1 1 0 1 0 1 F N N Non-trainable part of the current iteration N' Non-trainable part of the next iteration N(F) Overlapped synchronization and non-trainable part execution Pipeline bubble 0 Forward stage 0 Backward stage F Synchronization (pipeline flush) Backbone pipeline only Cross-iteration pipelining Device 0 Device 1 Device 0 Device 1 Saved time Time elapses Figure 9. Cross-iteration pipelining of a diffusion model. Numbers indicate the micro-batch index of a pipeline stage. ing, filling the bubble time of the backbone pipeline training in one iteration with the non-trainable part computation of the next iteration, as shown in Fig. 9. Non-trainable layers can be computed in a data parallel manner without pipelining, following their inter-layer data dependencies. At the end of a training iteration, the output of the non-trainable part is collected and divided into micro-batches according to the pipeline training configurations of the backbone(s). In the next iteration, these intermediate results are loaded onto the correct devices and fed as input to the pipeline training of the backbone(s). In addition, we only run the non-trainable part in the first iteration to enable such overlapping. The cross-iteration pipeline is mathematically equivalent to data parallel and synchronous pipeline training. 4 BACKBONE PARTITIONING In this section, we present a unified dynamic programming approach to optimize partitioning and device assignment of the trainable part in diffusion models. 4.1 Single backbone We first consider a diffusion model with a single backbone. The high-level idea is to analyze the critical path of FIFO1F1B pipelining of the backbone and derive an upper bound on its execution time, to identify the optimal partitioning scheme that minimizes the execution time. We use the notations in Table 4. 
FIFO pipeline execution can be divided into 3 phases, i.e., warm-up, stable and cool-down, as shown in Fig. 2. It launches micro-batch processing one by one in the warmup phase and waits for all micro-batches to be completed in the cool-down phase. When we enlarge the last stage\u2019s execution time to the longest among all stages, enforcing it on the critical path, the warm-up phase contains forward \fDiffusionPipe: Training Large Diffusion Models with Efficient Pipelines Table 4. Notations Symbol Definition L Number of layers in backbone model B, B, b Training batch size, micro-batch size and number of samples in a partial-batch S, s Set of model stages and model stage Pf l (B), Pb l (B) Forward and backward computation time of layer l given batch size B Cf l,l+1(B), Cb l+1,l(B) Data size of communication in forward and backward pass between layers l and l + 1 given batch size B Rx, Lx Bandwidth and latency of communication type x (e.g., allreduce (ar), point-to-point (p2p)) Gl(B) Gradient size of layer l given batch size B Ol(B) Output size of layer l given batch size B TS(s) Synchronization time of stage s TC(s) Compensation time of stage s T0 Maximum micro-batch execution time per stage or inter-stage communication time T S\u2212C 0 Maximum gap between synchronization time and compensation time per stage TB Length of a pipeline bubble (idle time) 0 1 C0,1 0 1 C0,1 Cf Stage 0 Stage 1 Communication 0 1 0 Cf C0,1 C0,1 Warm-up Self-conditioning forward 1 0 C1,0 1 0 1 C1,0 F0 F1 Compensation time (Tc) Cool-down Stable FIFO-1F1B pipelining Figure 10. FIFO-1F1B scheduling of pipelining a backbone model with 2 stages, 2 micro-batches and self-conditioning. The same color and number setting with Fig. 9. Ci,j is communication from stage i to stage j. Cf feeds back the output of the backbone to stage 0. Fi refers to the parameter synchronization of stage i, Tc is the compensation time of stage 1. computation on S \u22121 model stages (aka forward stages). Similarly, the cool-down phase includes backward computation on S \u22121 model stages (aka backward stages). The stable phase of the critical path contains M forward stages and M backward stages, where M is the number of microbatches. Therefore, there are a total of 2(M+S\u22121) forward and backward stages on the critical path of the FIFO-1F1B pipeline schedule in total. Considering the intermediate data communication between model stages in pipeline training, we add S \u22121 inter-stage communications in the forward and backward passes, respectively, which then becomes 2(M + S \u22121) + 2(S \u22121) forward and backward stages, together with communications on the critical path. We use T0 to denote the maximum of the time to run the forward plus backward computation of a micro-batch on a model stage, and the communication time between two stages, among all model stages. Then we have an upper bound T0(M + 2S \u22122) on the execution time of the critical path. We further consider the parameter synchronization time among the micro-batches and add T S\u2212C 0 , i.e., the maximum gap between TS(s) and TC(s) to the pipeline training time of the backbone for all stages s, where TS(s) indicates the synchronization time of stage s and TC(s) is used to compensate the overlapping time of parameter synchronization of stage s and computation of later stages. Fig. 10 gives an illustration. 
Putting the above together, an upper bound on the FIFO-1F1B pipeline execution time is: T max = T0(M + 2S \u22122) + T S\u2212C 0 (1) We design a dynamic programming approach to identify the backbone partition and device assignment by minimizing T max. We order the D devices in a pipeline parallel group into a chain according to their rank. Let W(L, S, r, D) denote T0 when partitioning the first L consecutive layers of the backbone into S stages, with these stages placed on devices 1 to D and the last stage s replicated on the last r devices (of the 1 to D device chain). Additionally, let Y (L, S, r, D) denote T S\u2212C 0 under the same setting. The optimal partition of the backbone into S stages with the device placement of each stage can be computed by: min 1\u2264r\u2264D{(M + 2S \u22122)W(L, S, r, D) + Y (L, S, r, D)} (2) W(L, S, r, D) can be decomposed into sub-problems that further partition the first l model layers into S \u22121 stages on the remaining D \u2212r devices, with the last stage replicated on r\u2032 devices2. Then, W(L, S, r, D) can be computed by the maximum of W(l, S \u22121, r\u2032, D \u2212r) and the estimation of T0 by the last stage s (i.e., T0(s)), and Y (L, S, r, D) can be computed in the same way, following Eqn. (3) to (8). Then we add the range in Eqn. (9) when optimizing Eqn. (2). T0(s) = max{ X l<i\u2264L Pf i (B r ) + X l<i\u2264L Pb i(B r ), Cf l,l+1( B r ) + Cb l+1,l( B r ) Rp2p + 2Lp2p} (3) TS(s) = X l<i\u2264L Gi(B r )/Rar + Lar (4) TC(s) = X l<i\u2264L Pb i(B r ) (5) T S\u2212C 0 (s) =TS(s) \u2212TC(s) (6) W(L, S, r, D) = max{W(l, S \u22121, r\u2032, D \u2212r), T0(s)} (7) Y (L, S, r, D) = max{Y (l, S \u22121, r\u2032, D \u2212r), T S\u2212C 0 (s)} (8) \u2200l, r\u2032, 1 \u2264l \u2264L \u22121, 1 \u2264r\u2032 \u2264D \u2212r (9) Here B is the micro-batch size, P{f/b} i ( B r ) is the forward / backward computation time of layer i given local batch 2Though we support different model stages using different data parallel degrees (e.g., r \u0338= r\u2032), we find that such cases are rare. They can result in strange bubble filling schemes (\u00a75) and require complex implementations. In evaluation (\u00a76), we force all stages to have the same data parallel degree. \fDiffusionPipe: Training Large Diffusion Models with Efficient Pipelines size B r . Cf l,l+1( B r ) and Cb l+1,l( B r ) are data sizes in forward and backward pass between layer l and l + 1 given local batch size B r . Gi( B r ) is the gradient size of layer i given local batch size B r . Rx and Lx are bandwidth and latency of communication type x, while ar indicates all-reduce used in synchronization and p2p indicates point-to-point communication between model stages. Note that a sub-problem in Eqn. (8) does not know the partition scheme of all subsequent layers (with indices greater than l) for computing the compensation time TC. Instead, we use a lower bound of TC in Eqn. (5), i.e., the sum of the backward computation time of all these layers on r devices. 4.2 Multiple backbones For a cascaded diffusion model with multiple backbones, we advocate bidirectional pipelining to train the backbones on the same set of devices (instead of using a separate set of devices to train each backbone), in order to utilize the devices more efficiently. In particular, we leverage bidirectional pipelining (Li & Hoefler, 2021) to train multiple backbones, with each backbone pipelining in different direction. 
4.2 Multiple backbones

For a cascaded diffusion model with multiple backbones, we advocate bidirectional pipelining to train the backbones on the same set of devices (instead of using a separate set of devices to train each backbone), in order to utilize the devices more efficiently. In particular, we leverage bidirectional pipelining (Li & Hoefler, 2021) to train multiple backbones, with each backbone pipelined in a different direction. Here, we consider pipelining from low-rank devices to high-rank devices as the down direction and vice versa; the corresponding pipelines are the down and up pipelines. Consider 2 backbones in a CDM. As shown in Fig. 3, the duration of the stable phase of the critical path in bidirectional pipelining differs from unidirectional pipelining, while the durations of the warm-up and cool-down phases are not affected. We calculate the number of paired forward and backward stages between the down and up pipelines (M_CDM), and derive an upper bound on the bidirectional pipeline execution time for training two backbones:

T_{0,CDM} = max{ T_{0,down}, T_{0,up} }    (10)
T^{S−C}_{0,CDM} = max{ T^{S−C}_{0,down}, T^{S−C}_{0,up} }    (11)
T^max_CDM = (M_CDM + 2S − 2) T_{0,CDM} + T^{S−C}_{0,CDM}    (12)

Here T_{0,down/up} is the maximum, among all model stages in the down or up pipeline, of the time to perform the forward and backward computation of a micro-batch and the communication time between two stages. The sub-problem in the dynamic programming approach for bidirectional pipelining must decide the partitioning and placement of model stages for both backbones. Let W(L_d, L_u, S, r, D) denote T_{0,CDM} when partitioning the last L_d and the first L_u consecutive layers of the down- and up-pipelined backbones, respectively, into S stages, placing them on D devices and replicating the last stage s_d and the first stage s_u of the two backbones on the last r devices of the 1-to-D device chain; Y(L_d, L_u, S, r, D) is defined similarly. The optimal partitioning of the two backbones can be computed by:

min_{1≤r≤D} { (M_CDM + 2S − 2) W(L_d, L_u, S, r, D) + Y(L_d, L_u, S, r, D) }    (13)

Eqns. (14) and (15) define W(L_d, L_u, S, r, D) and Y(L_d, L_u, S, r, D), and Eqn. (16) presents the additional optimization range. Communication in the bidirectional pipelines may compete for resources, so we enlarge the communication time in Eqn. (3) by a factor of 2 (as there are two pipelining directions). The other equations are the same as in §4.1.

W(L_d, L_u, S, r, D) = max{ W(l_d, l_u, S − 1, r′, D − r), T_0(s_d), T_0(s_u) }    (14)
Y(L_d, L_u, S, r, D) = max{ Y(l_d, l_u, S − 1, r′, D − r), T^{S−C}_0(s_d), T^{S−C}_0(s_u) }    (15)
∀ l_{d/u}, r′: 1 ≤ l_{d/u} ≤ L_{d/u} − 1, 1 ≤ r′ ≤ D − r    (16)

For a diffusion model with more than two backbones, we can divide the backbones into two groups, one to be pipelined in each direction. We then combine stages of the backbones in the same pipeline direction to form a larger model stage and apply our design for bidirectional pipelining accordingly.

4.3 Backbone(s) with self-conditioning

DiffusionPipe performs self-conditioning on the same device to eliminate unnecessary parameter storage and updating, as shown in Fig. 10. There is an additional forward pass at each model stage, and Eqn. (3) becomes:

T_{0,SC}(s) = max{ 2 Σ_{l<i≤L} P^f_i(B/r) + Σ_{l<i≤L} P^b_i(B/r), (2 C^f_{l,l+1}(B/r) + C^b_{l+1,l}(B/r)) / R_p2p + 3 L_p2p }    (17)

In addition, the communication time for sending the output from the last stage to the first stage (C_f in Fig. 10) should be considered. We use a point-to-point transmission time as an upper bound on this feedback time: T_F = O_L(B/r) / R_p2p + L_p2p, where O_L(B/r) is the output size of the last layer L at local batch size B/r. The upper bound on the pipeline execution time with self-conditioning is:

T^max_SC = (M + 2S − 2) T_{0,SC} + T^{S−C}_0 + T_F    (18)

The dynamic programming formulation remains the same as in §4.1.
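A small sketch (Python; illustrative names) evaluating the bidirectional and self-conditioning bounds of Eqns. (12) and (18):

def cdm_upper_bound(M_cdm, S, t0_down, t0_up, tsc_down, tsc_up):
    """Eqns. (10)-(12): bound for bidirectional pipelining of two backbones."""
    t0 = max(t0_down, t0_up)        # Eqn. (10)
    tsc = max(tsc_down, tsc_up)     # Eqn. (11)
    return (M_cdm + 2 * S - 2) * t0 + tsc

def self_cond_upper_bound(M, S, t0_sc, tsc, out_size, r_p2p, l_p2p):
    """Eqn. (18): bound with self-conditioning, including the feedback time T_F."""
    t_feedback = out_size / r_p2p + l_p2p
    return (M + 2 * S - 2) * t0_sc + tsc + t_feedback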
Since self-conditioning is usually activated randomly during training with a certain probability p (0.5 in (Chen et al., 2022)), the formulation optimizes an expectation over T^max_SC and T^max. In case self-conditioning is applied to CDMs, we can readily extend the formulation in §4.2 by counting the additional number of forward stages on the critical path.

5 PIPELINE BUBBLE FILLING

In DiffusionPipe, we divide the pipeline idle time along the timeline and define a pipeline bubble using a tuple (start time, end time, idle devices), so that a bubble contains the same number of idle devices in its time span. For example, in Fig. 2, the first pipeline bubble is in the first time slot with idle devices 1 to 3. Pipeline bubble filling is always performed under the cross-iteration style of pipelining (§3.2), regardless of the number of backbones and whether self-conditioning is applied. We also define a partial-batch layer using a tuple (component index, layer index, number of samples in the partial batch).

Figure 11. Full-batch bubble filling candidates and corresponding candidates with a partial-batch layer. Numbers indicate the index of the non-trainable layer; 1′ and 2′ denote partial-batch layers; * marks the candidate with the longest execution time.

Non-trainable components in a diffusion model may have inter-dependencies (e.g., ControlNet (Zhang & Agrawala, 2023)), and layers within each component are linearly dependent. We schedule the execution of non-trainable components in pipeline bubbles following a topological order of the components according to their dependencies. In particular, we fill the pipeline bubbles sequentially in their chronological order.³ To fill a pipeline bubble, we consider all components that are ready at that time, i.e., whose dependencies are resolved; whenever a component becomes ready, we add it to the set of ready components.

An efficient algorithm fills a pipeline bubble with ready components (Alg. 1). Its input mainly includes the bubble time T_B, the number of idle devices d, and a list u containing the index of the starting layer of each currently ready component (layers of a component can be executed across multiple bubbles). It first finds candidates K containing full-batch layers of ready components to fill the current pipeline bubble (Alg. 2), whose execution completes within the bubble time. Then it adds to each candidate at most one layer from a component, to be executed on a partial batch in the remaining bubble time, and finally it produces the optimal bubble-filling scheme with the longest execution time not exceeding the bubble time, as shown in Fig. 11. Note that the component layers assigned to the bubble are executed in a data parallel manner at local batch size B/d.

³ For bubble-filling efficiency, we only identify pipeline bubbles longer than 10 ms, which is empirically greater than the cost of setting up inputs and outputs for pipeline bubble filling. The chronological order of pipeline bubbles is obtained by analyzing the pipeline schedule, which is simulated using the profiled results from step 1 of Fig. 7. All proposed algorithms work offline only.

Algorithm 1: Filling One Pipeline Bubble
Input: number of ready non-trainable components n, training batch size B, pipeline bubble time T_B, number of idle devices d, starting-layer indices of the components u (list of length n), layer counts of the components L (list of length n)
Output: optimal bubble filling candidate k*
1: K0, K ← emptyList(), FFC(n, B, T_B, d, u, L, 0)
2: for k in K, h in 0, ..., n−1 do
3:   b ← maximum of getValidNumSamples(B, d), s.t.⁴
4:     T_B ≥ Σ_{i∈[n], j∈[k_i]} P^f_{i,u_i+j}(B/d) + P^f_{h,u_h+k_h}(b/d)   // the bubble time must cover the execution time of candidate k plus a partial-batch layer
5:   K0.append((k, (h, u_h + k_h, b)))   // add candidate k enhanced with the partial-batch layer (h, u_h + k_h, b)
6: end for
7: return the candidate in K0 with the longest execution time

Algorithm 2: FFC, Full-batch Layer Bubble Filling Candidates
Input: n, B, T_B, d, u, L, current component index i
Output: bubble filling candidates K
1: t, k0, K ← 0, 0, emptyList()
2: while t + P^f_{i,u_i+k0}(B/d) ≤ T_B and u_i + k0 < L_i do
3:   t ← t + P^f_{i,u_i+k0}(B/d)   // cumulative execution time
4:   k0 ← k0 + 1
5: end while
6: if i = n − 1 then
7:   return [[k0]]   // take all k0 layers, as this is the last component
8: else
9:   for k in k0, ..., 0 do
10:    T′_B ← T_B − Σ_{h∈[k]} P^f_{i,u_i+h}(B/d)   // remaining bubble time after adding k layers to the candidate
11:    K′ ← FFC(n, B, T′_B, d, u, L, i + 1)
12:    K.extend([concat([k], k′) for k′ in K′])
13:  end for
14:  return K
15: end if

⁴ Here [n] := {0, 1, ..., n − 1} and [k_i] := {0, 1, ..., k_i − 1}.

Input of Alg. 2 includes the input of Alg. 1 plus the component index i that it focuses on. It finds bubble-filling candidates containing full-batch layers in a recursive manner: assuming layers from components with indices smaller than i are already considered, it adds layers from component i to the candidate. Alg. 2 first computes, in lines 2 to 5, at most how many layers can be added, where P^f_{i,u_i+k}(B/d) is the computation time of layer u_i + k of component i given local batch size B/d. Then, in lines 9 to 13, it adds different numbers of layers to the candidate (with total execution time not exceeding T_B) and recursively calls itself to add layers from the next component i + 1. Alg. 2 returns a list K of bubble-filling candidates, where each candidate is a list of n elements (n is the number of ready components), with each element containing the indices of the layers of that component to be executed in the bubble. Then, for each bubble-filling candidate k in K, an additional partial-batch layer is added: we identify the layer immediately following the scheduled full-batch layers in the candidate, together with a partial batch to process, such that its execution occupies the longest possible portion of the remaining bubble time (lines 3 to 4 of Alg. 1).

Figure 12. Input split and output concatenation of a partial-batch layer's processing across pipeline bubbles. Partial-batch layer 2 of a non-trainable component is scheduled in 3 consecutive pipeline bubbles.
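For reference, a runnable Python transcription of Alg. 2 (a sketch: the profiled layer costs are passed in as a plain table already measured at local batch size B/d, standing in for DiffusionPipe's profiler):

def ffc(n, T_B, u, L, cost, i=0):
    """Enumerate full-batch bubble-filling candidates (Alg. 2).

    cost[i][j] -- forward time of layer j of component i at local batch size B/d
    u[i]       -- first not-yet-scheduled layer of component i
    L[i]       -- number of layers of component i
    Each returned candidate lists, per component, how many of its next
    layers are placed in the bubble.
    """
    # Lines 2-5: the largest number of layers of component i that fit.
    t, k0 = 0.0, 0
    while u[i] + k0 < L[i] and t + cost[i][u[i] + k0] <= T_B:
        t += cost[i][u[i] + k0]
        k0 += 1
    if i == n - 1:
        return [[k0]]                      # line 7: last component
    candidates = []
    for k in range(k0, -1, -1):            # lines 9-13
        t_rest = T_B - sum(cost[i][u[i] + h] for h in range(k))
        for tail in ffc(n, t_rest, u, L, cost, i + 1):
            candidates.append([k] + tail)
    return candidates

# Two ready components with 3 layers each (made-up per-layer times):
print(ffc(2, T_B=4.0, u=[0, 0], L=[3, 3],
          cost=[[2.0, 2.0, 2.0], [1.0, 1.0, 1.0]]))
# -> [[2, 0], [1, 2], [0, 3]]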
Among these enhanced bubble-filling candidates, each carrying one partial-batch layer, we then choose the one achieving the longest execution time, so as to maximally utilize the idle time. To decide the partial batch for the extra layer to process in a pipeline bubble (function getValidNumSamples in line 3 of Alg. 1), we follow two principles: (1) the local batch size b/d should not be too small, as otherwise the benefit of inserting a partial-batch layer does not compensate for the overhead of handling its input and output, as illustrated in Fig. 12; (2) b/d should be a regular value, to avoid potential kernel performance degradation at unusual batch sizes. We empirically use 4, 8, 12, 16, 24, 32, 48, 64 and 96 as the local batch size candidates. If the pipeline bubbles cannot completely accommodate the non-trainable part, the remaining part is executed after pipelining completes. Furthermore, after introducing a partial-batch layer (h, u_h + k_h, b) into a pipeline bubble (line 5 of Alg. 1), layer u_h + k_h of component h becomes the first ready layer of that component to be considered when filling the following pipeline bubbles, and it is treated as a full-batch layer on the remaining batch. In this way, this layer can be scheduled to process all or part of the remaining batch in a subsequent pipeline bubble; Fig. 12 shows an example of scheduling part of the remaining batch in a subsequent (i.e., the second) pipeline bubble.
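A plausible sketch of the batch-size selection helper (the `remaining` argument is our addition for the remaining-batch case described above; the paper only specifies the candidate local sizes):

VALID_LOCAL_SIZES = (4, 8, 12, 16, 24, 32, 48, 64, 96)  # from the paper

def get_valid_num_samples(B, d, remaining=None):
    """Candidate partial-batch sizes b for line 3 of Alg. 1 (a sketch).

    Keeps the local batch size b/d at a regular value (principle (2)) and
    never exceeds the training batch B or the still-unprocessed samples.
    Alg. 1 then takes the largest b whose layer still fits in the bubble.
    """
    cap = B if remaining is None else min(B, remaining)
    return [local * d for local in VALID_LOCAL_SIZES if local * d <= cap]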
6 EVALUATION

We build DiffusionPipe on PyTorch 2.0.1 and CUDA 11.7 with 20k LoC in Python, and integrate it with DeepSpeed 0.8.3 to support pipeline and data parallel training. Communication operations are implemented using PyTorch's distributed communication package and NCCL 2.17.1. Though DiffusionPipe is integrated into DeepSpeed, it is easy to migrate to other vendor frameworks: we only need to switch to the new launching method and replace the communication and optimizer implementations with the corresponding ones.

Test-bed. We conduct our experiments on a cluster of 8 Amazon EC2 p4de.24xlarge machines, each containing 8 NVIDIA A100-80GB GPUs and 96 vCPU cores. The inter-node connection (EFA) bandwidth is 400 Gbps and the intra-node connection (NVSwitch) bandwidth is 600 GBps.

Models. We train the following models: Stable Diffusion (Rombach et al., 2022) v2.1, ControlNet (Zhang & Agrawala, 2023) v1.0, CDM-LSUN and CDM-ImageNet (Ho et al., 2022). For CDM-LSUN, we train its 2 backbones using bidirectional pipelining. For CDM-ImageNet, we only train its second and third backbones, because training all of them would exceed GPU memory. The backbones of the same CDM are trained under the same batch size. The input configurations of all models (Table 5) are the same as in their original papers.

Table 5. Diffusion models and training configurations
- Stable Diffusion v2.1 and ControlNet v1.0: input shape 512x512, self-conditioning enabled
- CDM-LSUN and CDM-ImageNet: input shapes 64x64 and 128x128 (2 image inputs), self-conditioning not enabled

Baselines. We run DeepSpeed (Rasley et al., 2020) with vanilla distributed data parallelism (DDP) and ZeRO-3 (Rajbhandari et al., 2021) as baselines for data parallel training. We use GPipe (Huang et al., 2019) and SPP (Luo et al., 2022) as baselines for pipeline parallelism, which perform backbone-only pipelining as in Fig. 9. For GPipe, which partitions a model into stages with an equal number of layers, we evaluate 2 pipeline stages and 4 micro-batches. For SPP, which solves a dynamic programming problem to optimize model partitioning, we perform the same hyper-parameter search as in DiffusionPipe. When self-conditioning is enabled, we also run the extra-forward part for the pipeline parallel baselines in the way shown in Fig. 10; bubble filling is not performed for the pipeline parallel baselines. For cascaded diffusion models, data parallel training is performed in two ways: (1) training multiple backbones sequentially using all devices, i.e., DeepSpeed(-ZeRO-3)-S; (2) training multiple backbones in parallel on evenly partitioned sets of devices, i.e., DeepSpeed(-ZeRO-3)-P, which is the default strategy in many CDM works. Neither SPP nor GPipe applies to CDMs, because they do not support pipelining of multiple models.

Metrics. We present the training throughput as the number of samples processed per second. The throughput of DeepSpeed(-ZeRO-3)-S is computed as (total batch size of all backbones) / (sum of the iteration times of all backbones), and that of DeepSpeed(-ZeRO-3)-P as (sum of the batch sizes) / (iteration time of all backbones). We also present the pipeline bubble ratio of DiffusionPipe and the pipelined baselines, computed as

(Σ_{b∈pipeline bubbles} T_b × d_b) / (iteration time × total number of devices),

where T_b and d_b are the duration and the number of idle devices of bubble b.

Figure 13. Training throughput (samples/second) of (a) Stable Diffusion v2.1, (b) ControlNet v1.0, (c) CDM-LSUN and (d) CDM-ImageNet on 8 to 64 GPUs at varying batch sizes (vanilla and self-conditioning cases), comparing DiffusionPipe with SPP, GPipe and DeepSpeed(-ZeRO-3)(-S/-P); out-of-memory configurations are marked.

6.1 Training throughput

In Fig. 13 we present the throughput of training the diffusion models at different cluster scales and batch sizes. For single-backbone models (Fig. 13a and 13b), DiffusionPipe outperforms the other pipeline systems both with and without self-conditioning, as it efficiently fills pipeline bubbles with non-trainable layer execution. When training on a single machine, device utilization determines performance. DiffusionPipe can outperform the data parallel baselines because both the trainable model stages and the non-trainable part occupy only part of the cluster and process the input batch with a larger local batch size, thus achieving better device utilization. At batch size 256, DiffusionPipe achieves 1.44x and 1.16x speedups over GPipe and DeepSpeed, respectively, when training Stable Diffusion v2.1. When training on multiple machines, synchronization overhead has a more significant impact on training throughput. DiffusionPipe outperforms the data parallel baselines as it mitigates this overhead in two ways: (1) each device hosts fewer parameters in pipeline training, so less synchronization communication is required; (2) synchronization can be overlapped with the non-trainable part, further reducing its impact on throughput. At batch size 2048 on 64 GPUs, DiffusionPipe achieves 1.41x and 1.28x speedups over GPipe and DeepSpeed when training ControlNet v1.0.

For the cascaded diffusion models (Fig. 13c and Fig. 13d), DiffusionPipe's throughput is comparable to DeepSpeed-P for two reasons: (1) in both CDM models there is little non-trainable part with which to fill bubbles, so no speedup can be gained from it; (2) the backbone sizes in both CDMs are relatively close to each other, and DeepSpeed-S already achieves balanced training iteration times across backbones. However, DiffusionPipe can still reach a larger training batch size than DeepSpeed-P, because the activation memory of micro-batches does not persist throughout the entire backward pass.

6.2 Pipeline bubble ratio

In Fig. 14, we observe that DiffusionPipe reduces the pipeline bubble ratio to less than 5% for both Stable Diffusion v2.1 and ControlNet v1.0, dramatically lower than the other pipeline training baselines. The unfilled pipeline bubble time is explained by: (1) the difference between the actual execution time and the profiled execution time (used to drive the bubble-filling algorithm); (2) the non-continuous execution times of non-trainable layers, which make it unlikely to fill a bubble perfectly.

Figure 14. Pipeline bubble ratio on 8 GPUs (Stable Diffusion v2.1 and ControlNet v1.0; DiffusionPipe vs. GPipe and SPP).
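The bubble-ratio metric above is straightforward to compute; a minimal sketch:

def pipeline_bubble_ratio(bubbles, iteration_time, total_num_devices):
    """Fraction of device time left idle: sum(T_b * d_b) / (iter_time * #devices).

    bubbles -- iterable of (duration T_b, idle-device count d_b) pairs
    """
    idle_device_time = sum(t_b * d_b for t_b, d_b in bubbles)
    return idle_device_time / (iteration_time * total_num_devices)

# Two 5 ms bubbles, each idling 3 devices, in a 200 ms iteration on 8 GPUs:
print(pipeline_bubble_ratio([(5e-3, 3), (5e-3, 3)], 0.2, 8))  # -> 0.01875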
6.3 Ablation study

In Fig. 15, we evaluate the throughput of DiffusionPipe when the partial-batch layer design is disabled and when the pipeline bubble filling design is completely disabled, respectively. We observe that disabling the partial-batch layer significantly degrades throughput, and disabling bubble filling degrades it even more (by 10.9% and 17.6%, respectively, for ControlNet v1.0 at batch size 256). This demonstrates that pipeline bubble filling and the partial-batch layer design effectively improve training efficiency. We also observe that at batch size 384, disabling the partial-batch layer achieves almost the same throughput as no bubble filling at all, indicating that the extra-long layer in Fig. 5 blocks almost all layers during bubble filling, which validates our partial-batch design.

Figure 15. Ablation study on 8 GPUs (samples/second): DiffusionPipe vs. partial-batch layer disabled vs. bubble filling disabled, for Stable Diffusion v2.1 and ControlNet v1.0.

6.4 Pre-processing overhead

Pre-processing, including profiling and running the model partitioning and pipeline bubble filling algorithms, is performed once and completes within a few minutes, which is acceptable given that training usually takes much longer. Profiling is executed in parallel on all GPUs, and its overhead is determined by the number of GPUs; a typical profiling time for Stable Diffusion v2.1 on 2 AWS EC2 p4de.24xlarge machines at batch size 512 is 55 seconds. The model partitioning algorithm is executed in parallel on all CPUs of the host machine, and its overhead is determined by the number of CPUs in the host, the number of trainable components and the number of layers in them; for Stable Diffusion v2.1 and ControlNet v1.0 in the same setting, the overhead is about 0.5 seconds. The pipeline bubble filling algorithm is executed on only 1 CPU, and its overhead is determined by the number of pipeline bubbles and the number of non-trainable components; for the same models in the same setting, the overhead is less than 1 second.

7 CONCLUSION

This paper presents DiffusionPipe, a system that automatically optimizes pipeline training for large diffusion models. Our unified partitioning algorithm for the trainable part optimizes partitioning schemes across multiple training scenarios of diffusion models. We also propose to fill pipeline bubbles with the non-trainable part of diffusion models, which achieves higher training throughput than pipelining only the backbone model or training in data parallel. Experimental results demonstrate that DiffusionPipe achieves speedups of up to 1.41x over pipeline baselines and 1.28x over data parallel baselines, accomplished by reducing the pipeline bubble to less than 5% of the training iteration time. Moreover, DiffusionPipe enables larger training batch sizes than the data parallel baselines. Our design of filling pipeline bubbles with non-trainable parts can extend to further applications, e.g., training or fine-tuning diffusion models with transformer backbones (Peebles & Xie, 2022b; Ma et al., 2024; Chen et al., 2023; 2024), as well as multimodal models with frozen encoder components (Li et al., 2023a;b; Yu et al., 2023).

8 ACKNOWLEDGEMENT

We would like to thank the Program Chairs and anonymous reviewers for their valuable feedback. This work was supported by an Amazon Research Award (ARA) on AWS AI and grants from Hong Kong RGC under the contracts HKU 17208920, 17204423 and C7004-22G (CRF)."
| 16 | + }
title_10K/test_title_short_2405.01266v1.json
ADDED
@@ -0,0 +1,17 @@
| 1 | + {
| 2 | + "url": "http://arxiv.org/abs/2405.01266v1",
| 3 | + "title": "MFTraj: Map-Free, Behavior-Driven Trajectory Prediction for Autonomous Driving",
| 4 | + "abstract": "This paper introduces a trajectory prediction model tailored for autonomous\ndriving, focusing on capturing complex interactions in dynamic traffic\nscenarios without reliance on high-definition maps. The model, termed MFTraj,\nharnesses historical trajectory data combined with a novel dynamic geometric\ngraph-based behavior-aware module. At its core, an adaptive structure-aware\ninteractive graph convolutional network captures both positional and behavioral\nfeatures of road users, preserving spatial-temporal intricacies. Enhanced by a\nlinear attention mechanism, the model achieves computational efficiency and\nreduced parameter overhead. Evaluations on the Argoverse, NGSIM, HighD, and\nMoCAD datasets underscore MFTraj's robustness and adaptability, outperforming\nnumerous benchmarks even in data-challenged scenarios without the need for\nadditional information such as HD maps or vectorized maps. Importantly, it\nmaintains competitive performance even in scenarios with substantial missing\ndata, on par with most existing state-of-the-art models. The results and\nmethodology suggest a significant advancement in autonomous driving trajectory\nprediction, paving the way for safer and more efficient autonomous systems.",
| 5 | + "authors": "Haicheng Liao, Zhenning Li, Chengyue Wang, Huanming Shen, Bonan Wang, Dongping Liao, Guofa Li, Chengzhong Xu",
| 6 | + "published": "2024-05-02",
| 7 | + "updated": "2024-05-02",
| 8 | + "primary_cat": "cs.RO",
| 9 | + "cats": [
| 10 | + "cs.RO",
| 11 | + "cs.AI"
| 12 | + ],
| 13 | + "label": "Original Paper",
| 14 | + "paper_cat": "Temporal AND Graph",
| 15 | + "gt": "MFTraj: Map-Free, Behavior-Driven Trajectory Prediction for Autonomous Driving",
| 16 | +
"main_content": "Introduction The integration of autonomous vehicles (AVs) with humandriven vehicles and pedestrians necessitates advanced trajectory prediction models. Central to these models is their ability to predict the future trajectories of various road users, leveraging historical data. Despite significant advancements, a pivotal challenge persists: modeling the often unpredictable driving behaviors of road users. These behaviors, shaped by a blend of traffic dynamics, road layouts, and individual cognitive inclinations, offer a unique window into the real-time decision-making processes of humans in complex traffic settings [Schwarting et al., \u2217Authors contributed equally; \u2020Corresponding author. 2019; Li et al., 2023]. Our research has illuminated the pivotal role of understanding human behavioral patterns in trajectory predictions. Recognizing and predicting human driving behavior is not merely about tracing a vehicle\u2019s path; it\u2019s about understanding the cognitive processes that dictate those paths. By understanding behaviors, AVs can anticipate sudden changes in human-driven vehicles or pedestrian movements, leading to safer co-navigation. Furthermore, behaviorfocused predictions can aid in scenarios where traditional data might be ambiguous or incomplete, relying on human behavioral patterns to fill in the gaps. Through the integration of decision-making theories, cognitive psychology, and traffic behavior studies [Yin et al., 2021], trajectory prediction models can be enriched, fostering a harmonious coexistence of AVs and human-driven entities on the road. High Definition (HD) maps, conventionally considered pivotal for trajectory prediction, pose intrinsic challenges. Their creation is resource-intensive, and in the rapidly changing milieu of urban environments, they can quickly become obsolete [Gao et al., 2020; Ren et al., 2024]. This has given rise to map-free models, a paradigm shift that operates independently of HD map data. However, while these models adeptly handle dynamic environments, they may lack the granularity provided by comprehensive road network data. This gap is aptly addressed by the advent of deep learning techniques, notably Graph Neural Networks (GNNs) [Liang et al., 2020; Gao et al., 2020]. GNNs, adept at assimilating extensive data from road users, offer nuanced insights into their interactions and the overarching socio-cognitive context, thereby compensating for the lack of detailed HD maps. Our contributions are as follows: 1. An advanced map-free architecture for trajectory prediction that obviates the need for HD maps, resulting in significant computational savings. 2. A novel dynamic geometric graph that captures the essence of continuous driving behavior, circumventing the limitations of manual labeling. We have integrated metrics and behavioral criteria, drawing from traffic psychology, cognitive neuroscience, and decision-making frameworks, to craft a model that offers more than mere predictions\u2014it elucidates. arXiv:2405.01266v1 [cs.RO] 2 May 2024 \f3. Benchmark assessments underscore MFTraj\u2019s superiority. Demonstrating a commendable performance elevation of nearly 5.0% on the Argoverse, NGSIM, HighD, and MoCAD datasets, its robustness is further accentuated by its consistent performance even with a data shortfall of 25%62.5%, underscoring its adaptability and profound understanding of diverse traffic dynamics. 
2 Related Work

Recent years have seen an explosion of research in trajectory prediction for autonomous driving (AD), thanks to the burgeoning field of deep learning. These cutting-edge approaches [Liao et al., 2024a; Messaoud et al., 2021; Tian et al., 2024; Liao et al., 2024f] have demonstrated superior performance in complex traffic scenarios. However, they often encounter challenges in adequately representing spatial relationships, such as graphic inputs of the scene. To address this, HD maps, rich in scene and semantic information, have attracted increasing research attention. Considering that Convolutional Neural Networks (CNNs) excel at extracting spatial features from inputs such as vectorized maps or raster images, several studies [Zhao et al., 2021; Gilles et al., 2021; Khandelwal et al., 2020] have merged sequential networks with CNNs. This hybrid approach effectively captures both temporal and spatial features from HD maps, providing enriched contextual information for motion prediction tasks. Recent research has explored Graph Neural Networks (GNNs) [Liang et al., 2020; Zeng et al., 2021; Mohamed et al., 2020; Liao et al., 2024c], such as Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs), Transformers [Zhou et al., 2022; Liao et al., 2024d; Chen et al., 2022a; Ngiam et al., 2022], and generative models such as Generative Adversarial Networks (GANs) [Zhou et al., 2023] and Variational Auto-Encoders (VAEs) [Walters et al., 2021; Liao et al., 2024e] for direct encoding of HD maps. For example, VectorNet [Gao et al., 2020] simplifies maps by extracting key points from lane splines and encoding them using GNNs. Moreover, LaneGCN [Liang et al., 2020] and TPCN [Ye et al., 2021] build lane graphs using central line segments, employing GCNs to capture dynamic interaction. In addition, HiVT [Zhou et al., 2022] and SSL-Lanes [Bhattacharyya et al., 2023] represent map elements by relative positions, improving the transformer model for time-series trajectory data. Despite their effectiveness, the limited availability of HD maps and the extensive resources needed for their creation and maintenance impede the widespread adoption of autonomous driving, particularly in areas lacking current HD map coverage. In response to these challenges, this study introduces a map-free model that utilizes generative models and a VRNN [Chung et al., 2015] to account for the variable nature of traffic scenarios. We propose a novel adaptive GCN to model the complexity of real-time interactions in traffic scenes. To streamline model complexity, we apply the Linformer framework [Wang et al., 2020] for a balance between computational efficiency and prediction accuracy in AD applications.

3 Methodologies

3.1 Inputs and Outputs

This study focuses on predicting the trajectory of the target vehicle in interactive driving scenarios, considering all vehicles within the AV's (termed the target vehicle) sensing range. At time t, the ego vehicle anticipates the target vehicle's trajectory for the upcoming t_f steps. Our model, drawing from historical data X, considers past trajectories of both the target vehicle (indexed by 0) and its surrounding agents (indexed from 1 to n) over a predefined horizon t_h. Formally,

X = { X_0^{t−t_h:t}; X_i^{t−t_h:t} ∀i ∈ [1, n] }    (1)

where X_0^{t−t_h:t} = {x_0^{t−t_h}, x_0^{t−t_h+1}, ..., x_0^t} and X_i^{t−t_h:t} = {x_i^{t−t_h}, x_i^{t−t_h+1}, ..., x_i^t} represent the historical trajectories of the target vehicle and of the surrounding agents from time t − t_h to t, respectively. The output of the model is the future trajectory of the target vehicle over the prediction horizon t_f:

Y = { y_0^{t+1}, y_0^{t+2}, ..., y_0^{t+t_f−1}, y_0^{t+t_f} }    (2)

where y_0^t denotes the 2D coordinates of the target vehicle at time t. Our model uniquely operates without relying on maps, using only the historical trajectory data of the target and nearby agents. The model needs an input sequence of length t_h and remains functional even if the historical data is not perfectly sequential; for sporadic missing data points, due to reasons like occlusions or sensor glitches, we employ simple linear imputation or similar methods. In addition, Figure 1 illustrates our proposed model's hierarchical design. Following the encoder-decoder format, it features four key components: behavior-aware, position-aware, and interaction-aware modules, and the residual decoder. We delve into each module's specifics below.

Figure 1: Architecture of the proposed trajectory prediction model.

3.2 Behavior-aware Module

Moving away from traditional methods that classify driver behaviors into fixed and discrete categories, we offer a more adaptable and flexible solution with our behavior-aware module, which utilizes a continuous portrayal of behavioral attributes. This approach draws inspiration from the multi-policy decision-making framework [Markkula et al., 2020], integrating elements of traffic psychology [Toghi et al., 2022] and dynamic geometric graphs (DGGs) [Boguna et al., 2021] to effectively capture intricate driving behaviors amid ongoing driving maneuvers and evolving traffic conditions.

Dynamic Geometric Graphs. We first model the interactions of different agents with a DGG. At time t, the graph G^t is defined as:

G^t = {V^t, E^t}    (3)

where V^t = {v_0^t, v_1^t, ..., v_n^t} is the set of nodes, with v_i^t the node representing the i-th agent, and E^t = {e_0^t, e_1^t, ..., e_n^t} is the set of edges representing potential interactions between agents, with e_i^t the edges between node v_i^t and the agents that potentially influence it. An interaction is assumed to exist only if two agents, e.g., v_i and v_j, are in close proximity to each other, i.e., their shortest distance d(v_i^t, v_j^t) is less than or equal to a predefined threshold r. Therefore, we define

e_i^t = { v_i^t v_j^t | j ∈ N_i^t }    (4)

where N_i^t = { v_j^t ∈ V^t \ {v_i^t} | d(v_i^t, v_j^t) ≤ r, i ≠ j }. Correspondingly, the symmetric adjacency matrix A^t of G^t is given by:

A^t(i, j) = d(v_i^t, v_j^t) if d(v_i^t, v_j^t) ≤ r and i ≠ j; 0 otherwise    (5)
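A minimal numpy sketch of Eqn. (5); the array shapes and helper name are our illustrative choices:

import numpy as np

def dgg_adjacency(positions, r):
    """Adjacency matrix A^t of the dynamic geometric graph (Eqn. (5)).

    positions -- (n+1, 2) agent coordinates at time t
    r         -- proximity threshold; entries hold d(v_i, v_j) when <= r
    """
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)   # pairwise distances
    A = np.where(dist <= r, dist, 0.0)     # keep distances within radius r
    np.fill_diagonal(A, 0.0)               # i != j: no self-loops
    return A

A_t = dgg_adjacency(np.array([[0.0, 0.0], [3.0, 4.0], [50.0, 50.0]]), r=10.0)
# A_t[0, 1] == 5.0 (within r); A_t[0, 2] == 0.0 (beyond r)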
Centrality Measures. Centrality measures are graph-theoretic metrics widely used to understand various aspects of network structures, providing valuable insights into the importance, influence, and connectivity of nodes within a graph. As shown in Table 1, we use six centralities to characterize driving behavior. These measures allow evaluation of individual agent behavior within the DGG and reveal key agents and the overall connectivity of the traffic graph.

(1) Degree Centrality: Reflects the number of connections an agent has, correlating with its influence on and susceptibility to others. It is defined as:

J_i^t(D) = |N_i^t| + J_i^{t−1}(D)    (6)

where |N_i^t| denotes the number of elements in N_i^t.

(2) Closeness Centrality: Indicates an agent's reachability, suggesting its potential influence over others. Defined by:

J_i^t(C) = (|N_i^t| − 1) / Σ_{v_j^t ∈ N_i^t} d(v_i^t, v_j^t)    (7)

(3) Eigenvector Centrality: Measures an agent's importance by considering both the quantity and quality of its connections. Expressed as:

J_i^t(E) = ( Σ_{v_j^t ∈ N_i^t} d(v_i^t, v_j^t) ) / λ    (8)

where λ is the eigenvalue, indicating the collective influence exerted by an agent and its network.

(4) Betweenness Centrality: Highlights agents that act as bridges or bottlenecks in traffic, crucial in congested situations. Formulated as:

J_i^t(B) = Σ_{v_j^t, v_k^t ∈ V^t} σ_{j,k}(v_i^t) / σ_{j,k}    (9)

where V^t denotes the set of all agents present in the scene, σ_{j,k} signifies the total number of shortest paths between agents v_j^t and v_k^t, and σ_{j,k}(v_i^t) represents the number of those paths traversing agent v_i^t.

(5) Power Centrality: Identifies agents in recurrent interactions, hinting at traffic patterns. Defined by:

J_i^t(P) = Σ_k A^k_{ii} / k!    (10)

where A^k_{ii} denotes the i-th diagonal element of the adjacency matrix raised to the k-th power, and k! signifies the factorial of k, shedding light on the agent's contribution to the network's structural integrity and dynamism.

(6) Katz Centrality: Emphasizes both direct and distant interactions of an agent, capturing intricate driving patterns. Given as:

J_i^t(K) = Σ_k Σ_j α^k A^k_{ij} + β^k, ∀i, j ∈ [0, n], where α^k < 1/λ_max    (11)

where n represents the number of agents in the real-time traffic scenario, α^k is the decay factor, β^k denotes the weight assigned to the immediate neighboring agents, and A^k_{ij} is the (i, j)-th element of the k-th power of the adjacency matrix.

Table 1: Centrality measures and their interpretations.
- Degree (J_i^t(D)) and Closeness (J_i^t(C)). Magnitude (original measure): agent's potential and capability for interaction in the traffic environment. Gradient (1st derivative): agent's sensitivity to traffic density variations. Curvature (2nd derivative): driver's capability to react to fluctuating traffic conditions.
- Eigenvector (J_i^t(E)) and Betweenness (J_i^t(B)). Magnitude: agent's significance in dynamic traffic scenarios. Gradient: variation in the agent's importance in dynamic traffic scenes. Curvature: influence of driver behavior alterations on overall traffic conditions.
- Power (J_i^t(P)) and Katz (J_i^t(K)). Magnitude: extent of influence an agent exerts on others via direct and indirect interactions at time t. Gradient: agent's adaptability to shifts in driving behaviors. Curvature: agent's capacity to modify interactions in complex and congested traffic scenarios.
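To illustrate how these quantities could be computed per frame, a sketch using networkx (the truncated series for power centrality and all parameter values are our choices, not the paper's):

import math
import networkx as nx
import numpy as np

def frame_centralities(A):
    """Six per-agent centralities for one DGG adjacency matrix (Eqns. (6)-(11))."""
    G = nx.from_numpy_array(A)
    n = A.shape[0]
    deg = dict(G.degree())                                  # |N_i^t|, cf. Eqn. (6)
    clo = nx.closeness_centrality(G, distance="weight")     # cf. Eqn. (7)
    eig = nx.eigenvector_centrality_numpy(G, weight="weight")
    bet = nx.betweenness_centrality(G, weight="weight")     # cf. Eqn. (9)
    # Eqn. (10), truncated at k = 5 for illustration:
    power = {i: sum(np.linalg.matrix_power(A, k)[i, i] / math.factorial(k)
                    for k in range(1, 6)) for i in range(n)}
    katz = nx.katz_centrality_numpy(G, alpha=0.05)          # cf. Eqn. (11)
    return deg, clo, eig, bet, power, katz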
Behavior-aware Criteria. Inspired by the relationship between velocity, acceleration, and jerk, we introduce behavioral criteria consisting of the Behavior Magnitude Index (BMI) C_i^t, the Behavior Tendency Index (BTI) L_i^t, and the Behavior Curvature Index (BCI) I_i^t, which evaluate different driving behaviors for the target vehicle and its surroundings. They compute thresholds, gradients, and concavities of the centrality measures, reflecting behaviors such as lane changes, acceleration, deceleration, and driving style. We find that behaviors with significant, fluctuating centrality values over short time frames are likely to affect nearby agents, consistent with human risk perception and time-sensitive decision-making. They are respectively given as follows:

C_i^t = [ |J_i^t(D)|, |J_i^t(C)|, |J_i^t(E)|, |J_i^t(B)|, |J_i^t(P)|, |J_i^t(K)| ]^T    (12)
L_i^t = [ |∂J_i^t(D)/∂t|, |∂J_i^t(C)/∂t|, ..., |∂J_i^t(K)/∂t| ]^T    (13)
I_i^t = [ |∂²J_i^t(D)/∂t²|, |∂²J_i^t(C)/∂t²|, ..., |∂²J_i^t(K)/∂t²| ]^T    (14)

Behavior Encoder. Incorporating the behavior-aware criteria, symbolized as J = {C_{0:n}^{t−t_h:t}, L_{0:n}^{t−t_h:t}, I_{0:n}^{t−t_h:t}}, our behavior encoder comprises VRNN and GRU components. This encoder succinctly models relationships between random variables across time, yielding precise sequential behavioral features Ō_behavior^{t−t_h:t}. Formally:

Ō_behavior^{t−t_h:t} = φ_GRU(φ_VRNN(J))    (15)

where φ_GRU and φ_VRNN denote the GRU and VRNN functions. This encoder captures human driving patterns and their temporal dynamics. Next, the behavioral features Ō_behavior^{t−t_h:t} are fused with positional features from the position-aware module and subsequently processed by the interaction-aware module for comprehensive feature extraction.
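Eqns. (12)-(14) amount to the magnitude and the absolute first and second temporal derivatives of each centrality series; a minimal sketch (finite differences via np.gradient stand in for d/dt):

import numpy as np

def behavior_criteria(J):
    """BMI, BTI and BCI (Eqns. (12)-(14)) for one agent.

    J -- (T, 6) array: one column per centrality measure over T frames.
    """
    J = np.asarray(J, dtype=float)
    bmi = np.abs(J)                                             # Eqn. (12)
    bti = np.abs(np.gradient(J, axis=0))                        # Eqn. (13)
    bci = np.abs(np.gradient(np.gradient(J, axis=0), axis=0))   # Eqn. (14)
    return bmi, bti, bci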
3.3 Position-aware Module

Contrary to traditional methods that emphasize absolute positions [Wang et al., 2022a; Gao et al., 2020] or fixed grids [Deo and Trivedi, 2018], our model emphasizes relative positions. The position-aware module captures individual and group spatial dynamics, interpreting the scene's geometric nuances; these insights are then encoded to produce positional features.

Pooling Mechanism. Our pooling mechanism captures dynamic position data from the traffic environment around the target vehicle, utilizing individual position vectors s_i^{t_k} and multi-agent position vectors p_{i,j}^{t_k}. This strategy gleans historical trajectories and spatial relationships without depending on fixed positions or grids. The relationships are formulated as:

s_i^{t_k} = { p_i^{t_k} − p_i^{t_k−1} },  p_{i,j}^{t_k} = { p_i^{t_k} − p_j^{t_k} }    (16)

Position Encoder. The position encoder employs an LSTM to transform discrete position vectors into continuous spatio-temporal representations, thereby enhancing temporal and spatial interactions between agents and map elements. Given the historical position vectors for the target and surrounding agents, it embeds them temporally:

Ō_position^{t−t_h:t} = φ_LSTM( h̄_i^{t−t_h:t−1}, s_i^{t−t_h:t−1}, p_{i,j}^{t−t_h:t−1} )    (17)

where Ō_position^{t−t_h:t−1} is the positional-feature output of the position encoder, φ_LSTM denotes the two-layer LSTM encoder, and h̄_i^{t−t_h:t−1} represents the hidden position state updated by the encoder frame by frame, with the LSTM weights shared among all agents.

3.4 Interaction-aware Module

Effective trajectory prediction in complex traffic scenarios hinges upon a system's ability to comprehend and anticipate interactions among vehicles. Classic GCN-based methods, although proficient at encapsulating geometric inter-agent relationships, often exhibit limitations in fluid traffic conditions due to their fixed adjacency matrix configurations. To tackle this, we introduce a novel adaptive structure-aware GCN, taking cues from advances in crystal graphs and material design. This approach stands out through its capability to craft spatial feature matrices dynamically, adjusting to the number of agents observed in real time, which ensures a more fluid and adaptable response to traffic changes. A graphical illustration of this concept is provided in Figure 2.

Figure 2: Overview of our adaptive structure-aware GCN. The real-time trajectories of the target and observed agents are captured using a topology graph to form a feature matrix. This matrix undergoes aggregation, updating, and iteration within the GCN. As new agents are observed in real time, the GCN dynamically adjusts its topology, updating features for the added nodes.

Breaking away from conventional models that predominantly lean on distance-based positional features, our design holistically blends continuous behavioral features into the graph structure. This not only addresses the multifaceted spatio-temporal interactions but also considers the intricate physical interplay between agents, offering a noticeable enhancement in prediction precision. Our design blueprint encompasses an adaptive convolutional neural network rooted in a fully connected interaction multigraph, adept at simultaneously capturing sequential behavioral and dynamic positional interactions among agents. The multigraph's operational layer is distinguished by nodes, which symbolize the sequential behavioral features Ō_behavior^{t−t_h:t}, and edges, which represent the positional features Ō_position^{t−t_h:t}, as defined below:

z̃_i^k = F(z̃_i^{k−1}, r̃_{i,j}^{k−1}) = z̃_i^{k−1} + φ_sgm(r̃_{i,j}^{k−1} W_g^{k−1} + b_g^{k−1}) ⊙ φ_spu(r̃_{i,j}^{k−1} W_h^{k−1} + b_h^{k−1})    (18)

where k denotes the layer within the GCN, k ∈ [1, 3], and ⊙, φ_sgm, and φ_spu represent the element-wise product, sigmoid activation function, and softplus activation function, respectively. W_g^{k−1} and W_h^{k−1} are learnable matrices, and b_g^{k−1} and b_h^{k−1} are the biases of the corresponding layer. r̃_{i,j}^{k−1} can be represented as:

r̃_{i,j}^{k−1} = ( z̃_i^{k−1} ∥ z̃_j^{k−1} ∥ p_{i,j}^{t−t_h:t} )    (19)

Additionally, the initial feature vector z̃_i^{(0)} is defined as:

z̃_i^{(0)} = ( Ō_behavior^{t−t_h:t} ∥ Ō_position^{t−t_h:t} )    (20)

Furthermore, the output of the adaptive structure-aware GCN for the target vehicle i is fed into a lightweight transformer-based framework, Linformer [Wang et al., 2020], to efficiently quantify and compute the dynamic attention weight vectors for the surrounding agents, ultimately outputting the contextual mapping Ō. This allows a favorable trade-off between accuracy and efficiency.
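An illustrative PyTorch sketch of the gated update in Eqns. (18)-(19); the summation over neighbours j is our assumption, since the paper leaves the reduction implicit, and all dimensions are made up:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGCNLayer(nn.Module):
    """One residual update: sigmoid gate times softplus filter over edge features."""
    def __init__(self, node_dim, pos_dim):
        super().__init__()
        edge_dim = 2 * node_dim + pos_dim
        self.lin_g = nn.Linear(edge_dim, node_dim)  # W_g, b_g in Eqn. (18)
        self.lin_h = nn.Linear(edge_dim, node_dim)  # W_h, b_h in Eqn. (18)

    def forward(self, z, p_rel):
        # z: (n, node_dim) node embeddings; p_rel: (n, n, pos_dim) relative positions
        n = z.size(0)
        r = torch.cat([z.unsqueeze(1).expand(n, n, -1),   # z_i
                       z.unsqueeze(0).expand(n, n, -1),   # z_j
                       p_rel], dim=-1)                    # Eqn. (19)
        msg = torch.sigmoid(self.lin_g(r)) * F.softplus(self.lin_h(r))
        return z + msg.sum(dim=1)   # residual update, aggregated over j

layer = AdaptiveGCNLayer(node_dim=64, pos_dim=16)
z_new = layer(torch.randn(5, 64), torch.randn(5, 5, 16))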
3.5 Residual Decoder

The residual decoder, comprising a linear residual and a projection layer, processes the node vectors to forecast the target vehicle's future trajectory, producing the prediction Y_0^{t:t+t_f}. This is given by:

Y = Y_0^{t:t+t_f} = F_θ( F_θ(Ō) )    (21)

such that

F_θ(·) = φ_ReLU[ φ_GN( φ_Linear(·) ) ]    (22)

where φ_ReLU denotes the ReLU activation function and φ_GN denotes the Group Normalization (GN) function [Wu and He, 2018], applied to improve the training stability of our model. In addition, φ_Linear corresponds to the fully connected layer, while F_θ denotes the residual decoder function.

Table 2: Performance comparison of various models on complete and missing datasets for Argoverse. Models use either HD-map or vectorized-map (Map) and trajectory (Traj.) data, or solely trajectory data; '-' denotes unspecified values. Metrics are minADE (k=1), minFDE (k=1), and MR (k=1); the best and second-best values in each category were bold and underlined in the original.

Model | Input | minADE (m)↓ | minFDE (m)↓ | MR (%)↓
Argoverse Baseline [Chang et al., 2019] | Map + Traj. | 2.96 | 6.81 | 81.00
Constant Velocity [Chang et al., 2019] | - | 3.55 | 7.89 | 75.00
SGAN [Gupta et al., 2018] | - | 3.61 | 5.39 | 87.11
TPNet [Fang et al., 2020] | Map + Traj. | 2.33 | 5.29 | -
PRIME [Song et al., 2022] | Map + Traj. | 1.91 | 3.82 | 58.67
Uulm-mrm (2nd) [Chang et al., 2019] | Map + Traj. | 1.90 | 4.19 | 63.47
Jean (1st) [Mercat et al., 2020] | Map + Traj. | 1.74 | 4.24 | 68.56
WIMP [Khandelwal et al., 2020] | Map + Traj. | 1.82 | 4.03 | 62.88
Scene-Transformer [Ngiam et al., 2022] | Map + Traj. | 1.81 | 4.06 | 59.21
TNT [Zhao et al., 2021] | Map + Traj. | 1.77 | 3.91 | 59.72
mmTransformer [Liu et al., 2021] | Map + Traj. | 1.77 | 4.00 | 61.78
CtsConv (Aug.) [Walters et al., 2021] | Map + Traj. | 1.77 | 4.05 | -
HOME [Gilles et al., 2021] | Map + Traj. | 1.72 | 3.73 | 58.40
LaneGCN [Liang et al., 2020] | Map + Traj. | 1.71 | 3.78 | 59.05
GOHOME [Gilles et al., 2022] | Map + Traj. | 1.69 | 3.65 | 57.21
LaneRCNN [Zeng et al., 2021] | Map + Traj. | 1.68 | 3.69 | 56.85
DenseTNT [Gu et al., 2021] | Map + Traj. | 1.68 | 3.63 | 58.43
VectorNet [Gao et al., 2020] | Map + Traj. | 1.66 | 3.67 | -
TPCN [Ye et al., 2021] | Map + Traj. | 1.66 | 3.69 | 58.80
SSL-Lanes [Bhattacharyya et al., 2023] | Map + Traj. | 1.63 | 3.56 | 56.71
LTP [Wang et al., 2022a] | Map + Traj. | 1.62 | 3.55 | 56.25
HiVT-128 [Zhou et al., 2022] | Map + Traj. | 1.60 | 3.52 | -
MFTraj | Traj. | 1.59 | 3.51 | 55.44
MFTraj (drop 3-frames) | Traj. | 1.68 | 3.59 | 56.95
MFTraj (drop 5-frames) | Traj. | 1.76 | 3.74 | 59.08
MFTraj (drop 8-frames) | Traj. | 1.86 | 3.90 | 61.12
MFTraj (drop 10-frames) | Traj. | 1.97 | 3.96 | 62.72

4 Experiments

4.1 Experimental Setup

Datasets. We tested the model's efficacy on the Argoverse [Chang et al., 2019], NGSIM [Deo and Trivedi, 2018], HighD [Krajewski et al., 2018], and MoCAD [Liao et al., 2024b] datasets.

Data Segmentation. For Argoverse, we predict a 3-second trajectory from a 2-second observation, while for NGSIM, HighD, and MoCAD we use 6-second intervals split into 2 seconds of observation and 4 seconds of prediction. These datasets, referred to as the complete dataset, help assess our model in diverse traffic scenarios. Recognizing that real-world conditions often lead to incomplete data, we further assessed our model's resilience on the Argoverse dataset by introducing four subsets with varying levels of missing data: drop 3-frames, drop 5-frames, drop 8-frames, and drop 10-frames. These missing datasets simulate data-loss scenarios; for data gaps, we applied simple linear interpolation.

Metrics. Aligned with the Argoverse Motion Forecasting Challenge and prior work [Liao et al., 2024b], we evaluated the performance of our model using the standard metrics minADE, minFDE, MR, and RMSE.
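For reference, a minimal sketch of these metrics over K candidate trajectories (the 2 m miss threshold follows the common Argoverse convention and is an assumption here):

import numpy as np

def min_ade_fde_mr(preds, gt, miss_threshold=2.0):
    """minADE, minFDE and miss rate for one sample.

    preds -- (K, T, 2) predicted trajectories; gt -- (T, 2) ground truth.
    A miss is counted when the best final-position error exceeds the threshold.
    """
    ade = np.linalg.norm(preds - gt[None], axis=-1).mean(axis=1)  # (K,)
    fde = np.linalg.norm(preds[:, -1] - gt[-1], axis=-1)          # (K,)
    return ade.min(), fde.min(), float(fde.min() > miss_threshold)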
Implementation Details. We implemented our model using PyTorch and PyTorch Lightning on an NVIDIA DGX-2 with eight V100 GPUs. Using the smooth L1 loss as our loss function, the model was trained with the Adam optimizer, a batch size of 32, and learning rates of 10^{-3} and 10^{-4}.

4.2 Experimental Results

Performance Evaluation on the Complete Dataset. Tables 2 and 4 present a comparative evaluation of our trajectory prediction model against 25 baselines from 2016 to 2023. Unlike most approaches, which depend on HD maps or vectorized map data, our model omits map-based inputs. Still, it consistently outperforms the baselines across metrics like minADE, minFDE, MR, and RMSE for both the Argoverse and MoCAD datasets. Specifically, for the Argoverse dataset, MFTraj outperforms most of the SOTA models by margins of 2.9% in minADE, 2.4% in minFDE, and 3.8% in MR, while being on par with HiVT. It excels particularly in challenging long-term predictions (4s-5s) on the NGSIM, HighD, and MoCAD datasets, with reductions in forecast error surpassing at least 11.5%, 29.6%, and 21.9%, respectively. This emphasizes its potential for accurate long-term predictions in highway and urban settings.

Table 4: Evaluation results for MFTraj and the other SOTA baselines without using HD maps on the NGSIM, HighD, and MoCAD datasets over different horizons. RMSE (m) is the evaluation metric; '-' denotes unspecified values. The best and second-best values in each category were bold and underlined in the original.

NGSIM (RMSE at prediction horizons 1 / 2 / 3 / 4 / 5 s):
S-LSTM [Alahi et al., 2016]: 0.65 / 1.31 / 2.16 / 3.25 / 4.55
S-GAN [Gupta et al., 2018]: 0.57 / 1.32 / 2.22 / 3.26 / 4.40
CS-LSTM [Deo and Trivedi, 2018]: 0.61 / 1.27 / 2.09 / 3.10 / 4.37
DRBP [Gao et al., 2023]: - / 1.18 / 2.83 / 4.22 / 5.82
DN-IRL [Fernando et al., 2019]: 0.54 / 1.02 / 1.91 / 2.43 / 3.76
WSiP [Wang et al., 2023]: 0.56 / 1.23 / 2.05 / 3.08 / 4.34
CF-LSTM [Xie et al., 2021]: 0.55 / 1.10 / 1.78 / 2.73 / 3.82
MHA-LSTM [Messaoud et al., 2021]: 0.41 / 1.01 / 1.74 / 2.67 / 3.83
HMNet [Xue et al., 2021]: 0.50 / 1.13 / 1.89 / 2.85 / 4.04
TS-GAN [Wang et al., 2022b]: 0.60 / 1.24 / 1.95 / 2.78 / 3.72
Stdan [Chen et al., 2022b]: 0.39 / 0.96 / 1.61 / 2.56 / 3.67
iNATran [Chen et al., 2022a]: 0.39 / 0.96 / 1.61 / 2.42 / 3.43
DACR-AMTP [Cong et al., 2023]: 0.57 / 1.07 / 1.68 / 2.53 / 3.40
FHIF [Zuo et al., 2023]: 0.40 / 0.98 / 1.66 / 2.52 / 3.63
MFTraj: 0.38 / 0.87 / 1.52 / 2.23 / 2.95

HighD (RMSE at prediction horizons 1 / 2 / 3 / 4 / 5 s):
S-GAN [Gupta et al., 2018]: 0.30 / 0.78 / 1.46 / 2.34 / 3.41
WSiP [Wang et al., 2023]: 0.20 / 0.60 / 1.21 / 2.07 / 3.14
CS-LSTM [Deo and Trivedi, 2018]: 0.22 / 0.61 / 1.24 / 2.10 / 3.27
MHA-LSTM [Messaoud et al., 2021]: 0.19 / 0.55 / 1.10 / 1.84 / 2.78
NLS-LSTM [Messaoud et al., 2019]: 0.20 / 0.57 / 1.14 / 1.90 / 2.91
DRBP [Gao et al., 2023]: - / 0.41 / 0.79 / 1.11 / 1.40
EA-Net [Cai et al., 2021]: 0.15 / 0.26 / 0.43 / 0.78 / 1.32
CF-LSTM [Xie et al., 2021]: 0.18 / 0.42 / 1.07 / 1.72 / 2.44
Stdan [Chen et al., 2022b]: 0.19 / 0.27 / 0.48 / 0.91 / 1.66
iNATran [Chen et al., 2022a]: 0.04 / 0.05 / 0.21 / 0.54 / 1.10
DACR-AMTP [Cong et al., 2023]: 0.10 / 0.17 / 0.31 / 0.54 / 1.01
GaVa [Liao et al., 2024d]: 0.17 / 0.24 / 0.42 / 0.86 / 1.31
MFTraj: 0.07 / 0.10 / 0.19 / 0.38 / 0.56

MoCAD (RMSE at prediction horizons 1 / 2 / 3 / 4 / 5 s):
S-GAN [Gupta et al., 2018]: 1.69 / 2.25 / 3.30 / 3.89 / 4.69
CS-LSTM [Deo and Trivedi, 2018]: 1.45 / 1.98 / 2.94 / 3.56 / 4.49
MHA-LSTM [Messaoud et al., 2021]: 1.25 / 1.48 / 2.57 / 3.22 / 4.20
NLS-LSTM [Messaoud et al., 2019]: 0.96 / 1.27 / 2.08 / 2.86 / 3.93
WSiP [Wang et al., 2023]: 0.70 / 0.87 / 1.70 / 2.56 / 3.47
CF-LSTM [Xie et al., 2021]: 0.72 / 0.91 / 1.73 / 2.59 / 3.44
Stdan [Chen et al., 2022b]: 0.62 / 0.85 / 1.62 / 2.51 / 3.32
HLTP [Liao et al., 2024a]: 0.55 / 0.76 / 1.44 / 2.39 / 3.21
BAT [Liao et al., 2024b]: 0.35 / 0.74 / 1.39 / 2.19 / 2.88
MFTraj: 0.34 / 0.70 / 1.32 / 2.01 / 2.57

Performance Evaluation on the Missing Dataset. Table 2 showcases the resilience of our model when faced with incomplete data. Our model consistently outperforms all other baselines on the drop 3-frames and drop 5-frames datasets. Notably, on the drop 3-frames dataset it surpasses nearly all state-of-the-art (SOTA) models trained on full data, highlighting its remarkable predictive strength even with missing data. While its performance on the drop 5-frames dataset exceeds most baselines, there are exceptions on specific metrics against models such as TNT, WIMP, and mmTransformer. As the number of missing frames increases, as in the drop 8-frames and drop 10-frames datasets, there is an expected decline in performance. Yet, even with half the input data missing, our model still competes strongly against the top baselines, emphasizing its potential in environments with data interruptions.

Comparative Analysis of Model Performance and Complexity. In Table 3, we compare our model's performance and complexity with various SOTA baselines. While our model does not have the lowest parameter count, it excels on all performance metrics. Impressively, it achieves this while using 90.42% and 87.18% fewer parameters than WIMP and Scene-Transformer, respectively. Compared to the top-10 SOTA models, our model not only surpasses them in accuracy but is also as efficient as, if not more efficient than, HiVT-128, SSL-Lanes, LaneGCN, and HOME+GOHOME. This underlines our model's optimal balance of robustness, efficiency, and trajectory prediction accuracy.

Table 3: Comparative evaluation of MFTraj with SOTA baselines.

Model | minADE (m)↓ | minFDE (m)↓ | MR (%)↓ | #Param (K)
WIMP [Khandelwal et al., 2020] | 1.82 | 4.03 | 62.88 | >20,000
Scene-Transformer [Ngiam et al., 2022] | 1.81 | 4.06 | 59.21 | 15,296
CtsConv (Aug.) [Walters et al., 2021] | 1.77 | 4.05 | - | 1,078
mmTransformer [Liu et al., 2021] | 1.77 | 4.00 | 61.78 | 2,607
LaneGCN [Liang et al., 2020] | 1.71 | 3.78 | 59.05 | 3,701
HOME+GOHOME [Gilles et al., 2022] | 1.69 | 3.65 | 57.21 | 5,100
DenseTNT [Gu et al., 2021] | 1.68 | 3.63 | 58.43 | 1,103
SSL-Lanes [Bhattacharyya et al., 2023] | 1.63 | 3.56 | 56.71 | 1,840
HiVT-128 [Zhou et al., 2022] | 1.60 | 3.52 | - | 2,529
MFTraj | 1.59 | 3.51 | 55.44 | 1,961
Similarly, Model E, which uses a standard GCN instead of the adaptive one, also lags, underscoring the latter\u2019s efficiency. In essence, this study solidifies the importance of each component in Model F. Every part, from understanding behavioral nuances to updating features effectively, bolsters the model\u2019s precision and resilience. Figure 3: Qualitative results of MFTraj and HiVT on Argoverse. Table 5: Ablation analysis of individual components in Argoverse. Ablation Models (\u2206Model F) minADE (m)\u2193 minFDE (m)\u2193 MR (%)\u2193 Model A 1.82 3.85 60.61 Model B 1.69 3.59 56.14 Model C 1.78 3.71 59.07 Model D 1.71 3.61 57.59 Model E 1.68 3.70 56.94 Model F 1.59 3.51 55.44 Figure 4: Qualitative results of MFTraj on NGSIM. Target vehicle is depicted in red, while its surrounding agents are shown in blue. 4.4 Qualitative Results Figure 3 presents the qualitative results of our model using the Argoverse dataset. We\u2019ve limited the display to the target vehicle\u2019s trajectories for clarity. Interestingly, without the aid of HD maps, our model adeptly discerns road semantics, enabling it to make precise and logical predictions for target vehicles in intricate urban settings. Importantly, Figure 4 illustrates a comparison between the trajectories predicted by MFTraj and the SOTA baselines in the same traffic scenarios. MFTraj outperforms Stdan and WSiP in trajectory prediction, especially in complex scenarios such as lane changes and merging. These results demonstrate the superior adaptability and reliability of MFTraj in complex traffic conditions. 5 Conclusion This work presents a map-free and behavior-aware trajectory prediction model for AVs, integrating four components: behavior-aware, position-aware, and interaction-aware modules, and a residual decoder. These components work in concert to analyze and interpret various inputs, understand human-machine interactions, and account for the inherent uncertainty and variability in the prediction. Evaluated on the Argoverse, NGSIM, HighD, and MoCAD datasets, MFTraj outperformed SOTA baselines in prediction accuracy and efficiency without additional map information. Furthermore, this approach ensures robustness and adaptability even in the presence of significant missing data; it achieved impressive performance even with a 50% sequential input data deficit. This underscores the resilience and efficiency of MFTraj in predicting future vehicle trajectories and suggests its potential to drastically reduce the data requirements for training AVs, especially in corner cases such as data-missing and limited-data scenes. Acknowledgements This research is supported by the Science and Technology Development Fund of Macau SAR (File no. 0021/2022/ITP, 0081/2022/A2, 001/2024/SKL), and University of Macau (SRG2023-00037-IOTSC)."
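As a supplementary note on the evaluation metrics used throughout this record: minADE, minFDE, and MR can be computed from multi-modal forecasts in a few lines. The sketch below is illustrative only — the array shapes, the choice of best candidate, and the 2.0 m miss radius (a common Argoverse convention) are our assumptions, not details taken from the paper.

```python
import numpy as np

def trajectory_metrics(preds, gt, miss_radius=2.0):
    """minADE/minFDE/miss flag for K candidate trajectories.

    preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.
    One common convention: minADE/minFDE take the minimum over
    candidates, and MR is the fraction of test cases whose best
    final displacement exceeds the miss radius.
    """
    disp = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) errors
    ade = disp.mean(axis=1)                           # per-candidate ADE
    fde = disp[:, -1]                                 # per-candidate FDE
    return {
        "minADE": float(ade.min()),
        "minFDE": float(fde.min()),
        "missed": bool(fde.min() > miss_radius),  # averaging this flag
    }                                             # over a test set gives MR

# Toy check: one near-perfect and one poor candidate.
gt = np.linspace([0.0, 0.0], [10.0, 0.0], num=30)
preds = np.stack([gt + 0.1, gt + 5.0])
print(trajectory_metrics(preds, gt))  # small minADE/minFDE, not missed
```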
|
| 17 |
+
}
|
title_10K/test_title_short_2405.01270v1.json
ADDED
|
@@ -0,0 +1,16 @@
|
| 1 |
+
{
|
| 2 |
+
"url": "http://arxiv.org/abs/2405.01270v1",
|
| 3 |
+
"title": "The Importance of Model Inspection for Better Understanding Performance Characteristics of Graph Neural Networks",
|
| 4 |
+
"abstract": "This study highlights the importance of conducting comprehensive model\ninspection as part of comparative performance analyses. Here, we investigate\nthe effect of modelling choices on the feature learning characteristics of\ngraph neural networks applied to a brain shape classification task.\nSpecifically, we analyse the effect of using parameter-efficient, shared graph\nconvolutional submodels compared to structure-specific, non-shared submodels.\nFurther, we assess the effect of mesh registration as part of the data\nharmonisation pipeline. We find substantial differences in the feature\nembeddings at different layers of the models. Our results highlight that test\naccuracy alone is insufficient to identify important model characteristics such\nas encoded biases related to data source or potentially non-discriminative\nfeatures learned in submodels. Our model inspection framework offers a valuable\ntool for practitioners to better understand performance characteristics of deep\nlearning models in medical imaging.",
|
| 5 |
+
"authors": "Nairouz Shehata, Carolina Pi\u00e7arra, Anees Kazi, Ben Glocker",
|
| 6 |
+
"published": "2024-05-02",
|
| 7 |
+
"updated": "2024-05-02",
|
| 8 |
+
"primary_cat": "cs.LG",
|
| 9 |
+
"cats": [
|
| 10 |
+
"cs.LG"
|
| 11 |
+
],
|
| 12 |
+
"label": "Original Paper",
|
| 13 |
+
"paper_cat": "Graph AND Structure AND Learning",
|
| 14 |
+
"gt": "The Importance of Model Inspection for Better Understanding Performance Characteristics of Graph Neural Networks",
|
| 15 |
+
"main_content": "INTRODUCTION Understanding biological sex-based differences in brain anatomy provides valuable insights into both neurodevelopmental processes and cognitive functioning. Recent strides in the field of geometric deep learning [1], particularly the advent of Graph Neural Networks (GNNs), have revolutionised the analysis of complex, non-Euclidean data [2] to make predictions at a node, edge, or graph-level. This allows us to treat brain shapes as graphs, leveraging the power of GNNs to learn from complex structural anatomical data [3]. Discriminative feature embeddings can be withdrawn from these models, representing brain shapes as a continuous vector of numerical features that capture valuable structural and geometrical information for downstream prediction tasks [4]. Techniques like Principal Component Analysis (PCA) can be used to reduce the dimensionality of graph embeddings for visualisation, aiding the exploration of subgroup biases in the feature space beyond the target label. This analysis may help practitioners ensure the reliability of their predictions, and is particularly important in applications where GNNs feature embeddings may be leveraged for new tasks, such as fine-tuning, domain transfer, or multi-modal approaches. In this study, we dissect GNN models trained under different settings for the task of sex classification using 3D meshes of segmented brain structures. We inspect the learned feature embeddings at different layers within a multi-graph neural network architecture. Through this granular analysis, we reveal critical insights into the inner workings of our models, identifying important effects of different modelling choices. This research demonstrates the utility of conducting a model inspection framework as part of model development, highlighting insights that may guide practitioners in the selection of models with desired characteristics, avoiding biases, overfitting and better understanding the driving forces behind predictions. 2. METHODS 2.1. Imaging datasets We used four neuroimaging datasets, including data from the UK Biobank imaging study (UKBB) 1 [5], the Cambridge Centre for Ageing and Neuroscience study (CamCAN) [6, 7], the IXI dataset2, and OASIS3 [8]. Both UKBB and CamCAN brain MRI data were acquired with Siemens 3T scanners. The IXI dataset encompassed data collected from three clinical sites, each employing different scanning systems. CamCAN and IXI are acquired from healthy volunteers, while UKBB is an observational population study. The OASIS3 dataset consists of 716 subjects with normal cognitive function and 318 patients exhibiting varying stages of cognitive decline. For all four datasets, subjects with missing biological sex or age information were excluded. Data from UKBB was split into three sets, with 9,900 scans used for training, 1,099 for validation, and 2,750 for testing. CamCAN, IXI and OASIS3 1Accessed under application 12579. 2https://brain-development.org/ixi-dataset/ arXiv:2405.01270v1 [cs.LG] 2 May 2024 \fwere used as external test sets, with sample sizes of 652, 563, and 1,034, respectively. The UKBB data is provided with a comprehensive preprocessing already applied, using FSL FIRST [9] to automatically segment 15 subcortical brain structures from T1-weighted brain MRI, including the brain stem, left/right thalamus, caudate, putamen, pallidum, hippocampus, amygdala, and accumbens-area. We apply our own pre-processing pipeline to the CamCAN, IXI, and OASIS3 datasets, closely resembling the UKBB pre-processing. 
Our pipeline includes skull stripping using ROBEX 3 [10], bias field correction using N4ITK [11], and brain segmentation via FSL FIRST. 2.2. Graph representation The anatomical brain structures are represented by meshes as an undirected graph composed of nodes, connected by edges forming triangular faces. The number of nodes for most structures is 642 and up to 1,068, whereas the number of edges per structure ranges between 3,840 and 6,396. The meshes are automatically generated by the FSL FIRST tool. 2.2.1. Node features Each graph node can carry additional information, encoded as feature vectors. This can include spatial coordinates or more complex geometric descriptors. While computer vision has transitioned from hand-crafted features to end-to-end deep learning, we have previously demonstrated the value of using geometric feature descriptors in GNN-based shape classification [12]. We employ Fast Point Feature Histograms (FPFH) [13], a pose invariant feature descriptor shown to substantially boost classification performance. To compute FPFH features on a mesh, a point feature histogram is first generated, involving the selection of neighboring points within a defined radius around each query point. The Darboux frame is subsequently defined, and angular variations are computed. This process involves several steps, including the estimation of normals and the calculation of angular variations, resulting in a vector of 33 features at each node. 2.2.2. Mesh registration Mesh registration is an optional pre-processing step, with the goal to remove spatial variability across subjects and datasets. Here, we investigate the use of rigid registration aligning all meshes for a specific brain structure to a standardised orientation using the closed-form Umeyama approach [14]. This method employs a singular value decompositionbased optimisation to obtain an optimal rigid transformation between two given meshes. For each of the 15 brain structures, we select a reference mesh from a random subject from the UKBB dataset, and align the meshes from all other 3https://www.nitrc.org/projects/robex Fig. 1: Model architecture consisting of a graph convolutional network (GCN) submodel feeding graph embeddings into a classification head with two fully connected layers (FC1 and FC2). Where N is the number of brain substructures, 15. For our model inspection, we read out the feature vectors from the GCN submodel, FC1, and FC2. subjects to this reference. As a result, shape variability due to orientation and position differences is minimised and the remaining variability is expected to primarily represent anatomical differences across subjects. 2.3. Multi-graph neural network architecture Our general GNN architecture is comprised of two main components; the GCN submodel which aims to learn graph embeddings over 3D meshes using multiple graph convolutional layers [12] and an MLP classification head that takes the graph embeddings as inputs and performs the final classification using two fully connected layers (cf. Fig. 1). The input to our models are 15 subgraphs representing 15 brain structures, extracted from T1-weighted brain scans. We consider two approaches for learning graph embeddings with GCN submodels. The first approach, referred to as shared submodel, uses a single GCN submodel that learns from all 15 subgraphs. Here, the weights of the graph convolutional layers are shared across brain structures. The shared submodel approach is parameter-efficient and aims to learn generic shape features. 
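As detailed just below, each GCN submodel stacks three graph convolutions with ReLU activations and a global average pooling readout. A minimal PyTorch Geometric sketch of one such submodel follows; the 33 FPFH input features and the hidden size of 32 come from the text, while everything else is our assumption.

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class GCNSubmodel(torch.nn.Module):
    """Three GCN layers + average-pooling readout: one 32-d embedding
    per input mesh (graph)."""

    def __init__(self, in_dim=33, hidden=32):  # 33 = FPFH features per node
        super().__init__()
        self.convs = torch.nn.ModuleList(
            [GCNConv(in_dim, hidden),
             GCNConv(hidden, hidden),
             GCNConv(hidden, hidden)]
        )

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = conv(x, edge_index).relu()
        # Aggregate node features into one embedding per graph.
        return global_mean_pool(x, batch)

# Shared variant: a single instance applied to all 15 structures.
# Non-shared variant: 15 instances (e.g. in a torch.nn.ModuleList),
# whose 32-d outputs are stacked into the 480-d subject embedding.
```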
For the second approach, referred to as non-shared submodel, each subgraph is fed into a structure-specific GCN submodel. The non-shared submodel approach has more parameters and may capture structure-specific shape features. In both approaches, the architecture of the GCN submodel is identical and consists of three graph convolutional layers [15] with Rectified Linear Unit (ReLU) activations. A global average pooling layer is used as a readout layer, aggregating node representations into a single graph-level feature embedding. The embeddings from individual structures are stacked to form a subject-level feature embedding which is passed to the classification head. 2.4. Model inspection Our model inspection approach is focused on evaluating the separability of the target label (biological sex, Male and \fFemale) and data source classes (UKBB, CamCAN, IXI or OASIS3) through feature inspection. Each test set sample is passed through the complete pipeline and its feature embeddings are saved at three different stages: at the output layer of the GCN submodel and at the first (FC1) and second (FC2) fully connected layers of the classification head. The dimensions of these embeddings are, respectively, 480 (15 substructures times the hidden layer size, 32), 32 and 2. To allow for visual inspection, the feature embeddings from the GCN and FC1 layers are inputted to a PCA model to reduce their dimensionality. The PCA modes capture the directions of the largest variation in the high-dimensional feature space, allowing us to visualise feature separation in 2D scatter plots. We randomly sample 500 subjects from each dataset for the visualisations. Given that all the models were trained to classify biological sex, a clear separation should be expected between the Male and Female classes in the first PCA modes. 3. EXPERIMENTS & RESULTS For a thorough evaluation, we trained and tested the four models shared and non-shared GCN submodels, and with and without mesh rigid registration on identical data splits. All code was developed using PyTorch Geometric and PyTorch Lightning for model implementation and data handling. We used the Adam optimiser [16] with a learning rate of 0.001, and employed the standard cross entropy loss for classification. Random node translation was used as a data augmentation strategy with a maximum offset of 0.1mm [17]. This was shown to improve performance in our previous study [12]. Model selection was done based on the loss of the validation set. Our code is made publicly available4. 3.1. Classification performance Figure 2 summarises the classification performance of the four models, showing the ROC curves together with the area under the curve (AUC) metric, reported separately for each of the four test datasets. There are two main observations: (i) There are very little differences in the absolute performance across the four models. Comparing the shared vs non-shared submodel, the AUC performance is comparable. When comparing models with and without mesh registration, we find the generalisation gap decreases between in-distribution test (UKBB) and the external test data (CamCAN, IXI, OASIS3). However, we also observe a small drop in performance on the in-distribution test data when using mesh registration, compared to not using registration. A practitioner using internal test results for final model selection may opt for using a shared submodel, due to its parameter efficiency, without mesh registration, due to convenience. 
As we will see next, this choice may be suboptimal as test accuracy alone is insufficient to identify important model characteristics. 4https://github.com/biomedia-mira/medmesh 3.2. Effect of using structure-specific submodels For the models that use a shared submodel, we observe that the GCN feature embeddings are non-discriminative with respect to the target label. Separation seems completely missing in the shared model without registration (see Fig. 3a), with only weak separation in the shared model with registration (see Fig. 3c). For these models, the classification heads will primarily contribute to the model performance. For the models with a non-shared submodel, we find a much better separability for the GCN features with and without mesh registration (cf. Figs. 3b, d). Here, the GCN features will meaningfully contribute to the models\u2019 classification performance. 3.3. Effect of mesh registration When studying the effect of mesh registration, we can clearly observe that without registration, the GCN feature embeddings from the submodel strongly encode data source, showing separate clusters for UKBB and external test data (cf. Figs. 3a,b). When introducing mesh registration as a pre-processing step, we note a significant improvement, with an almost entirely removed separation of datasets in the GCN layer independent of whether a shared and non-shared submodel is used (Figs. 3c, d). The separability of the target label in the GCN layer is well defined for the non-shared submodel (Fig. 3d), while remaining weak for the shared submodel (Fig. 3c). Rigid registration as a pre-processing step seems to not only improve the learning efficiency of the GCN submodel, but also its ability to generalise across data distributions. 4. CONCLUSION Our findings underscore the limitations of relying solely on test accuracy for model selection, particularly when focusing on in-distribution test accuracy. We demonstrate that this may lead practitioners to select models with undesired characteristics where GCN features are non-discriminative for the prediction task and/or strongly encode biases such as data source. Using a comprehensive model inspection, we were able to identify variations in the model characteristics and better understand what drives the final prediction (GCN submodel vs classification head). The importance of this becomes evident when considering applications such as finetuning, domain transfer, or multi-modal approaches, where GCN feature embeddings may be leveraged for new tasks. Our model inspection framework can be easily applied to other models, tasks, and purposes. It was previously used to detect biases in chest radiography disease detection models [18]. Here, we strongly advocate for the wider use of model inspection as an integral part of comparative performance analyses. We hope that our work can contribute to improving the reliability of model selection in all areas of deep learning for biomedical image analysis. \f(a) (b) (c) (d) Fig. 2: Sex classification performance for four models; (a) shared and (b) non-shared submodel without mesh registration, (c) shared and (d) non-shared submodel with mesh registration. We observe that the generalisation gap between the in-distribution test data (UKBB) and the external test data (CamCAN, IXI, OASIS3) closes with mesh registration. Overall, there are only small differences in performance, illustrating that test accuracy alone is insufficient to identify variations in model characteristics. 
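The inspection procedure itself is lightweight once the layer activations have been saved. A minimal sketch with scikit-learn and matplotlib is given below; the file and variable names are ours and purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Assumed inputs: embeddings saved from one layer (e.g. the 480-d GCN
# readout) on the test sets, plus per-sample label arrays.
embeddings = np.load("gcn_embeddings.npy")   # (n_samples, 480)
sex = np.load("sex_labels.npy")              # target label per sample
dataset = np.load("dataset_labels.npy")      # UKBB / CamCAN / IXI / OASIS3

# Project onto the two directions of largest variance.
coords = PCA(n_components=2).fit_transform(embeddings)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, labels, title in [(axes[0], sex, "by target label"),
                          (axes[1], dataset, "by data source")]:
    for value in np.unique(labels):
        mask = labels == value
        ax.scatter(coords[mask, 0], coords[mask, 1], s=4, label=str(value))
    ax.set_title(f"First two PCA modes, {title}")
    ax.legend(markerscale=3)
plt.tight_layout()
plt.show()
# Clusters by data source rather than by target label in the GCN-layer
# plot would reveal the encoded site bias discussed above.
```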
(a) Shared submodel, without mesh registration (b) Non-shared submodel, without mesh registration (c) Shared submodel, with mesh registration (d) Non-shared submodel, with mesh registration Fig. 3: Effect of modelling choices on feature separability for four different models at the GCN layer (left), first fully connected layer FC1 (middle), and output layer FC2 (right). Models: (a,c) shared and (b,d) non-shared GCN submodel, and (a,b) without and (c,d) with mesh registration. For each model, we show the separation by target label in the top row, and the separation by dataset in the bottom row. Effect of submodel: The models in (a,c) with a shared submodel are unable to learn discriminative GCN features for the prediction task, while the models in (b,d) with a non-shared submodel show much better task-related separability in the GCN features. Effect of registration: The models in (a,b) without registration strongly encode information about the data source in the GCN layer. This is much reduced for the models in (c,d) with mesh registration. 5. ACKNOWLEDGMENTS Nairouz Shehata is grateful for the support by the Magdi Yacoub Heart Foundation and Al Alfi Foundation. 6. COMPLIANCE WITH ETHICAL STANDARDS This study uses secondary, fully anonymised data, which is publicly available and is exempt from ethical approval."
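Finally, as a pointer back to the registration step in Section 2.2.2: the rigid part of the Umeyama alignment reduces to an SVD of the cross-covariance of corresponding vertices. A minimal sketch follows, under the assumption of one-to-one vertex correspondence (as holds for FSL FIRST meshes of the same structure); the function name is ours.

```python
import numpy as np

def rigid_umeyama(source, target):
    """Least-squares rigid alignment of source onto target.

    source, target: (N, 3) arrays of corresponding mesh vertices.
    Returns rotation R and translation t with R @ s + t ~ target.
    """
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    src, tgt = source - mu_s, target - mu_t
    # SVD of the cross-covariance gives the optimal rotation.
    U, _, Vt = np.linalg.svd(tgt.T @ src)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = mu_t - R @ mu_s
    return R, t

# Usage: align every subject's mesh to one reference mesh.
# aligned = mesh @ R.T + t
```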
|
| 16 |
+
}
|
title_10K/test_title_short_2405.01280v1.json
ADDED
|
@@ -0,0 +1,16 @@
|
| 1 |
+
{
|
| 2 |
+
"url": "http://arxiv.org/abs/2405.01280v1",
|
| 3 |
+
"title": "Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation",
|
| 4 |
+
"abstract": "Non-autoregressive (NAR) language models are known for their low latency in\nneural machine translation (NMT). However, a performance gap exists between NAR\nand autoregressive models due to the large decoding space and difficulty in\ncapturing dependency between target words accurately. Compounding this,\npreparing appropriate training data for NAR models is a non-trivial task, often\nexacerbating exposure bias. To address these challenges, we apply reinforcement\nlearning (RL) to Levenshtein Transformer, a representative edit-based NAR\nmodel, demonstrating that RL with self-generated data can enhance the\nperformance of edit-based NAR models. We explore two RL approaches: stepwise\nreward maximization and episodic reward maximization. We discuss the respective\npros and cons of these two approaches and empirically verify them. Moreover, we\nexperimentally investigate the impact of temperature setting on performance,\nconfirming the importance of proper temperature setting for NAR models'\ntraining.",
|
| 5 |
+
"authors": "Hao Wang, Tetsuro Morimura, Ukyo Honda, Daisuke Kawahara",
|
| 6 |
+
"published": "2024-05-02",
|
| 7 |
+
"updated": "2024-05-02",
|
| 8 |
+
"primary_cat": "cs.CL",
|
| 9 |
+
"cats": [
|
| 10 |
+
"cs.CL"
|
| 11 |
+
],
|
| 12 |
+
"label": "Original Paper",
|
| 13 |
+
"paper_cat": "Model AND Based AND Reinforcement AND Learning",
|
| 14 |
+
"gt": "Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation",
|
| 15 |
+
"main_content": "Introduction Non-autoregressive (NAR) language models (Gu et al., 2018) generate translations in parallel, enabling faster inference and having the potential for real-time translation applications. However, despite their computational efficiency, NAR models have been observed to underperform autoregressive (AR) models due to the challenges posed by the large decoding space and difficulty in capturing dependency between target words accurately (Gu et al., 2018). To bridge the performance gap, many NAR architectures and training methods have been proposed, including edit-based models like Insertion Transformer (Stern et al., 2019) and Levenshtein Transformer (Gu et al., 2019). Prior research has also explored knowledge distilla*Work done during internship at CyberAgent AI Lab. tion (Ghazvininejad et al., 2019), which is effective but introduces additional complexity. Unlike AR models, preparing teacher data and designing appropriate training objectives have always been challenging for NAR models (Li et al., 2023). Teacher forcing with inappropriate teacher data may exacerbate the exposure bias problem (Ranzato et al., 2016), affecting model performance. Reinforcement learning (RL) is known for its ability to tackle the exposure bias (Ranzato et al., 2016) and alleviate the object mismatch issue (Ding and Soricut, 2017). Despite its importance, explorations of RL for NAR are still scarce. Shao et al. (2021) proposed a method for reducing the estimation variance. However, this method is only applicable to NAR models with a fixed output length, which is unsuitable for edit-based models. In this paper, we empirically analyze conditions for performance improvement in applying RL to edit-based NAR models in neural machine translation (NMT). Specifically, we focus on Levenshtein Transformer (LevT) (Gu et al., 2019), a prominent edit-based NAR architecture that has shown promise in reducing decoding latency and flexible length adjustment. We demonstrate that RL with self-generated data significantly improves LevT\u2019s performance. Importantly, our methods are orthogonal to existing research on NAR architectures, indicating potential for widespread applicability. We explore two RL approaches: stepwise reward maximization, which computes rewards after each edit operation, and episodic reward maximization, which only computes rewards after all generations are completed. We analyze these two approaches\u2019 respective advantages and disadvantages and empirically verify them. Furthermore, through a series of experiments, we investigate the impact of temperature settings on softmax sampling, aiming to identify the optimal temperature that strikes a balance between exploration and exploitation during the RL training process. arXiv:2405.01280v1 [cs.CL] 2 May 2024 \f2 Background Reinforcement Learning Reinforcement learning has been widely applied to improve the performance of AR NMT models (Ranzato et al., 2016; Bahdanau et al., 2016; Wu et al., 2016) because its ability to train models to optimize nondifferentiable score functions and tackle the exposure bias problem (Ranzato et al., 2016). In practice, REINFORCE (Williams, 1992) with a baseline is commonly used for estimating the policy gradient, which can be computed as follows: \u25bd\u03b8L(\u03b8) \u2248\u2212(r(y) \u2212b(s)) \u25bd\u03b8 log\u03c0\u03b8(y|s), (1) where r is the reward function, b is the baseline, y is a sample from policy \u03c0\u03b8 and state s. 
Softmax with Temperature In the domain of RL, we need to consider the exploration-exploitation trade-off (Sutton and Barto, 2018), where the temperature $\tau$ is an important parameter. $\tau$ is used to control the softness of the softmax distribution, $p_i = \exp(y_i/\tau) / \sum_j \exp(y_j/\tau)$. (2) A larger $\tau$ leads to a more uniform distribution, promoting exploration, while a smaller $\tau$ creates a more peaky distribution, emphasizing exploitation. Kiegeland and Kreutzer (2021) show that training with an increased temperature can mitigate the peakiness effect due to RL (Choshen et al., 2020), indicating that a suitable temperature is significant for RL training in NMT. RL for NAR Compared to AR methods, studies of reinforcement learning for NAR remain scarce. Shao et al. (2021) proposed a method to reduce the estimation variance of REINFORCE by fixing the predicted word at position t and sampling words at other positions n times. However, this method is only applicable to models with a fixed length, which is unsuitable for edit-based models. Levenshtein Transformer Levenshtein Transformer (Gu et al., 2019) is an NAR model based on three edit operations: delete tokens, insert placeholders, and replace placeholders with new tokens. It uses a supervised dual-policy learning algorithm to minimize the Levenshtein distance (Levenshtein, 1965) for training and greedy sampling for decoding. The decoding stops when two consecutive refinement iterations return the same output or a maximum number of iterations (set to 10) is reached. Figure 1: The illustration of Levenshtein Transformer\u2019s decoding process (Gu et al., 2019). In each decoding iteration, three edit operations are performed sequentially: delete tokens, insert placeholders, and replace placeholders with new tokens. LevT\u2019s dual-policy learning generates teacher data by corrupting the ground truth and reconstructing it with its adversary policy. This mechanism not only offers a unique approach to data generation but also underscores the inherent difficulty in preparing teacher data. This introduces concerns regarding exposure bias, particularly whether the training process can maintain consistency with the text seen during decoding. To address this issue, we employ RL approaches that use self-generated data for training. 3 Approaches In this section, we present our reinforcement learning approaches in detail. We train a Levenshtein Transformer model as our baseline using the dual-policy learning algorithm. Based on it, we introduce two distinct RL approaches within the REINFORCE framework: stepwise reward maximization and episodic reward maximization. Moreover, we present our methods for temperature control. Stepwise Reward Maximization General RL training methods for AR NMT models are all episodic (in this context, \u201cepisodic\u201d denotes training based on entirely generated sequences), as it is difficult to calculate BLEU (Papineni et al., 2002) when the sentence is not fully generated. In contrast, NAR models can calculate BLEU on outputs at each decoding step. From the perspective of estimating a more accurate gradient, we propose stepwise reward maximization, which calculates a reward for each edit operation using score differences from one previous edit (in practice, since it is not easy to calculate BLEU after inserting placeholders, we consider placeholder insertion and token replacement as one edit operation). Figure 2: The illustration of the two RL approaches. (A) is the stepwise reward maximization, which randomly samples from a previous node for each edit operation and calculates BLEU and the RL gradient after each edit operation (except for the insert operation, since it is not easy to calculate BLEU after inserting placeholders). (B) is the episodic reward maximization, where each sample is edited multiple times in a linear fashion, without branching into different paths, and BLEU and the RL gradient are calculated only after the completion of all edit operations. At every orange node, we sample k times from this node (in this example, the sample size k is 2). Since every step\u2019s reward is calculated separately, this approach should be easier to learn than episodic approaches (Sutton and Barto, 2018). However, it is also more prone to learning bias since the editing process is inherently multi-step. This drawback should not be overemphasized, since maximizing the reward for each step will likely maximize the episodic reward in NAR models\u2019 training. We use a leave-one-out baseline (Luo, 2020) for b(s) in Equation 1 instead of the greedy baseline proposed in SCST (Rennie et al., 2017), because greedy decoding is too strong in LevT, which makes gaining positive rewards in SCST difficult and may reduce learning efficiency. For each edit, we sample k actions from the policy at this point. Then, we calculate the baseline as follows: $b_i(s) = \frac{1}{k-1} \sum_{j \neq i} r(y_j)$, (3) where $y_j$ is the j-th sample from the current policy. The final RL gradient estimation becomes $\nabla_\theta L(\theta) \approx -(r(y_i) - b_i(s)) \nabla_\theta \log \pi_\theta(y_i|s)$. (4) In a straightforward implementation, one might consider applying sampling again to all k samples from the last edit. However, this will cause a combinatorial explosion when the number of edit operations increases. Practically, we randomly choose a sample from the previous edit to perform the subsequent operations. We show an illustration of the sampling process in (A) of Figure 2 and pseudo code of our algorithm in Appendix A. Episodic Reward Maximization We also introduce episodic reward maximization, which calculates rewards only once for each sample and gives all actions the same weight. It is a more traditional way to train NMT models with RL. It allows unbiased learning but may not be efficient. We use the leave-one-out baseline for the episodic reward as well as the stepwise reward. We sample k samples from the initial input. Each sample will be edited multiple times without branching. After the final edit, we calculate the rewards and baselines. We show an illustration of the sampling process in (B) of Figure 2 and pseudo code of our algorithm in Appendix B. Temperature Control Applying RL to NAR differs significantly from AR because there can be various types of actions rather than just predicting the next token, such as deletion and insertion. Due to this difficulty, NAR may need more fine-grained temperature control during training. To investigate the impact of exploration and exploitation in the training process, we explore five different settings of the temperature.
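For reference, temperature-controlled softmax sampling as in Equation (2) amounts to dividing the logits by τ before normalising; a small sketch (the logits here are placeholders):

```python
import torch

def sample_with_temperature(logits, tau=1.0):
    """Sample token indices from softmax(logits / tau), cf. Eq. (2).
    tau > 1 flattens the distribution (more exploration);
    tau < 1 sharpens it (more exploitation)."""
    probs = torch.softmax(logits / tau, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)

logits = torch.randn(4, 100)          # e.g. 4 positions, 100-word vocab
tokens_explore = sample_with_temperature(logits, tau=1.0)
tokens_exploit = sample_with_temperature(logits, tau=0.1)
```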
Due to the large decoding space of Levenshtein Transformer, the default temperature of 1 may result in poor rewards, and too small a temperature may result in a peaky distribution, both of which are harmful to learning. We use three constant temperature settings of 0.1, 0.5, and 1 to verify the effect of temperature magnitude. An annealing schedule is known for balancing the trade-off between model accuracy and variance during training (Jang et al., 2016). There are two ways of thinking here. First, to reduce the exposure bias, we want to get close to the decoding scenario, which is greedy decoding in our experiments. Thus, we can apply a regular annealing schedule to gradually reduce the temperature from 1 to 0.1 during training. The temperature function can be written as follows: $\tau_{i+1} = \max(\tau_i \cdot \exp(-\log(\tau_0/\tau_T)/T), \tau_T)$, (5) where $T$ is the number of total training steps, and $\tau_0$ and $\tau_T$ are the initial and the target temperatures. Second, using high temperatures in the early stages of training may lead to poor rewards and result in low learning efficiency. We can apply an inverted annealing schedule to gradually increase the temperature from 0.1 to 1, guaranteeing stable training in the early stages and gradually increasing the exploration space for efficient training. The temperature function can be written as follows: $\tau_{i+1} = \min(\tau_i / \exp(-\log(\tau_T/\tau_0)/T), \tau_T)$. (6) In each decoding iteration, multiple edit operations occur, and each operation has a different decoding space size. It may be beneficial to optimize this by using varying temperatures for each operation in every iteration. This is a complicated research question and we leave this exploration to future work. 4 Experiments 4.1 Experimental Setup Data & Evaluation We use the WMT\u201914 English-German (EN-DE) (Bojar et al., 2014) and WAT\u201917 English-Japanese (EN-JA) Small-NMT datasets (Nakazawa et al., 2017) for experiments. We use BPE token-based BLEU scores for evaluation. Data preprocessing follows Gu et al. (2019). Baseline We use Levenshtein Transformer as our baseline. Following Gu et al. (2019), we trained a LevT with 300K steps and a max batch size of 65,536 tokens per step. However, like Reid et al. (2023), we cannot reproduce the results of Gu et al. (2019). We use our results in this paper. RL According to Gu et al. (2019), most outputs are obtained within 1-4 decoding iterations, and the average number of decoding iterations is 2.43. To minimize the gap between the training and decoding states, we start with a null string and conduct 3 iterations (8 edits) for each sample during RL training. We set the total training steps T to 50,000, with a max batch size of 4,096 tokens per step. To prevent out-of-memory issues, we limit the decoding space of placeholder insertion from 256 to 64. The sample size k of the baseline is set to 5. Our implementation is based on Fairseq (https://github.com/facebookresearch/fairseq). Computational Cost The pre-training phase of LevT on a GCP VM instance with A100x4 GPUs requires roughly 3 days, while the subsequent RL fine-tuning process takes approximately 1 day to complete. 4.2 Results We show the BLEU scores of our approaches in Table 1. The episodic reward model (i.e., the model trained with the episodic reward maximization approach) showed notable improvement over the baseline. The score is even close to the distillation model, which requires heavy pre-training of AR models (producing a distillation model requires first training an autoregressive Transformer, which needs an additional 3 days of training on our machine). However, the stepwise reward model showed only limited improvement.
To explain this, we focus on the advantage, $r(y) - b(s)$, included in the policy gradient (Equation 1), as a larger value of the advantage can increase the policy gradient\u2019s magnitude. A higher standard deviation (SD) of the advantages indicates larger fluctuations in policy gradients. Table 2 shows the SDs of the advantages of the stepwise reward model, with notably higher values in the early stages of edit operations compared to later stages. This suggests that the stepwise reward model disproportionately focuses on early operations, potentially leading to uneven learning and reduced performance. In contrast, the episodic reward model applies the same rewards and advantages across all operations, facilitating more uniform learning and improved performance. Model EN-DE EN-JA LevT 24.03 31.76 LevT + distillation 26.49 LevT + RL (stepwise) 24.29 31.73 LevT + RL (episodic) 25.72 32.75 Table 1: The BLEU scores of our approaches and the baseline. Temperatures are set to 1. Due to the limited computational resources, we only trained the distillation model for the EN-DE dataset, using the ready-made distillation dataset. Iteration Edit Operation EN-DE EN-JA 1 Insert + Replace 9.99 8.59 2 Delete 2.05 1.35 Insert + Replace 3.28 2.48 3 Delete 1.67 1.29 Insert + Replace 3.04 1.60 Table 2: Stepwise reward model\u2019s standard deviation (SD) of the advantage in each edit operation. Insertion and replacement share the same reward. We only report scores of applying RL to the model without distillation, since we found that RL significantly improved the model without distillation (max 1.69 points) compared to when distillation was applied (max 0.5 points). Moreover, when confronted with distillation models, it raises questions such as which data we should use for RL training, the original or the distilled one. We leave these research questions to future work. We show the BLEU scores of different temperature settings in Table 3. Model performance varies significantly with temperature settings (max 1.01 points in EN-JA). Among the constant-setting models, the model with a temperature of 0.5 performed best in EN-DE, and the model with a temperature of 0.1 performed best in EN-JA, indicating that too large a temperature harms RL training. The two models using annealing schedules performed well in both tasks, showing the effectiveness of the annealing algorithms for improving learning efficiency. However, the annealing models did not always outperform the constant models, which suggests the difficulty of seeking the optimal temperature setting for NAR models\u2019 RL training. Also, we found that the inverted annealing model (\u03c4=0.1\u21921) begins dropping in performance after 10,000 training steps in EN-JA, indicating that the speed of annealing significantly affects the model training quality. Temperature EN-DE EN-JA Constant (\u03c4 = 1) 25.72 32.75 Constant (\u03c4 = 0.5) 25.98 33.45 Constant (\u03c4 = 0.1) 25.76 33.60 Annealing (\u03c4 = 1 \u21920.1) 25.83 33.76 Annealing (\u03c4 = 0.1 \u21921) 25.90 33.43 Table 3: The BLEU scores of episodic reward models using different temperature settings.
We also quickly surveyed the relationship between performance and the number of decoding iterations in RL. The model performance dropped when we reduced the number of iterations to 2 during training and remained flat when we increased it to 4, indicating that our setting is reasonable. 5 Conclusion and Future Work This paper explored the application of reinforcement learning to edit-based non-autoregressive neural machine translation. By incorporating RL into the training process, we achieved a significant performance improvement. By empirically comparing stepwise and episodic reward maximization, we analyzed the advantages and disadvantages of these RL approaches. We plan to have a deeper exploration of stepwise reward maximization and find a way to alleviate training inequality for multiple edit operations in the future. Our investigation of temperature settings in NAR softmax sampling provided insights into striking a balance between exploration and exploitation during training. Although our annealing methods perform well, they are not optimal and still depend on manually adjusting the parameters such as total training steps. In the future, we plan to develop a self-adaption temperature control method using various indicators like entropy and advantage SD. The experiments in this paper focused on the basics, and we plan to do more study for practical applications in future work. As our methods are orthogonal to existing research on NAR architectures, our next step involves exploring the methods\u2019 applicability across a broader spectrum, including state-of-the-art models. Additionally, we plan to investigate how to effectively apply RL to the distillation model, the impact of different baseline designs on performance, and the impact of RL on output diversity. Applying RL to NAR is a massive and complex research question. We look forward to more researchers joining this topic."
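As a supplementary note to this record: both annealing schedules in Equations (5) and (6) have the closed form τ_i = τ0 · exp(−log(τ0/τT) · i / T), clamped at the target temperature. A small sketch (the halfway values below are our own worked check, not results from the paper):

```python
import math

def annealed_temperature(step, total_steps, tau0=1.0, tauT=0.1):
    """Temperature after `step` updates under Eqs. (5)/(6).
    tau0 > tauT gives the decreasing schedule of Eq. (5);
    tau0 < tauT gives the inverted, increasing schedule of Eq. (6)."""
    tau = tau0 * math.exp(-math.log(tau0 / tauT) * step / total_steps)
    # Clamp at the target temperature in either direction.
    return max(tau, tauT) if tau0 > tauT else min(tau, tauT)

# Halfway through 50,000 steps both schedules pass through ~0.316:
print(annealed_temperature(25_000, 50_000))            # 1 -> 0.1
print(annealed_temperature(25_000, 50_000, 0.1, 1.0))  # 0.1 -> 1
```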
|
| 16 |
+
}
|
title_10K/test_title_short_2405.01345v1.json
ADDED
|
@@ -0,0 +1,16 @@
|
| 1 |
+
{
|
| 2 |
+
"url": "http://arxiv.org/abs/2405.01345v1",
|
| 3 |
+
"title": "The Power of Question Translation Training in Multilingual Reasoning: Broadened Scope and Deepened Insights",
|
| 4 |
+
"abstract": "Bridging the significant gap between large language model's English and\nnon-English performance presents a great challenge. While some previous studies\nattempt to mitigate this gap with translated training data, the recently\nproposed question alignment approach leverages the model's English expertise to\nimprove multilingual performance with minimum usage of expensive, error-prone\ntranslation. In this paper, we explore how broadly this method can be applied\nby examining its effects in reasoning with executable code and reasoning with\ncommon sense. We also explore how to apply this approach efficiently to\nextremely large language models using proxy-tuning. Experiment results on\nmultilingual reasoning benchmarks mGSM, mSVAMP and xCSQA demonstrate that the\nquestion alignment approach can be used to boost multilingual performance\nacross diverse reasoning scenarios, model families, and sizes. For instance,\nwhen applied to the LLaMA2 models, our method brings an average accuracy\nimprovements of 12.2% on mGSM even with the 70B model. To understand the\nmechanism of its success, we analyze representation space, chain-of-thought and\ntranslation data scales, which reveals how question translation training\nstrengthens language alignment within LLMs and shapes their working patterns.",
|
| 5 |
+
"authors": "Wenhao Zhu, Shujian Huang, Fei Yuan, Cheng Chen, Jiajun Chen, Alexandra Birch",
|
| 6 |
+
"published": "2024-05-02",
|
| 7 |
+
"updated": "2024-05-02",
|
| 8 |
+
"primary_cat": "cs.CL",
|
| 9 |
+
"cats": [
|
| 10 |
+
"cs.CL"
|
| 11 |
+
],
|
| 12 |
+
"label": "Original Paper",
|
| 13 |
+
"paper_cat": "LLM AND Reasoning",
|
| 14 |
+
"gt": "The Power of Question Translation Training in Multilingual Reasoning: Broadened Scope and Deepened Insights",
|
| 15 |
+
"main_content": "Introduction Although large language models (LLMs) have shown the ability to solve complex problems (Chowdhery et al., 2022; Touvron et al., 2023), LLMs still struggle in multilingual contexts (Shi et al., 2022; Huang et al., 2023; Qin et al., 2023). This is unsurprising, given that their 1The project will be available at: https://github. com/NJUNLP/QAlign. LLaMA2-7B LLaMA2-13B LLaMA2-70B LLaMA3-8B 20 30 40 50 60 70 Accuracy (%) 38.4 43.9 50.8 50.0 49.6 57.1 63.0 61.8 mGSM Monoreason QAlign Monoreason Figure 1: An illustration of the benefits of performing question alignment (QAlign) to enhance the LLaMA model\u2019s performance across ten languages. Experiment results on more reasoning scenarios, model families and sizes will be reported in the experiment section. training data is predominantly made of English text and instructions (Blevins and Zettlemoyer, 2022; Wang et al., 2023). However, an important challenge remains: how to improve LLM performance on reasoning tasks in languages other than English with scarce multilingual resources. Previous studies attempt to incorporate more multilingual data for instruction-tuning (Chen et al., 2023) or continued pre-training (Nguyen et al., 2023). However, training for reasoning across potentially dozens or hundreds of languages is both costly and inefficient, and sometimes the necessary resources are either unavailable or of very poor quality (Zhu et al., 2024). A compelling solution is to leverage the LLM\u2019s English proficiency in non-English contexts (Shi et al., 2022). In our previous work (Zhu et al., 2024), we focus on multilingual mathematical reasoning and demonstrate that the devised twostep training framework can effectively enable the sharing of English reasoning capabilities across different languages. Specifically, it employs quesarXiv:2405.01345v1 [cs.CL] 2 May 2024 \ftion translation training to strengthen language alignment, and then utilizes cutting-edge English instruction data to unlock LLM\u2019s reasoning capabilities for multilingual contexts. Despite the progress made in our previous work, there remains three limitations: (1) single reasoning scenario: the scope of consideration was limited to math reasoning with English chain-ofthought, with other reasoning scenarios not yet being taken into account. (2) opaque training effects: the impact of the two-stage training on the LLM\u2019s working patterns and each training stage\u2019s contribution remain unclear. (3) limited model size: the effectiveness and efficiency of this approach has yet to be confirmed on LLMs significantly larger than 13 billions parameters. In this paper, we comprehensively address these limitations. First, we examine how broadly applicable the training framework is across three challenging and varied reasoning tasks. These involve distinict types of problem-solving languages and reasoning objectives: math reasoning with chainof-thought in multiple languages, math reasoning with exectuable Python code (Toshniwal et al., 2024), and common sense reasoning (Lin et al., 2021). By utilizing specialized data, we leverage the LLM\u2019s targeted English expertise to enhance its performance in other languages. We also explore the case of non-English generation. We discover that incorporating En-X general domain translation data during the second training stage can implicitly bias the LLM to generate nonEnglish chain-of-thought. 
To gain clearer insights, we analyze how our fine-tuned model utilizes its English reasoning capability in non-English contexts. Our analysis begins with the representation space, which fundamentally determines how the model derives the output answer. We discover that question translation training significantly affects the distribution of multilingual representations in LLM\u2019s middle to top layers, aligning them within the same semantic space as English. Additionally, we examine the LLM\u2019s step-by-step answers and find that this unified representation space makes the model to employ more similar problem-solving process to address multilingual questions. Besides, we investigate how well question alignment aproach scales and whether it offers benefits to the most powerful open-source LLMs. We explore to apply question alignment to extremely large models, e.g., LLaMA2-70B (Touvron et al., 2023), LLaMA3-70B (Meta, 2024), Mixtral-8x22B (Jiang et al., 2024). While fully fine-tuning LLM\u2019s parameters is straightforward, the computation cost becomes prohibitive as the number of parameters increases significantly. In constrast, we illustrate that proxy-tuning (Liu et al., 2024) offers as an efficient alternative for both dense models and Mixture-of-Experts (MoE) models. We also discover that by carefully selecting the proxy model, we can achieve 98% performance of fully fine-tuning without updating any parameters. Experiments on several multilingual reasoning benchmarks MGSM, MSVAMP and XCSQA demonstrate that the question alignment approach is a broad technique applicable across a wide variety of difficult reasoning tasks and its modulaized training pipeline enables us to efficiently unlocks LLM\u2019s multilingual capabilities in targeted skills. By applying our approach to extremely large language models, we can further strengthen their multilingual reasoning abilities. For instance, our fine-tuned LLaMA2-70B achieves 63.0% average accuracy on MGSM (Figure 1), pushing the multilingual reasoning performance of open-source models to new a boundary. 2 Related Work LLM\u2019s language bias With a large number of parameters pre-trained on large-scale corpora and instruction-tuned on powerful instruction data, LLMs have demonstrated a remarkable level of intelligence (Touvron et al., 2023; Jiang et al., 2023). However, extensive empirical analyses have shown that LLMs still face challenges in multilingual contexts, especially with low-resource languages (Shi et al., 2022; Huang et al., 2023; Zhu et al., 2023). This is not surprising, given that English predominates both in the pretraining data (Blevins and Zettlemoyer, 2022) and instruction data (Wang et al., 2023). In this paper, we focus on the core capability of LLM, i.e., the reasoning capability, and aim to push forward the boundaries of LLMs in multilingual reasoning. Advancing multilingual reasoning Previously, there have been some attempts to improve LLM\u2019s performance on multilingual reasoning. Huang et al. (2023) and Qin et al. 
(2023) prompt ChatGPT (OpenAI, 2022) to translate non-English queries into English and then generate responses based on these translations. However, it has been found that this prompting method is not always effective for open-source LLMs (Zhu et al., 2024). To improve the multilingual capabilities of open-source LLMs, Chen et al. (2023) and She et al. (2024) follow the translate-training paradigm and use machine-translated multilingual data for instruction-tuning. But translating chain-of-thought responses is error-prone (Zhu et al., 2024), and continuously translating an increasing volume of English instruction data is prohibitive. To make the best use of continuously evolving English instruction data and avoid the drawbacks of the translate-training paradigm, Zhu et al. (2024) propose the question alignment approach to guide LLMs to implicitly relate non-English questions to their English counterparts and solve the reasoning problems with their English expertise. Aligned with that philosophy, Yoon et al. (2024) also recognize the importance of building language alignment and explore the use of an additional multilingual encoder, the encoder of MT5 (Xue et al., 2021), to map multilingual queries into the LLM\u2019s English semantic space for unified processing. In this paper, we build upon our previous efforts and delve deeper to broaden the scope of our work and provide deepened insights. 3 Methodology In this section, we will recap the two-step training framework (Zhu et al., 2024) (Section 3.1) and introduce how we extend this framework to diverse reasoning scenarios (Section 3.2). Finally, we present efficient recipes to scale this framework to extremely large language models (Section 3.3). 3.1 Recap: Two-stage Training Framework Our training framework consists of two stages: question alignment and response alignment (illustrated in Figure 2). During question alignment, we utilize X-En question translation data $(X_e, X_l)$ to train the model to associate non-English questions $X_l$ with their English counterparts $X_e$ (subfigure I in Figure 2). This process enhances language alignment within the large language model. The optimization objective can be written as: $\arg\min_\theta \sum_{l \in L} -\log p_\theta(X_e|X_l)$, where $l$ is the target language, $L$ is the set of considered non-English languages, and $\theta$ denotes the model\u2019s parameters. During response alignment, we utilize English cutting-edge instruction data $D_e$ to unlock the LLM\u2019s English reasoning capability (subfigure II in Figure 2). The optimization objective is: $\arg\min_\theta \sum_{\{X_e,Y_e\} \in D_e} -\log p_\theta(Y_e|X_e)$, where $X_e$ and $Y_e$ denote the English question and its corresponding response, respectively. Thanks to the previously established language alignment, the LLM can effectively leverage its English expertise in non-English question contexts. 3.2 Extending to Diverse Reasoning Scenarios with Specialized Data In this section, we present how we incorporate specialized data to adapt the two-step training framework to diverse reasoning scenarios. Math Reasoning with chain-of-thought in multiple languages The first reasoning scenario is solving the mathematical reasoning task through chain-of-thought. To unlock the model\u2019s capability on this, we utilize the instruction data depicted in Figure 3. In each question-response pair, the math question provides a background and requires a numerical answer, while the response includes both the numerical answer and an annotated step-by-step natural language reasoning process used to derive the answer. The intermediate chain-of-thought not only enhances reasoning accuracy but also improves the explainability and verifiability of the LLM\u2019s reasoning behaviour (Wei et al., 2022). Figure 3: Illustration of our employed instruction data, used to teach the model to solve the mathematical reasoning task with chain-of-thought; the figure depicts tuning the stage-I model with cutting-edge English-only instruction data, $\arg\min_\phi \sum_{\{X,Y\} \in D} -\log p_\phi(Y|X)$. Example question X: \u201cRandy has 60 mango trees on his farm. He also has 5 less than half as many coconut trees as mango trees. How many trees does Randy have in all on his farm?\u201d Example response Y: \u201cHalf of the number of Randy's mango trees is 60/2 = 30 trees. So Randy has 30 - 5 = 25 coconut trees. Therefore, Randy has 60 + 25 = 85 trees on his farm.\u201d
<llm-code> mango_trees = 60 half_mango_trees = mango_trees / 2 coconut_trees = half_mango_trees 5 total_trees = mango_trees + coconut_trees total_trees </llm-code> <llm-code-output> 85.0 </llm-code-output> tuning stage I model Question X Response Y Question Response Question Figure 4: Illustration of our employed instruction data. We use this instruction data to teach model to solve mathematical reasoning task with executable code. Math reasoning with executable code The second reasoning scenario involves solving mathematical reasoning task with executable code. In this scenario, the response no longer contains a chain-of-though in natrual language; instead, it consists of Python code. One major advantage of reasoning through executable code is that it helps avoid basic computational errors (Toshniwal et al., 2024). To unlock the model\u2019s capacity on this, we utilize the instruction data depicted in Figure 4. Specifically, the model needs to generate python code enclosed within \u201c<llm-code>\u201d and \u201c</llm-code>\u201d tags. A Python interpreter will be used to execute this code block. The derived numerical answer, corresponding to the value of the variable in the last line of the code block, is enclosed within \u201c<llm-code-output>\u201d and \u201c</llm\fcode-output>\u201d tags. Common sense reasoning The third reasoning scenario is common sense reasoning. Different from math reasoning, this scenario requires the model to solve the given question based on its common sense knowledge. To unlock the model\u2019s capability on this, we utilize the instruction data depicted in Figure 5. In this task, the model needs to select the appropriate answer from the given options through common sense reasoning. When does an apple tree have apples that are not yet ready to eat? Option: (A) summer (B) fall (C) spring (D) valley (E) washington state The answer is (A). Question Response Figure 5: Illustration of our employed instruction data. We use this instruction data to teach model to solve common sense reasoning task. 3.3 Scaling to Extremely Large LM Extremely large language models, e.g., LLaMA270B, Mixtral-8\u00d722B, often demonstrate state-ofthe-art performance among open-source LLMs. Therefore, it is non-trivial to examine how well the training framework scales to these models and whether it offers further benefits on the multilingual performance of these powerful LLMs. Vanilla fine-tuning While fully fine-tuning extremely large models is a straightforward solution, the process can be prohibitively time-consuming and computationally expensive, due to the vast number of parameters that need to be optimized. Efficient proxy-tuning Inspired by Liu et al. (2024), we explore proxy-tuning as an efficient alternative solution. The objective of proxy-tuning is to guide a large pre-trained model M to behave like a tuned model without updating any parameters. This is achieved by employing a small pretrained model M\u2212and a small instruction-tuned model M+, which serve as the expert model and the anti-expert model. The small models must have the same vocabulary as the large model to support arithmetic operations on prediction distributions. The underlying assumption of proxytuning is that the difference in logits between M\u2212 and M+ can approximate the difference between M and a truly-tuned model. Specifically, at each inference step t, we condition the base model M, the expert M+ and the anti-expert M\u2212on the question X and the generated prefix Y<t. 
4 Experiment setting

Base LLMs. We consider a range of the most powerful open-source LLMs for our experiments. In most experiments, we use LLaMA2-7B and LLaMA2-13B as the base models. In experiments involving reasoning with Python code, we use CodeLLaMA-7B (Roziere et al., 2023) as the base model. In experiments involving extremely large language models, we consider LLaMA2-70B, LLaMA3-70B, Mixtral-8x7B, and Mixtral-8x22B from diverse model families.

Training dataset. In multilingual mathematical reasoning, we use multilingual questions from GSM8KINSTRUCT (Chen et al., 2023) for question alignment. For response alignment, we use METAMATHQA (Yuan et al., 2023) to teach the LLM to reason with chain-of-thought. To teach the LLM to reason with Python code, we use OPENMATHINSTRUCT (Toshniwal et al., 2024). In multilingual common sense reasoning, we use multilingual questions from XCSQA-TEST for question alignment and use the English supervised data in XCSQA-TRAIN (Lin et al., 2021) for response alignment. Statistics of the involved datasets are reported in Table 1.

| Dataset | # Lang | # Sample | Usage | Question | Response | Scenario |
|---|---|---|---|---|---|---|
| METAMATHQA | 1 | 395,000 | Training | ✓ | ✓ | 1 |
| OPENMATHINSTRUCT | 1 | 1,343,849 | Training | ✓ | ✓ | 2 |
| GSM8KINSTRUCT | 10 | 73,559 | Training | ✓ | ✗ | 1, 2 |
| MGSM | 10 | 2,500 | Evaluation | ✓ | ✓ | 1, 2 |
| MSVAMP | 10 | 10,000 | Evaluation | ✓ | ✓ | 1, 2 |
| XCSQA-TRAIN | 1 | 8,888 | Training | ✓ | ✓ | 3 |
| XCSQA-TEST | 15 | 17,184 | Training | ✓ | ✗ | 3 |
| XCSQA-DEV | 15 | 16,000 | Evaluation | ✓ | ✓ | 3 |

Table 1: Statistics of the involved datasets. "# Lang" denotes the number of languages covered by the dataset and "# Sample" refers to the total number of samples it contains. The "Question" and "Response" columns denote whether each sample includes a question annotated with a response. Scenarios 1, 2, and 3 correspond to the three scenarios discussed in Section 3.2: math reasoning with chain-of-thought, math reasoning with executable code, and common sense reasoning.

Training details. We use QAlign (https://github.com/NJUNLP/QAlign) as our code base. We use consistent training hyper-parameters across the two stages of training. At each stage, we fine-tune the LLM's full parameters for 3 epochs on eight NVIDIA A100 GPUs. The learning rate is set to 2e-5, with a batch size of 128.

Baseline models. The primary baseline for our approach is fine-tuning with cutting-edge English instruction data (MonoReason). Comparing this baseline with our approach (QAlign→MonoReason) directly illustrates the benefits of performing question alignment and the effectiveness of leveraging English expertise. In our experiments on mathematical reasoning, we also present results from fine-tuning with various types of English instruction data to demonstrate the superiority of using the most advanced instruction data within our framework.
Evaluation dataset. We use MGSM (Shi et al., 2022) and MSVAMP (Chen et al., 2023) to evaluate LLM performance on multilingual mathematical reasoning. We use XCSQA-DEV to evaluate LLM performance on multilingual common sense reasoning. We use exact-match accuracy as the evaluation metric. Dataset statistics are reported in Table 1.

5 Experiment results

In this section, we report our experiment results and introduce our main findings.

The two-step training framework is broadly applicable across diverse reasoning scenarios. On the mathematical reasoning benchmarks MGSM and MSVAMP, the question alignment approach effectively mitigates the performance gap between English and non-English by leveraging the LLM's English expertise. The improvement in average multilingual performance is evident both when the model reasons with chain-of-thought (Table 2) and when it reasons with executable Python code (Figure 6). We can also see that this approach is effective across different model sizes. Our fine-tuned 70B models reach a new performance ceiling on multilingual reasoning benchmarks, achieving an accuracy of 63.0% on MGSM and 66.4% on MSVAMP. On the common sense reasoning benchmark XCSQA, the improvements are also significant. The added question alignment stage consistently enhances the LLM's reasoning accuracy across all 15 languages, demonstrating its broad applicability.

MGSM (reasoning with chain-of-thought):

| System (7B) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SFT† (Touvron et al., 2023) | 3.2 | 4.8 | 5.2 | 15.2 | 22.4 | 37.2 | 34.4 | 28.0 | 32.4 | 43.2 | 22.6 |
| RFT† (Yuan et al., 2023) | 2.4 | 2.0 | 2.8 | 6.8 | 16.8 | 33.6 | 34.0 | 29.2 | 34.0 | 44.8 | 20.6 |
| MAmmoTH† (Yue et al., 2023) | 3.6 | 4.8 | 2.4 | 10.8 | 17.2 | 33.2 | 32.8 | 26.0 | 32.4 | 49.6 | 21.3 |
| WizardMath† (Luo et al., 2023) | 2.0 | 4.0 | 3.4 | 24.0 | 22.4 | 30.4 | 30.4 | 30.8 | 34.8 | 47.6 | 23.0 |
| MetaMath† (Yu et al., 2023) | 6.4 | 4.0 | 3.2 | 39.2 | 38.8 | 56.8 | 52.8 | 47.2 | 58.0 | 63.2 | 37.0 |
| MonoReason | 7.6 | 5.6 | 5.2 | 34.0 | 45.2 | 54.0 | 56.8 | 51.6 | 58.8 | 65.5 | 38.4 |
| QAlign→MonoReason (Ours) | 32.4 | 39.6 | 40.4 | 44.0 | 48.4 | 54.8 | 56.8 | 52.4 | 59.6 | 68.0 | 49.6 |

| System (13B) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SFT† | 6.0 | 6.8 | 7.6 | 25.2 | 32.8 | 42.8 | 40.8 | 39.2 | 45.2 | 50.4 | 29.7 |
| RFT† | 3.2 | 4.4 | 3.6 | 26.4 | 33.6 | 38.4 | 44.8 | 41.6 | 46.8 | 52.0 | 29.5 |
| MAmmoTH† | 3.6 | 5.2 | 1.6 | 19.2 | 31.2 | 45.6 | 39.6 | 36.8 | 50.0 | 56.4 | 28.9 |
| WizardMath† | 6.4 | 5.6 | 5.6 | 22.0 | 28.0 | 40.4 | 42.0 | 34.4 | 45.6 | 52.8 | 28.3 |
| MetaMath† | 11.6 | 6.4 | 7.6 | 42.8 | 49.2 | 64.8 | 65.2 | 63.6 | 65.2 | 67.2 | 44.4 |
| MonoReason | 12.4 | 11.2 | 6.4 | 42.0 | 46.0 | 64.0 | 62.4 | 61.6 | 64.8 | 68.4 | 43.9 |
| QAlign→MonoReason (Ours, FT) | 38.4 | 49.6 | 46.0 | 52.4 | 59.2 | 62.0 | 62.4 | 64.4 | 67.2 | 69.2 | 57.1 |

| System (70B) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MonoReason | 13.2 | 6.4 | 7.2 | 56.4 | 56.8 | 69.6 | 71.2 | 73.2 | 75.6 | 78.4 | 50.8 |
| QAlign→MonoReason (Ours) | 44.0 | 55.6 | 50.4 | 61.6 | 61.2 | 67.6 | 70.8 | 68.8 | 73.6 | 76.0 | 63.0 |

MSVAMP (reasoning with chain-of-thought):

| System (7B) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SFT† | 11.5 | 18.2 | 17.2 | 31.6 | 35.2 | 39.0 | 39.1 | 39.1 | 39.2 | 38.8 | 30.9 |
| RFT† | 7.7 | 16.9 | 14.9 | 33.9 | 34.9 | 40.8 | 41.5 | 39.5 | 42.5 | 42.7 | 31.3 |
| MAmmoTH† | 4.3 | 6.3 | 4.2 | 26.7 | 26.8 | 39.6 | 39.9 | 33.7 | 42.9 | 45.1 | 26.3 |
| WizardMath† | 16.1 | 17.0 | 10.3 | 37.9 | 36.3 | 39.2 | 37.7 | 37.4 | 44.8 | 48.5 | 32.5 |
| MetaMath† | 14.2 | 17.8 | 16.5 | 53.2 | 53.1 | 61.4 | 60.7 | 58.9 | 61.2 | 65.5 | 46.3 |
| MonoReason | 15.0 | 17.1 | 15.4 | 51.9 | 54.4 | 60.9 | 62.2 | 59.3 | 63.3 | 65.5 | 46.2 |
| QAlign→MonoReason (Ours) | 41.7 | 47.7 | 54.8 | 58.0 | 55.7 | 62.8 | 63.2 | 61.1 | 63.3 | 65.3 | 57.2 |

| System (13B) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SFT† | 13.9 | 23.4 | 19.8 | 41.8 | 43.3 | 46.2 | 47.8 | 47.8 | 46.1 | 50.9 | 38.1 |
| RFT† | 12.2 | 24.8 | 19.4 | 42.4 | 42.3 | 45.1 | 45.2 | 46.5 | 45.6 | 47.1 | 37.1 |
| MAmmoTH† | 5.0 | 13.7 | 12.9 | 42.2 | 47.7 | 52.3 | 53.8 | 50.7 | 53.9 | 53.4 | 38.6 |
| WizardMath† | 13.7 | 16.3 | 12.5 | 29.5 | 37.0 | 48.7 | 49.4 | 43.8 | 49.4 | 56.3 | 35.7 |
| MetaMath† | 14.6 | 15.7 | 17.4 | 57.0 | 56.6 | 67.3 | 64.7 | 63.7 | 65.9 | 67.7 | 49.1 |
| MonoReason | 20.6 | 20.5 | 19.1 | 57.0 | 58.8 | 68.4 | 68.1 | 67.5 | 68.9 | 68.9 | 51.8 |
| QAlign→MonoReason (Ours) | 49.2 | 55.5 | 55.2 | 64.3 | 63.8 | 69.5 | 68.1 | 66.4 | 66.4 | 67.6 | 62.6 |

| System (70B) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MonoReason | 21.9 | 16.1 | 22.3 | 66.9 | 68.2 | 73.9 | 71.5 | 72.0 | 74.7 | 72.8 | 56.0 |
| QAlign→MonoReason (Ours) | 50.8 | 56.5 | 60.8 | 68.3 | 67.7 | 72.0 | 71.4 | 69.1 | 72.4 | 74.5 | 66.4 |

Table 2: Results on the MGSM and MSVAMP datasets. "Avg." represents the average multilingual performance. The dagger symbol denotes that the results for these models are taken from the published results of Zhu et al. (2024).

Incorporating En-X translation data during response alignment can implicitly encourage the LLM to generate non-English CoT. As shown in Table 3, the question-aligned model (QAlign→MonoReason) usually exhibits extremely low question-response language consistency on non-English tasks (we perform language identification with the OpenLID toolkit; Burchell et al., 2023), because it tends to respond in English to multilingual questions. After incorporating En-X Flores translation data (Goyal et al., 2022) together with English instruction data in the second training stage, the fine-tuned model more frequently responds in the same language as the question, demonstrating a significant increase in language consistency (52.3% vs. 9.7%). We use both the development set and the test set of the Flores dataset as the multilingual translation data for training: 36,000 translation pairs in total, covering nine X-En translation directions.
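The consistency metric can be made concrete in a few lines. The sketch below is not the paper's evaluation script: it substitutes the lightweight langid package for the OpenLID toolkit the authors actually use, keeping only the metric's logic.

```python
import langid  # lightweight stand-in; the paper uses the OpenLID toolkit

# Question-response language consistency: the share of responses whose
# detected language matches the language of the question.

def language_consistency(responses, question_langs):
    hits = sum(
        langid.classify(resp)[0] == lang   # classify() -> (lang, score)
        for resp, lang in zip(responses, question_langs)
    )
    return 100.0 * hits / len(responses)   # percentage, as in Table 3

# Example: two responses to German questions, one answered in English.
print(language_consistency(
    ["Die Antwort ist 42.", "The answer is 42."], ["de", "de"]))  # 50.0
```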
An intuitive explanation of this phenomenon is that En-X translation training implicitly biases the LLM to convert its English thoughts into non-English output. However, we also notice that while this approach increases question-response language consistency, it compromises reasoning accuracy. Future work will need to address this trade-off and balance language consistency and reasoning accuracy without significant sacrifices in either area.

The question alignment approach scales well to extremely large language models with both vanilla fine-tuning and efficient proxy-tuning. Table 2 shows that the question alignment approach scales well to the 70B model. However, when the number of parameters to be optimized increases significantly, the computational cost and time required for fine-tuning become prohibitively high (for instance, in our experiments, instruction-tuning LLaMA2-70B on the METAMATHQA dataset, comprising 395K question-response pairs, takes 15 days on eight A100 GPUs). Consequently, we explore proxy-tuning as an efficient alternative to full fine-tuning. Experiment results are reported in Table 4. First, we apply question alignment to various small base models and observe that this approach consistently yields substantial improvements, with at least an 11.1% increase in average non-English performance. Then, by using the small fine-tuned model as the expert model and its untuned version as the anti-expert model, we can guide the large pre-trained model to behave like a tuned model without updating any parameters. Across different model families, proxy-tuning consistently enhances performance on both non-English and English tasks, effectively extrapolating our results from small models to extremely large models. Furthermore, experiments on LLaMA2 demonstrate that carefully selecting small models for proxy-tuning, such as using LLaMA2-13B as the small proxy model, allows us to achieve 98% of the performance of full fine-tuning (61.8 vs. 63.0).
[Figure 6: Experiment results on MGSM, MSVAMP, and XCSQA, comparing MonoReason (7B) and QAlign→MonoReason (7B) per language. On the former two datasets, the fine-tuned model solves the mathematical problems with Python code; on the latter, it solves multiple-choice common sense reasoning problems.]

Question-Response Language Consistency (MGSM):

| System (7B) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| QAlign→MonoReason | 0.0 | 0.0 | 2.9 | 0.0 | 0.0 | 0.5 | 0.1 | 0.1 | 0.0 | 93.8 | 9.7 |
| QAlign→MonoReason+Flores | 26.8 | 42.7 | 49.3 | 63.1 | 26.8 | 63.2 | 36.9 | 82.4 | 37.9 | 93.4 | 52.3 |

Reasoning Accuracy (MGSM):

| System (7B) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| QAlign→MonoReason | 32.4 | 39.6 | 40.4 | 44.0 | 48.4 | 54.8 | 56.8 | 52.4 | 59.6 | 68.0 | 49.6 |
| QAlign→MonoReason+Flores | 21.2 | 21.6 | 24.8 | 28.0 | 41.6 | 50.8 | 55.2 | 45.6 | 54.4 | 66.4 | 41.0 |

Table 3: Effects of incorporating En-X Flores translation data in the second training stage. En-X translation training implicitly biases the LLM to generate non-English chain-of-thought and increases question-response language consistency.

6 Mechanism Analysis

In this section, we conduct further analysis to gain a deeper understanding of the training regime and its impact on the LLM's working pattern. The analysis is performed in the scenario of mathematical reasoning with chain-of-thought on the MGSM dataset.

Question alignment produces a more unified semantic space, facilitating the utilization of English expertise in non-English contexts. Our analysis begins with the representation space, which fundamentally determines how the model derives the output answer. We input multilingual questions from the MGSM dataset into the MonoReason and QAlign→MonoReason models, and then visualize the representations of the last input tokens (for visualization, we use t-SNE for dimension reduction; Van der Maaten and Hinton, 2008), which decide the content of the first output tokens and serve as an important clue for how the model starts its reasoning process (Wendler et al., 2024).
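The layer-wise analysis can be reproduced in outline as follows. This sketch assumes a Hugging Face-style checkpoint and a handful of multilingual questions; the checkpoint name, layer index, and t-SNE settings are illustrative assumptions.

```python
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.manifold import TSNE

# Collect the hidden state of the last input token at a chosen layer for
# questions in several languages, then project to 2-D with t-SNE.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", output_hidden_states=True)
model.eval()

questions = {"en": "How many trees are on the farm?",
             "de": "Wie viele Baeume stehen auf dem Hof?",
             "es": "Cuantos arboles hay en la granja?"}

feats, labels = [], []
with torch.no_grad():
    for lang, q in questions.items():
        out = model(**tok(q, return_tensors="pt"))
        # hidden_states: tuple over layers of [batch, seq, dim]
        h = out.hidden_states[16][0, -1]   # e.g., a middle layer
        feats.append(h.float().numpy())
        labels.append(lang)

xy = TSNE(n_components=2, perplexity=2).fit_transform(np.stack(feats))
# scatter-plot `xy` colored by `labels` to reproduce one panel of Figure 7
```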
| Small tuned | Small untuned | Large untuned | Non-En | En | Avg. |
|---|---|---|---|---|---|
| MonoReason (7B) | - | - | 35.4 | 65.5 | 38.4 |
| QAlign→MonoReason (7B) | - | - | 47.6 | 68.0 | 49.6 |
| QAlign→MonoReason (7B) | LLaMA2 (7B) | LLaMA2 (70B) | 55.8 | 70.8 | 57.3 |
| MonoReason (13B) | - | - | 41.2 | 68.4 | 43.9 |
| QAlign→MonoReason (13B) | - | - | 55.7 | 69.2 | 57.1 |
| QAlign→MonoReason (13B) | LLaMA2 (13B) | LLaMA2 (70B) | 60.1 | 76.8 | 61.8 |
| QAlign→MonoReason (70B) | - | - | 61.5 | 76.0 | 63.0 |
| MonoReason (8B) | - | - | 47.3 | 74.4 | 50.0 |
| QAlign→MonoReason (8B) | - | - | 58.4 | 72.0 | 59.8 |
| QAlign→MonoReason (8B) | LLaMA3 (8B) | LLaMA3 (70B) | 64.0 | 77.2 | 65.4 |
| MonoReason (7B) | - | - | 35.2 | 70.4 | 38.7 |
| QAlign→MonoReason (7B) | - | - | 48.2 | 70.8 | 50.4 |
| QAlign→MonoReason (7B) | Mistral (7B) | Mixtral (8x7B) | 49.4 | 74.4 | 51.9 |
| QAlign→MonoReason (7B) | Mistral (7B) | Mixtral (8x22B) | 55.6 | 78.0 | 57.9 |

Table 4: Experiment results on proxy-tuning (MGSM). "Non-En" and "Avg." denote the average multilingual performance on the non-English languages and on all ten languages, respectively. Rows with small untuned and large untuned models are proxy-tuned systems.

[Figure 7: Visualization analysis of the representations of the last input tokens from multilingual questions, for MonoReason (7B) and QAlign→MonoReason (7B). For brevity, 5 of the 32 layers are shown (layers 1, 8, 16, 24, 32); colors denote the languages of the questions.]

The visualization results are shown in Figure 7. For both models, the context representations of multilingual queries stay apart in the bottom layers (1st and 8th). But from the middle to the top layers, a notable difference emerges between our alignment-enhanced model and its unaligned counterpart: question alignment produces a more compact space, enabling the model to process multilingual queries in a unified way and facilitating the sharing of its English expertise across different languages.

The sharing of English expertise leads the model to employ a similar problem-solving process to address multilingual questions. In addition to analyzing the layer-wise generation process of the initial output token, we further analyze the entire chain-of-thought and reveal the larger impact brought by the unified representation space. We evaluate the consistency among chains-of-thought when the model is presented with the same question expressed in different languages. Specifically, we extract the intermediate computation results as lists from the chain-of-thought and compute the Jaccard similarity between them to evaluate the consistency of the reasoning paths, as illustrated in Table 5.
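A minimal sketch of this consistency computation is given below. The regex-based extraction of intermediate results is an illustrative assumption; the paper extracts computation results from the chain-of-thought but does not specify the parser.

```python
import re

def intermediate_results(cot: str):
    # numbers appearing immediately after an '=' sign, e.g. "27/3 = 9"
    return [float(x) for x in re.findall(r"=\s*(-?\d+(?:\.\d+)?)", cot)]

def reasoning_path_consistency(cot_a: str, cot_b: str) -> float:
    # Jaccard similarity over the sets of intermediate results, as in Table 5
    a, b = set(intermediate_results(cot_a)), set(intermediate_results(cot_b))
    return len(a & b) / len(a | b) if (a | b) else 1.0

en = "27/3 = 9 unicorns. 9 * 2/3 = 6 female. The answer is: 6"
zh = "27/3 = 9. 9 * 2/3 = 6. The answer is: 6"
print(reasoning_path_consistency(en, zh))  # 1.0, i.e., 100% as in Table 5
```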
[Figure 8: Comparative analysis of reasoning path consistency across systems (MonoReason and QAlign→MonoReason at 7B and 13B). The number in each cell represents the reasoning path consistency (in %) between the languages in the corresponding row and column; darker blue denotes a higher level of consistency.]

[Question (English)] There are 27 unicorns left in the world. One third of them are in the Scottish Highlands. Two thirds of the Scottish unicorns are female. How many female Scottish unicorns are there?
[Chain-of-thought response] One third of the unicorns are in the Scottish Highlands, so there are 27/3 = 9 unicorns in the Highlands. Two thirds of the unicorns in the Highlands are female, so there are 9 * 2/3 = 6 female unicorns in the Highlands. Therefore, there are 6 female Scottish unicorns. The answer is: 6
[Extracted intermediate computation results] A = [9, 6, 6, 6]
[Question (Chinese)] 世界上剩下27只独角兽，其中三分之一在苏格兰高地。苏格兰高地的独角兽的三分之二是雌性。苏格兰雌性独角兽有多少只？
[Chain-of-thought response] If there are 27 remaining unicorns in the world, and one-third of them are in the Scottish Highlands, then there are 27/3 = 9 unicorns in the Scottish Highlands. If two-thirds of the unicorns in the Scottish Highlands are female, then there are 9 * 2/3 = 6 female unicorns in the Scottish Highlands. Therefore, there are 6 female unicorns in the Scottish Highlands. The answer is: 6
[Extracted intermediate computation results] B = [9, 6, 6, 6]
[Reasoning path consistency] $\tilde{A} = \mathrm{set}(A)$, $\tilde{B} = \mathrm{set}(B)$, $J(A, B) = |\tilde{A} \cap \tilde{B}| / |\tilde{A} \cup \tilde{B}| = 100\%$

Table 5: With this case, we illustrate our computation process for evaluating reasoning path consistency.

[Figure 9: The impact of the size of the question translation data (from 0k to 8k pairs per language) on multilingual reasoning accuracy.]

The quantitative results are depicted in Figure 8.
The alignment-enhanced models generally show higher consistency than their unaligned counterparts, particularly in low-resource languages. This evidence further demonstrates that the question alignment approach can facilitate the sharing of reasoning abilities across languages.

The impact of the size of the question translation data. Another important factor that influences the effectiveness of our training framework is the size of the question translation data. To analyze this factor, we uniformly downsample the multilingual question translation data from 8,000 pairs per language to 1,000 pairs per language (Figure 9). Generally, question alignment does not significantly affect the model's proficiency in English but does impact its performance on non-English tasks. For high-resource languages, the performance peak is usually achieved at a smaller data scale, indicating that question alignment can be established in a more data-efficient way for these languages. For low-resource languages, such as Bengali, Thai, and Swahili, scaling up the question translation data for question alignment always yields substantial improvements, indicating the potential of this approach to empower LLMs in supporting long-tail low-resource languages.

7 Conclusion

In this paper, we present a comprehensive extension of our previous work and address its remaining limitations. First, we demonstrate how to extend the two-stage training framework to a variety of challenging reasoning scenarios, including math reasoning with chain-of-thought in multiple languages, math reasoning with executable code, and common sense reasoning. Extensive experiment results on the multilingual reasoning benchmarks MGSM, MSVAMP, and XCSQA demonstrate that, by incorporating specialized data, the training framework is broadly applicable across different reasoning scenarios and model families. Our question alignment approach also scales well to extremely large language models, e.g., LLaMA2-70B, LLaMA3-70B, and Mixtral-8x22B. Employing either vanilla fine-tuning or the efficient alternative of proxy-tuning, we can enhance the multilingual reasoning performance of these most powerful open-source pre-trained models to new heights. In the mechanism analysis section, we provide in-depth analysis of the representation space, the reasoning paths, and the question translation data scale. This analysis illuminates how our training regime influences the LLM's multilingual working pattern.
}
title_10K/test_title_short_2405.01350v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.01350v1",
"title": "Community-Invariant Graph Contrastive Learning",
"abstract": "Graph augmentation has received great attention in recent years for graph\ncontrastive learning (GCL) to learn well-generalized node/graph\nrepresentations. However, mainstream GCL methods often favor randomly\ndisrupting graphs for augmentation, which shows limited generalization and\ninevitably leads to the corruption of high-level graph information, i.e., the\ngraph community. Moreover, current knowledge-based graph augmentation methods\ncan only focus on either topology or node features, causing the model to lack\nrobustness against various types of noise. To address these limitations, this\nresearch investigated the role of the graph community in graph augmentation and\nfigured out its crucial advantage for learnable graph augmentation. Based on\nour observations, we propose a community-invariant GCL framework to maintain\ngraph community structure during learnable graph augmentation. By maximizing\nthe spectral changes, this framework unifies the constraints of both topology\nand feature augmentation, enhancing the model's robustness. Empirical evidence\non 21 benchmark datasets demonstrates the exclusive merits of our framework.\nCode is released on Github (https://github.com/ShiyinTan/CI-GCL.git).",
"authors": "Shiyin Tan, Dongyuan Li, Renhe Jiang, Ying Zhang, Manabu Okumura",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.SI"
],
"label": "Original Paper",
"paper_cat": "Knowledge AND Graph",
"gt": "Community-Invariant Graph Contrastive Learning",
"main_content": "Introduction Graph representation learning on graph-structured data, such as molecules and social networks, has become one of the hottest topics in AI (Cao et al., 2023). Typical GNNs (Kipf & Welling, 2017) require large-scale taskspecific labels, which are expensive and labor-intensive to collect. To alleviate this, graph contrastive learning (GCL) has been proposed as one of the most successful graph representation learning methods, drawing a lot of attention (Li *Equal contribution with order determined by flipping a coin. \u2020 Corresponding Author. 1Tokyo Institute of Technology 2The University of Tokyo 3RIKEN & Tohoku University. Correspondence to: Renhe Jiang <[email protected]>. Proceedings of the 38 th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s). et al., 2022b). The main goal of GCL is to maximize the agreement of node representations between two augmented views to capture graph invariance information (Tian et al., 2020). Among various GCL variations, effective graph augmentation turns out to be the bread and butter for achieving success (Wei et al., 2023). Early studies almost adopt random graph augmentation, such as randomly dropping edges or masking features (You et al., 2020). Researchers also attempt to incorporate expert knowledge to guide graph augmentation. For instance, GCA (Zhu et al., 2021) and MoCL (Sun et al., 2021) use network science or biomedical knowledge to constrain edge dropping probabilities. However, such random or knowledge-based graph augmentations are sensitive to different datasets (Shen et al., 2023) and may yield suboptimal performance (Yin et al., 2022). To achieve better generalization and globally optimal performance, learnable graph augmentation is proposed to automatically disrupt redundant information as much as possible to share minimal yet sufficient core information between augmented views (Tong et al., 2021; Suresh et al., 2021). Although they have achieved great success, there still remain two open challenges worth exploring. (1) Community structure plays a crucial role in various downstream tasks, such as node classification and link prediction (Li et al., 2022a; Chen et al., 2023b). However, current GCL methods often randomly disrupt graphs during graph augmentation, which inevitably leads to the corruption of high-level graph information (i.e., community) and limits the generalization (Chiplunkar et al., 2018). (2) Current constraints employed in learnable graph augmentation methods primarily focus either on topology or node features (Li et al., 2022b). For instance, GAME (Wei et al., 2023) and GCL-SPAN (Lin et al., 2023) use spectrum-based constraints for topology augmentation. Due to the asymmetry of the feature matrix, their methods cannot be extended to feature augmentation. On the other hand, COSTA (Zhang et al., 2022) designs a covariance-preserving constraint for feature augmentation, which, however, lacks effectiveness in topology augmentation. By solely focusing on one type of graph augmentation (topology or feature), models may not fully exploit all available information and lack robustness against different types of noise (Liu et al., 2022b). To solve the aforementioned issues, we propose a general learnable Community-Invariant GCL framework (CI-GCL), arXiv:2405.01350v1 [cs.LG] 2 May 2024 \fCommunity-Invariant Graph Contrastive Learning which unifies constraints from both topology and feature augmentation to maintain CI for learnable graph augmentation. 
which unifies the constraints on topology and feature augmentation to maintain community invariance (CI) for learnable graph augmentation. Specifically, when considering topology augmentation with a certain degree of disruption, we observe a nearly negative correlation between community and spectral changes (see Sec 4.1). Therefore, to maximize the topology perturbation while ensuring community invariance, we can simply maximize the graph spectral changes during topology augmentation. To extend our CI constraint to feature augmentation, we convert the feature matrix into a symmetric bipartite feature matrix based on the bipartite graph co-clustering technique (Zhang et al., 2023). This approach converts feature augmentation into bipartite feature augmentation, while elucidating the importance of features in maintaining community structure. For bipartite feature augmentation, we also observe a negative relationship between community and spectral changes, consistent with topology augmentation. This motivates us to apply our CI constraint to feature augmentation by maximizing graph spectral changes during bipartite feature augmentation. To summarize, the contributions of this research are: • We propose a learnable CI-GCL framework that automatically maintains CI during graph augmentation by maximizing a spectral-change loss, improving the model's downstream performance. • We theoretically show that the proposed CI constraint can be applied to both topology and feature augmentation, enhancing the model's robustness. • Experiments on 21 widely used benchmarks demonstrate the effectiveness and robustness of CI-GCL.

[Table 1: Graph augmentation (Aug.) method comparison across GraphCL, JOAO, GCA, AutoGCL, AD-GCL, GCL-SPAN, and ours, over the properties: topology Aug., feature Aug., adaptive, differentiable, efficient back-propagation (BP), community, and unified constraint. An ideal method should support both topology and node feature augmentation, be adaptive to different datasets, be end-to-end differentiable with efficient BP, and be CI with a unified constraint for any augmentation to withstand various types of noise; only our method is community-invariant with a unified constraint.]

2. Related Work

As an effective self-supervised learning paradigm, contrastive learning has achieved great success in learning text and image representations (Chen et al., 2020; Zhang et al., 2020). DGI (Velickovic et al., 2019) first adopted contrastive learning to learn robust graph representations that are invariant to various noise and operations. However, different from Euclidean or sequential data, graphs are irregular non-Euclidean data and are sensitive to minor structural augmentation (Shen et al., 2023), which can render the learned graph representations ineffective. Given that, among the many GCL variations, graph augmentation shows a crucial advantage for graph representation learning, many studies have investigated effective graph augmentation for GCL. Early GCL mostly adopts random graph augmentation. For example, GRACE (Zhu et al., 2020) first used random edge dropping and feature masking as graph augmentations. After that, GraphCL (You et al., 2020) gave an extensive study of different combinations of graph augmentations, including random node dropping, edge perturbation, subgraph sampling, and feature masking. To make GraphCL more flexible, JOAO (You et al., 2021) automatically selects the combination of different random graph augmentations.
Due to the limited generalization of random augmentation, researchers started to incorporate expert knowledge as constraints for graph augmentation. For instance, Duan et al. (2022) and GraphAug (Luo et al., 2023) employ label-invariance between the original and augmented views as a constraint, which achieves great success in graph-level classification. Recent GCL focuses on fully parameterizing graph augmentation, using learnable graph augmentation to automatically determine how to disrupt graphs (Chen et al., 2023a). For example, AutoGCL (Yin et al., 2022) and AD-GCL (Suresh et al., 2021) build a learnable graph generator that learns a probability distribution to adaptively drop nodes and mask features. CGI (Wei et al., 2022) introduces the Information Bottleneck theory into GCL to remove unimportant nodes and edges between two augmented graphs by minimizing shared mutual information. GAME (Liu et al., 2022a) and GCL-SPAN (Lin et al., 2023) explore graph augmentation in the spectral space, maximizing the spectral changes of high-frequency or all components to automatically drop edges. AdaGCL (Jiang et al., 2023) and GACN (Wu et al., 2023a) design graph generators and discriminators to automatically augment graphs in an adversarial style. Compared with previous studies, we are the first to point out the importance of community invariance for graph augmentation and to propose a unified CI constraint for both topology and feature augmentation by simply maximizing spectral changes. Detailed comparisons are listed in Table 1.

3. Preliminary

Let $G = (X, A)$ be a graph with $n$ nodes and $m$ edges, where $X \in \mathbb{R}^{n \times d}$ describes the node features and $A \in \{0,1\}^{n \times n}$ denotes the adjacency matrix, with $A_{ij} = 1$ if an edge exists between nodes $i$ and $j$ and $A_{ij} = 0$ otherwise. The normalized Laplacian matrix is defined as $L_{norm} = \mathrm{Lap}(A) = I_n - D^{-1/2} A D^{-1/2}$, where $I_n \in \mathbb{R}^{n \times n}$ is the identity matrix and $D = \mathrm{diag}(A \mathbf{1}_n)$ is the diagonal degree matrix, with $\mathbf{1}_n \in \mathbb{R}^n$ being an all-one vector.

Graph Spectrum. The spectral decomposition of $L_{norm}$ is defined as $L_{norm} = \mathrm{Lap}(A) = U \Lambda U^\top$, where the diagonal matrix $\Lambda = \mathrm{eig}(\mathrm{Lap}(A)) = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ consists of real eigenvalues known as the graph spectrum, and $U = [u_1, \ldots, u_n] \in \mathbb{R}^{n \times n}$ contains the corresponding orthonormal eigenvectors known as the spectral bases (Gene et al., 2013).

Graph Representation Learning. Let $\mathcal{G}$ denote the whole graph space with $G \in \mathcal{G}$. Graph representation learning aims to train an encoder $f_\theta(\cdot): \mathcal{G} \to \mathbb{R}^{n \times d'}$ to obtain node representations. It then trains a readout function $r_\phi(\cdot): \mathbb{R}^{n \times d'} \to \mathbb{R}^{d'}$ that pools all node representations into a low-dimensional vector for graph $G$, which can be used in graph-level tasks.

Graph Contrastive Learning. GCL trains the encoder $f_\theta(\cdot)$ to capture the maximum mutual information between the original graph and its perturbed view obtained by graph augmentation. Formally, letting $T_1(G)$ and $T_2(G)$ denote two graph augmentation distributions of $G$, GCL is defined as:

$\min_{\theta,\phi} L_{GCL}(t_1(G), t_2(G), \theta, \phi) \quad (1)$

where $t_m(G) \sim T_m(G)$ with $m \in \{1, 2\}$, and $L_{GCL}$ measures the disagreement between the two augmented graphs.
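As a concrete reference for these definitions, the sketch below builds the normalized Laplacian of a toy graph and computes its spectrum and spectral bases; the toy graph is an illustrative assumption.

```python
import numpy as np

def normalized_laplacian(A: np.ndarray) -> np.ndarray:
    # L_norm = I - D^{-1/2} A D^{-1/2}, guarding against isolated nodes
    deg = A.sum(axis=1)
    d = np.zeros_like(deg)
    d[deg > 0] = deg[deg > 0] ** -0.5
    return np.eye(len(A)) - d[:, None] * A * d[None, :]

# Toy graph: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

lam, U = np.linalg.eigh(normalized_laplacian(A))  # spectrum and bases
print(np.round(lam, 3))  # small eigenvalues reflect community structure
```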
4. Methodology

We first show the importance of community invariance in GCL with a preliminary analysis. Then, we introduce the details of our method, CI-GCL, as illustrated in Figure 2.

4.1. Preliminary Analysis

[Figure 1: Unsupervised graph classification on IMDB-B: (a) accuracy, (b) community changes, and (c) spectral changes for GraphCL, AD-GCL, GraphCL+Cluster, and GraphCL+Destroy. We define community changes as the average ratio of changed community labels over the number of nodes before and after graph augmentation, obtained by spectral clustering. Spectral changes are the eigenvalue changes between the original and augmented graphs, measured with the L2 distance.]

Preserving community structure is crucial for learnable graph augmentation, i.e., perturbing a constrained number of edges or features that have the least impact on the community changes of the input graph. To show the benefits of preserving communities, we conduct a preliminary experiment applying GraphCL (unlearnable graph augmentation) and AD-GCL (learnable graph augmentation) to the IMDB-B dataset. Specifically, we design the following four methods: (1) AD-GCL with uniform edge dropping; (2) GraphCL with uniform edge dropping; (3) GraphCL+Cluster, an augmentation that removes edges between different clusters with a higher probability; (4) GraphCL+Destroy, an augmentation that removes edges within the same cluster with a higher probability. Note that (3) preserves community structure, while (4) tends to disrupt community structure, as indicated by recent studies (Chiplunkar et al., 2018; Lin et al., 2022). We plot the accuracy for unsupervised graph classification in Figure 1(a) and the community changes in Figure 1(b). From Figure 1(a-b), we observe: (1) a method's performance generally exhibits a nearly negative correlation with its community changes (i.e., fewer cluster changes yield higher accuracy); (2) AD-GCL outperforms GraphCL but underperforms GraphCL+Cluster. All of this indicates that preserving the community structure yields better results. Moreover, we also plot the spectral changes in Figure 1(c), as the graph spectrum can reflect high-level graph structural information (Spielman, 2012; Lee et al., 2014). From Figure 1(b-c), we can see that spectral changes are almost negatively correlated with community changes. That is to say, we can preserve community invariance during graph augmentation by maximizing spectral changes, based on which we expand our methodology as follows.

4.2. Community-Invariant Graph Augmentation

Topology Augmentation. We use edge perturbation and node dropping operations as our topology augmentation. For edge perturbation, we define $T_1(G)$ as a Bernoulli distribution $\mathrm{Bern}(P^{EP}_{ij})$ for each edge $A_{ij}$. We can then sample the edge perturbation matrix $\Delta^{EP} \in \{0,1\}^{n \times n}$, where $\Delta^{EP}_{ij} \sim \mathrm{Bern}(P^{EP}_{ij})$ indicates whether to flip the edge between nodes $i$ and $j$. The edge is flipped if $\Delta^{EP}_{ij} = 1$; otherwise, it remains unchanged. A sampled augmented graph produced by edge augmentation can be formulated as:

$t_1^{EP}(G) = A + C \circ \Delta^{EP}, \quad C = A^c - A \quad (2)$

where $\circ$ denotes the element-wise product and $A^c$ represents the complement matrix of $A$, calculated as $A^c = J - I_n - A$, with $J$ an all-one matrix. Thus, $C \in \{-1, 1\}^{n \times n}$ encodes all edge flipping operations: an edge is added between nodes $i$ and $j$ if $C_{ij} = 1$, and removed if $C_{ij} = -1$. However, Eq.(2) cannot be directly applied to learnable graph augmentation, since Bernoulli sampling is non-differentiable. Inspired by Jang et al. (2017),
we soften it from the discrete Bernoulli distribution space to the continuous space with range $\Delta^{EP}_{ij} \in (0, 1)$ using Gumbel-Softmax, which can be formulated as:

$\Delta^{EP}_{ij}(\epsilon) = \mathrm{Softmax}((\log(P^{EP}_{ij}) + \epsilon)/\tau) \quad (3)$
$P^{EP}_{ij} = \mathrm{Sigmoid}(\mathrm{MLPs}(\mathrm{Concat}(e_i, e_j))) \quad (4)$

where $P^{EP}_{ij}$ controls whether to flip edge $A_{ij}$, MLPs are multilayer perceptrons, $e_i$ is the $i$-th node representation, $\mathrm{Concat}(\cdot,\cdot)$ denotes concatenation, $\epsilon \sim \mathrm{Gumbel}(0, 1)$ (the Gumbel(0,1) distribution can be sampled by calculating $\epsilon = -\log(-\log(u))$ with $u \sim \mathrm{Uniform}(0, 1)$; Jang et al., 2017), and $\tau$ is the temperature factor that controls the degree of approximation to the discrete categorical distribution. $\tau > 0$ results in a well-defined gradient $\partial \Delta^{EP}_{ij} / \partial P^{EP}_{ij}$, facilitating efficient optimization.

[Figure 2: The proposed CI-GCL consists of two core components: (1) a learnable graph augmenter that optimizes $T_m(G)$ to disrupt redundant information while ensuring community invariance with respect to the original graph, and (2) the GNN encoder $f_\theta(\cdot)$ and readout $r_\phi(\cdot)$, which maximize the mutual information between two augmented graphs via a contrastive loss. We use edge dropping and feature masking as an instantiation.]

Node dropping can be considered a type of edge dropping, i.e., removing all edges connected to the dropped node. Thus, node dropping can be formulated similarly to Eq.(2) as:

$t_1^{ND}(G) = A + (-A) \circ \Delta^{ND} \quad (5)$

where $\Delta^{ND}$ is calculated by:

$\Delta^{ND} = (\Psi^{ND} \mathbf{1}_n^\top + (\Psi^{ND} \mathbf{1}_n^\top)^\top)/2 \quad (6)$
$\Psi^{ND}_i(\epsilon) = \mathrm{Softmax}((\log(P^{ND}_i) + \epsilon)/\tau) \quad (7)$
$P^{ND}_i = \mathrm{Sigmoid}(\mathrm{MLPs}(e_i)) \quad (8)$

We combine $t_1^{(EP,ND)}(G)$ as the topology augmentation $t_1(G)$.
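The learnable sampling in Eqs.(3)-(8) can be sketched as follows. This is not the released implementation: it uses a binary Gumbel relaxation in the spirit of Eq.(3), and the MLP size and node embeddings are illustrative assumptions.

```python
import torch

def gumbel_sigmoid(logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    u = torch.rand_like(logits)
    eps = -torch.log(-torch.log(u + 1e-9) + 1e-9)   # Gumbel(0, 1) noise
    return torch.sigmoid((logits + eps) / tau)       # soft flip in (0, 1)

n, h = 6, 16
node_emb = torch.randn(n, h, requires_grad=True)
mlp = torch.nn.Sequential(torch.nn.Linear(2 * h, h),
                          torch.nn.ReLU(), torch.nn.Linear(h, 1))

# Pairwise concatenated node embeddings -> per-edge flip logits, Eq.(4).
pairs = torch.cat([node_emb.unsqueeze(1).expand(n, n, h),
                   node_emb.unsqueeze(0).expand(n, n, h)], dim=-1)
logits = mlp(pairs).squeeze(-1)
delta = gumbel_sigmoid(logits)        # soft Delta^EP; gradients flow to mlp
delta = (delta + delta.T) / 2         # keep the perturbation symmetric
```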
CI-based Topology Augmentation. Inspired by the findings in Sec 4.1, we aim to optimize $\Delta$ for learnable topology augmentation in Eqs.(2,5) by simultaneously maximizing graph disruption while minimizing community changes. Based on matrix perturbation theory (Bojchevski & Gunnemann, 2019), we have the following definition.

Definition 1. Let $\lambda_k$ denote the $k$-th eigenvalue of the spectral decomposition of $\mathrm{Lap}(A) = U \Lambda U^\top$. A single edge perturbation $A_{ij}$ induces absolute changes in the eigenvalues given by $\sum_{k=1}^n |\Delta\lambda_k| = \sum_{k=1}^n |(U_{ik} - U_{jk})^2 + (\lambda_k - 1)(U_{ik}^2 + U_{jk}^2)|$, where $\Delta\lambda_k$ denotes the $k$-th spectral change.

When optimizing $\sum_{k=1}^n |\Delta\lambda_k|$ in Definition 1, we argue that maintaining community invariance requires treating edge adding and edge dropping separately.

Theorem 1. The absolute spectral changes $\sum_{k=1}^n |\Delta\lambda_k|$ are upper bounded by $\|U_{i\cdot} - U_{j\cdot}\|_2^2 + \sum_{k=1}^n |\lambda_k - 1|$ and lower bounded by $\|U_{i\cdot} - U_{j\cdot}\|_2^2 - \sum_{k=1}^n |\lambda_k - 1|$, where $U_{i\cdot}$ is the $i$-th row vector of $U$, i.e., the $i$-th node embedding in the spectral space.

According to Theorem 1, maximizing spectral changes equates to maximizing their upper bound, i.e., flipping edges between nodes with the largest distances in the spectral space. Shi & Malik (1997) state that node representations with larger distances always belong to different communities. Thus, we can maximize spectral changes during edge dropping to preserve community invariance. However, we cannot do the same for edge adding, since adding edges between clusters always disrupts communities (Zhu et al., 2023). Conversely, minimizing spectral changes equates to minimizing their lower bound, i.e., flipping edges between nodes with the lowest distances, where nodes with lower distances always belong to the same cluster. Thus, we minimize spectral changes during edge adding, rather than edge dropping, since dropping edges within one cluster would disrupt communities (Zhu et al., 2023). We formulate the CI constraint for edge perturbation by jointly optimizing edge dropping and edge adding as follows:

$\max_{\Delta^{ED}, \Delta^{EA} \in S} L_{EP}(\Delta^{EP}) = L_{ED}(\Delta^{ED}) - L_{EA}(\Delta^{EA}) \quad (9)$
$L_{ED}(\Delta^{ED}) = \|\mathrm{eig}(\mathrm{Lap}(A - A \circ \Delta^{ED})) - \mathrm{eig}(\mathrm{Lap}(A))\|_2^2$
$L_{EA}(\Delta^{EA}) = \|\mathrm{eig}(\mathrm{Lap}(A + A^c \circ \Delta^{EA})) - \mathrm{eig}(\mathrm{Lap}(A))\|_2^2$

where $S = \{S \mid S \in [0,1]^{n \times n}, \|S\|_1 \le \psi\}$, $\psi$ controls the perturbation strength, and $L(\Delta)$ represents the graph spectral changes under the different augmentation operations. Node dropping can be considered one type of edge dropping, and can be constrained to community invariance by:

$\max_{\Delta^{ND} \in S} \|\mathrm{eig}(\mathrm{Lap}(A - A \circ \Delta^{ND})) - \mathrm{eig}(\mathrm{Lap}(A))\|_2^2 \quad (10)$

By jointly optimizing Eqs.(9,10), we can maximize the topology perturbation while maintaining community invariance.

Feature Augmentation. Similar to topology augmentation, we define $T_2(G)$ as a Bernoulli distribution $\mathrm{Bern}(P^{FM}_{ij})$ for each feature $X_{ij}$. We can then sample a feature masking matrix with $\Delta^{FM}_{ij} \sim \mathrm{Bern}(P^{FM}_{ij})$, indicating whether to mask the corresponding feature. A sampled augmented graph produced by feature masking can be formulated as:

$t_2^{FM}(G) = X + (-X) \circ \Delta^{FM} \quad (11)$

CI-based Feature Augmentation. Different from topology augmentation, $X \in \mathbb{R}^{n \times d}$ is an asymmetric matrix lacking a spectral decomposition, so Theorem 1 is not applicable to feature augmentation. Moreover, discerning which feature has the least impact on community changes is challenging. Inspired by the co-clustering of bipartite graphs (Nie et al., 2017), which can determine the importance of features for node clustering, we construct the feature bipartite graph as:

$\tilde{X} = \begin{bmatrix} 0 & X \\ X^\top & 0 \end{bmatrix} \in \mathbb{R}^{(n+d) \times (n+d)} \quad (12)$

where the first $n$ rows of $\tilde{X}$ denote the original nodes, while the subsequent $d$ rows represent the features as feature nodes. Then $\tilde{X}_{ij}$, with $i \in \{1, \ldots, n\}$ and $j \in \{(n+1), \ldots, (n+d)\}$, can be interpreted as the linking weight between the $i$-th node and the $j$-th feature.
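Before stating Theorem 2, the two building blocks introduced so far can be sketched in a few lines: the spectral-change term of Eqs.(9)-(10) (only the edge-dropping branch, with the budget projection omitted) and the bipartite construction of Eq.(12). torch.linalg.eigvalsh is differentiable, which is what makes gradient-based optimization of the soft perturbation possible.

```python
import torch

def lap(A: torch.Tensor) -> torch.Tensor:
    d = A.sum(-1).clamp(min=1e-9) ** -0.5
    return torch.eye(len(A)) - d[:, None] * A * d[None, :]

def spectral_change(A: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    # L_ED: squared L2 distance between spectra before/after edge dropping
    before = torch.linalg.eigvalsh(lap(A))
    after = torch.linalg.eigvalsh(lap(A - A * delta))
    return ((after - before) ** 2).sum()

def bipartite(X: torch.Tensor) -> torch.Tensor:
    # Eq.(12): stack nodes and feature nodes into one symmetric matrix
    n, d = X.shape
    top = torch.cat([torch.zeros(n, n), X], dim=1)
    bottom = torch.cat([X.T, torch.zeros(d, d)], dim=1)
    return torch.cat([top, bottom], dim=0)
```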
Theorem 2. Let the singular value decomposition of the feature matrix $X$ be denoted as $\mathrm{svd}(D_u^{-1/2} X D_v^{-1/2}) = U \Lambda_1 V^\top$, where $D_u$ and $D_v$ are the degree matrices of $X$ and $X^\top$, and $U$ and $V$ are the left and right singular vectors, respectively. Then $\mathrm{eig}(\mathrm{Lap}(\tilde{X})) = F \Lambda_2 F^\top$, where the $k$-th smallest eigenvector $F_{\cdot k}$ equals the concatenation of the $k$-th largest singular vectors: $F_{\cdot k} = [U_{\cdot k}; V_{\cdot k}]$.

According to Theorem 2 and the findings of Nie et al. (2017), if we maintain the community invariance of $\tilde{X}$, the community structure is also preserved in $X$. Hence, we investigate the community-invariant constraint on $\tilde{X}$.

Theorem 3. Let $\lambda_k$ denote the $k$-th smallest eigenvalue of $\Lambda_2$. When masking one feature $\tilde{X}_{ij}$, the induced spectral changes of $\tilde{X}$ are given by $\sum_{k=1}^{n+d} |\Delta\lambda_k| = \sum_{k=1}^{n+d} |(F_{ik} - F_{jk})^2 + (\lambda_k - 1)(F_{ik}^2 + F_{jk}^2)|$, which is upper bounded by $\|F_{i\cdot} - F_{j\cdot}\|_2^2 + \sum_{k=1}^{n+d} |1 - \lambda_k|$, where $i \in \{1, \ldots, n\}$, $j \in \{(n+1), \ldots, (n+d)\}$, and $F_{i\cdot}$ is the $i$-th row vector of $F$.

Based on Theorem 3, maximizing the spectral changes of $\tilde{X}$ under a constrained perturbation budget equals finding the largest embedding distances between nodes and feature nodes, i.e., those features have the least impact on community changes for these nodes (Zhang et al., 2023). Thus, the CI constraint for feature augmentation $L_{FM}(\Delta^{FM})$ can be formulated as:

$\max_{\Delta^{FM} \in S} \|\mathrm{eig}(\mathrm{Lap}(\tilde{X} - \tilde{X} \circ \Delta^{FM})) - \mathrm{eig}(\mathrm{Lap}(\tilde{X}))\|_2^2 \quad (13)$

Finally, we parameterize $\Delta^{FM}$ and ensure its differentiability in the feature augmentation:

$\Delta^{FM}_{ij}(\epsilon) = \mathrm{Softmax}((\log(P^{FM}_{ij}) + \epsilon)/\tau) \quad (14)$
$P^{FM}_{ij} = \mathrm{Sigmoid}(\mathrm{MLPs}(\mathrm{Concat}(\tilde{U}_{\cdot i}, \tilde{V}_{\cdot j})))$

CI-GCL. As shown in Figure 2, we instantiate a graph contrastive learning framework with the proposed community-invariant constraint, namely CI-GCL. Specifically, we first conduct spectral decomposition on the adjacency matrix and the bipartite feature matrix to obtain node and feature representations. We then feed these node and feature representations into the MLPs for both topology and feature augmentation, where the MLP parameters are randomly initialized. For each iteration of contrastive learning, we sample two augmented graphs via topology augmentation and feature augmentation. The augmented graphs are then fed into a GCN encoder $f_\theta(\cdot)$, which outputs two sets of node representations. A readout pooling function $r_\phi(\cdot)$ is applied to aggregate and transform the node representations into graph representations $z^{(1)}, z^{(2)}$. Following GraphCL (You et al., 2020), given training graphs $\mathcal{G}$, we use the contrastive objective $L_{GCL}$, defined as:

$\min_{\theta,\phi} L_{GCL}(t_1(\mathcal{G}), t_2(\mathcal{G}), \theta, \phi) = -\frac{1}{|\mathcal{G}|} \sum_{n=1}^{|\mathcal{G}|} \log \frac{\exp(\mathrm{sim}(z_n^{(1)}, z_n^{(2)})/\tau_2)}{\sum_{n'=1, n' \neq n}^{|\mathcal{G}|} \exp(\mathrm{sim}(z_n^{(1)}, z_{n'}^{(2)})/\tau_2)} \quad (15)$

where $\tau_2$ is the temperature parameter; we conduct minibatch optimization of Eq.(15) in our study.
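The objective in Eq.(15) is an NT-Xent-style loss over graph embeddings; a minimal sketch, assuming a batch of paired view embeddings, is:

```python
import torch
import torch.nn.functional as F

def gcl_loss(z1: torch.Tensor, z2: torch.Tensor, tau2: float = 0.2):
    # z1, z2: [B, dim] graph embeddings from the two augmented views;
    # each graph's view pair is the positive, other graphs are negatives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / tau2                 # [B, B] cosine similarities
    pos = sim.diag()                       # sim(z_n^(1), z_n^(2))
    # log-sum-exp over negatives only (diagonal masked), as in Eq.(15)
    off_diag = ~torch.eye(len(sim), dtype=torch.bool)
    neg = torch.logsumexp(sim.masked_fill(~off_diag, float("-inf")), dim=1)
    return (neg - pos).mean()

loss = gcl_loss(torch.randn(32, 128), torch.randn(32, 128))
```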
4.3. Optimization and Scalability

Optimization. Eqs.(9,10,13) are jointly optimized via projected gradient descent. Taking $\Delta^{FM}$ in Eq.(13) as an example, we update the parameters as:

$\Delta^{FM}_t = P_S\left(\Delta^{FM}_{(t-1)} - \eta_t \nabla L_{FM}(\Delta^{FM}_{(t-1)})\right) \quad (16)$

where $P_S(\Delta) = \arg\min_{S \in S} \|S - \Delta\|_F^2$ is the projection operation at $\Delta$ onto the constraint set $S$, and $\eta_t > 0$ is the learning rate for the $t$-th update step. The gradient $\nabla L_{FM}(\Delta^{FM}_{(t-1)})$ can be calculated via the chain rule, with a closed-form gradient over the eigenvalues: for $L_{norm} = \mathrm{Lap}(A + C \circ \Delta^{FM})$, the derivative of its $k$-th eigenvalue $\lambda_k$ is $\partial\lambda_k / \partial L_{norm} = U_{\cdot k} U_{\cdot k}^\top$ (Rogers, 1970).

Scalability. The time complexity of optimizing Eqs.(9,10) is $O(Mn^3)$, with $M$ the number of iterations, due to the eigendecomposition, which is prohibitively expensive for large graphs. To reduce the computational cost, instead of eigendecomposing the full graph spectrum, we employ a selective eigendecomposition of the $K$ lowest eigenvalues via the Lanczos algorithm (Parlett & Scott, 1979), which reduces the time complexity to $O(Mn^2K)$. Similarly, we can use truncated SVD (Halko et al., 2011) to obtain the $K$ highest singular values of $X$ and concatenate the corresponding vectors as an approximate eigendecomposition of $\tilde{X}$, thereby reducing the time complexity from $O(M(n+d)^3)$ to $O(Mn \log K)$.
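The selective eigendecomposition can be sketched with SciPy's Lanczos-based solver; the graph size, sparsity, and K below are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, diags
from scipy.sparse.linalg import eigsh

def k_smallest_spectrum(A: csr_matrix, K: int):
    # Only the K smallest eigenpairs of the sparse normalized Laplacian,
    # instead of a full O(n^3) eigendecomposition.
    deg = np.asarray(A.sum(axis=1)).ravel()
    d = np.zeros_like(deg)
    d[deg > 0] = deg[deg > 0] ** -0.5
    L = identity(A.shape[0]) - diags(d) @ A @ diags(d)
    vals, vecs = eigsh(L, k=K, which="SM")  # Lanczos, smallest magnitude
    return vals, vecs

rng = np.random.default_rng(0)
n = 500
rows, cols = rng.integers(0, n, 2000), rng.integers(0, n, 2000)
A = csr_matrix((np.ones(2000), (rows, cols)), shape=(n, n))
A = ((A + A.T) > 0).astype(float)           # symmetrize the toy graph
vals, _ = k_smallest_spectrum(csr_matrix(A), K=8)
```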
5. Experiments

In our general experimental settings, we use GIN (Xu et al., 2019) as the base encoder for all baselines to ensure a fair comparison. Each experiment is repeated 10 times with different random seeds, and we report the mean and standard deviation of the corresponding evaluation metrics. We select several best-performing baselines for comparison, including classic GCL methods, such as MVGRL (Hassani & Ahmadi, 2020), InfoGraph (Sun et al., 2020), GraphCL, and JOAO, as well as GCL methods with learnable graph augmentation, such as SEGA (Wu et al., 2023b), GCS (Wei et al., 2023), GCL-SPAN, AD-GCL, and AutoGCL.

5.1. Quantitative Evaluation

5.1.1. Comparison with State-of-the-Arts

To comprehensively demonstrate the effectiveness and generalizability of CI-GCL, following previous studies (Yin et al., 2022), we perform evaluations for graph classification and regression under three different experimental settings: unsupervised, semi-supervised, and transfer learning.

Unsupervised Learning. We first train graph encoders (i.e., GIN) separately for each of the GCL baselines using unlabeled data. Then, we fix the parameters of these models and train an SVM classifier using labeled data. We use the TU datasets (Morris et al., 2020) and OGB datasets (Hu et al., 2020a) to evaluate graph classification and regression, respectively. We adopt the provided data split for the OGB datasets and use 10-fold cross-validation for the TU datasets, as they lack such a split. Table 2 shows the performance on graph classification and Table 3 shows the performance on graph regression. In these tables, CI-GCL achieves the best results on 9 datasets and competitive results on the MUTAG and mollipo datasets. Specifically, CI-GCL achieves the highest average accuracy in graph classification (77.74%) and the lowest RMSE in graph regression (1.606), surpassing SOTA classification methods, such as SEGA (77.25%) and GraphCL (75.71%), as well as SOTA regression methods, such as GCL-SPAN (2.184) and AD-GCL (2.403).

| Method | NCI1 ↑ | PROTEINS ↑ | DD ↑ | MUTAG ↑ | COLLAB ↑ | RDT-B ↑ | RDT-M5K ↑ | IMDB-B ↑ | Avg. ↑ |
|---|---|---|---|---|---|---|---|---|---|
| InfoGraph | 76.20±1.0 | 74.44±0.3 | 72.85±1.7 | 89.01±1.1 | 70.65±1.1 | 82.50±1.4 | 53.46±1.0 | 73.03±0.8 | 74.02 |
| GraphCL | 77.87±0.4 | 74.39±0.4 | 78.62±0.4 | 86.80±1.3 | 71.36±1.1 | 89.53±0.8 | 55.99±0.3 | 71.14±0.4 | 75.71 |
| MVGRL | 76.64±0.3 | 74.02±0.3 | 75.20±0.4 | 75.40±7.8 | 73.10±0.6 | 82.00±1.1 | 51.87±0.6 | 63.60±4.2 | 71.48 |
| JOAO | 78.07±0.4 | 74.55±0.4 | 77.32±0.5 | 87.35±1.0 | 69.50±0.3 | 85.29±1.4 | 55.74±0.6 | 70.21±3.0 | 74.75 |
| SEGA | 79.00±0.7 | 76.01±0.4 | 78.76±0.6 | 90.21±0.7 | 74.12±0.5 | 90.21±0.7 | 56.13±0.3 | 73.58±0.4 | 77.25 |
| GCS* | 77.18±0.3 | 74.04±0.4 | 76.28±0.3 | 88.19±0.9 | 74.00±0.4 | 86.50±0.3 | 56.30±0.3 | 72.90±0.5 | 75.64 |
| GCL-SPAN* | 75.43±0.4 | 75.78±0.4 | 78.78±0.5 | 85.00±0.8 | 71.40±0.5 | 86.50±0.1 | 54.10±0.5 | 66.00±0.7 | 74.12 |
| AD-GCL* | 73.38±0.5 | 73.59±0.7 | 75.10±0.4 | 89.70±1.0 | 72.50±0.6 | 85.52±0.8 | 54.91±0.4 | 71.50±0.6 | 74.53 |
| AutoGCL* | 78.32±0.5 | 69.73±0.4 | 75.75±0.6 | 85.15±1.1 | 71.40±0.7 | 86.60±1.5 | 55.71±0.2 | 72.00±0.4 | 74.33 |
| CI+AD-GCL | 74.35±0.5 | 74.66±0.6 | 76.20±0.4 | 89.88±0.7 | 73.94±0.3 | 87.80±1.2 | 54.75±0.6 | 72.10±0.3 | 75.46 |
| CI+AutoGCL | 78.47±0.7 | 70.81±0.5 | 76.53±0.6 | 86.73±1.0 | 72.24±0.9 | 87.50±1.4 | 55.97±0.2 | 72.50±0.3 | 75.09 |
| CI-GCL | 80.50±0.5 | 76.50±0.1 | 79.63±0.3 | 89.67±0.9 | 74.40±0.6 | 90.80±0.5 | 56.57±0.3 | 73.85±0.8 | 77.74 |

Table 2: Unsupervised representation learning classification accuracy (%) on the TU datasets. An asterisk marks results of the corresponding baselines reproduced by us.

| Method | molesol ↓ | mollipo ↓ | molfreesolv ↓ | Avg. ↓ |
|---|---|---|---|---|
| InfoGraph | 1.344±0.18 | 1.005±0.02 | 10.00±4.82 | 4.118 |
| GraphCL | 1.272±0.09 | 0.910±0.02 | 7.679±2.75 | 3.287 |
| MVGRL | 1.433±0.15 | 0.962±0.04 | 9.024±1.98 | 3.806 |
| JOAO | 1.285±0.12 | 0.865±0.03 | 5.131±0.72 | 2.427 |
| GCL-SPAN | 1.218±0.05 | 0.802±0.02 | 4.531±0.46 | 2.184 |
| AD-GCL | 1.217±0.09 | 0.842±0.03 | 5.150±0.62 | 2.403 |
| CI-GCL | 1.130±0.13 | 0.816±0.03 | 2.873±0.32 | 1.606 |

Table 3: RMSE for unsupervised graph regression.

Semi-Supervised Learning. Following GraphCL, we employ 10-fold cross-validation on each TU dataset, using ResGCN (Pei et al., 2021) as the classifier. For each fold, different from the unsupervised setting, we use only 10% of the data as labeled training data and 10% as labeled testing data for graph classification. As shown in Table 4, CI-GCL achieves the highest average accuracy (74.0%), compared with the SOTA baselines SEGA (72.9%) and AD-GCL (73.1%).
| Method | NCI1 ↑ | PROTEINS ↑ | DD ↑ | COLLAB ↑ | RDT-B ↑ | RDT-M5K ↑ | GITHUB ↑ | Avg. ↑ |
|---|---|---|---|---|---|---|---|---|
| No Pre-train | 73.72±0.2 | 70.40±1.5 | 73.56±0.4 | 73.71±0.3 | 86.63±0.3 | 51.33±0.4 | 60.87±0.2 | 70.0 |
| GraphCL | 74.63±0.3 | 74.17±0.3 | 76.17±1.4 | 74.23±0.2 | 89.11±0.2 | 52.55±0.5 | 65.81±0.8 | 72.3 |
| JOAO | 74.48±0.3 | 72.13±0.9 | 75.69±0.7 | 75.30±0.3 | 88.14±0.3 | 52.83±0.5 | 65.00±0.3 | 71.9 |
| SEGA | 75.09±0.2 | 74.65±0.5 | 76.33±0.4 | 75.18±0.2 | 89.40±0.2 | 53.73±0.3 | 66.01±0.7 | 72.9 |
| AD-GCL | 75.18±0.4 | 73.96±0.5 | 77.91±0.7 | 75.82±0.3 | 90.10±0.2 | 53.49±0.3 | 65.89±0.6 | 73.1 |
| AutoGCL | 67.81±1.6 | 75.03±3.5 | 77.50±4.4 | 77.16±1.5 | 79.20±3.5 | 49.91±2.7 | 58.91±1.5 | 69.3 |
| CI-GCL | 75.86±0.8 | 76.28±0.3 | 78.01±0.9 | 77.04±1.5 | 90.29±1.2 | 54.47±0.7 | 66.36±0.8 | 74.0 |

Table 4: Accuracy (%) for semi-supervised graph classification with 10% labels.

| Method | BBBP ↑ | Tox21 ↑ | ToxCast ↑ | SIDER ↑ | ClinTox ↑ | MUV ↑ | HIV ↑ | BACE ↑ | PPI ↑ | Avg. ↑ |
|---|---|---|---|---|---|---|---|---|---|---|
| No Pre-train | 65.8±4.5 | 74.0±0.8 | 63.4±0.6 | 57.3±1.6 | 58.0±4.4 | 71.8±2.5 | 75.3±1.9 | 70.1±5.4 | 64.8±1.0 | 66.7 |
| MVGRL | 69.0±0.5 | 74.5±0.6 | 62.6±0.5 | 62.2±0.6 | 77.8±2.2 | 73.3±1.4 | 77.1±0.6 | 77.2±1.0 | 68.7±0.7 | 71.4 |
| SEGA | 71.9±1.1 | 76.7±0.4 | 65.2±0.9 | 63.7±0.3 | 85.0±0.9 | 76.6±2.5 | 77.6±1.4 | 77.1±0.5 | 68.7±0.5 | 73.6 |
| GCS* | 72.5±0.5 | 74.4±0.4 | 64.4±0.2 | 61.9±0.4 | 66.7±1.9 | 77.3±1.7 | 78.7±1.4 | 82.3±0.3 | 70.3±0.5 | 72.1 |
| GCL-SPAN | 70.0±0.7 | 78.0±0.5 | 64.2±0.4 | 64.7±0.5 | 80.7±2.1 | 73.8±0.9 | 77.8±0.6 | 79.9±0.7 | 70.0±0.8 | 73.2 |
| AD-GCL* | 67.4±1.0 | 74.3±0.7 | 63.5±0.7 | 60.8±0.9 | 58.6±3.4 | 75.4±1.5 | 75.9±0.9 | 79.0±0.8 | 64.2±1.2 | 68.7 |
| AutoGCL* | 72.0±0.6 | 75.5±0.3 | 63.4±0.4 | 62.5±0.6 | 79.9±3.3 | 75.8±1.3 | 77.4±0.6 | 76.7±1.1 | 70.1±0.8 | 72.5 |
| CI+AD-GCL | 68.4±1.1 | 74.5±0.9 | 64.0±0.8 | 61.4±0.9 | 59.8±3.2 | 76.5±1.7 | 77.0±0.9 | 80.0±0.8 | 65.3±1.1 | 69.6 |
| CI+AutoGCL | 73.9±0.7 | 76.4±0.3 | 63.8±0.3 | 63.9±0.6 | 80.9±3.1 | 76.3±1.3 | 78.8±0.7 | 78.8±1.1 | 70.9±0.7 | 73.7 |
| CI-GCL | 74.4±1.9 | 77.3±0.9 | 65.4±1.5 | 64.7±0.3 | 80.5±1.3 | 76.5±0.9 | 80.5±1.3 | 84.4±0.9 | 72.3±1.2 | 75.1 |

Table 5: ROC-AUC (%) for graph classification under the transfer learning setting. Models are pre-trained on ZINC-2M (then fine-tuned on BBBP through BACE) or PPI-306K (then fine-tuned on PPI). An asterisk marks results reproduced by us.

Transfer Learning. To show generalization ability, we conduct self-supervised pre-training for the baselines on the preprocessed ZINC-2M or PPI-306K dataset (Hu et al., 2020b) for 100 epochs and then fine-tune them on different downstream biochemical datasets. Table 5 shows that CI-GCL achieves the best results on 6 datasets and comparable performance on the remaining datasets, with an average performance of 75.1%, compared with the SOTA baseline SEGA (73.6%).

Summary. From the above experimental results, we draw three conclusions. (1) Higher effectiveness. CI-GCL achieves the best performance in three different experimental settings, attributed to its unified community-invariant constraint for graph augmentation. Compared to GraphCL and MVGRL, which use similar contrastive objectives, the gain of CI-GCL mainly comes from the CI constraint and the learnable graph augmentation procedure.
Summary. From the above experimental results, we draw the following three conclusions. (1) Higher effectiveness. CI-GCL achieves the best performance under three different experimental settings, which we attribute to its unified community-invariant constraint for graph augmentation. Compared to GraphCL and MVGRL, which use similar contrastive objectives, the gain of CI-GCL mainly comes from the CI constraint and the learnable graph augmentation procedure. Compared to AD-GCL and AutoGCL, which use similar encoders, CI-GCL, guided by community invariance, is clearly more effective than the widely adopted uniformly random augmentation. (2) Better generalizability. By maximizing spectral changes while minimizing community changes, CI-GCL obtains an encoder with better generalizability and transferability: since the encoder is pre-trained to ignore community-irrelevant information, the spurious relationship between such information and downstream labels is mitigated, which alleviates overfitting. Furthermore, previous studies, such as JOAO and GCL-SPAN, improve the generalizability of the GNN encoder on molecule classification by exploiting structural information such as subgraphs. We suggest that the community could be another useful lens for studying chemical and biological molecules. (3) Wider applicability. By combining the CI constraint with AD-GCL and AutoGCL in Table 2 and Table 5, we also see significant improvements on almost all datasets, showing that the CI constraint can serve as a plug-and-play component for any learnable GCL framework.

5.1.2. ABLATION STUDY
We conduct an ablation study to evaluate the effectiveness of the proposed CI constraint on topology augmentation (TA) and feature augmentation (FA). We consider the following variants of CI-GCL: (1) w/o TA: remove TA on one branch; (2) w/o FA: remove FA on one branch; (3) w/o CI on TA: remove the CI constraint on TA; (4) w/o CI on FA: remove the CI constraint on FA; (5) w/o CI on ALL: remove the CI constraint on both TA and FA.

Table 6. Ablation study on unsupervised graph classification.
Method | NCI1 ↑ | PROTEINS ↑ | DD ↑ | MUTAG ↑ | Avg. ↑
w/o TA | 78.8±0.6 | 74.6±0.6 | 77.8±0.8 | 86.1±0.9 | 79.3
w/o FA | 79.2±0.4 | 75.1±0.2 | 78.1±0.3 | 86.3±0.9 | 79.6
w/o CI on TA | 79.9±0.6 | 75.7±0.9 | 78.8±0.3 | 87.6±0.5 | 80.5
w/o CI on FA | 80.0±0.3 | 75.9±1.4 | 78.7±0.8 | 87.5±1.4 | 80.5
w/o CI on ALL | 78.6±0.9 | 74.8±0.9 | 78.3±0.3 | 86.5±1.8 | 79.5
CI-GCL | 80.5±0.5 | 76.5±0.1 | 79.6±0.3 | 89.6±0.9 | 81.5

Experimental results in Table 6 demonstrate that removing any component of the method degrades the quality of the learned graph representations. These results align with our hypothesis that random topology or feature augmentation without the CI constraint corrupts community structure, thereby hindering the model's performance on downstream tasks.

5.2. Qualitative Evaluation

5.2.1. ROBUSTNESS AGAINST VARIOUS NOISE
To showcase the robustness of CI-GCL, we conduct experiments in an adversarial setting. Following GraphCL, we perform random noise attacks, with perturbation ratios σ ∈ {0.05, 0.10, 0.15, 0.20, 0.25, 0.30}, on the topology A and the features X of the input graph, respectively. Specifically, for the topology attack, we randomly flip σ × m edges, with m the total number of edges. For the feature attack, we randomly select σ × d features and add Gaussian noise to them, with d the total number of features (both attacks are sketched below). Baselines without a designed feature augmentation are equipped with random feature masking instead. Figure 3 reports the resulting graph classification performance.

Figure 3. Accuracy (%) under noise attack on two datasets: (A) topology attack on MUTAG; (B) feature attack on MUTAG; (C) topology attack on NCI1; (D) feature attack on NCI1. Compared methods: CI-GCL, AD-GCL, AutoGCL, GCL-SPAN, and GCS.
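The two noise attacks can be written down compactly. The sketch below is one plausible reading of the setup described above (flipping randomly chosen entries of the adjacency matrix and adding Gaussian noise to randomly chosen feature columns); it is not the evaluation code used in the paper.

```python
import numpy as np

def attack_topology(adj, sigma, rng=None):
    """Randomly flip sigma * m entries of a symmetric 0/1 adjacency matrix,
    where m is the number of undirected edges."""
    rng = rng or np.random.default_rng()
    n = adj.shape[0]
    m = int(adj.sum() // 2)
    rows_u, cols_u = np.triu_indices(n, k=1)
    pick = rng.choice(rows_u.size, size=int(sigma * m), replace=False)
    rows, cols = rows_u[pick], cols_u[pick]
    adj = adj.copy()
    adj[rows, cols] = 1 - adj[rows, cols]  # flip: add or remove an edge
    adj[cols, rows] = adj[rows, cols]      # keep the matrix symmetric
    return adj

def attack_features(x, sigma, scale=1.0, rng=None):
    """Add Gaussian noise to sigma * d randomly chosen feature columns."""
    rng = rng or np.random.default_rng()
    d = x.shape[1]
    cols = rng.choice(d, size=int(sigma * d), replace=False)
    x = x.copy()
    x[:, cols] += rng.normal(0.0, scale, size=(x.shape[0], cols.size))
    return x
```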
From Figure 3, we make the following three observations. (1) CI-GCL outperforms the four best-performing GCL methods under both the topology and the feature attack, demonstrating its strong robustness. (2) CI-GCL and GCL-SPAN are more robust than the other baselines under the topology attack, showing that preserving high-level graph structure improves robustness over random graph augmentation; CI-GCL, which explicitly focuses on community invariance, additionally outperforms GCL-SPAN. (3) CI-GCL is more robust than the other baselines under the feature attack, since we also apply the unified CI constraint to feature augmentation.

5.2.2. EFFECTIVENESS IN COMMUNITY PRESERVATION
To explore the ability to preserve communities, we report community changes in Table 7, where the community change is defined as the averaged fraction of nodes whose community label, assigned by spectral clustering, changes between the graph before and after augmentation (sketched in code below). We observe that CI-GCL preserves community structure most effectively, owing to the proposed CI constraint. Furthermore, referring back to Table 2, we find that methods with larger community disruption, such as GraphCL and AutoGCL, underperform those with smaller community disruption.

Table 7. Community changes (%) in unsupervised learning.
Method | NCI1 ↓ | PROTEINS ↓ | MUTAG ↓ | IMDB-B ↓ | Avg. ↓
GraphCL | 30.6 | 25.2 | 29.9 | 29.2 | 28.7
GCS | 27.6 | 30.9 | 26.5 | 32.5 | 29.3
GCL-SPAN | 15.1 | 11.2 | 14.5 | 19.7 | 15.1
AD-GCL | 7.0 | 4.8 | 5.6 | 18.0 | 8.8
AutoGCL | 34.2 | 31.0 | 30.3 | 33.4 | 32.2
CI-GCL | 5.9 | 4.3 | 3.3 | 13.8 | 6.8

We also provide a visualization of CI-GCL on a widely used synthetic dataset with 1,000 samples (Kim & Oh, 2021), which is well suited to this analysis since it possesses a clear community structure. We train our models in an unsupervised manner on this dataset and randomly select one example for visualization. In Figure 4(A-C), CI-GCL effectively preserves community structure by removing edges between clusters (green lines) and adding edges within each cluster (red lines), while GraphCL destroys communities by randomly adding and removing edges. In Figure 4(D), where the x- and y-axes represent nodes and features, respectively, CI-GCL effectively retains the features that are important for community invariance.

Figure 4. A case study of TA and FA of GraphCL and CI-GCL: (A) original graph; (B) topology augmentation of CI-GCL; (C) topology augmentation of GraphCL; (D) feature augmentation of CI-GCL. Panels (B-C) share the same color map; green lines denote edge dropping and red lines denote edge adding.

6. Conclusion
In this work, we proposed a unified constraint that can be applied to both topology and feature augmentation to ensure community invariance and benefit downstream tasks. To achieve this goal, we searched for augmentation schemes that maximize spectral changes of the input graph's topology and features while minimizing community changes. Our proposed community-invariant constraint can be paired with various GCL frameworks. In the future, we plan to explore more high-level graph information as constraints for learnable graph augmentation and to apply our framework to a broader range of real-world applications.
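The community-change metric from Section 5.2.2 can be reconstructed as follows. This is an illustrative reimplementation, not the authors' code: community labels before and after augmentation come from spectral clustering, and the arbitrary cluster ids are matched with the Hungarian algorithm before counting changed nodes.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import SpectralClustering

def community_change(adj_before, adj_after, k):
    """Fraction of nodes whose spectral-clustering community label changes
    between the original and the augmented graph."""
    cluster = SpectralClustering(n_clusters=k, affinity="precomputed")
    before = cluster.fit_predict(adj_before)
    after = cluster.fit_predict(adj_after)
    # Cluster ids are arbitrary, so align them via a confusion matrix
    # and an optimal (Hungarian) label assignment.
    confusion = np.zeros((k, k), dtype=int)
    for i, j in zip(before, after):
        confusion[i, j] += 1
    rows, cols = linear_sum_assignment(-confusion)
    matched = confusion[rows, cols].sum()
    return 1.0 - matched / before.size
```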
Impact Statements
In this section, we elaborate on the broader impacts of our work from the following two aspects. (1) Learnable graph augmentations. With the rapid development of GCL, learnable graph augmentation has become a significant research topic in the machine-learning community. Compared to current learnable graph augmentation methods, our work introduces control over the augmentation scheme in joint learning settings. Viewing the CI constraint as a form of expert knowledge, our work can be seen as establishing a connection between expert-knowledge-guided augmentations and learnable augmentations through the design of specific constraints. (2) Community and spectrum. Despite significant advances in GCL, theoretical foundations regarding the relation between community preservation and spectrum design remain lacking. Our work highlights the significant potential of the graph spectrum and of community preservation in GCL, which may assist others in understanding the graph spectrum. Moreover, we do not anticipate any direct negative impacts on society from our findings."
}
title_10K/test_title_short_2405.01359v1.json
ADDED
@@ -0,0 +1,17 @@
{
"url": "http://arxiv.org/abs/2405.01359v1",
"title": "GAIA: A General AI Assistant for Intelligent Accelerator Operations",
"abstract": "Large-scale machines like particle accelerators are usually run by a team of\nexperienced operators. In case of a particle accelerator, these operators\npossess suitable background knowledge on both accelerator physics and the\ntechnology comprising the machine. Due to the complexity of the machine,\nparticular subsystems of the machine are taken care of by experts, who the\noperators can turn to. In this work the reasoning and action (ReAct) prompting\nparadigm is used to couple an open-weights large language model (LLM) with a\nhigh-level machine control system framework and other tools, e.g. the\nelectronic logbook or machine design documentation. By doing so, a multi-expert\nretrieval augmented generation (RAG) system is implemented, which assists\noperators in knowledge retrieval tasks, interacts with the machine directly if\nneeded, or writes high level control system scripts. This consolidation of\nexpert knowledge and machine interaction can simplify and speed up machine\noperation tasks for both new and experienced human operators.",
"authors": "Frank Mayet",
"published": "2024-05-02",
"updated": "2024-05-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"physics.acc-ph"
],
"label": "Original Paper",
"paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
"gt": "GAIA: A General AI Assistant for Intelligent Accelerator Operations",
"main_content": "Introduction Particle accelerators are complex machines that consist of a large number of subsystems. Although many processes are automated and feedback systems are in place, experiments and machine supervision need to be performed by a group of operators. These operators usually have an accelerator physics background and mostly know how the technology works. They especially know how to setup and tune the machine parameters for certain working points and experiments using high-level graphical user interfaces, which are connected to low-level machine control software. Due to the complexity of the machine, some subsystems of the machine are taken care of by experts, who the operators can turn to. This work shows that it is possible to support the day-to-day operation of a complex machine like a particle accelerator using a large language model (LLM), an object-oriented high-level machine control system framework, as well as a number of interfaces to knowledge bases such as the electronic logbook. The system is able to assist the operators on many levels, e.g. by producing Python scripts, which when executed perform a task defined by an input prompt to the LLM. To this end, the reasoning and action prompting paradigm (ReAct) [Yao et al., 2023] is implemented. This way a multi-expert system is realized, mimicking the real world, where the complex machine is operated by many subsystem experts. arXiv:2405.01359v1 [cs.CL] 2 May 2024 \fGAIA TECHNICAL REPORT 2 Background \u2013 Accelerator Controls Modern accelerators are routinely operated using sophisticated low-level control systems, such as EPICS1, TANGO2, TINE3, or DOOCS4. These control systems allow for high frequency machine and beam diagnostics, as well as control, which is an essential requirement for highly available, brilliant beams for the users. In addition, control systems are often paired with high speed data acquisitions systems (DAQ), capable of recording pulse synchronized data at the machine repetition rate. In order to access the machine, or derived data (via middle layer servers), users can usually leverage libraries and wrappers for common programming and scripting languages such as C, C++, Java, or Python. Easy to use toolkits are sometimes provided to create graphical user interfaces (GUIs) for simple interaction with control system servers (e.g. parameter scans), or data visualization. On many machines there is the need to perform experiments, which are more complex than simple oneor twodimensional parameter scans. In this case users and operators have to either perform the tasks manually, or to write measurement scripts using e.g. Python. In this work a Python-based scripting toolkit called doocs_generic_experiment is used, which simplifies the task of writing a measurement script substantially. The toolkit follows an object-oriented approach and is based on a set of so called actions, which resemble real-world sub-tasks an operator would have to perform if the experiment would be conducted manually. These actions can be grouped into procedures, which can reach arbitrary complexity. In addition of performing a set of action in series, procedures can also run multiple actions in parallel to save time. The final experiment can then consit of either one, or many of these procedures. 
Due to the easy-to-understand concept of stringing together common actions and procedures, the toolkit enables rapid prototyping of complex experiments as well as full- and semi-automation of experimental campaigns, which would otherwise take too long to perform manually. In addition, the concept of encapsulating commonly performed actions adds a safety layer, as the room for possible mistakes is reduced. Figure 1 shows a selection of currently available actions and procedures.

Figure 1: A selection of procedures and actions available via the doocs_generic_experiment Python module. Actions include ReadAction, WriteAction, RichReadAction, RichWriteAction, IfElseAction, LogAction, UserInputAction, PauseAction, WaitForStabilityAction, ParallelReadAction, WaitForChangeAction, SubscriptionAction, ProcessingAction, AlarmConditionAction, NeuralNetAction, LaserSwitchAction, ScreenAction, PulseShotModeAction, BeamRecenteringAction, LaserApertureAction, MagnetDegaussAction, MagnetCycleAction, PrintToLogbookAction, and PolariXOnBeamSwitchAction; procedures include Procedure, ParallelProcedure, SingleDataProcedure, ScanProcedure, ImageScanProcedure, MachineStateProcedure, PhaseAdvanceScanProcedure, FourDimTomographyProcedure, and FiveDimTomographyProcedure, alongside helpers such as DOOCSManager, DOOCSAddress, and OutputManager.

[1] https://epics.anl.gov, last access 2024-04-26
[2] https://www.tango-controls.org, last access 2024-04-26
[3] https://tine.desy.de, last access 2024-04-26
[4] https://doocs.desy.de, last access 2024-04-26

3 Multi-Expert System – GAIA
In order to realize the operations assistant, the open-weights LLM Mixtral 8x7B Instruct v0.1 (8-bit quantization) [Jiang et al., 2024] is used as the main model. Mixtral 8x7B supports a context size of 32k tokens. This is ideal for reasoning and chain-of-thought (CoT) prompting [Wei et al., 2022, 2023, Yao et al., 2023]. The model runs locally on a single Nvidia A100 80GB GPU in the DESY Maxwell cluster and is served by Ollama [5] (model tag: mixtral:8x7b-instruct-v0.1-q8_0). Note that the Maxwell node does not need to directly interface with the accelerator control system. All interaction and knowledge retrieval is performed by a client application, which runs on a computer that is part of the control system network. This client can then interface with control system servers, file servers, the Mattermost messaging system, etc., if needed. The client is called General AI Assistant (GAIA). One of the main concepts of the ReAct prompting scheme is tool use. As the agent engages in its inner monologue, or chain of thought, it will eventually reach a point where it needs to either perform an action or retrieve information. This is analogous to an operator deciding to either use control system tools to control or diagnose certain parts of the machine, or to turn to an expert to retrieve information. Technically, the agent, which is implemented using LangChain [6], interrupts the LLM output stream and injects the result of a particular tool use. The agent might, for example, come to the conclusion that it needs to retrieve the current value of a particular machine parameter. In this case, within its chain of thought, it may suggest using doocs_generic_experiment-based tools to perform this particular task. Another example would be the retrieval of information from the electronic logbook. One advantage of the concept of tool use is that it helps to circumvent the LLM token limit.
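Wiring such tools to an Ollama-served model could look roughly like the sketch below. The paper names LangChain and Ollama but does not publish this code; the imports reflect one common (pre-1.0) LangChain API and may differ from what GAIA actually uses, and the `read_machine_value` stub merely stands in for a real doocs_generic_experiment call.

```python
from langchain_community.llms import Ollama
from langchain.agents import AgentType, Tool, initialize_agent

def read_machine_value(address: str) -> str:
    """Stub for a doocs_generic_experiment ReadAction; returns a dummy value."""
    return "74.56"  # illustrative value only

# The locally served Mixtral model, addressed via its Ollama model tag.
llm = Ollama(model="mixtral:8x7b-instruct-v0.1-q8_0")

tools = [
    Tool(
        name="ReadAction",
        func=read_machine_value,
        description="Read the current value of a DOOCS address.",
    ),
]

# A ReAct-style agent that interleaves thoughts with tool calls.
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
# agent.run("What is the current value of the gun amplitude?")
```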
In many cases knowledge retrieval may involve classical retrieval augmented generation (RAG) [Lewis et al., 2021] implementations, which use their own disjoint LLM context. This way, only the result of the knowledge retrieval process is injected into the agent's context window. In addition, this allows the use of fine-tuned RAG systems, potentially based on other LLMs. If the task is to devise a certain experimental procedure at a particle accelerator, at least two tools/experts may be needed. The first expert may provide knowledge about the layout of the beam line, as well as how to utilize the specific elements for a given task. The second expert may, on the other hand, be proficient in producing doocs_generic_experiment code and in interfacing with the accelerator control system. The agent is able to combine the output of the two tools, yielding a suitable Python program from a prompt such as \"I want to operate the accelerator at maximum energy gain\". Figure 2 shows a selection of tools used in the current GAIA implementation. In the following section a number of example prompts to GAIA are shown. Note that the chain of thought, which is shown in the examples, is usually not visible to the user.

Figure 2: A selection of tools to be used by GAIA: AcceleratorLatticeTool (looks up information about the accelerator layout using an already existing lattice file parser), DOOCSAddressHelperTool (helps looking up DOOCS addresses using simple sequence matching), OperationsMeetingSummarizer (summarizes .pptx files from weekly meetings using basic prompting and its own context), LogbookTool (gives access to the electronic logbook using an existing CLI interface), MattermostChatBotTool (lets the agent contact experts via the Mattermost API), PythonCodeRunnerTool (runs Python code, for example code generated by the DGEHelperTool), wrappers around actions and procedures (ReadActionTool, LogbookActionTool, MachineStateProcedureTool, ...), DGEHelperTool (explains doocs_generic_experiment and can also generate code), and CalculatorTool (used in order to get accurate calculations).

[5] http://www.ollama.com, last access 2024-04-26
[6] http://www.langchain.com, last access 2024-04-26

4 Examples
In this section example prompts to GAIA are presented. The results are shown in Figs. 3 through 7. Note that some parts of the output are truncated, as indicated by '[...]'. The chain of thought shown in the examples is usually not visible to the user. Also, names are redacted where needed. The example cases show the versatility of the system, which includes access to meeting notes (Fig. 3), guidance on how to use doocs_generic_experiment (Fig. 4), information retrieval from the electronic logbook (Fig. 5), machine data access using doocs_generic_experiment and subsequently contacting human experts via Mattermost chat (Fig. 6), and running complex tasks using an ad hoc generated doocs_generic_experiment script (Fig. 7).

fmayet@desymac42 ~ % ./gaia --verbose True
Welcome to GAIA (served on max-wng055)
What can I do for you? Can you summarize the last operations meeting?
> Entering new AgentExecutor chain...
Thought: To answer this question, I need to use the \"Operation Meeting Summarizer\" tool and provide the date of the meeting as input. I will ask the user for the date if it has not been provided.
Action: ```{ \"action\": \"User Input Tool\", \"action_input\": { \"question\": \"What is the date of the operation meeting you want me to summarize?\" } }```
What is the date of the operation meeting you want me to summarize? 29.01.2024
Observation: 29.01.2024
Thought: Based on the user input, I can now use the \"Operation Meeting Summarizer\" tool to generate a summary of the operations meeting that took place on the given date.
Action: ```json { \"action\": \"Operation Meeting Summarizer\", \"action_input\": { \"date\": \"2024-01-29\" } }```
Observation: The ARES Operation Meeting summary from January 29, 2024 covers [...]
> Finished chain.
The ARES Operation Meeting summary from January 29, 2024 covers the week 4 achievements, difficulties, and notes. During this week, the team re-established the velocity bunching working point using fewer quads, optimized transmission through the DLA, and worked on various scans and tuning. They also faced a TWS1 modulator fault due to low solenoid water flow rate, which was fixed. On the ARES beamline, the EA.L2 camera had ROI and background subtraction applied, showing likely Cherenkov/Smith-Purcell effects but no coherence yet. PolariX development included switching on the OVC, testing higher water temperatures for BOC detuning, conditioning both vertical and horizontal streaking, and developing measurement and analysis scripts. PolariX measurements consisted of streaking along various angles covering the full range of the phase shifter and emittance measurement using DLQZM1 and the D1 screen. For week 5, [Name 1] will be on shift with [Name 2] on Monday, followed by [Name 1] with Frank on Tuesday, [Name 1] with [Name 3] on Wednesday, [Name 4] on Thursday for AutoAcc beamtime, and [Name 5] on Friday for UKE beamtime preparation. Additionally, there are plans for connecting the kicker magnet, installing in-air stages, laser work, DLA activities, and more during this week.
What can I do for you?

Figure 3: Example: \"Can you summarize the last operations meeting?\". Names are redacted.

fmayet@desymac42 ~ % ./gaia --verbose True
Welcome to GAIA (served on max-wng055)
What can I do for you? I want to write values to multiple devices in parallel using doocs_generic_experiment. How do I do this?
> Entering new AgentExecutor chain...
Thought: To answer this question, I need to understand how to write values to multiple devices in parallel using the doocs_generic_experiment Python module. I can use the DGE Helper tool for this.
Action: ```json { \"action\": \"DGE Helper\", \"action_input\": { \"query\": \"How to write values to multiple devices in parallel using doocs_generic_experiment?\" } }```
Observation: To write values to multiple devices in parallel using [...] To write values to multiple devices in parallel using `doocs_generic_experiment`, you can use a `ParallelProcedure` with a list of `WriteAction`.
Here is an example:
```python
from doocs_generic_experiment import DOOCSAddress, WriteAction, ParallelProcedure

# Define the addresses and values to write
addresses = [DOOCSAddress('dev1', writable=True), DOOCSAddress('dev2', writable=True)]
values = [10, 20]

# Create a list of WriteAction instances
write_actions = [WriteAction(address, value) for address, value in zip(addresses, values)]

# Create a ParallelProcedure instance and add the write actions
parallel_procedure = ParallelProcedure(actions=write_actions)

# Run the procedure
parallel_procedure.run()
```
In this example, we first define a list of `DOOCSAddress` instances with the `writable` flag set to `True`. We then create a list of `WriteAction` instances using a list comprehension and the `zip` function to pair the addresses and values. Finally, we create a `ParallelProcedure` instance and add the write actions to it. When we run the procedure, the write actions will be executed in parallel.
What can I do for you?

Figure 4: Example: \"I want to write values to multiple devices in parallel using doocs_generic_experiment. How do I do this?\".

fmayet@desymac42 ~ % ./gaia --verbose True
Welcome to GAIA (served on max-wng055)
What can I do for you? Did they manage to define the new hexapod parking position today?
> Entering new AgentExecutor chain...
To answer this question, I need to check the logbook entries for today related to the hexapod parking position. I will use the Logbook tool with the 'n_days' parameter set to 0 to get the entries from today.
Action: ```{ \"action\": \"Logbook\", \"action_input\": { \"n_days\": 0, \"n_entries\": null } }```
Observation: Logbook name: SINBAD-ARESelog e-Logbook, Date: Wednesday 14. February 2024
Logbook entries: [...]
Date: 14.02.2024 18:14
Authors: [Name 1]
Title: New Hexapod Parking Position
Content: The grid is spaced in 1mm steps. The new parking position (red dot) is 7mm away from the V-groove holder [...]
Date: 14.02.2024 14:23
Authors: [Name 1], [Name 2]
Title: Activated Reference Phase Tracking for the Gun
Content: [...]
> Finished chain.
Based on the logbook entries, they did manage to define a new hexapod parking position today. The new parking position is 7mm away from the V-groove holder, as recorded in the entry at 18:14 by [Name 1].
What can I do for you?

Figure 5: Example: \"Did they manage to define the new hexapod parking position today?\". Names are redacted.

fmayet@desymac42 ~ % ./gaia --verbose True
Welcome to GAIA (served on max-wng055)
What can I do for you? Can you ask an expert whether the current value of the Gun Amplitude (Probe) is correct?
> Entering new AgentExecutor chain...
Thought: To answer this question, I need to find out the current value of the Gun Amplitude (Probe). I can use the ReadAction tool for this. After getting the value, I will ask an expert via Mattermost chat whether it is correct.
Action: ```{ \"action\": \"ReadAction\", \"action_input\": { \"address\": \"MTest/GUN_AMPLITUDE_PROBE\" } }```
Observation: nan
Thought: It seems there was an issue with the DOOCS address provided. I will use the DOOCS Address Helper tool to ensure the correct address is used.
Action: ```json { \"action\": \"DOOCS Address Helper\", \"action_input\": { \"search_query\": \"Gun Amplitude (Probe)\" } }```
Observation: SINBAD.RF/LLRF.CONTROLLER/VS.AR.LI.RSB.G.1/AMPL.SAMPLE
Thought: Now that I have the correct DOOCS address, I can use the ReadAction tool to get the current value of the Gun Amplitude (Probe). After getting the value, I will ask an expert via Mattermost chat whether it is correct.
Action: ```{ \"action\": \"ReadAction\", \"action_input\": { \"address\": \"SINBAD.RF/LLRF.CONTROLLER/VS.AR.LI.RSB.G.1/AMPL.SAMPLE\" } }```
Observation: 74.56
Thought: Now that I have the current value of the Gun Amplitude (Probe), I can ask an expert via Mattermost chat whether it is correct.
Action: ```json { \"action\": \"Mattermost Chat Bot\", \"action_input\": { \"chat_message\": \"@fmayet The current value of the Gun Amplitude (Probe) is 74.56. Is this value correct?\" } }```
The current value of the Gun Amplitude (Probe) is 74.56. Is this value correct?
I would say it is a little low, but they can proceed with the experiment.
Observation: Here is a message from fmayet: I would say it is a little low, but they can proceed with the experiment.
Thought: I have obtained the current value of the Gun Amplitude (Probe) and consulted an expert. The expert has confirmed that the value is a bit low, but they can continue with the experiment.
> Finished chain.
The current value of the Gun Amplitude (Probe) is 74.56, and it is a bit low. However, you can proceed with the experiment.
What can I do for you?

Figure 6: Example: \"Can you ask an expert whether the current value of the Gun Amplitude (Probe) is correct?\".

fmayet@desymac42 ~ % ./gaia --verbose True
Welcome to GAIA (served on max-wng055)
What can I do for you? Please cycle the two magnets ARDLMQZM1 and ARDLMQZM2 in parallel and post the result to the logbook afterwards.
> Entering new AgentExecutor chain...
To perform this task, I will first ask the DGE Helper for advice.
Action: ```{ \"action\": \"DGE Helper\", \"action_input\": { \"query\": \"How can I cycle the two magnets ARDLMQZM1 and ARDLMQZM2 in parallel and post the result to the logbook afterwards?\" } }```
Observation: [...]
Thought: I will now run the code using Python Code Runner.
Action: ```json { \"action\": \"Python Code Runner\", \"action_input\": { \"code\": \"[...]\" } }```
This is the code I would like to run for you:
```python
from doocs_generic_experiment import Procedure, ParallelProcedure, MagnetCycleAction, PrintToLogbookAction

# Define the ParallelProcedure to cycle both magnets in parallel
cycle_magnets_procedure = ParallelProcedure(
    [
        MagnetCycleAction(magnet_name='ARDLMQZM1'),
        MagnetCycleAction(magnet_name='ARDLMQZM2')
    ]
)

# Define the PrintToLogbookAction to print the result to the logbook
print_to_logbook = PrintToLogbookAction(title=\"Magnet Cycling Result\",
                                        severity='INFO',
                                        author='aresoper (via doocs_generic_experiment)',
                                        logbook='areslog',
                                        message=\"ARDLMQZM1 and ARDLMQZM2 are now cycled.\")

# Define a Procedure that runs both actions in sequence
procedure = Procedure([cycle_magnets_procedure, print_to_logbook])

# Run the procedure
procedure.run()
```
Do you want to run this code?

Figure 7: Example: \"Please cycle the two magnets ARDLMQZM1 and ARDLMQZM2 in parallel and post the result to the logbook afterwards.\".
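Figure 2 notes that the DOOCS Address Helper, used in the example above, relies on simple sequence matching. A tool like that could be approximated with Python's difflib, as sketched below; the address catalogue is invented for illustration, apart from the one address quoted in Figure 6.

```python
import difflib

# Toy catalogue mapping human-readable names to DOOCS addresses. Only the
# first entry is taken from the Figure 6 example; the second is hypothetical.
ADDRESSES = {
    "Gun Amplitude (Probe)": "SINBAD.RF/LLRF.CONTROLLER/VS.AR.LI.RSB.G.1/AMPL.SAMPLE",
    "Example Magnet Current": "EXAMPLE/MAGNETS/ARDLMQZM1/CURRENT",  # made up
}

def lookup_address(search_query, cutoff=0.4):
    """Return the catalogued address whose name best matches the query,
    using difflib's SequenceMatcher-based ranking."""
    match = difflib.get_close_matches(search_query, ADDRESSES, n=1, cutoff=cutoff)
    return ADDRESSES[match[0]] if match else None

# lookup_address("Gun Amplitude Probe")
# -> "SINBAD.RF/LLRF.CONTROLLER/VS.AR.LI.RSB.G.1/AMPL.SAMPLE"
```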
5 Conclusions
This work shows how a combination of a ReAct-empowered LLM and a number of high-level expert tools can assist operators in the context of accelerator operation. The examples shown above reveal a number of positive features of the current implementation. The multi-expert nature of the ReAct concept means that one agent can be used for many use cases, which makes it easier to use for operators. At the same time, the use of tools enables safe and noise-free interactions with the accelerator control system. This is crucial for machine safety. It is important to note that not all tools need to rely on advanced LLM-based systems (see Fig. 2). As the system is a work in progress, there is also room for improvement. Depending on the available compute resources, the reasoning process can be lengthy, especially if many tools are used. During test runs it was furthermore observed that sometimes intermediate thoughts within the chain of thought are actually more useful than the final answer. Also, sometimes only slight changes to the initial prompt influenced the outcome substantially. In addition to addressing and thoroughly analyzing the aforementioned issues, future work will include adding multimodal models to better understand e.g. media-rich logbook entries. Furthermore, metrics will be devised to properly quantify the quality of the answers.

Acknowledgments
The author is grateful for getting access to the ARES R&D accelerator at DESY, as well as being able to leverage the compute resources provided by the DESY Maxwell team. The author acknowledges support from DESY (Hamburg, Germany), a member of the Helmholtz Association HGF."
}