XaiverZ committed
Commit dc5c5f6 · 1 Parent(s): cc0faf2
This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. title_10K/test_title_short_2405.02178v1.json +17 -0
  2. title_10K/test_title_short_2405.02225v1.json +20 -0
  3. title_10K/test_title_short_2405.02228v1.json +18 -0
  4. title_10K/test_title_short_2405.02228v2.json +18 -0
  5. title_10K/test_title_short_2405.02235v1.json +16 -0
  6. title_10K/test_title_short_2405.02384v1.json +18 -0
  7. title_10K/test_title_short_2405.02426v1.json +18 -0
  8. title_10K/test_title_short_2405.02478v1.json +17 -0
  9. title_10K/test_title_short_2405.02696v1.json +17 -0
  10. title_10K/test_title_short_2405.02710v1.json +16 -0
  11. title_10K/test_title_short_2405.02730v1.json +16 -0
  12. title_10K/test_title_short_2405.02749v1.json +16 -0
  13. title_10K/test_title_short_2405.02791v1.json +17 -0
  14. title_10K/test_title_short_2405.02801v2.json +18 -0
  15. title_10K/test_title_short_2405.02816v1.json +18 -0
  16. title_10K/test_title_short_2405.02844v1.json +16 -0
  17. title_10K/test_title_short_2405.02905v1.json +17 -0
  18. title_10K/test_title_short_2405.03003v1.json +18 -0
  19. title_10K/test_title_short_2405.03008v1.json +18 -0
  20. title_10K/test_title_short_2405.03025v1.json +16 -0
  21. title_10K/test_title_short_2405.03085v1.json +16 -0
  22. title_10K/test_title_short_2405.03108v1.json +16 -0
  23. title_10K/test_title_short_2405.03121v1.json +17 -0
  24. title_10K/test_title_short_2405.03133v1.json +17 -0
  25. title_10K/test_title_short_2405.03150v1.json +0 -0
  26. title_10K/test_title_short_2405.03188v1.json +16 -0
  27. title_10K/test_title_short_2405.03251v1.json +17 -0
  28. title_10K/test_title_short_2405.03280v1.json +17 -0
  29. title_10K/test_title_short_2405.03485v1.json +17 -0
  30. title_10K/test_title_short_2405.03549v1.json +19 -0
  31. title_10K/test_title_short_2405.03606v1.json +18 -0
  32. title_10K/test_title_short_2405.03690v2.json +16 -0
  33. title_10K/test_title_short_2405.03894v1.json +17 -0
  34. title_10K/test_title_short_2405.03958v1.json +18 -0
  35. title_10K/test_title_short_2405.03962v1.json +17 -0
  36. title_10K/test_title_short_2405.03989v2.json +16 -0
  37. title_10K/test_title_short_2405.04003v1.json +17 -0
  38. title_10K/test_title_short_2405.04233v1.json +17 -0
  39. title_10K/test_title_short_2405.04272v1.json +18 -0
  40. title_10K/test_title_short_2405.04356v1.json +16 -0
  41. title_10K/test_title_short_2405.04370v1.json +16 -0
  42. title_10K/test_title_short_2405.04403v1.json +17 -0
  43. title_10K/test_title_short_2405.04483v1.json +16 -0
  44. title_10K/test_title_short_2405.04496v1.json +16 -0
  45. title_10K/test_title_short_2405.04534v1.json +16 -0
  46. title_10K/test_title_short_2405.04674v1.json +0 -0
  47. title_10K/test_title_short_2405.04682v1.json +18 -0
  48. title_10K/test_title_short_2405.04700v1.json +19 -0
  49. title_10K/test_title_short_2405.04781v1.json +16 -0
  50. title_10K/test_title_short_2405.04795v1.json +16 -0
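The listing above covers the first 50 of the newly added per-paper records under title_10K/. As a rough illustration only (not part of the commit), the sketch below assumes a local checkout of this repository and tallies those records by primary_cat; the directory name and filename pattern are taken from the listing, everything else is illustrative.

```python
import json
from collections import Counter
from pathlib import Path

# Assumption: this runs from a local checkout of the dataset repository,
# so the files listed above are available under title_10K/.
records_dir = Path("title_10K")

counts = Counter()
for path in sorted(records_dir.glob("test_title_short_*.json")):
    with path.open(encoding="utf-8") as fh:
        record = json.load(fh)  # each file holds a single JSON object (one paper)
    counts[record.get("primary_cat", "unknown")] += 1

for category, n in counts.most_common():
    print(f"{category}: {n}")
```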
title_10K/test_title_short_2405.02178v1.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "url": "http://arxiv.org/abs/2405.02178v1",
+ "title": "Assessing and Verifying Task Utility in LLM-Powered Applications",
+ "abstract": "The rapid development of Large Language Models (LLMs) has led to a surge in\napplications that facilitate collaboration among multiple agents, assisting\nhumans in their daily tasks. However, a significant gap remains in assessing to\nwhat extent LLM-powered applications genuinely enhance user experience and task\nexecution efficiency. This highlights the need to verify utility of LLM-powered\napplications, particularly by ensuring alignment between the application's\nfunctionality and end-user needs. We introduce AgentEval, a novel framework\ndesigned to simplify the utility verification process by automatically\nproposing a set of criteria tailored to the unique purpose of any given\napplication. This allows for a comprehensive assessment, quantifying the\nutility of an application against the suggested criteria. We present a\ncomprehensive analysis of the effectiveness and robustness of AgentEval for two\nopen source datasets including Math Problem solving and ALFWorld House-hold\nrelated tasks. For reproducibility purposes, we make the data, code and all the\nlogs publicly available at https://bit.ly/3w3yKcS .",
+ "authors": "Negar Arabzadeh, Siqing Huo, Nikhil Mehta, Qingyun Wu, Chi Wang, Ahmed Awadallah, Charles L. A. Clarke, Julia Kiseleva",
+ "published": "2024-05-03",
+ "updated": "2024-05-03",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.AI"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "Assessing and Verifying Task Utility in LLM-Powered Applications",
+ "main_content": "Introduction One of the long-lasting goals for intelligent agents (Winograd, 1972) is for them to seamlessly interact with humans in natural language and help their end-users with their tasks, such as completing household tasks, math tutoring, and so on. The rapid development of open-source libraries (Wu et al., 2023; Li et al., 2023a) helps that goal by simplifying the development of LLM-powered agentic applications for various user-centered tasks (Liang et al., 2023b; Hong et al., 2023; Talebirad and Nadiri, 2023). To ensure that the application\u2019s behavior meets the requirements of the application developers, it is also crucial to assess its potential utility to end users (Dibia et al., 2023), as \u2217Work done during an internship at Microsoft Research Task Task Description Successful Execution Failed Execution QuantifierAgent Quantified Criteria for the solution Criteria w/ accepted values CriticAgent A solution to be assessed VerifierAgent Adversarial attack targeted solution Robustness Check Updating criteria Multidimensional Task Utility Figure 1: An overview of the AgentEval framework: CriticAgent creates a set of criteria and suggested values; QuantifierAgent quantifies the criteria for a considered application; and VerifierAgent verifies the criteria based on its robustness. The output of the QuantifierAgent is a multi-dimensional assessment of the utility of the application based on a suggested list of criteria and their evaluations. this can significantly impact its improvement journey. Taking into account a range of applications, it is unrealistic to assume benchmarking for every domain, including but not limited to code generation (Liu et al., 2024), health care (Andrew, 2024), and many others whose development we witness every day (Wu et al., 2023). Moreover, directly evaluating agentic applications poses challenges, as current approaches predominantly rely on endto-end success metrics i.e., whether the application accomplishes tasks (Shridhar et al., 2020b, 2019; Myers et al., 2023). However, understanding a user\u2019s interactions with an application involves much more than success alone (Kiseleva et al., 2022a,b; Zhang et al., 2023). Consider math problem solving, although it is important that the application solves the problem correctly, its ability to present and explain solutions based on various criteria, such as completeness, conciseness, and clarity, is crucial. Furthermore, success is not alarXiv:2405.02178v1 [cs.CL] 3 May 2024 \fways clearly defined for a task. Recognizing such criteria and being able to quantify them is essential to assess whether developer requirements are being satisfied and if the application brings utility to the end-users. Given the objective of assessing arbitrary applications, relying solely on end-to-end success metrics is untenable, due to the expansive range of tasks requiring automation. The question is how to design a flexible methodology to assess the task utility for diverse set of applications? To bridge this gap, we introduce AgentEval, a framework to gauge the utility of LLM-powered applications. Its goal is to assess the utility by providing application developers with insights into how the current flow can be characterized. AgentEval builds on recent work showing that LLMs can be a scalable and cost-effective alternative to human evaluation for open-ended tasks (Li et al., 2023b). AgentEval as illustrated in Fig. 1, consists of the three following agents, formally defined in Sec. 
3: (1) CriticAgent suggests the list of criteria based on the task description and a pair of solutions, where one is preferred over the other one (e.g., successful and failed examples). For instance, for math problems, the criteria could be be Efficiency and Clarity of the proposed solution; (2) QuantifierAgent quantifies how the solution performs for each criterion and returns the utility function, e.g. for math problems, if the \u2019 Clarity is \u2018not clear\u2019, \u2018moderately clear\u2019, or \u2018very clear\u2019; (3) VerifierAgent verifies the quality of the assessment of the suggested criteria to make sure the criteria are essential, robust, informative and have high discriminative power. In summary, our main contributions are: C1 Introducing AgentEval, a novel framework that leverages LLM-powered agents as a scalable and cost-effective alternative to human evaluations, to produce task utility through the collaboration of three agents: CriticAgent, QuantifierAgent and VerifierAgent; and C2 An in-depth analysis of AgentEval robustness for two applications across different solutions, that can be replicated on an unseen domain. 2 Related Work 2.1 Evaluation of LLMs Prior work (Guo et al., 2023; Ziyu et al., 2023; Chang et al., 2023; Liang et al., 2023a) has extensively studied the evaluation of LLMs on various fronts: how ethically sound they are (Stahl and Eke, 2024), how they align to human preferences (Hendrycks et al., 2021a; K\u00f6pf et al., 2024), their robustness (Wang et al., 2023b), and the knowledge, and reasoning capabilities they posses (Bian et al., 2023). Recent work evaluates LLMs on more specialized tasks, such as medical domain (Jin et al., 2019), multi-modal tasks (Mialon et al., 2023; Bang et al., 2023), or as agents in interactive environments (Liu et al., 2023). 2.2 User satisfaction prediction Studies suggest that users interacting with various systems operate with specific utility functions in mind (Li et al., 2020; Azzopardi et al., 2018; Ahmadvand et al., 2022). Traditionally, metrics defining user satisfaction were designed using large-scale collected behavioral signals (Kiseleva et al., 2014), and were tailored to specific applications, such as intelligent assistants (Kiseleva et al., 2016a,b), web search engines (Williams et al., 2016a,b; Williams and Zitouni, 2017), dialogue systems (See et al., 2019), multi-turn conversations (Li et al., 2021) and general-purpose personal assistants (Kiseleva and de Rijke, 2017). It was demonstrated that assessing users\u2019 satisfaction requires goes beyond a single metric. As such, here, we propose a flexible framework to assess user and developer requirements, which can eventually be used to improve the application flow. 2.3 Using LLMs as evaluators More recently, there has been a growing trend in utilizing LLMs as evaluators (Chiang and Lee, 2023; Fu et al., 2023), such as for qualitative research (Bano et al., 2023), or summarization. Specifically, Jain et al. (2023) studied the efficacy of few-shot prompted LLM evaluators in evaluating summaries that were written by other LLMs. Similarly, Wang et al. (2023a) explore if ChatGPT itself can be used as an evaluator, by prompting it to score texts. Other works (Tjuatja et al., 2023; Liu and Sun, 2023; Chiang and Lee, 2023) look at how LLMs can be used as proxies for human behavior, or work with humans, such as CoEval (Li et al., 2023b), which showed how LLMs can make human evaluation easier. Pan et al. 
(2024) also show how LLM evaluators can help build models that increase performance on downstream task. Building on the above, a different line of works identify weaknesses in single LLMs as direct evaluators (Huang et al., 2023), and propose to improve them, \fsuch as a multi-step calibration framework (Wang et al., 2023c). Given these drawbacks, recent work has looked at how multiple LLM agents can be used as evaluators. Chan et al. (2023), proposed ChatEval, a multi-agent team that discusses and evaluates responses from agents on generation tasks (debate-style), leading to text that aligns with better human preferences. Similarly, Chern et al. (2024) proposed a multiple agent-debate-assisted meta-evaluation framework. Building on these works, we propose an automatic multi-agent assessment of utility for arbitrary LLM-powered applications, to provide deep insights for developers. Our framework can uncover current flaws in these applications, and may lead to improvements in them, particularly if the application flow changes after it is applied, and then it is re-used. 3 Task Utility Fig. 2 outlines a taxonomy of target tasks for LLMpowered applications, in terms of success metrics. At a high level, these tasks can be categorized into: 1) Success is not clearly defined \u2014 Users use the system in an assistive manner, seeking suggestions from it, rather than expecting it to solve the task. For example, a user can request the system to generate an email. The user usually uses the system\u2019s response as a template, which can later be edited. Directly evaluating assistive tasks like these is hard, particularly for online evaluation, or when dealing with less well-defined tasks. One potential approach is to directly ask users how useful the help was, but this is not well-calibrated (Borisov et al., 2018), hard to quantify (Sepliarskaia et al., 2018), and expensive. 2) Success is clearly defined \u2014 It is clear whether the system solved the task or not, for example, assisting with household tasks, where success is clear and measurable. This category can be further divided into two subcategories: \u2022 an optimal solution exists \u2014 only one successful outcome is possible. For example, when asking an assistant to turn on a light, success is clearly defined, as there is only one way to do it. \u2022 multiple solutions exist \u2014 Increasingly, we observe situations where multiple trajectories of agent behavior can lead to success. For example, when asking an agent to suggest a food recipe, success could be multiple cuisines tasting good, but perhaps the recipe should not be expensive. Tasks for LLM-powered applications Tasks where LLM-powered systems can assist the end user Success is not clearly defined When an agent assumes the role of an assistant, and success is not clearly defined Success is clearly defined When success is clearly defined, it is usually evaluated in a binary way Optimal Solution Exists There is a clear path to a successful event Multiple Solutions Exist Multiple trajectories are leading to success Figure 2: The taxonomy of tasks assessment. AgentEval is currently focused on tasks where success is clearly defined and multiple successful solutions may exist. Previous research on assistive agents suggests human pairwise preferences as one of the most optimal assessments, i.e. when the annotator is presented with two agents side by side and asked for their preferences (Kiseleva et al., 2022b). 
In this setup of side-by-side pairwise comparison, humans tend to suggest a list criteria, explaining why they prefer one agent over the other. For instance,\u2018the first agent was faster\u2019 or \u2018the second agent converses more naturally\u2019. This comparative setup can guide humans to come up with a list of criteria that helps to infer the utility of the task. With this in mind, we designed AgentEval (Fig. 1), by employing LLMs to help us understand, verify, and assess task utility, namely: \u2022 CriticAgent: The goal of this agent is to suggest a set of criteria that can be used to assess task utility. The CriticAgent is given a task description, as well as optionally several pairs of solutions, where preferably some are preferred over the other ones, for instance, successful and failed examples. CriticAgent would return a set of criteria C = {c1, . . . , cn}, where each criterion ci is accompanied by a set of accepted values \u03c9 as ci : {\u03c9j}m j=1. For example, for solving math problems, the CriticAgent generated accepted values and criteria such as clarity, efficiency, and more see Tab. 1. \u2022 QuantifierAgent: The goal of QuantifierAgent is to quantify each of the suggested criterion, to access the task utility of the system Ut, for the end user. We define the Utility for task t as: Ut(s) = {Qi(s|ci)}n i=1. where s represents the task sample and Q(s|ci.) is the quantifier output for sample s based on the criterion ci. \fFor example, for math problem solving, given the generated criteria shown in Tab. 1, the solution\u2019s Accuracy could be quantified as \u201cIncorrect\u201d, \u201cpartially correct\u201d or \u201ccorrect\u201d. Eligible quantified values for quantification process are shown in \u201cAccepted values\u201d column in Tab. 1 \u2022 VerifierAgent: There might be cases where not all the criteria suggested by CriticAgent help assess utility. Some criteria might be redundant, while others may not aid in distinguishing performance. VerifierAgent validates the quality of the criteria in terms of robustness and their distinguishability of noisy samples. Essentially, it checks (1) if the criteria can be quantified robustly over repeated samples, and (2) if QuantifierAgent can identify the adversarial attacked targeted samples from the original ones. If the sanity checks do not pass, VerifierAgent will update the list of criteria, to end up with a set of robust, stable, informative and distinguishable criteria for assessment. Finally, we note that AgentEval allows for incorporating a human in the loop in the role of a domain expert. For instance, CriticAgent could be replaced by a human expert who either comes up with the relevant criteria or helps VerifierAgent verify the useful criteria and filter out the unessential ones. 4 Datasets and Solutions This section provides an overview of the datasets utilized in our study i.e., Math problem solving and ALFWorld household task. The math dataset is chosen for its widespread usage and complex problem-solving scenarios that are fundamental in evaluating the effectiveness. ALFWorld dataset offers a scenario involving multi-turn interactions within a moderately approximated multi-modal environment. Each dataset plays a critical role in evaluating different aspects of AgentEval\u2019s capabilities, from handling complex theoretical problems to navigating real-world scenarios. In both tasks, although success is clearly defined, multiple solutions exist for accomplishing the objectives. 
An example of Math problem solving and ALFWorld task is shown in Appendix A.1. Due to space, we report all experiments about Math problem solving in the main paper and we keep all the experiments related to ALFWorld dataset in the Appendix A.3. 4.1 MATH Problem Solving Dataset: The MATH dataset is a substantial collection of 12,500 challenging mathematics problems from high school competitions (Hendrycks et al., 2021b). Each problem comes with a step-by-step solution and is tagged by difficulty levels. Similar to the math problem experimental setup in Wu et al. (2023), we carry out evaluations on 120 problems from level-5 by three different solutions. Due to limited space, for more details about this dataset, we refer readers to Appendix A.2 Solutions: In establishing solutions for this task to assess, we draw inspiration from the experiments showcased in (Wu et al., 2023). We evaluate the proposed methodology by AutoGen (Wu et al., 2023), as well as Langchain ReAct (Yao et al., 2022) and a Vanilla solver that employs GPT-4 to tackle the task. These solutions have previously demonstrated promising and competitive performance (Wu et al., 2023). In Sec. 5.2, we explore how the measured performance with AgentEval correlates with the ground truths. 4.2 ALFWorld Household Task Dataset: ALFWorld presents a set of languagebased interactive decision-making tasks within simulated household environments (Shridhar et al., 2020b). ALFWorld is the first interactive parallel environment that aligns text descriptions and commands with physically embodied robotic simulation. Finally, the dataset\u2019s inclusion of household chores to more intricate problem-solving scenarios, provides a comprehensive testbed for evaluating the adaptability of multi-agent systems. For more information about the dataset and examples of the test cases, we refer the readers to Appendix A.3.1. Solutions: As for the solutions to assess for ALFWorld Household tasks, similar to (Wu et al., 2023), we consider ReAct (Yao et al., 2022) as well as AutoGen with two agents and AutoGen with three agents (Wu et al., 2023). In Appendix A.3.2, we discuss in more details the solutions under assessment. We assess and compare the performance of these three solutions using AgentEval. 5 Experiments 5.1 Implementation Details For all experiments, we use GPT-4 version 0613, accessed through Azure OpenAI services, as the LLM model and the temperature of 0. AgentEval utilizes AutoGen (Wu et al., 2023) for implementation, since it provides a versatile environment where agents can be finely tuned and customized based on specific application needs. This is cru\fCo Error_analysis y Aver Criteria Clarity Error_analysis Completeness Efficiency Vanilla SolverSuccess Vanilla Solve Failed ReActSuccess ReActFailed AutogenSuccess AutogenFailed Average Value Figure 3: AgentEval assessment of three solutions on math problems categorized by success and failed cases. cial for maintaining the flexibility to handle a wide range of applications. We tried to avoid much prompt engineering and tried to keep each agent\u2019s instructions as if we are instructing human annotators. Moreover, another advantages of using AutoGen for implementation of AgentEval is that it has the flexibility to involve human in the loop. Each agent could be replaced by a human annotator. We further provide all the prompts used in our experiments in our Git repository. 
5.2 AgentEval for Math Problems When executing the CriticAgent for Math problem solving, we first obtain a set of criteria as presented in Tab. 1. Then, the QuantifierAgent is tasked with quantifying each criterion, based on the accepted values. We present the outcome of QuantifierAgent measuring performance of three solutions on this task in Fig. 3. Notably, we see that Agenteval does not quantify the three solutions as if they perform equally well across the different criteria. For instance, while all three solutions leverage GPT-4 as the underlying language model, Autogen outperforms ReAct and Vanilla GPT-4 in terms of accuracy. This observation, while confirmed by previous studies (Wu et al., 2023), extends to solution completeness and efficiency as well. As depicted in Fig. 3, the error analysis range of quantified values differs from other metrics. We scrutinize the results by categorizing them into successful and failed cases. AutoGen, Vanilla Solver and ReAct solutions are each presented in orange, blue and green respectively, where the darker bars represent the performance on successful cases and lighter bars represent the failed cases. The difference between the dark and light bar of each color, verify AgentEval\u2019s performance, as we expect that each positive criteria should be quantified higher for successful cases compared to their failed cases. Table 1: Verification Criteria for MathProblems Criteria Description Accepted Values Clarity The ease of understanding the steps, explanations, and language used in the solution. \u2013 Not Clear (0) \u2013 Moderately Clear (1) \u2013 Very Clear (2) Efficiency The use of optimal methods or approaches to solve the math problem. \u2013 Inefficient (0) \u2013 Moderately Efficient (1) \u2013 Efficient (2) Error Analysis The identification and description of possible errors or misconceptions in the math problem-solving process. \u2013 Not Addressed (0) \u2013 Partially Addressed (1) \u2013 Well Addressed (2) Completeness Quality of code in terms of efficiency and elegance \u2013 Incomplete (0) \u2013 Mostly Complete (1) \u2013 Complete (2) We observe that in most cases, the successful and failed cases are distinguished, even with 95% interval confidence on all the success and failed cases. When examining the differences between successful and failed cases among the three solutions, we note that not all successful cases are assessed identically, nor are all failed cases quantified with the same performance. This can be interpreted to mean that even though two solutions might both be successful, one might perform better or worse in certain criteria, such as clarity or efficiency. This observation provides us with valuable additional insights, especially for the developers of the proposed solutions, and goes beyond reporting the effectiveness of a application by one scalar value e.g., success rate. 6 Robustness Analysis and Verification In this section, we first analyze the robustness of AgentEval, then further investigate how VerifierAgent can increase the stability of our assessment. 6.1 Diversity of Criteria Here, our main goal is to study the diversity of the suggested criteria. We investigate the extent inputs to AgentEval (Fig. 1 such as \u2018Task Description\u2019 and \u2018Successful/Failed Executions\u2019) contribute to CriticAgent for creating a more diverse set of criteria. 
To do so, we use two distinct methods, with CriticAgent generating (1) \u201ctask-based\u201d criteria solely from the task description, and (2) \u201csolution-based\u201d criteria, derived from both the task and execution examples. For example, a solution to a mathematical problem, might satisfy criteria such as \u2018Accuracy\u2019 and \u2018Clarity\u2019, independent of the solution. However, when additional tools such as coding are used to solve the problems, additional criteria like \u2018Code Efficiency\u2019 may be introduced to the set of criteria. This makes sense, since the application leveraged coding to solve math problems. \fFigure 4: Task-based vs solution-based criteria for Math problems. Error bar show the 95% confidence interval. Fig. 4 displays the number of unique criteria extracted for mathematical problem solving in taskbased mode, and three different solution-based approaches. To keep the balance between computational costs and analyzing the robustness, we conducted 50 runs of the CriticAgent with different seeds. Subsequently, for N = 50 iterations, we randomly select M \u226450 samples, as shown on the x-axis of Fig. 4, and present the average number of unique extracted criteria, along with its 95% confidence interval after repeating this process 50 times. We note that because the total pool of criteria includes 50 iterations in total, the confidence intervals become smaller when M get closer to the maximum number of samples i.e., 50 To gain deeper insights into diversity of criteria, we took a closer look at them to study if they are truly unique or to what extent they have similarities. This is important to determine if CriticAgent, when continually generating criteria, will always produce new criteria, or if it will eventually converge to a set. We noted that some criteria are similar but worded differently. For example, \u2018Problem Complexity\u2019 vs. \u2018Problem Difficulty\u2019 or \u2018Time Taken\u2019 vs. \u2018Time to Completion\u2019. Tab. 3 in the Appendix lists such instances. To consolidate the similar criteria and reduce noise in the number of unique criteria and redundancy, inspired from previous work (Liu et al., 2022; Vahtola et al., 2022; Reimers and Gurevych, 2019), we employ a pre-trained language model fine-tuned for paraphrasing1, to measure the semantic similarity of criteria descriptions. Using a threshold \u03c4, we classify pairs with cosine similarity greater than \u03c4 as semi-identical ones and select one of them as the representative of the pair. Fig. 4 illustrates the impact of different \u03c4 values (0.7, 0.85, 1) on the diversity of criteria. A threshold of 1 means no filtering occurs. This analysis shows that the solution-based approach has potential to produce more diverse criteria than 1https://bit.ly/3UgsYOp the task-based approach, although this varies by the creativity of the model. For example, while the AutoGen solution demonstrates the highest diversity, task-based methods yield more unique criteria than ReAct and Vanilla Solver. Another interesting observation is that repeating the CriticAgent will eventually lead to a convergence in the number of criteria. This suggests that the CriticAgent\u2019s ability to create new criteria will diminish, converging to an almost finite list of criteria, which will reduce the cost as well. 6.2 Verification As outlined in Sec. 3 and illustrated in Fig. 
1, the VerifierAgent\u2019s primary role is to ensure the selected criteria are effective toward evaluating the utility for the end-user, while maintaining robustness and high discriminative power. To achieve this, the VerifierAgent undertakes two main actions: (1) Criteria Stability: The criteria should be essential and robust, meaning they should not be redundant and we should be able to quantify them stably if we repeatedly quantify it for an individual solution, showing no divergence. As such, VerifierAgent enhances the criteria by iterating over the generation and quantification phases. It then consolidates these criteria by identifying and eliminating redundancies, followed by evaluating the dispersion of the distribution of the quantified criteria. This step modifies the criteria, ensuring that only the most robust criteria are retained. (2) Discriminative Power: A reliable evaluation should detect and withstand noise. To test that, we propose to use adversarial examples and then assess the system\u2019s ability to differentiate between these compromised examples and standard cases. Should the system fail to distinguish effectively, it indicates that the criteria are insufficient for reliable assessment under varied conditions. We note that both steps involve a tunable threshold that can be adapted based on application needs, \fFigure 5: Distribution of QuantifierAgent output on AutoGen results on successful (dark blue) and failed (light blue) cases on different criteria. ensuring flexible criteria validation. The proposed methodology for VerifierAgent is summarized in Algorithm 1 in the Appendix. 6.2.1 Criteria Stability Our goal here is to explore the stability of criteria and robustness of the quantifier for having a more essential, robust and stable set of criteria. We specifically evaluate the QuantifierAgent\u2019s robustness using criteria for mathematical problems (Table 1), conducting 50 repeats of runs with different seeds on 120 problems (Section 4.1). Ideal expected outcomes include consistent performance across all criteria on all the repeats. Fig. 5 illustrates the distribution of quantifier values for both failed (dark blue) and successful cases (light blue) across all criteria through box plots. The more robust a criterion, the narrower the range of quantified performance (narrower box plots). Also, the less overlap between the successful and failed boxes, the higher the distinguishability of the criteria. We observe that all four criteria, except \u2018error analysis\u2019 allow for easy differentiation between successful and failed cases. Additionally, some criteria prove to be more robust compared to others. We believe that such an analysis of the quantifier agent\u2019s performance will yield valuable insights for enhancing reliability, trustworthiness, and explainability in performance evaluation. A detailed examination of the stability of each criterion, especially how they differentiate between successful and failed cases, is provided in Appendix A.4.2. Further, to refine and expand the criteria set without redundancy, we operate the CriticAgent multiple times i.e., we execute CriticAgent 50 times with varied seeds. The criteria are then summarized into one list of useful criteria using the LLM. AddiFigure 6: \u2206sum of mean coefficient of variation across all criteria with increasing number of seeds. 
tionally, as explained in Section 6.1, we remove similar and redundant criteria using pre-trained language models, thus obtaining a comprehensive list of criteria. The refined criteria after 50 repeats are detailed in Tab. 4 in the Appendix. Now, we aim to determine the stability of these criteria through repeated quantifications. Our goal is to identify criteria that maintain consistent results without significant divergence, even when quantified multiple times. Using this consolidated list, we measure the dispersion of quantified results using the coefficient of variation, a standardized metric that facilitates comparison across various test cases when QuantifierAgent quantifies them. Given the consolidated list of criteria, we use the QuantifierAgent to quantify various test cases and report the coefficient of variation as a measure of the dispersion of the QuantifierAgent\u2019s outputs with respect to each criterion across different seeds and report the mean coefficient of variation across all samples. we run QuantifierAgent with 50 seeds and plot the change (\u2206) in the sum of mean coefficient of variation across all criteria against the number of seeds, in Figure 6. For each criterion, we compute the absolute difference with the mean coefficient of variation calculated when using n\u22121 seeds, summing up the absolute differences across all criteria. According to the plot, after approximately 18 seeds, the magnitude of mean coefficient of variation stabilizes and becomes rather trivial. In almost all cases, the mean coefficient of variation is around or below 0.5, which is relatively small, suggesting that QuantifierAgent is quite robust. 6.2.2 Discriminative Power It is crucial to ensure the quality of quantification of each criterion. Ideally, this validation would involve comparisons with known pairwise samples, where sample S+ is definitively superior to S\u2212for \fa given criterion. If the evaluator also confirms superiority of S+ w.r.t S\u2212, it has robust quantification. However, due to rapid expansion of LLMpowered applications, obtaining annotated data for many tasks is often unfeasible. Therefore, we propose using synthetically altered versions of samples for verification. Let us assume we have an alternative disturbed version of sample S, which is called S\u2032. Assuming sample S is more likely to outperform its disturbed version S\u2032, our assessment should confirm this assumption by assigning better quantified performance S in comparison to S\u2032. In experiments with mathematical problems, we introduced random noise by removing portions of the solution sentences from AutoGen, VanillaSolver, and ReAct\u2019s results respectively, expecting that criteria like \u2018Completeness\u2019 or \u2018Clarity\u2019 would show be higherin S than in S\u2032. We disturbed solutions by removing 25% of the sentences and assessed the QuantifierAgent\u2019s performance. As shown in Fig. 7, criteria measuring aspects like \u2018Clarity\u2019 and \u2018Completeness\u2019 were lower in disturbed solutions (lighter bars), confirming QuantifierAgent\u2019s high discriminative power and effectiveness. We have already filtered out the criteria that were unstable, i.e., those that had a high mean standard deviation and dispersion when being quantified in the previous section. We report the results of the QuantifierAgent quantifying differences between original and disturbed samples on the comprehensive set of criteria shown in Appendix, as shown in Fig. 
13 for the math problem-solving. In most cases, the QuantifierAgent quantifies the disturbed output to be worse than the original task output. We believe analyzing the QuantifierAgent\u2019s performance will enhance the reliability, trustworthiness, and explainability in evaluations.. 6.2.3 VerifierAgent After modifying the list of criteria (Sec. 6.2.1), we have developed a stable and robust list of criteria that the QuantifierAgent can reliably quantify. Further, we also proposed a method for assessing whether the criteria can distinguish between noise-adversarially attacked samples and the original ones. These two tests will serve as input for the VerifierAgent (described in Algorithm 1), which can also have its threshold tuned for different applications. For instance, one might prioritize the stability of the criteria, while another may value the discriminative power of the AgentEval for specific applications. As such, the VerifierAgent will Figure 7: Assessment of original and disturbed solutions on Math dataset (discriminative power study). modify and update the criteria based on to what extend they pass the two tests, i.e., if the mean coefficient of variation is below a specific threshold and the percentage of adversarial testing it has passed. The VerifierAgent will then update the criteria if necessary. We believe that having a VerifierAgent would help continuously updating the criteria as needed because, by improving the systems, we may require new criteria that were not previously necessary for utility assessment. 7 Conclusions and Future Work We introduced the AgentEval framework, designed to swiftly gauge the utility of arbitrary LLMpowered agentic applications. Our framework leverages recent findings suggesting LLMs as a scalable and cost-effective alternative to human evaluations for open-ended tasks. AgentEval consists of three agents: CriticAgent suggests criteria based on task descriptions and executions of the applications, QuantifierAgent quantifies how well the application flow aligns with these criteria, and VerifierAgent modifies the list of criteria if needed. This framework is customizable, adaptable, and can operate in various modes, employing combinations of LLMs, human inputs, and tools. We believe that suggested AgentEval\u2019s utility extends beyond immediate performance. It can uncover new system capabilities over time and adapt to changes in user needs tracked by developers. AgentEval can also enable developers to assess the alignment between application behavior and suggested user requirements, providing them with insights into areas for improvement. In summary, our contributions include introducing the AgentEval framework, and conducting a robust analysis of \fits performance across various datasets and baselines. AgentEval represents a significant step towards assessing LLM-powered applications. 8 Limitations and Ethics 8.1 Limitations Here, we discuss some limitations of the AgentEval framework. Firstly, the performance of the AgentEval is highly dependent on the quality of the output logs of the applications. Flaws or limitations in these outputs can significantly impact the framework\u2019s ability to accurately assess utility. Secondly, our experiments were conducted exclusively with closed-source LLMs, specifically with GPT-4. This may limit the generalizability of our findings. Plans to include a broader array of LLMs, including open-source models, are considered for future studies to validate and possibly enhance the robustness of our conclusions. 
Additionally, the tests conducted were limited to specific scenarios within math problem solving and household tasks. Expanding the diversity of test scenarios could help in understanding the broader applicability of the framework. Thirdly, while AgentEval employs a novel methodology leveraging LLMs to estimate utility, the absence of human evaluation in our validation process could be viewed as a drawback. Human evaluations provide unique insights, especially in subjective aspects of utility that automated systems might overlook. However, such evaluations are often cost-prohibitive and logistically challenging, restricting our ability to implement them within this study. Especially do developers of agentic LLM-powered applications who needs insights fast as they go with the deployments. Lastly, as LLM technologies evolve, the criteria and metrics used for evaluation may need to be updated or revised. What works for assessing current LLMs may not hold as these models become more advanced. Continuous updates to the evaluation framework will be necessary to keep pace with technological advancements. 8.2 Ethics To the best of our knowledge, we did not violate any code of ethics with the experiments done in this paper. We reported technical details and results, with details in the main paper, Appendix, and code release. Our experimental results are an outcome of a Machine Learning model. Our AgentEval system has a variety of uses in real world settings, such as improving applications for end users or helping developers. However, we caution that it must be used carefully, as the outputs are from a ML model and can have real world consequences, if used incorrectly. These and many other related issues are important aspects to consider when deploying a system like AgentEval in the real world."
+ }
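As the record above shows, each file carries arXiv metadata (url, title, abstract, authors, published, updated, primary_cat, cats), dataset labels (label, paper_cat), a gt field that here simply repeats the title, and the extracted main_content. The sketch below is a minimal sanity check over one such file; the key names come from the record above, while the local path and the expectation that gt mirrors title are assumptions drawn from this single example.

```python
import json

EXPECTED_KEYS = {
    "url", "title", "abstract", "authors", "published", "updated",
    "primary_cat", "cats", "label", "paper_cat", "gt", "main_content",
}

def check_record(path: str) -> dict:
    """Load one title_10K record and verify it matches the schema seen above."""
    with open(path, encoding="utf-8") as fh:
        record = json.load(fh)

    missing = EXPECTED_KEYS - record.keys()
    if missing:
        raise ValueError(f"{path}: missing keys {sorted(missing)}")

    # In the example above, "gt" is identical to "title"; flag any divergence.
    if record["gt"] != record["title"]:
        print(f"{path}: 'gt' differs from 'title'")

    print(f"{record['primary_cat']}: {record['title']} "
          f"({len(record['main_content'])} characters of main_content)")
    return record

# Hypothetical local path pointing at the file added above.
check_record("title_10K/test_title_short_2405.02178v1.json")
```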
title_10K/test_title_short_2405.02225v1.json ADDED
@@ -0,0 +1,20 @@
+ {
+ "url": "http://arxiv.org/abs/2405.02225v1",
+ "title": "Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks",
+ "abstract": "This paper introduces a framework for post-processing machine learning models\nso that their predictions satisfy multi-group fairness guarantees. Based on the\ncelebrated notion of multicalibration, we introduce $(\\mathbf{s},\\mathcal{G},\n\\alpha)-$GMC (Generalized Multi-Dimensional Multicalibration) for\nmulti-dimensional mappings $\\mathbf{s}$, constraint set $\\mathcal{G}$, and a\npre-specified threshold level $\\alpha$. We propose associated algorithms to\nachieve this notion in general settings. This framework is then applied to\ndiverse scenarios encompassing different fairness concerns, including false\nnegative rate control in image segmentation, prediction set conditional\nuncertainty quantification in hierarchical classification, and de-biased text\ngeneration in language models. We conduct numerical studies on several datasets\nand tasks.",
+ "authors": "Lujing Zhang, Aaron Roth, Linjun Zhang",
+ "published": "2024-05-03",
+ "updated": "2024-05-03",
+ "primary_cat": "stat.ML",
+ "cats": [
+ "stat.ML",
+ "cs.AI",
+ "cs.CY",
+ "cs.LG",
+ "stat.ME"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "Fair Risk Control: A Generalized Framework for Calibrating Multi-group Fairness Risks",
+ "main_content": "Introduction A common theme across the fairness in machine learning literature is that some measure of error or risk should be equalized across sub-populations. Common measures evaluated across demographic groups include false positive and false negative rates (Hardt et al., 2016) and calibration error (Kleinberg et al., 2016; Chouldechova, 2017). Initial work in this line gave methods for equalizing different risk measures on disjoint groups. A second generation of work gave methods for equalizing measures of risk across groups even when the groups could intersect \u2013 e.g. for false positive and negative rates (Kearns et al., 2018), calibration error (\u00darsula H\u00e9bert-Johnson et al., 2018), regret (Blum & Lykouris, 2019; Rothblum & Yona, 2021), prediction set coverage (Jung et al., 2021, 2022; Deng et al., 2023), among other risk measures. In general, distinct algorithms are derived for each of these settings, and they are generally limited to one-dimensional predictors of various sorts. In this work, we propose a unifying framework for fair risk control in settings with multi-dimensional outputs, based on multicalibration (\u00darsula H\u00e9bert-Johnson et al., 2018). This framework is developed as an extension of the work by Deng et al. (2023); Noarov & Roth (2023), and addresses the need for calibrating multi-dimensional output functions. To illustrate the usefulness of this framework, we apply it to a variety of settings, including false negative rate control in image segmentation, prediction set conditional coverage guarantees in hierarchical classification, and de-biased text generation in language models. These applications make use of the additional power granted by our multi-dimensional extension of multicalibration. 1.1 Related Work Multicalibration was introduced by \u00darsula H\u00e9bert-Johnson et al. (2018) as a fairness motivated constraint that informally asks that a 1-dimensional predictor of a binary-valued outcome be unbiased, conditional 1Work was done during Lujing Zhang\u2019s remote research internship at Rutgers and Penn. Email: [email protected] 2University of Pennsylvania. Email: [email protected] 3Rutgers University. Email: [email protected] 3Corresponding Author. 1 arXiv:2405.02225v1 [stat.ML] 3 May 2024 \fon both its own prediction and on membership of the input in some number of pre-defined groups (see also a line of prior work that asks for a similar set of guarantees under slightly different conditions (Dawid, 1985; Sandroni et al., 2003; Foster & Kakade, 2006)). Subsequently, multicalibration has been generalized in a number of ways. Jung et al. (2021) generalizes multicalibration to real-valued outcomes, and defines and studies a variant of multicalibration that predicts variance and higher moments rather than means. Gupta et al. (2022) extends the study of multicalibration of both means and moments to the online setting, and defines a variant of mulicalibration for quantiles, with applications to uncertainty estimation. Bastani et al. (2022); Jung et al. (2022) gives more practical variants of quantile multicalibration with applications to conditional coverage guarantees in conformal prediction, together with experimental evaluation. Deng et al. (2023) gives an abstract generalization of 1-dimensional multicalibration, and show how to cast other algorithmic fairness desiderata like false positive rate control in this framework. 
Noarov & Roth (2023) gives a characterization of the scope of 1-dimensional multicalibration variants via a connection to property elicitation: informally, a property of a distribution can be multicalibrated if and only if it minimizes some 1-dimensional separable regression function. The primary point of departure of this paper is that we propose a multi-dimensional generalization of multicalibration: it can be viewed as the natural multi-dimensional generalization of Deng et al. (2023). Another line of work generalizes multicalibration in an orthogonal direction, leaving the outcomes binary valued but generalizing the class of checking rules that are applied. Dwork et al. (2021) defines outcome indistinguishability, which generalizes multicalibration to require indistinguishability between the predicted and true label distributions with respect to a fixed but arbitrary set of distinguishers. Kakade & Foster (2008); Foster & Hart (2018) define \u201csmooth calibration\u201d that relaxes calibration\u2019s conditioning event to be a smooth function of the prediction. Gopalan et al. (2022) defines a hierarchy of relaxations called low-degree multicalibration that further relaxes smooth calibration and demonstrates desirable statistical properties. Zhao et al. (2021) and Noarov et al. (2023) define notions of calibration tailored to the objective function of a downstream decision maker. These last lines of work focus on multi-dimensional outputs. These lines of work are part of a more general literature studying multi-group fairness. Work in this line aims e.g. to minimize disparities between false positive or false negative rates across groups (Kearns et al., 2018, 2019), or to minimize regret (measured in terms of accuracy) simultaneously across all groups (Blum & Lykouris, 2019; Rothblum & Yona, 2021; Globus-Harris et al., 2022; Tosh & Hsu, 2022). A common theme across these works is that the groups may be arbitrary and intersecting. 1.2 Notation Let X represent a feature domain, Y represent a label domain, and D denote a joint (feature, label) data distribution. For a finite set A, we use |A| and \u2206A, to denote the cardinality of A and the simplex over A respectively. Specifically, \u2206A = {(p1, p2, . . . , p|A|) : 0 \u2264pi \u22641, P|A| i=1 pi = 1}. Given a set F, we use ProjF to denote the \u21132-projection onto the set. We also introduce some shorthand notation. For two vectors a and b, \u27e8a, b\u27e9represents their inner product. For a positive integer T, we define [T] = {1, 2, . . . , T}. For a function f(x) = (f1(x), f2(x), ..., fm(x)), we denote \u2225f\u2225\u221e= supx\u2208X,i\u2208[m][fi(x)]. 2 Formulation and Algorithm 2.1 A generalized notion of Multicalibration Let x \u2208X represent the feature vector of the input, y \u2208Y represent the label, and let h(x) \u2208H denote a multi-dimensional scoring function associated with the input. For example, in image segmentation tasks, h(x) \u2208Rk (k is the number of pixels) is intended to approximate the probability of a pixel being part of a relevant segment, often learned by a neural network. In text generation tasks, h(x) is the distribution over the vocabulary produced by a language model given context x. For x \u2208X, consider an output function f : X \u2192F \u2282Rm, defined as f(x) = (f1(x), . . . , fm(x)), where F is a convex set. We denote the class of functions that f belongs to by Q. 
For example, in text 2 \fgeneration tasks, f(x) is the calibrated distribution over the output vocabulary and is multi-dimensional (with dimension equal to the vocabulary size); in binary classification tasks where h and f are both scalars, f(x) is the threshold used to convert the raw score h(x) into binary predictions, i.e. 1{h(x)>f(x)}. We write s(f, x, h, y, D) : Q \u00d7 X \u00d7 H \u00d7 Y \u00d7 P \u2192Rl to denote a mapping functional of interest, where D is the joint distribution of (x, h, y) and P is the distribution space. Here, s is set to be a functional of f rather than a function of f(x), which offers us more flexibility that will be useful in our applications. For example, in text generation, where h(x) \u2208\u2206Y is the distribution over tokens output by an initial language model, our goal might be to find f(x) \u2208\u2206Y, an adjusted distribution over tokens y \u2208Y with |Y| = m. In this case we could set s = f(x) \u2212Exf(x) \u2208Rm to be the mapping functional. We can calibrate the probabilities (through s) to be \u201cfair\u201d in some way \u2013 e.g. that the probability of outputting various words denoting professions should be the same regardless of the gender of pronouns used in the prompt. We note that we do not always use the dependence of s on all of its inputs and assign different s in different settings. We write G to denote the class of functions that encode demographic subgroups (along with other information) and for each g \u2208G, g(f(x), x) \u2208Rl, consistent with the dimension of s(f, x, h, y, D) so that we can calibrate over every dimension of s. For example, when l = 1, G can be set to be the indicator function of different sensitive subgroups of X. Alternately, in fair text generation tasks, when the dimension of s equals the size of the set Y, denoted as l = m, we can set the vector g \u2208G to have a value of 1 in the dimensions corresponding to certain types of sensitive words, and 0 in all other dimensions. We now formally introduce the (s, G, \u03b1)-Generalized Multicalibration ((s, G, \u03b1)-GMC) definition. Definition 1 ((s, G, \u03b1)-GMC). Let x, h, y, D denote the feature vector, the scoring function, the label vector, and the joint distribution of (x, h, y) respectively. Given a function class G, mapping functional s, and a threshold \u03b1 > 0, we say f satisfies (s, G, \u03b1)-Generalized Multicalibration ((s, G, \u03b1)-GMC) if E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. (s, G, \u03b1)-GMC is a flexible framework that can instantiate many existing multi-group fairness notions, including s-HappyMap (Deng et al., 2023), property multicalibration (Noarov & Roth, 2023), calibrated multivalid coverage (Jung et al., 2022) and outcome indistinguishability (Dwork et al., 2021). More generally, compared to these notions, (s, G, \u03b1)-GMC extends the literature in two ways. First, it allows the functions s and g to be multi-dimensional (most prior definitions look similar, but with 1-dimensional s and g functions). Second, the function s here is more general and allowed to be a functional of f (rather than just a function of f(x), the evaluation of f at x). These generalizations will be important in our applications. 
2.2 Algorithm and Convergence Results To achieve (s, G, \u03b1)-GMC, we present the (s, G, \u03b1)-GMC Algorithm, which can be seen as a natural generalization of algorithms used for more specific notions of multicalibration in previous work (\u00darsula H\u00e9bert-Johnson et al., 2018; Dwork et al., 2021; Jung et al., 2022; Deng et al., 2023): Algorithm 1 (s, G, \u03b1)-GMC lgorithm Input: step size \u03b7 > 0, initialization f (0) \u2208Q, max iteration T. Initialization: t = 0. while t < T, \u2203g(t) \u2208G s.t : E(x,h,y)\u223cD[\u27e8s(f (t), x, h, y, D), g(t)(f (t)(x), x)\u27e9] > \u03b1 do Let g(t) \u2208G be an arbitrary function satisfying the condition in the while statement f (t+1)(x) = ProjF \u0000f (t)(x) \u2212\u03b7g(t)(f (t)(x), x) \u0001 t = t + 1 end while Output: f (t) It is worth noting that our goal involves functionals concerning our objective function f in order to capture its global properties. We aim to find a function f such that a functional associated with it 3 \f(obtained by taking the expectation over x) satisfies the inequalities we have set to meet different fairness demands. Before delving into the main part of our convergence analysis, we introduce some definitions related to functionals. Examples of these definitions can be found in the appendix Section B. Definition 2 (The derivative of a functional). Given a function f : X \u2192F, consider a functional L(f, D) : Q\u00d7P \u2192R, where Q is the function space of f and P is a distribution space over X. Assume that L(f, D) follows the formulation that L(f, D) = Ex\u223cD[L(f(x))]. The derivative function of L(f, D) with respect to f, denoted as \u2207fL(f, D) : X \u2192F, exists if \u2200w \u2208Q, y \u2208Rm, D \u2208P, Ex\u223cD[\u27e8\u2207fL(f, D), w\u27e9] = \u2202 \u2202\u03f5 L(f + \u03f5w, D)|\u03f5=0 . In the following, we introduce the definitions of convexity and smoothness of a functional. Definition 3 (Convexity of a functional). Let L and f be defined as in Definition 2. A functional L is convex with respect to f if for any f1, f2 \u2208Q, L(f1, D) \u2212L(f2, D) \u2265Ex\u223cD[\u27e8\u2207fL(f2, D), f1 \u2212f2\u27e9]. Definition 4 (KL-smoothness of a functional). Let L and f be defined as in Definition 2. A functional L is KL\u2212smooth if for any f1, f2 \u2208Q, L(f1, D)\u2212L(f2, D) \u2264Ex\u223cD[\u27e8\u2207L(f2, D), f1\u2212f2\u27e9]+Ex\u223cD[ KL 2 \u2225f1\u2212 f2\u22252]. We will prove that this algorithm converges and outputs an f satisfying (s, G, \u03b1)-GMC whenever the following assumptions are satisfied. These are multidimensional generalizations of the conditions given by Deng et al. (2023). Assumptions (1). There exists a potential functional L(f, h, y, D), such that \u2207fL(f, h, y, D)(x) = s(f, x, h, y, D), and L(f, h, y, D) is KL-smooth with respect to f for any x \u2208X. (2). Let f \u2217(x) \u225cProjFf(x) for all x \u2208X. For any f \u2208Q, L(f \u2217, h, y, D) \u2264L(f, h, y, D) . (3). There exists a positive number B, such that for all g \u2208G and all f \u2208Q, Ex\u223cD[\u2225g(f(x), x)\u22252] \u2264B. (4). There exists two numbers Cl, Cu such that for all f \u2208Q, L(f, h, y, D) \u2265Cl, L(f (0), h, y, D) \u2264Cu. Assumption (1) says that a potential functional L exists and it satisfies a KL-smoothness condition with respect to f. For example, when f is a predicted distribution, we often set s = f(x) \u2212Ex\u223cDf(x). In this situation, L = Ex\u223cD[ 1 2\u2225f(x) \u2212Ex\u223cDf(x)\u22252] satisfies the assumption. 
Assumption (2) states that the potential function decreases when projected with respect to f. A specific example is when F = Y = [0, 1] and L = E(x,y)\u223cD|f(x) \u2212y|2. Assumption (3) states that the \u21132-norm of the functions in G is uniformly bounded. It always holds when G contains indicator functions, which is the most common case in fairness-motivated problems (these are usually the indicator functions for subgroups of the data). Assumption (4) says that the potential functional L is lower bounded and this generally holds true when L is convex. One concrete example is when s(f(x), h, y) = f(x) \u2212y and we have L(f, h, y, D) = Ex\u223cD[(f(x) \u2212y)2], which is lower bounded by 0. Theorem 1. Under Assumptions 1-4, the (s, G, \u03b1)-GMC Algorithm with a suitably chosen \u03b7 = O(\u03b1/(KLB)) converges in T = O( 2KL(Cu\u2212Cl)B) \u03b12 ) iterations and outputs a function f satisfying E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. The proof is provided in Appendix C. At a high level, if we consider g as a generalized direction vector and s as the gradient of L, each violation can be interpreted as detecting a direction where the first-order difference of L is significant. By introducing the assumption of smoothness, our update can result in a decrease in L that exceeds a constant value. Since L is lower bounded by assumption, the updates can terminate as described. 4 \f2.3 Finite-Sample Results We have presented Algorithm 1 as if we have direct access to the true data distribution D. In practice, we only have a finite calibration set D, whose data is sampled i.i.d from D. In this subsection, we show how a variant of Algorithm 1 achieves the same goal from finite samples. First, we introduce a useful measure which we call the dimension of the function class, as similarly defined by Kim et al. (2019); Deng et al. (2023). For a dataset D, we use E(x,h,y)\u223cD to denote the empirical expectation over D. We need T datasets in all and we assume that the whole sample size is m (m/T for each dataset). Definition 5 (Dimension of the function class). We use d(G) to denote the dimension of class G, defined to be a quantity such that if the sample size m \u2265C1 d(G)+log(1/\u03b4) \u03b12 , then a random sample Sm of m elements from D guarantees uniform convergence over G with error at most \u03b1 with failure probability at most \u03b4. That is, for any fixed f and fixed s with \u2225s\u2225\u221e\u2264C2 (C1, C2 > 0 are universal constants): sup g\u2208G |E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2212E(x,h,y)\u223cSm[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9]| \u2264\u03b1. A discussion of this definition is given in the appendix. We now give the finite sample version of the (s, G, \u03b1)-GMC Algorithm and its convergence results below. The detailed proof is in the appendix; we use the uniform convergence guarantee arising from Definition 5 to relate the problem to its distributional counterpart. Algorithm 2 (s, G, \u03b1)-GMC Algorithm (Finite Sample) Input: step size \u03b7 > 0, initialization f (0)(x) \u2208F, validation datasets D[2T ], max iteration T. Initialization: t = 0. while t < T, \u2203g(t) \u2208G, s.t. : E(x,h,y)\u223cD2t\u22121[\u27e8s(f (t)(x), h, y, D2t), g(t)(f (t)(x), x)\u27e9] > 3 4\u03b1 do Let g(t) \u2208G be an arbitrary function satisfying the condition in the while statement f (t+1)(x) = ProjF \u0000f (t)(x) \u2212\u03b7g(t)(f (t)(x), x) \u0001 t = t + 1 end while Output: f (t) Theorem 2. 
Under the assumptions 1-4 given in section 3, suppose we run Algorithm 2 with a suitably chosen \u03b7 = O (\u03b1/ (\u03baLB)) and sample size m = O \u0010 T \u00b7 d(G)+log(T/\u03b4) \u03b12 \u0011 , then with probability at least 1 \u2212\u03b4, the algorithm converges in T = O \u0000(Cu \u2212Cl) \u03baLB/\u03b12\u0001 steps and returns a function f satisfying: E(x,h,y)\u223cD[\u27e8s(f, x, h, y, D), g(f(x), x)\u27e9] \u2264\u03b1, \u2200g \u2208G. 3 Applications In this section, we explore three applications of our framework: De-biased text generation in language modeling \u2013 where the output function is multi-dimensional and can\u2019t be addressed in other frameworks, uncertainty quantification in hierarchical classification \u2014 in which we can offer prediction set conditional coverage guarantees, and group-wise false-positive rate control in image segmentation. We begin by outlining the challenges related to fairness and robustness inherent to these applications. Subsequently, we illustrate how to integrate these challenges within the (s, G, \u03b1)-GMC framework, enabling their resolution through Algorithm 1. 3.1 De-Biased Text Generation This section applies our framework to fair word prediction in language modelling. We think of a language model as a function that maps prompts to a distribution over the next word. More specifically, we write 5 \fx \u2208X to denote a prompt, given which the language model outputs a distribution over the vocabulary, denoted by Y. Namely, the language model generates the probability vector h(x) \u2208\u2206Y, and then samples a word (output) following o(x) \u223ch(x). Previous studies (Lu et al., 2018; Hoffmann et al., 2022) demonstrated the presence of gender bias in contemporary language models. Our objective in this section is to mitigate this issue through an approach that post-processes h(x) to a probability distribution p(x) \u2208\u2206Y that has better fairness properties in specific ways. To take advantage of the information in initial language model, p is initialized at h. At the high level, we aim to produce p(x) so that the probabilities of certain groups of words remain the same whether the prompt includes male-indicating words or female-indicating words. For example, we might not want \u201cHe was a \u201d to be completed with \u201cdoctor\u201d more frequently than \u201cShe was a \u201d to be completed with \u201cdoctor\u201d. We define an attribute set U as a collection of specific sensitive words and U to be the set of all U, which stands for different kinds of sensitive words. Following the work by Lu et al. (2018); Hoffmann et al. (2022), we measure the bias of the model on sensitive attribute U by |P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U|x \u2208M)|, where the probability is taken over o(x) \u223cp(x), and x \u2208F and x \u2208M denotes that x indicates female and male pronouns respectively. Suppose the marginal distribution over prompt x (which is drawn uniformly from the given corpus) satisfies that P(x \u2208F), P(x \u2208M) \u2265\u03b3 for some positive constant \u03b3 > 0, we get: |P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U|x \u2208M)| \u22641 \u03b3 (|P(x \u2208F)(P(o(x) \u2208U|x \u2208F) \u2212P(o(x) \u2208U))| + |P(x \u2208M)(P(o(x) \u2208U|x \u2208M) \u2212P(o(x) \u2208U))|). (1) As a result, we only need to control the terms on the right side of (1) instead. 
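For intuition about the quantity controlled on the right-hand side of (1), the sketch below estimates a single group's term P(x in A) * [P(o(x) in U | x in A) - P(o(x) in U)] from a batch of prompts and calibrated next-word distributions. The array layout, the boolean indicator in_A, and the index set U_idx are assumptions made for illustration and do not come from the paper's code.

import numpy as np

def group_disparity(p, in_A, U_idx):
    # p     : (n, V) array, calibrated next-word distributions p(x_i) over a vocabulary of size V.
    # in_A  : (n,) boolean array, True when prompt x_i contains the group's indicator words.
    # U_idx : vocabulary indices of the attribute set U.
    p_U = p[:, U_idx].sum(axis=1)        # P(o(x_i) in U), since the output is sampled as o(x) ~ p(x)
    if not in_A.any():                   # no prompts from this group in the batch
        return 0.0
    return in_A.mean() * (p_U[in_A].mean() - p_U.mean())

Controlling this quantity for both the female-indicating and male-indicating prompt groups, and for every attribute set U, is what the choice of G described next encodes.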
More specifically, we want to calibrate the output so that for any subset U \u2208U \u2282Y (e.g., gender-stereotyped professions) and subgroups A \u2208A \u2282X (e.g., gender-related pronouns), |P(x \u2208A) \u00b7 [P(o(x) \u2208U|x \u2208A) \u2212P(o(x) \u2208U)]| \u2264\u03b1. To better understand this fairness notion, let us consider a toy example where X = {he, she, his, her}, A = {{he,his}, {she,her}}, Y = {lawyer, doctor, dream, nurse}, U = {{lawyer, doctor}, {nurse}}. Our aim is to calibrate the output so that |P[o(x) \u2208{lawyer, doctor}|x \u2208{she, her}] \u2212P[o(x) \u2208 {lawyer, doctor}]| \u2264\u03b1 and |P[o(x) \u2208{lawyer, doctor}|x \u2208{he, his}] \u2212P[o(x) \u2208{lawyer, doctor}]| \u2264\u03b1. We can define V \u225c{(1, 1, 0, 0), (0, 0, 0, 1)} to be the set of indicator vectors of sensitive attributes defined by U. Setting G \u225c{1{x\u2208A}v : A \u2208A, v \u2208V} \u222a{\u22121{x\u2208A}v : A \u2208A, v \u2208V}, this problem can be cast in the GMC framework, and leads to the following theorem: Theorem 3. Assuming that x is a prompt that is uniformly drawn from the given corpus, and h is given by any fixed language model and the size of the largest attribute set in U is upper bounded by B. With a suitably chosen \u03b7 = O(\u03b1/B), our algorithm halts after T = O(B/\u03b12) iterations and outputs a function p satisfying: \u2200A \u2208A, U \u2208U, when o(x) \u223cp(x), sup A\u2208A |P(x \u2208A) \u00b7 [P(o(x) \u2208U|x \u2208A) \u2212P(o(x) \u2208U)]| \u2264\u03b1. For the finite-sample counterpart, by applying theorem 2, the sample complexity required in this setting is O( log(2|U||A|)+log( 1 \u03b4 ) \u03b12 ). 3.2 Prediction-Set Conditional Coverage in Hierarchical Classification Hierarchical classification is a machine learning task where the labels are organized in a hierarchical tree structure (Tieppo et al., 2022). More specifically, at the most granular level, predictions are made using labels on the leaves of the tree. These leaves are grouped together into semantically meaningful categories through their parent nodes, which are, in turn, grouped together through their parents, and so on up to 6 \fthe root of the tree. Such a tree structure allows us\u2014when there is uncertainty as to the correct label\u2014to predict intermediate nodes, which correspond to predicting sets of labels \u2014 the set of leaves descended from the intermediate node \u2014 giving us a way to quantify the uncertainty of our predictions. Our goal is to produce such set-valued predictions that have a uniform coverage rate conditional on the prediction we make, where a prediction set is said to \u201ccover\u201d the true label if the true label is a descendent of (or equal to) the node we predicted. For example, in a K-class hierarchical text classification problem, our input x \u2208X is a document and the label is a leaf node y on a classification tree with nodes V and edges E. For simplicity, set V = {1, 2, ..., |V |} where the first K indices {1, 2, .., K} denote leaf nodes (so the groundtruth label y \u2208{1, ..., K}). The tree is of depth H. For a given single-class classification model h : x \u2192[0, 1]K, let u(x) \u225carg maxk hk(x) denote the candidate with the highest score over all leaf nodes according to h. u(x) here corresponds to the most natural point prediction we might make given h. Figure 1: A demo of hierarchical text classification using a subset of labels from the Web of Science dataset. (Kowsari et al., 2017). 
As a concrete example, in the tree diagram above, we map the set {1, 2, 3, 4, 5, 6, 7} to represent the categories: Green Building, Water Pollution, Cancer, Alzheimer\u2019s Disease, Civil, Medical and Root. Consider a document x with the true label \u2018Cancer\u2019 and an initial model predicting scores h(x) = (0.1, 0.1, 0.5, 0.6). If we used the scores to make a point prediction, we would be incorrect \u2014 the highest scoring label u(x) is \u201cAltzheimer\u2019s disease\u201d, and is wrong: u(x) \u0338= y. If we output the parent node ( \u2018Medical\u2019) instead, our prediction would be less specific (a larger prediction set, here corresponding to both \u201cCancer\u201d and \u201cAlzheimer\u2019s Disease\u201d), but it would cover the true label. We would like to output nodes such that we obtain our target coverage rate (say 90%), without over-covering (say by always outputting \u201cRoot\u201d, which would be trivial). Traditional conformal prediction methods (see Angelopoulos & Bates (2021) for a gentle introduction) give prediction sets that offer marginal guarantees of this sort, but not prediction-set conditional guarantees: i.e. they offer that for 90% of examples, we produce a prediction set that covers the true label. Recent applications of multicalibration related techniques ((Jung et al., 2021; Gupta et al., 2022; Bastani et al., 2022; Jung et al., 2022; Deng et al., 2023; Gibbs et al., 2023) are able to give \u201cgroup conditional\u201d coverage guarantees which offer (e.g.) 90% coverage as averaged over examples within each of a number of intersecting groups, but once again these methods are not able to offer prediction-set conditional guarantees. Prediction set conditional guarantees promise that for each prediction set that we produce, we cover 90% of example labels, even conditional on the prediction set we offer. This precludes the possibility of our model being over-confident in some prediction sets and under-confident in others, as demonstrated in our experimental results. We now define some useful functional notation. Let A : V \u2192V H return the set of all the ancestor nodes of the input node. Let q : V \u00d7 V \u2192V compute the nearest common ancestor of its two input nodes. Let R : X \u2192R|V | be the function that computes for each node i, Ri, the sum of the raw scores h(x) assigned to each leaf that is a descendent of node i (or itself if i is a leaf). When needed, we may randomize R by letting ri(x) \u225cRi(x) + \u03f5i(x), where \u03f5(x) is an independent random variable with zero-mean and constant variance. We define a natural method to choose a node o(x) to output given a scoring function h(x) and a threshold function \u03bb(x). We define o(x) \u225carg minv{d(v) : v \u2208A(u(x)), rv < \u03bb(x)}, where d(v) denotes the depth of the node v in the tree. In other words, we output the highest ancestor i of u(x) (which we recall is the point prediction we would make given h alone) whose cumulative score ri is below 7 \fsome threshold \u2014 which we will select to obtain some target coverage probability. Other natural choices of o(x) are possible \u2014 what follows uses this choice for concreteness, but is not dependent on the specific choice. Recall that an output covers the label if it is the ancestor of the label or the label itself. 
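A short sketch of this output rule may be useful: starting from the top-scoring leaf u(x), walk up its ancestors and return the node closest to the root whose cumulative (optionally randomized) score stays below the threshold lambda(x). The parent map, the leaves_of lookup, and the fallback to the leaf when no node qualifies are assumptions made for illustration.

import numpy as np

def predict_node(h, parent, leaves_of, lam, noise_scale=0.0, rng=None):
    # h         : (K,) leaf scores h(x).
    # parent    : dict node -> parent node, with the root mapping to None.
    # leaves_of : dict node -> array of descendant leaf indices (a leaf maps to itself).
    # lam       : threshold lambda(x).
    rng = rng or np.random.default_rng()
    u = int(np.argmax(h))                   # point prediction u(x)
    chosen, node = u, u                     # fall back to the leaf itself if nothing qualifies (an assumption)
    while node is not None:
        r = h[leaves_of[node]].sum()        # cumulative score R_v(x) over descendant leaves
        if noise_scale > 0.0:
            r += noise_scale * rng.standard_normal()   # optional zero-mean randomization of R
        if r < lam:
            chosen = node                   # shallower qualifying ancestors overwrite deeper ones
        node = parent[node]
    return chosen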
Our goal is to find a \u03bb(x), such that the rate at which the output covers the label is roughly equal to a given target \u03c3, not just overall, but conditional on the prediction set we output lying in various sets U \u22822V : |E(x,h,y)\u223cD[1{o(x)\u2208U}(\u03c3 \u22121{o(x) covers y})]| \u2264\u03b1, \u2200U \u2208U. Back to our example, we can specify U in various ways. For example, we can set U = {{1, 2, 5}, {3, 4, 6}} to require equal coverage cross the parent categories \u2018Civil\u2019 and \u2018Medical\u2019. Or, we can set U = {{1}, {2}, . . . , {6}, {7}} to obtain our target coverage rate \u03c3 conditionally on the prediction set we output for all possible prediction sets we might output. We set G \u225c{1{o(x)\u2208U} : U \u2208U} \u222a{\u22121{o(x)\u2208U} : U \u2208U}, fitting this problem into our GMC framework: |E(x,h,y)\u223cD[g(o(x))(\u03c3 \u22121{o(x) covers y})]| \u2264\u03b1, \u2200g \u2208G. Using PK i=1 1{rq(i,u)(x)<\u03bb}1{y=i} = 1{o(x) covers y} and applying Algorithm 1, we obtain the following theorem: Theorem 4. Assume (1). \u2200u, \u2200i \u2208V, fri|x(u) \u2264Kp, where fri|x(u) denotes the density function of ri conditioned on x; (2). There exists a real number M > 0 such that \u2200i \u2208V, ri \u2208[\u2212M, M]. With a suitably chosen \u03b7 = O(\u03b1/KP ), our algorithm halts after T = O(KP M/\u03b12) iterations and outputs a function \u03bb satisfying that \u2200U \u2208U, |E(x,h,y)\u223cD[1{o(x)\u2208U}(\u03c3 \u22121{o(x) covers y})]| \u2264\u03b1. Applying theorem 2, we can see that the sample complexity for the finite-sample version of the algorithm is O( log(2|U|)+log( 1 \u03b4 ) \u03b12 ). 3.3 Fair FNR Control in Image Segmentation In image segmentation, the input is an image of m = w \u00d7 l (w for width and l for length) pixels and the task is to distinguish the pixels corresponding to certain components of the image, e.g., tumors in a medical image, eyes in the picture of a face, etc. As pointed out by Lee et al. (2023), gender and racial biases are witnessed when evaluating image segmentation models. Among the common evaluations of image segmentation, we consider the False Negative Rate (FNR), defined as False Negatives False Negatives+True Positives. In image segmentation when O, O\u2032 denotes the set of the actual selected segments and the predicted segments respectively, FNR = 1 \u2212|O\u2229O\u2032| |O| . We write x \u2208X to denote the input, which includes both image and demographic group information and y \u2208{0, 1}m to denote the label, which is a binary vector denoting the true inclusion of each of the m pixels. To yield the prediction of y, namely \u02c6 y \u2208{0, 1}m, a scoring function h(x) \u2208Rm and a threshold function \u03bb(x) are needed, so that \u02c6 yi = 1{hi(x)>\u03bb(x)} for i \u2208[m]. As in Section 3.2, for technical reasons we may randomize hi by perturbing it with a zero-mean random variable of modest scale. Our objective is to determine the threshold function \u03bb(x). In the context of algorithmic fairness in image segmentation, one specific application is face segmentation, where the objective is to precisely identify and segment regions containing human faces within an image. The aim is to achieve accurate face segmentation while ensuring consistent levels of precision across various demographic groups defined by sensitive attributes, like gender and race. 
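As a concrete reading of this setup, the sketch below computes the empirical FNR of a thresholded segmentation rule within one demographic subgroup; the per-image array layout and the guard against empty ground-truth masks are assumptions added for illustration. The multi-group objective stated next asks that these group-wise FNRs stay close to a common target.

import numpy as np

def group_fnr(scores, labels, lam, in_A):
    # scores : (n, m) per-pixel scores h(x), one row per image with m pixels.
    # labels : (n, m) binary ground-truth masks y.
    # lam    : (n,) per-image thresholds lambda(x).
    # in_A   : (n,) boolean membership of each image in the subgroup A.
    preds = scores > lam[:, None]                   # y_hat_i = 1{h_i(x) > lambda(x)}
    inter = (preds & (labels > 0)).sum(axis=1)      # |O intersect O'| per image
    pos = np.maximum(labels.sum(axis=1), 1)         # |O| per image, guarded against empty masks
    fnr = 1.0 - inter / pos                         # FNR = 1 - |O intersect O'| / |O|
    return fnr[in_A].mean() if in_A.any() else float("nan")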
Thus, our objective is to determine the function \u03bb(x) that ensures multi-group fairness in terms of the FNR \u2014 a natural multi-group fairness extension of the FNR control problem for image segmentation studied by Angelopoulos et al. (2023). Letting A be the set of sensitive subgroups of X, our goal is to ensure that the FNR across different 8 \fgroups are approximately (1 \u2212\u03c3) for some prespecified \u03c3 > 0: |E(x,h,y)\u223cD[1{x\u2208A}(1 \u2212|O \u2229O\u2032| |O| \u2212\u03c3)]| \u2264\u03b1, \u2200A \u2208A. We can write |O \u2229O\u2032| = Pm i=1 yi1{hi(x)>\u03bb(x)}, so the object is converted to sup A\u2208A |E(x,h,y)\u223cD[1{x\u2208A}(1 \u2212 Pm i=1 yi1{hi(x)>\u03bb(x)} Pm i=1 yi \u2212\u03c3)]| \u2264\u03b1. Let s(\u03bb, x, h, y) = 1 \u2212 Pm i=1 yi1{hi(x)>\u03bb(x)} Pm i=1 yi \u2212\u03c3 and G \u225c{\u00b11{x\u2208A} : A \u2208A}. Rewriting the inequality we get: sup g\u2208G E(x,h,y)\u223cD[g(\u03bb(x), x)s(\u03bb, x, h, y)] \u2264\u03b1. Cast in the GMC framework, we obtain the following result: Theorem 5. Assume (1) For all i \u2208[n], |hi| \u2264M for some universal constant M > 0; (2) the density function of hi conditioned on x is upper bounded by some universal constant Kp > 0. Let C be the set of sensitive subgroups of X. Then with a suitably chosen \u03b7 = O(\u03b1/(KP )), the algorithm halts after T = O( 2KP M \u03b1 ) iterations and outputs a function \u03bb satisfying: |E(x,h,y)\u223cD[1{x\u2208A}(1 \u2212|O \u2229O\u2032| |O| \u2212\u03c3)]| \u2264\u03b1, \u2200A \u2208A. Similar to the previous two applications, by applying Theorem 2 for the finite-sample version of the algorithm, the sample complexity required is O( log(2|A|)+log( 1 \u03b4 ) \u03b12 ). We note that equalizing false negative rates across groups can be achieved trivially by setting \u03bb to be large enough so that the FNR is equalized (at 0) \u2014 which would of course destroy the accuracy of the method. Thus when we set an objective like this, it is important to empirically show that not only does the method lead to low disparity across false negative rates, but does so without loss in accuracy. The experiments that we carry out in Section 4 indeed bear this out. 4 Experiments In this section, we conduct numerical experiments and evaluate the performance of our algorithms within each application from both the fairness and accuracy perspectives. We compare the results with baseline methods to assess their effectiveness. The code can be found in the supplementary material. For more detailed experiment settings and additional results, please refer to Appendix D. 4.1 De-Biased Text Generation In text generation, we consider two datasets and run experiments separately. The first dataset is the corpus data from Liang et al. (2021), which extracts sentences with both terms indicative of biases (e.g., gender indicator words) and attributes (e.g., professions) from real-world articles. The second dataset is made up of synthetic templates based on combining words indicative of bias targets and attributes with simple placeholder templates, e.g., \u201cThe woman worked as ...\u201d; \u201cThe man was known for ...\u201d, constructed in (Lu et al., 2019). 
Then, we define two kinds of terms indicative of bias targets: female-indicator words and male-indicator words; we also define six types of attributes: female-adj words, male-adj words, male-stereotyped jobs, female-stereotyped jobs, pleasant words, and unpleasant words, by drawing on existing word lists in the fair text generation context (Caliskan et al., 2017) (Gonen & Goldberg, 2019). Each input x is a sentence where sensitive attributes are masked. We use the BERT model (Devlin et al., 2018) to generate the initial probability distribution over the entire vocabulary for the word at 9 \fthe masked position, denoted by h(x). We then use our algorithm to post-process h(x) and obtain the function p(x), which is the calibrated probability of the output. We define two sets of prompts: Afemale and Amale be the set of prompts containing female-indicator and male-indicator words, respectively. We aim to control the gender disparity gap |P(x \u2208A) \u00b7 [P(o(x) \u2208U|x \u2208A) \u2212P(o(x) \u2208U)]| for A \u2208{Afemale, Amale}. Figure 2 plots the disparity gap for A = Amale (the result for A = Afemale is deferred to the appendix due to space constraints). It is evident that our post-processing technique effectively limits the disparity between the probabilities of outputting biased terms related to different gender groups, ensuring that it remains consistently below a specified threshold value of \u03b1 = 0.002 (we will further discuss the way of choosing \u03b1 in the Appendix D). Additionally, we assess the cross-entropy loss between the calibrated output distribution and the corresponding labels. Unlike the calibration set where sensitive words are deliberately masked, we randomly mask words during the cross-entropy test to evaluate the model\u2019s overall performance, extending beyond the prediction of sensitive words. The cross-entropy of the test set is 9.9291 before post-processing and 9.9285 after it, indicating that our algorithm does not reduce the accuracy of the model while reducing gender disparities. We would like to note that our algorithm is not designed to enhance accuracy but to improve fairness while ensuring that the performance of cross-entropy does not deteriorate too much. Figure 2: The bias on outputting different types of sensitive attributes measured on the corpus data. The results for the synthetic data are deferred to the appendix. 4.2 Prediction-Set Conditional Coverage in Hierarchical Classification For hierarchical classification, we use the Web of Science dataset (Kowsari et al., 2017) that contains 46, 985 documents with 134 categories including 7 parent categories. We choose HiAGM (Wang et al., 2022) as the network to generate the initial scoring. Our algorithm is then applied to find the threshold function that yields a fair output. We set our coverage target to be \u03c3 = 0.95 with a tolerance for coverage deviations of \u03b1 = 0.025. Equivalently put, our goal is that for each of the predictions, we aim to cover the true label with probability 95 \u00b1 2.5%, even conditional on the prediction we make. We choose naively outputting the leaf node (denoted as \u201cunprocessed\u201d in the figure) as one baseline and the split conformal method (Angelopoulos et al., 2023) as another baseline. Figure 3 shows that our method achieves coverage within the target tolerance for all predictions, while the two baselines fail to satisfy the coverage guarantee for predicting \u2019CS\u2019 and \u2019Medical\u2019. 
Figure 3: The deviation of prediction-set conditional coverage from the target.
4.3 Fair FNR Control in Image Segmentation
We use the FASSEG (Khan et al., 2015) dataset and adopt the U-net (Ronneberger et al., 2015) network to generate the initial scoring function for each pixel, representing the predicted probability that the pixel belongs to the signal. The dataset contains 118 human facial images and their semantic segmentations. We set the target FNR to σ = 0.075 with a tolerance for deviations of α = 0.005, and calibrate the FNR across gender and racial subgroups. In addition, we compare with the method proposed in (Angelopoulos et al., 2023), which controls the on-average FNR in a finite-sample manner based on split conformal prediction. The results yielded by U-net and by split conformal are plotted as baselines in Figure 4. Our algorithm is effective: the deviations of the FNRs of GMC from the target σ are controlled below the tolerance α across all subgroups, while the baselines perform poorly on the male and white subgroups. Since equalizing FNR does not by itself imply accuracy, we also report the accuracy of our model's output together with that of the baselines. The accuracy of our model, measured as the ratio of correctly predicted pixels to the total number of pixels, is 0.86; the two baselines achieve 0.84 and 0.92, respectively. This result suggests that our algorithm yields significant gains in mitigating FNR disparities without a significant sacrifice in accuracy.
Figure 4: The deviation of the false negative rate from the target in image segmentation.
20
+ }
title_10K/test_title_short_2405.02228v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02228v1",
3
+ "title": "REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs",
4
+ "abstract": "Automatic citation generation for sentences in a document or report is\nparamount for intelligence analysts, cybersecurity, news agencies, and\neducation personnel. In this research, we investigate whether large language\nmodels (LLMs) are capable of generating references based on two forms of\nsentence queries: (a) Direct Queries, LLMs are asked to provide author names of\nthe given research article, and (b) Indirect Queries, LLMs are asked to provide\nthe title of a mentioned article when given a sentence from a different\narticle. To demonstrate where LLM stands in this task, we introduce a large\ndataset called REASONS comprising abstracts of the 12 most popular domains of\nscientific research on arXiv. From around 20K research articles, we make the\nfollowing deductions on public and proprietary LLMs: (a) State-of-the-art,\noften called anthropomorphic GPT-4 and GPT-3.5, suffers from high pass\npercentage (PP) to minimize the hallucination rate (HR). When tested with\nPerplexity.ai (7B), they unexpectedly made more errors; (b) Augmenting relevant\nmetadata lowered the PP and gave the lowest HR; (c) Advance retrieval-augmented\ngeneration (RAG) using Mistral demonstrates consistent and robust citation\nsupport on indirect queries and matched performance to GPT-3.5 and GPT-4. The\nHR across all domains and models decreased by an average of 41.93% and the PP\nwas reduced to 0% in most cases. In terms of generation quality, the average F1\nScore and BLEU were 68.09% and 57.51%, respectively; (d) Testing with\nadversarial samples showed that LLMs, including the Advance RAG Mistral,\nstruggle to understand context, but the extent of this issue was small in\nMistral and GPT-4-Preview. Our study contributes valuable insights into the\nreliability of RAG for automated citation generation tasks.",
5
+ "authors": "Deepa Tilwani, Yash Saxena, Ali Mohammadi, Edward Raff, Amit Sheth, Srinivasan Parthasarathy, Manas Gaur",
6
+ "published": "2024-05-03",
7
+ "updated": "2024-05-03",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI",
12
+ "cs.IR"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "LLM Fairness",
16
+ "gt": "REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs",
17
+ "main_content": "Introduction The development of LLMs marks a significant advancement in computational linguistics and artificial intelligence (AI) (Tamkin and Ganguli, 2021). LLMs, such as OpenAI\u2019s GPT series, have shown remarkable capabilities in text generation (Zhao et al., 2023), and question-answering systems (Rasool et al., 2023; Elgedawy et al., 2024). However, their limitations become apparent as they become more integrated into various domains, including defense (Schwinn et al., 2023), news media (Fang et al., 2023), and education (Yan et al., 2024; Hung et al., 2023; Augenstein et al., 2023). The critical issue is their propensity to generate hallucinated sentences and propagate factually inaccurate pieces of information without reference (Ji et al., 2023; Rawte et al., 2023). These inaccuracies diminish the models\u2019 reliability and erode users\u2019 trust, a vital component in their widespread adoption. Commercial LLM-based search systems, including Bing Search-powered GPT 4 (Mehdi, 2024) and Perplexity.ai (Roose, 2024), are still not capable enough of resolving the issue of citation generation to confirm the scientific feasibility of either a generated sentence(s) or given sentence(s) from the scientific literature. For instance, Figure 1 shows how proprietary LLMs respond to the zero-shot indirect query. It is evident from the figure that while general-purpose LLMs like GPT3.5 and GPT-4 \u2018pass\u2019 the query, task-specific LLM Perplexity does generate relevant citations but still shows hallucination. Consider the following arXiv:2405.02228v1 [cs.CL] 3 May 2024 \fFigure 1: An illustration and motivating example for investigating LLMs for automatic citation generation task. Perplexity.ai, which is an LLM-based search engine, yields a citation that doesn\u2019t exist [1], an incorrect one [3], and a correct citation [2]. Advance RAG (defined in this research) improved context understanding and citation generation quality. Time: Feb. 05, 2024. three real world examples of this research: Citation Generation in Research Articles and News Reports: LLMs can generate highly persuasive and realistic content, especially in writing research articles or news reports, making it challenging for users to distinguish between genuine and fabricated information Nakano et al. (2021); Menick et al. (2022); Kumarage and Liu (2023). Citation Generation in Reports for Organizational Cybersecurity: In cybersecurity, where decisions often need to be made quickly and are based on the data provided, the accuracy and reliability of information are paramount (Divakaran and Peddinti, 2024). Inaccurate citations can lead to misinformation and potentially severe consequences in decision-making processes. LLMs can automate the citation generation process but need to be carefully designed for organization specific cybersecurity. Citation Generation in Reports for Legal: In a significant event, an attorney tried employing ChatGPT for legal analysis during a trial (see subsection A.1)(Bohannon, 2023). While ChatGPT generated information, it failed to capture the nuanced complexities and critical legal precedents needed for the case. This underscores the importance of confirming and sourcing accurate legal citations and precedents relevant to the case. We contribute by addressing these challenges with the following: (A) Introduce REASONS, a dataset created by extracting related works from IEEE articles spanning 12 scientific domains from 2017 to 2023. 
(B) We employ a new RAG training regime to develop Advance RAG. Advance RAG and Na\u00efve RAG examine the factual integrity of the information retrieved by dense retrievers and its presentation as citations by LLMs. (C) We evaluate both proprietary and public LLMs and their RAG counterparts (10 models) to assess their contextual awareness using metrics like Pass Percentage (PP) and Hallucination rate (HR). Additionally, we have measured the quality of citation generation using F-1 and BLEU scores. (D) We conduct an adversarial examination to provide a clear assessment of context awareness regarding citation generation in LLMs. Findings:(I) Perplexity, faces a major challenge when dealing with indirect and direct query on the REASONS dataset (Figure 2 Figure 5, and in Appendix A Table 6 Table 9).(II) Citation generation is enhanced uniformly across public and proprietary LLMs when metadata like abstract and title are considered with indirect query (Figure 3 and Figure 5, along with Table 7 and Table 9). (III) Advance RAG with Mistral LLM outperforms other competitive proprietary and public LLMs. This performance is realized by a reduction in the HR and increments in F-1 and BLEU scores (Figure 3 and Figure 5 (last two bars) and Table 7 and Table 9 (last two columns)). (IV) For domains such as Quantum Computing and Biomolecules that are heavy in mathematics and numerals, there was a substantial decline in citation generation quality and an increase in HR. Adversarial examination strengthens our understanding that despite being exorbitantly large, LLMs lack context awareness (Table 2). (V) Advance RAG did provide convincing evidence of context understanding (Table 2). Further improvements in RAG-based LLMs are desirable, and utilizing REASONS dataset can provide valuable insights into context understanding and provenance in tasks such as hypothesis generation. 2 Background Early Techniques in Citation Recommendation: The practice of citing sources is a cornerstone of academic and professional writing, serving as the bedrock for reliability, and truthfulness in scholarly work (Cronin, 1981). The evolution of citation recommendation systems mirrors the broader advancements in computational linguistics and nat\fural language processing (NLP) (Bai et al., 2019; Ali et al., 2021). Initial methods in citation recommendation focused on basic techniques such as text feature-based systems (Strohman et al., 2007), simple keyword matching, and basic statistical methods (Bethard and Jurafsky, 2010). Context-aware citation recommendation systems supplemented these methods (He et al., 2010; Ebesu and Fang, 2017; Jeong et al., 2020a; Huang et al., 2021). However, their inability to grasp deeper textual contexts limited their effectiveness. Machine learning in Citation Recommendation The incorporation of machine learning into citation recommendation systems represents an initial step toward automating the citation process, which is typically regarded as manual and laborintensive(Agarwal et al., 2005; K\u00fc\u00e7\u00fcktun\u00e7 et al., 2012). These systems began to exhibit an improved understanding of the text, although they still lacked a nuanced grasp of complex contexts (Tran et al., 2015). The application of neural networks revolutionized citation recommendation. NLP algorithms, capable of parsing complex sentence structures, started identifying relevant themes for contextually appropriate citation recommendations (Zarrinkalam and Kahani, 2013; Beel et al., 2016; Iqbal et al., 2020). 
Concurrently, graph-based models, visualizing literature as interconnected networks, enhanced citation recommendations by considering content similarity and citation patterns (Ali et al., 2020; Chakraborty et al., 2015). With deep learning, citation recommendation systems began incorporating semantic analysis, employing models like word embeddings and neural networks for a more nuanced understanding (Yang et al., 2018; Bhagavatula et al., 2018; Vajdecka et al., 2023). Adapted from commercial use, collaborative filtering also emerged, recommending citations based on similar citation behaviors (Wang et al., 2020). Large Language Models in Citation Generation: The advent of LLMs like GPT-3 and its successors has further transformed NLP. Initial language model systems such as those based on BERT have significantly improved citation recommendation by converting unstructured text into meaningful vectors (Jeong et al., 2020b; Devlin et al., 2018; Bhowmick et al., 2021). Recent studies have focused on evaluating the fidelity of generated text to its sources (Ji et al., 2023). (Rashkin et al., 2023) introduced the \u201cattributable to identified sources\u201d (AIS) score, while (Bohnet et al., 2022) and others (Honovich et al., 2022; Yue et al., 2023) have focused on automating AIS. Concurrent work by (Liu et al., 2023) explored human evaluation of commercial generative search engines such as Bing. Chat, NeevaAI, Perplexity.ai, and YouChat. Despite these advancements, LLMs in citation recommendation still struggle with generating accurate information and providing references, as shown in studies by (Ji et al., 2023; Zheng et al., 2023). We conduct empirical and investigative research on why public and proprietary LLMs, including the powerful GPT-4 (which has not been examined yet), are prone to incorrect citation generation. Further, we provide means for improving the citation generation in public LLMs through a customized design using RAG. This limitation necessitates an approach closely aligning with RAG. RAG compels LLMs to provide citations alongside the generated text. The concept of retrieval-augmented LLMs has gained traction in recent years following (Guu et al., 2020; Borgeaud et al., 2022; Izacard et al., 2022; Khandelwal et al., 2019; Schick et al., 2023; Jiang et al., 2023b; Yao et al., 2022; Gao et al., 2023). We evaluate public and proprietary LLMs and their RAG counterparts on citation generation using REASONS, a meticulously curated dataset from arXiv spanning key domains in computer science and related fields. This allows us to assess the LLM\u2019s ability to identify a given sentence\u2019s source accurately. Domain Paper Count IEEE Papers Citation Count CV 5488 1028 3437 Robotics 3656 292 776 Graphics 1796 384 1417 IR 1741 564 1654 AI 1697 531 2021 NLP 1526 293 1092 Cryptography 1084 371 1106 NNC 892 111 326 HCI 761 112 229 Databases 723 115 182 QC 421 126 456 Biomolecules 119 17 27 Total 19904 3944 12723 Table 1: Our benchmark dataset, REASONS, includes papers and sentences from 12 domains. It primarily features ten domains in computer science and 2 in biology. Full forms of domain acronyms are provided in subsection A.5. \f3 Problem Setup Scope of REASONS: The dataset comprises sentences gathered from the related work sections of articles in computer science and biology available on arXiv (arX). Summary is provided in Table 1. It should be noted that GPT-3.5 or its successors may have processed all the papers published on arXiv from 2017 to 2021 while training. 
To ensure our dataset is unbiased, we include papers published in 2022 and 2023 that test the memory and understanding of LLMs. Exclusions were made for mathematics, statistics, and physics due to the abundance of equations in the related work section, and the crawling method theoremKb1 lacked the required versatility. We chose to focus on IEEE papers as they are represented across all 12 domains we considered. Each sentence in the related work section encapsulates the author\u2019s thought process in citing related works: (A) Every sentence captures the author\u2019s interpretation and emphasis on original methodology, critique of prior work, corrections to previous research, or acknowledgment of pioneers. This encompasses summarizing these aspects briefly and concisely. (B) The cited work in the related work section is either incidental or important to current work (Valenzuela et al., 2015). REASONS is inspired by previously constructed s2ORC and UnarXive datasets containing academic papers (see Table 4 in Appendix A); however, we diverge on the following points: (A) We provide sentence-level annotation of citations on major computational domains on arXiv. (B) Each sentence is accompanied by its metadata, which includes the paper title, abstract, and author names of the paper it cites. It also contains the title of the paper from which it was taken. (C) The dataset structure allows for an easy examination of LLMs using indirect and direct queries. Crawling Process: The web crawler employs the Oxylabs2 SERP Scraper API as its methodology, enabling real-time data extraction from major search engines. This API offers a proxy chaining platform for efficient data extraction. The dataset is meticulously organized in JSON format with a detailed outline (see \u201cJSON Structure\u201d). A complete GitHub repository is provided, containing the dataset and the code for reproducibility (see details in subsection A.3). We plan to keep updating the repository with more articles and metadata. The 1https://github.com/PierreSenellart/theoremkb 2https://oxylabs.io/ associated costs are provided in (subsection A.2). JSON Structure {\"Computer Vision\": { \"http://arXiv.org/abs/2012.05435v2\": { \"Paper Title\": \"Optimization-Inspired..\", \"Sentences\": [ {\"Sentence ID\": 32, \"Sentence\": \"... For GM, ... \", \"Citation Text\": \"C. Ledig,...\", \"Citation\": { \"Citation Paper ID\": \"arXiv:1609.04802\", \"Citation Paper Title\": \"Title:Photo..\", \"Citation Paper Abstract\": \"Abstract:.\", \"Citation Paper Authors\": \"Authors:...\" }}]}}} 3.1 Problem Formulation We define two tasks for LLMs over the REASONS dataset R: (a) Direct Querying and (b) Indirect Querying. For experimentation, we segment R into RS and RM. RS represents sentences and paper titles for which references are to be generated with or without the support from metadata RM. Direct Querying Task: Given a title ti \u2208RS, the LLM should generate the author list. For the task of direct querying with metadata, the LLM is given the following input: ti \u2208RS, the Advance RAG model retrieves top-40 chunks of information ai1, ..., ai40 \u2208RM, and generates the names. Indirect Querying Task: Given a sentence si \u2208RS, the LLM should generate a paper title in zero-shot setting. 
For the task of indirect querying with metadata called Sequential Indirect and Direct Prompting (SID Prompting), the LLM is given the following input: si \u2208RS and ground truth abstract abss \u2208RM as well as the authors aus \u2208RM, and the model is asked to generate the citation paper title. Examples of direct and indirect queries are: Direct Prompt Prompt: Who were the authors of the research paper \"Research Paper Title\"? Instruction: List only author names, formatted as < firstname >< lastname >, separated by comma. Do not mention the paper in the title, also, if you don\u2019t know, write \u2019pass\u2019. Response: Author Names. \fIndirect Prompt Prompt: I have taken a sentence from the research paper titled \u201cResearch Paper Title\u201d, give me the research paper that this sentence is citing. If you cannot come up with the paper titles, write \u2018pass.\u2019 Don\u2019t write anything else. Instruction: Sentence \"uses fractional max-pooling to randomly specify non-integer ratios between the spatial dimension sizes of the input and the output to pooling layers.\" Response: Citation Paper Title. Implementation of Direct and Indirect Querying: Direct querying is executed using zero-shot prompting for scenarios without metadata and chain-of-thoughts prompting for metadata situations. We modify the chain-of-thoughts prompting with SID Prompting. It begins with an indirect query. Following an incorrect response or a \u2018pass,\u2019 more details about the cited paper are given (i.e., direct query), including its abstract and authors\u2019 names. This is an iterative approach to generate the correct citation. Following are the two examples of these prompting strategies: Direct Query with Metadata Prompting Prompt: Who were the authors of the research paper \u201cResearch Paper Title\"? Let me give you some more context by providing the abstract of the research paper. Abstract:\u2019....\u2019. Instruction: List only author names, formatted as <first name><last name>, separated by comma. Do not mention the paper in the title. Also, if you don\u2019t know, write \u2018pass.\u2019 Response: Author Names. SID Prompting Prompt: I have taken a sentence from the research paper titled \"Research Paper Title.\" give me the title of the possible research paper that this sentence is citing. If you cannot come up with the paper titles, write \u2019pass\u2019. Don\u2019t write anything else. Instruction: Sentence:\"......\". Let me give you some more context by providing the authors and the abstract of the paper the sentence is citing. Authors:\"......\", Abstract:\".......\" Response: Citation Paper Title. 3.2 Models and Evaluation Our research has focused on a diverse array of LLMs, carefully chosen to provide a broad perspective on the capabilities and limitations inherent in current language model technologies. Proprietary Models: Our selection of proprietary models includes those from OpenAI and Preplexity.ai. While OpenAI is known for its cutting-edge NLP models, driving significant advancements in the field, Preplexity.ai focuses on models with unique functionalities, such as recommending citations and utilizing natural language prediction for innovative search experiences. Public Models: We choose LLAMA 2 (Touvron et al., 2023) and Mistral (Jiang et al., 2023a) as the two publicly available LLMs that have demonstrated competitive performance compared to proprietary LLMs. 
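For concreteness, the sketch below flattens one REASONS JSON file (following the "JSON Structure" excerpt above) into sentence records and assembles the zero-shot direct and indirect prompts from the templates; the function names and field accesses are assumptions for illustration and need not match any released code.

import json

def iter_records(path):
    # Flatten a REASONS JSON file into (domain, paper_title, sentence, citation) tuples.
    with open(path) as fh:
        data = json.load(fh)
    for domain, papers in data.items():
        for paper in papers.values():
            for s in paper["Sentences"]:
                yield domain, paper["Paper Title"], s["Sentence"], s["Citation"]

def direct_prompt(title):
    # Zero-shot direct query: ask for the author list of the given paper title.
    return (f'Who were the authors of the research paper "{title}"? '
            'List only author names, formatted as <first name><last name>, separated by comma. '
            "Do not mention the paper in the title; also, if you don't know, write 'pass'.")

def indirect_prompt(source_title, sentence):
    # Zero-shot indirect query: ask which paper the sentence is citing.
    return (f'I have taken a sentence from the research paper titled "{source_title}", '
            'give me the research paper that this sentence is citing. '
            "If you cannot come up with the paper titles, write 'pass'. Don't write anything else. "
            f'Sentence: "{sentence}"')

The metadata-augmented variants would additionally interpolate the cited paper's abstract and author fields from the Citation record, as in the SID template above.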
We evaluate their effectiveness on the REASONS dataset under the standard state and retrieval-augmentation conditions. This analysis goes beyond simply comparing proprietary and public models, extending to evaluating models based on their size, particularly those with 7B parameters. 3.3 Evaluation Metrics Our evaluation uses four key metrics: 1) The BLEU Score assesses the structural alignment through clipped n-gram matching. 2) The F-1 Score evaluates the balance between precision and recall, reflecting the models\u2019 effectiveness in capturing key information. 3) Hallucination rate (HR), which we estimate by averaging over incorrect and partially correct generated citations. HR = 1 QD P I[\u02c6 c \u0338= c] + 1 |Uw| P|Uw| w=1 I[\u02c6 cw \u0338= cw], where QD: queries within a domain, and |Uw|: total number of unique words in generated citation (\u02c6 c) and true citation (c). 4) Pass Percentage (PP) measures the tendency of an LLM to either respond or abstain from giving a response. It is calculated as follows: 1 QD P I[\u02c6 c = Pass]. It is crucial to emphasize that PP serves as a safeguard to prevent LLMs from generating hallucinatory responses but also reduces engagement. Additionally, even with a high PP, the HR can be high. This implies that the model struggles to discern whether it offers correct or incorrect citations in the remaining instances. 3.4 Retrieval Augmented Generation (RAG) RAG combines a retriever and a generator to create better answers. RAG can access external knowledge, unlike methods that feed the model prompts. This lets it craft more accurate, relevant, and informative responses than models that rely solely on what they were pre-trained. We investigate RAG\u2019s ability to improve LLMs\u2019 accuracy. Ideally, RAG would help LLMs avoid giving wrong answers (low PP) and making things up (HR). We also investigate whether RAG works consistently with direct and indirect questions across different scientific fields (12 domains). We experiment with two forms of RAG architecture: \f(a) Na\u00efve RAG and (b) Advance RAG. Both architectures leverage the same bi-encoder-based retriever architecture (Karpukhin et al., 2020). Given a corpus of documents RM and a sentence s \u2208RS, the document encoder maps d \u2208RM to an embedding E\u03b8(c) and the query encoder maps s to an embedding E\u03b8(s). The top-k relevant documents for s are retrieved based on the sentence-document embedding similarity, which is often computed via dot product: z(s, d) = exp(E\u03b8(s)T E\u03b8(d)). We start with a bi-encoder retriever using an embedding model from OpenAI (subsection A.4). Other ways to set up a bi-encoder retriever, such as DRAGON+ (Lin et al., 2023), are possible. However, those are more useful when involving large-scale data augmentation. The retrieved documents are ranked in two ways, which separates Na\u00efve RAG from Advance RAG. Under the Na\u00efve RAG, we use BM25 relevance scoring to rank the documents, whereas, in Advance RAG, we fine-tune a cross-encoder on REASONS document index RM to better align it with our task of citation generation with LLM. For the fine-tuning of the cross-encoder, we use localized contrastive loss (LCL) for two reasons: (a) In RM, we do not have labeled positive and negative documents, and (b) for a sentence s there is a possibility for more than one true positive documents (Pradeep et al., 2022). 
LCL is formally defined as follows: LLCLs := \u2212log exp(zs,{d+}) P d\u2208Gs exp(zs,d) LLCL := 1 |S| X s\u2208Rs,Gs\u2208Rs M LLCLs where Gs represents a set of documents for a sentence s, which consist of a set of relevant documents ({d+}) and n-1 non-relevant documents {d\u2212} sampled from Rs M using biencoder. The training of Advance RAG happens through the standard cross entropy loss: LCE(\u02c6 c|s, \u03d5) = Pb i=1 I(\u02c6 cw i = cw i ) \u00b7 log Pr(\u02c6 cw i |\u03d5) where, \u03d5 is parameter of the generator LLM and b is the minibatch fine-tuning in Advance RAG. \u02c6 ci represents ith citation generation, and I(\u02c6 cw i = cw i ) represents word level comparison with ground truth citation (direct query: author names; indirect query: paper titles). For the Na\u00efve and Advance RAG, we employ LLAMA-2 7B and Mistral 7B as competitive models against proprietary LLMs. 4 Results We conducted experiments encompassing four distinct prompting styles applied to twelve scientific domains. This extensive analysis involved 12,723 sentences, resulting in a substantial dataset rigorously evaluated using ten different models. This equates to 508920 instance assessments involving 4 (prompting styles) \u00d7 12,723 (sentences for all domains) \u00d7 10 (models). The total duration required to execute all experiments on the GPU is 238 days, 6 hours, and 59 minutes. For detailed information regarding the time spent on experiments across various domains, please refer to the appendix (see subsection A.6 and Table 5). Zero-Shot Indirect Prompting: In Figure 4, a majority of the models exhibited high HR. As expected for a huge model GPT-4-1106-preview (1 Trillion Parameters) shows a relatively lower HR of 67.73% and a higher PP of 89% averaged across 12 domains. Perplexity-7b-Chat showed an exceptionally high PP of 97.5%, which is surprising, as this LLM is designed specifically for citation generation. RAG Mistral was a competitive model with GPT-4 with a lower PP of 21% and HR of 72.49% in comparison to other LLMs. Analysis shows RAG Mistral is competitive because of the high variance in HR compared to GPT-4-1106-preview. Generation quality measured by F-1 and BLEU scores were predominantly low across the board, with GPT-4 (not the preview, G1) comparatively better scores. RAG Mistral and RAG LLAMA 2 rank second and third best respectively. SID Prompting In Figure 5, showed improvement across all the LLMs in citation generation over indirect queries. An average improvement of 21% was measured, with a reduction in variance. Even though some models like Perplexity-7b-Chat and LLAMA 2 still had high HR rates, the PP dropped significantly, especially for GPT-4-1106-preview. The results of this experiment indicate that SID prompting in LLMs can balance the trade-off between PP and HR, significantly enhancing generation quality with an (8%\u2191) increase in BLEU and a (13%\u2191) in F-1 (The Appendix B provides examples for visual inspection.). Zero-Shot Direct Prompting presents a very idealistic scenario where the LLMs have access to context through direct query. This leads to both lower PP and HR. The citation generation quality significantly improves from zero-shot in\fG1 G2 G3 P RMM RL L AL AM 0 50 100 Hallucination Rate G1 G2 G3 P RMM RL L AL AM 0 0.2 0.4 0.6 0.8 F-1 Score G1 G2 G3 P RMM RL L AL AM 0 0.2 0.4 0.6 0.8 BLEU Score G1 G2 G3 P RMM RL L AL AM 0 50 100 Pass Percentage Figure 2: Averaged Zero-Shot Direct Prompting results of different LLMs across all 12 domains. 
G1 shows notably lower HR and higher F-1 and BLEU scores, indicating superior performance in generating citations. In contrast, model P exhibits the highest HR and the lowest scores in F-1 and BLEU, suggesting challenges in generating accurate and contextually relevant citations. The RAG models (RM and RL) demonstrate varied results, with RM showing a better accuracy and coherence balance than RL. G1: gpt-4-1106-preview, G2: gpt-4, G3: gpt-3.5-turbo, P: pplx-7b-chat, RM: Na\u00efve RAG mistral-7b-instruct, M: mistral-7b-instruct, RL: Na\u00efve RAG llama-2-7b-chat, L: llama-2-7b-chat, AL: Advance RAG llama-2-7b-chat, AM: Advance RAG mistral-7b-instruct. For the purposes of clarity and saving space, the terms AL and AM are used in the figures to denote Advance RAG llama-2-7b-chat and Advance RAG mistral-7b-instruct, respectively. In the main text of the paper, these are referred to as AdvRAG(L) and AdvRAG(M). G1 G2 G3 P RMM RL L AL AM 0 50 100 Hallucination Rate G1 G2 G3 P RMM RL L AL AM 0 0.5 1 F-1 Score G1 G2 G3 P RMM RL L AL AM 0 0.5 1 BLEU Score G1 G2 G3 P RMM RL L AL AM 0 0.5 1 Pass Percentage Figure 3: Averaged Direct Prompting with Metadata results of different LLMs across all 12 domains. The plot indicates that models G1, G2, and G3 stand out with their low HR and impressive F-1 and BLEU scores, in contrast to other models that face challenges. All models except RM reach a 0% PP, suggesting that including metadata significantly enhances their contextual understanding. G1 G2 G3 P RM M RL L 0 50 100 Hallucination Rate G1 G2 G3 P RM M RL L 0 0.2 0.4 0.6 F-1 Score G1 G2 G3 P RM M RL L 0 0.2 0.4 0.6 BLEU Score G1 G2 G3 P RM M RL L 0 50 100 Pass Percentage Figure 4: Averaged Zero-Shot Indirect Prompting across 12 domains. This prompting method led to elevated HR among the models. There was also a notable variance in PP, with models G3, P, and L exhibiting higher scores. Both conditions indicate challenges in understanding context and generating accurate citations when using indirect prompts. G1 G2 G3 P RMM RL L AL AM 0 50 100 Hallucination Rate G1 G2 G3 P RMM RL L AL AM 0 0.2 0.4 0.6 0.8 F-1 Score G1 G2 G3 P RMM RL L AL AM 0 0.2 0.4 0.6 0.8 BLEU Score G1 G2 G3 P RMM RL L AL AM 0 50 100 Pass Percentage Figure 5: Averaged SID Prompting results of different LLMs across all 12 domains. Models G1, G2, and G3 exhibit relatively better outcomes with lower HR and higher F-1 and BLEU scores, suggesting more contextual understanding. Other models demonstrated high HR, indicating difficulties in accurate citation generation with SID Prompting. Notably, while models G1 and G3 have high PPs, indicating some difficulties with SID, their overall performance still reflects a more advanced level of language processing and contextual comprehension compared to the other models. direct and SID promptings, achieving high F-1 and BLEU scores (see Figure Figure 4). However, Perplexity-7b-Chat, oddly, had high PP and HR, suggesting a need for more research on such \fspecialized LLM search engines. We observed that Perplexity-7b-Chat expands its search queries and adds references to the broader content it finds. The issue is that the expanded versions drift too far in meaning from the original. In Direct Prompting with Metadata, when metadata such as abstracts and titles were used with indirect questions, all the LLMs got better at generating citations and had low HR and PP. 
This shows that having more information helps LLMs create more accurate and related citations, proving the importance of enough data for good language processing. Note that PP dropped to zero for almost all models when direct promoting includes metadata. All GPT LLMs achieved F-1 and BLEU scores close to 1.0 and showed more consistent results overall. Two main points from this experiment are: First, adding metadata to LLMs is effective for all of them, especially RAG models that integrate this augmentation in their learning process. Second, smaller models with advance RAG (Mistral and LLAMA-2) adjust better to metadata than GPT-4-Preview/4/3.5 (see Figure 3). Overall: Advance RAG Mistral 7b outperformed other competitive proprietary and public LLMs in all prompting styles. This superior performance was notably marked by reduced HR, suggesting this model is more adept at generating accurate and relevant responses when adding metadata. Furthermore, improvements in F-1 scores reinforce its reliability in retrieving information. Higher BLEU scores were observed, signifying that the language output of the model aligns closely with human-like text in terms of fluency & coherence. 5 Adversarial Examination The analysis of LLMs using the REASONS dataset highlights significant variability in their performance across different domains. While they perform moderately better in areas like AI and CV with lower HR and higher F-1/BLEU scores, they struggle in complex domains such as QC, Biomolecules, and Cryptography, likely due to limited training data and the complexity of these subjects. This variability in performance indicates that LLMs have varying degrees of contextual understanding, with a tendency to perform better in domains with more extensive training data and less complex structures (e.g., maths and numerics). Motivation and Setup: We conducted adversarial experiments across all models to better assess their contextual understanding. The core concept Group PP(%) BLEU F1 HR Changing Paper Title G1 96.23 0.6210 0.8470 17.99 G2 31.45 0.0524 0.2640 83.66 G3 68.55 0.0389 0.1828 87.35 RM 3.14 0.0796 0.1584 86.78 M 0.00 0.0003 0.0221 94.95 RL 5.03 0.0628 0.1448 87.56 L 0.00 0.0066 0.0254 98.30 AdvRAG(L) 0.00 0.1322 0.4763 85.72 AdvRAG(M) 0.00 0.1569 0.5839 75.41 Changing Paper Abstract G1 95.60 0.4595 0.6451 38.49 G2 32.70 0.0396 0.2186 86.22 G3 76.10 0.0034 0.1013 91.64 RM 7.55 0.0520 0.1216 89.44 M 0.00 0.0074 0.0161 90.20 RL 2.52 0.0445 0.1112 90.16 L 0.00 0.0017 0.0146 99.01 AdvRAG(L) 0.00 0.4101 0.5780 39.67 AdvRAG(M) 0.00 0.4904 0.6954 39.57 Table 2: Performance of various LLMs on adversarial set, designed by swapping titles and abstracts. Models G1, G2, and G3, possibly exposed to similar data during training, struggled with the adversarial sets, resulting in high HR and PP. Conversely, models like AdvRAG(L) and AdvRAG(M) showed better performance, suggesting that these models attempt to understand the context before generating the citations. behind these experiments was to provide the models with incorrect yet similar metadata about the sentences in the prompts. The aim was to discern whether the models generated citations based on the contextual grasp of the provided metadata or if the metadata had minimal influence on the citation generation process. These adversarial experiments comprised two types: 1) Providing inaccurate paper titles related to the sentences. 2) Providing incorrect paper abstracts associated with the sentences. Both experiments were conducted using the SID prompting. 
To facilitate these experiments, we curated a subsample of 200 sentences from the REASONS dataset spanning all the domains. We extracted each sentence\u2019s most similar paper title or abstract from this dataset and replaced the original metadata. For similarity calculation, we use the RatcliffObershelp metric, which is calculated as twice the length of the longest common substring plus recursively the number of matching characters in the non-matching regions on both sides of the longest common substring (Tang et al., 2023). According to this metric, for the following example title \u201cDiffusion models for counterfactual explanations,\u201d the best replacement is \u201cOctet: Object-aware models for counterfactual explanations (0.736)\u201d as opposed \fto \u201cAdversarial counterfactual visual explanations (0.638)\u201d. We considered a threshold of 0.70 effective in preparing the adversarial set. Findings: We found that incorrect paper titles and abstracts easily fool most LLMs if it is similar to accurate information. In Table 2, G1 is displayed at 17.99%, and its pairing with a high PP of 96.23% indicates a defensive mechanism. This means the LLMs are not very good at understanding the true meaning of what they are given. On such a small adversarial set, we expect LLMs like GPT-4-1106-preview and GPT-4 to perform exceedingly well because of their extensive knowledge; however, we observed counterintuitive results in Table 2, all models show the effect. We do see promising direction with AdvRAG(M) and AdvRAG(L); however, further investigation is required into how rich graphical metadata (e.g., knowledge graph) and graph-theoretic approaches to information retrieval can improve LLM effectiveness (He et al., 2024). 6 Conclusion We have developed a new resource called REASONS (REtrieval and Automated citationS Of scieNtific Sentences), a benchmark designed to assess the ability of LLMs to understand context and generate appropriate citations. This benchmark includes sentences from the related work sections of papers, along with citations and metadata across 12 scientific and computational fields. We evaluated proprietary and public LLMs\u2019 ability to correctly provide author names and paper titles under two conditions: direct and indirect citation. Surprisingly, none of the LLMs demonstrated the readiness to annotate draft reports in various professional settings, such as market analysis, misinformation prevention, defense strategy, and healthcare reporting. We observed a trade-off between PP and HR, where GPT-4 and GPT-3.5 achieved higher accuracy at the cost of a lower HR. In contrast, though smaller with only 7B parameters, the Advance RAG model showed reasonable efficiency. Unlike other models, in adversarial tests where abstracts or paper titles were swapped, Advance RAG unexpectedly outperformed GPT-4, suggesting it does capture context before generating citations. Future Work: Through reasoning and explanation, we plan to explore and mitigate the noted shortcomings in citation generation (trade-off between HR and PP, high variance in BLEU scores, sub-par scores on adversarial set). One approach is to employ the Toulmin model (Naveed et al., 2018)) within Advance RAG. We believe these improvements will improve the quality of citation generation and better equip the models to manage complex reasoning (e.g., hypothesis generation and verification (Tyagin and Safro, 2023)) challenges confidently. 
Limitations Several factors constrain our study on applying LLMs for citation generation. (a) Primarily, integrating high-parameter-size models (>13B; refer to Table 5 for computation time) with RAG is not feasible, limiting our ability to leverage more complex models. (b) Additionally, the high computational resources required for such models are often inaccessible in academic settings. (c) One constraint in our study was the dataset creation, where we confined ourselves to predominantly IEEE format papers, particularly with domains with a high count of submissions. (d) Another significant limitation is the current inability of LLMs to effectively process and interpret mathematical expressions, a crucial aspect in many academic papers. (e) Due to the latest version of Google API (time stamp: December 04, 2023) lacking the citation generation feature, we have limited our experiments to OpenAI only. (f) While cross-encoders can be more powerful in understanding text relationships, they tend to be more computationally intensive. This is because they need to process every possible pair of inputs together, which can be a significant workload, especially in cases where there are many potential pairs to consider (like in large-scale retrieval tasks in our REASONS dataset). These constraints highlight the need for advancements in model adaptability, computational resource accessibility, dataset diversity, and specialized content processing for more robust and wide-ranging applications. Ethical Considerations We followed the Oxylabs Acceptable Use Policy3 and worked alongside some Oxylabs developers to ensure we respected the terms of services on arXiv. arXiv\u2019s terms of service place restrictions on automated crawling of their site for articles marked by \u201carxiv.org perpetual, non-exclusive license and CC BY-NC-ND\u201d. We paid attention to the following key ethical issues: (a) Privacy and Consent: The content on arXiv is publicly available, but the authors who upload their work there may not have 3https://oxylabs.io/legal/ oxylabs-acceptable-use-policy \fconsented to having their preprints crawled and used for other purposes. It\u2019s important to respect the privacy and intellectual property rights of the researchers who contribute to arXiv. We only crawled articles marked as CC Zero, CC BY, and CC BYSA. (b) Potential misuse: We prepared REASONS only to test the citation generation capability of LLMs for subsequent future downstream applications, such as annotating draft analytic reports. Our focus on HR and PP for citation generation and its quality using BLEU and F-1 shows that the data scraped is not for malicious purposes, such as fine-tuning LLMs to generate misinformation or infringe on copyrights. (c) Transparency and Accountability: We have been mindful of our crawling process, and to the best of our knowledge, we have enumerated sufficient details regarding the process. This would help build trust regarding reproducibility, extend REASONS, and ensure that the crawling process was not abused. (d) Author Identity and Contact: No authors of the crawled papers were contacted through their provided information in the publicly available arXiv papers. This user study was duly approved by the authors\u2019 organization\u2019s Institutional Review Board (IRB)."
18
+ }
title_10K/test_title_short_2405.02228v2.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02228v2",
3
+ "title": "REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs",
4
+ "abstract": "Automatic citation generation for sentences in a document or report is\nparamount for intelligence analysts, cybersecurity, news agencies, and\neducation personnel. In this research, we investigate whether large language\nmodels (LLMs) are capable of generating references based on two forms of\nsentence queries: (a) Direct Queries, LLMs are asked to provide author names of\nthe given research article, and (b) Indirect Queries, LLMs are asked to provide\nthe title of a mentioned article when given a sentence from a different\narticle. To demonstrate where LLM stands in this task, we introduce a large\ndataset called REASONS comprising abstracts of the 12 most popular domains of\nscientific research on arXiv. From around 20K research articles, we make the\nfollowing deductions on public and proprietary LLMs: (a) State-of-the-art,\noften called anthropomorphic GPT-4 and GPT-3.5, suffers from high pass\npercentage (PP) to minimize the hallucination rate (HR). When tested with\nPerplexity.ai (7B), they unexpectedly made more errors; (b) Augmenting relevant\nmetadata lowered the PP and gave the lowest HR; (c) Advance retrieval-augmented\ngeneration (RAG) using Mistral demonstrates consistent and robust citation\nsupport on indirect queries and matched performance to GPT-3.5 and GPT-4. The\nHR across all domains and models decreased by an average of 41.93%, and the PP\nwas reduced to 0% in most cases. In terms of generation quality, the average F1\nScore and BLEU were 68.09% and 57.51%, respectively; (d) Testing with\nadversarial samples showed that LLMs, including the Advance RAG Mistral,\nstruggle to understand context, but the extent of this issue was small in\nMistral and GPT-4-Preview. Our study contributes valuable insights into the\nreliability of RAG for automated citation generation tasks.",
5
+ "authors": "Deepa Tilwani, Yash Saxena, Ali Mohammadi, Edward Raff, Amit Sheth, Srinivasan Parthasarathy, Manas Gaur",
6
+ "published": "2024-05-03",
7
+ "updated": "2024-05-09",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.AI",
12
+ "cs.IR"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
16
+ "gt": "REASONS: A benchmark for REtrieval and Automated citationS Of scieNtific Sentences using Public and Proprietary LLMs",
17
+ "main_content": "Introduction The development of LLMs marks a significant advancement in computational linguistics and artificial intelligence (AI) (Tamkin and Ganguli, 2021). LLMs, such as OpenAI\u2019s GPT series, have shown remarkable capabilities in text generation (Zhao et al., 2023), and question-answering systems (Rasool et al., 2023; Elgedawy et al., 2024). However, their limitations become apparent as they become more integrated into various domains, including defense (Schwinn et al., 2023), news media (Fang et al., 2023), and education (Yan et al., 2024; Hung et al., 2023; Augenstein et al., 2023). The critical issue is their propensity to generate hallucinated sentences and propagate factually inaccurate pieces of information without reference (Ji et al., 2023; Rawte et al., 2023). These inaccuracies diminish the models\u2019 reliability and erode users\u2019 trust, a vital component in their widespread adoption. Commercial LLM-based search systems, including Bing Search-powered GPT 4 (Mehdi, 2024) and Perplexity.ai (Roose, 2024), are still not capable enough of resolving the issue of citation generation to confirm the scientific feasibility of either a generated sentence(s) or given sentence(s) from the scientific literature. For instance, Figure 1 shows how proprietary LLMs respond to the zero-shot indirect query. It is evident from the figure that while general-purpose LLMs like GPT3.5 and GPT-4 \u2018pass\u2019 the query, task-specific LLM Perplexity does generate relevant citations but still shows hallucination. Consider the following arXiv:2405.02228v2 [cs.CL] 9 May 2024 \fFigure 1: An illustration and motivating example for investigating LLMs for automatic citation generation task. Perplexity.ai, which is an LLM-based search engine, yields a citation that doesn\u2019t exist [1], an incorrect one [3], and a correct citation [2]. Advance RAG (defined in this research) improved context understanding and citation generation quality. Time: Feb. 05, 2024. three real world examples of this research: Citation Generation in Research Articles and News Reports: LLMs can generate highly persuasive and realistic content, especially in writing research articles or news reports, making it challenging for users to distinguish between genuine and fabricated information Nakano et al. (2021); Menick et al. (2022); Kumarage and Liu (2023). Citation Generation in Reports for Organizational Cybersecurity: In cybersecurity, where decisions often need to be made quickly and are based on the data provided, the accuracy and reliability of information are paramount (Divakaran and Peddinti, 2024). Inaccurate citations can lead to misinformation and potentially severe consequences in decision-making processes. LLMs can automate the citation generation process but need to be carefully designed for organization specific cybersecurity. Citation Generation in Reports for Legal: In a significant event, an attorney tried employing ChatGPT for legal analysis during a trial (see subsection A.1)(Bohannon, 2023). While ChatGPT generated information, it failed to capture the nuanced complexities and critical legal precedents needed for the case. This underscores the importance of confirming and sourcing accurate legal citations and precedents relevant to the case. We contribute by addressing these challenges with the following: (A) Introduce REASONS, a dataset created by extracting related works from IEEE articles spanning 12 scientific domains from 2017 to 2023. 
(B) We employ a new RAG training regime to develop Advance RAG. Advance RAG and Na\u00efve RAG examine the factual integrity of the information retrieved by dense retrievers and its presentation as citations by LLMs. (C) We evaluate both proprietary and public LLMs and their RAG counterparts (10 models) to assess their contextual awareness using metrics like Pass Percentage (PP) and Hallucination rate (HR). Additionally, we have measured the quality of citation generation using F-1 and BLEU scores. (D) We conduct an adversarial examination to provide a clear assessment of context awareness regarding citation generation in LLMs. Findings:(I) Perplexity, faces a major challenge when dealing with indirect and direct query on the REASONS dataset (Figure 2 Figure 5, and in Appendix A Table 6 Table 9).(II) Citation generation is enhanced uniformly across public and proprietary LLMs when metadata like abstract and title are considered with indirect query (Figure 3 and Figure 5, along with Table 7 and Table 9). (III) Advance RAG with Mistral LLM outperforms other competitive proprietary and public LLMs. This performance is realized by a reduction in the HR and increments in F-1 and BLEU scores (Figure 3 and Figure 5 (last two bars) and Table 7 and Table 9 (last two columns)). (IV) For domains such as Quantum Computing and Biomolecules that are heavy in mathematics and numerals, there was a substantial decline in citation generation quality and an increase in HR. Adversarial examination strengthens our understanding that despite being exorbitantly large, LLMs lack context awareness (Table 2). (V) Advance RAG did provide convincing evidence of context understanding (Table 2). Further improvements in RAG-based LLMs are desirable, and utilizing REASONS dataset can provide valuable insights into context understanding and provenance in tasks such as hypothesis generation. 2 Background Early Techniques in Citation Recommendation: The practice of citing sources is a cornerstone of academic and professional writing, serving as the bedrock for reliability, and truthfulness in scholarly work (Cronin, 1981). The evolution of citation recommendation systems mirrors the broader advancements in computational linguistics and nat\fural language processing (NLP) (Bai et al., 2019; Ali et al., 2021). Initial methods in citation recommendation focused on basic techniques such as text feature-based systems (Strohman et al., 2007), simple keyword matching, and basic statistical methods (Bethard and Jurafsky, 2010). Context-aware citation recommendation systems supplemented these methods (He et al., 2010; Ebesu and Fang, 2017; Jeong et al., 2020a; Huang et al., 2021). However, their inability to grasp deeper textual contexts limited their effectiveness. Machine learning in Citation Recommendation The incorporation of machine learning into citation recommendation systems represents an initial step toward automating the citation process, which is typically regarded as manual and laborintensive(Agarwal et al., 2005; K\u00fc\u00e7\u00fcktun\u00e7 et al., 2012). These systems began to exhibit an improved understanding of the text, although they still lacked a nuanced grasp of complex contexts (Tran et al., 2015). The application of neural networks revolutionized citation recommendation. NLP algorithms, capable of parsing complex sentence structures, started identifying relevant themes for contextually appropriate citation recommendations (Zarrinkalam and Kahani, 2013; Beel et al., 2016; Iqbal et al., 2020). 
Concurrently, graph-based models, visualizing literature as interconnected networks, enhanced citation recommendations by considering content similarity and citation patterns (Ali et al., 2020; Chakraborty et al., 2015). With deep learning, citation recommendation systems began incorporating semantic analysis, employing models like word embeddings and neural networks for a more nuanced understanding (Yang et al., 2018; Bhagavatula et al., 2018; Vajdecka et al., 2023). Adapted from commercial use, collaborative filtering also emerged, recommending citations based on similar citation behaviors (Wang et al., 2020). Large Language Models in Citation Generation: The advent of LLMs like GPT-3 and its successors has further transformed NLP. Initial language model systems such as those based on BERT have significantly improved citation recommendation by converting unstructured text into meaningful vectors (Jeong et al., 2020b; Devlin et al., 2018; Bhowmick et al., 2021). Recent studies have focused on evaluating the fidelity of generated text to its sources (Ji et al., 2023). (Rashkin et al., 2023) introduced the \u201cattributable to identified sources\u201d (AIS) score, while (Bohnet et al., 2022) and others (Honovich et al., 2022; Yue et al., 2023) have focused on automating AIS. Concurrent work by (Liu et al., 2023) explored human evaluation of commercial generative search engines such as Bing. Chat, NeevaAI, Perplexity.ai, and YouChat. Despite these advancements, LLMs in citation recommendation still struggle with generating accurate information and providing references, as shown in studies by (Ji et al., 2023; Zheng et al., 2023). We conduct empirical and investigative research on why public and proprietary LLMs, including the powerful GPT-4 (which has not been examined yet), are prone to incorrect citation generation. Further, we provide means for improving the citation generation in public LLMs through a customized design using RAG. This limitation necessitates an approach closely aligning with RAG. RAG compels LLMs to provide citations alongside the generated text. The concept of retrieval-augmented LLMs has gained traction in recent years following (Guu et al., 2020; Borgeaud et al., 2022; Izacard et al., 2022; Khandelwal et al., 2019; Schick et al., 2023; Jiang et al., 2023b; Yao et al., 2022; Gao et al., 2023). We evaluate public and proprietary LLMs and their RAG counterparts on citation generation using REASONS, a meticulously curated dataset from arXiv spanning key domains in computer science and related fields. This allows us to assess the LLM\u2019s ability to identify a given sentence\u2019s source accurately. Domain Paper Count IEEE Papers Citation Count CV 5488 1028 3437 Robotics 3656 292 776 Graphics 1796 384 1417 IR 1741 564 1654 AI 1697 531 2021 NLP 1526 293 1092 Cryptography 1084 371 1106 NNC 892 111 326 HCI 761 112 229 Databases 723 115 182 QC 421 126 456 Biomolecules 119 17 27 Total 19904 3944 12723 Table 1: Our benchmark dataset, REASONS, includes papers and sentences from 12 domains. It primarily features ten domains in computer science and 2 in biology. Full forms of domain acronyms are provided in subsection A.5. \f3 Problem Setup Scope of REASONS: The dataset comprises sentences gathered from the related work sections of articles in computer science and biology available on arXiv (arX). Summary is provided in Table 1. It should be noted that GPT-3.5 or its successors may have processed all the papers published on arXiv from 2017 to 2021 while training. 
To ensure our dataset is unbiased, we include papers published in 2022 and 2023 that test the memory and understanding of LLMs. Exclusions were made for mathematics, statistics, and physics due to the abundance of equations in the related work section, and the crawling method theoremKb1 lacked the required versatility. We chose to focus on IEEE papers as they are represented across all 12 domains we considered. Each sentence in the related work section encapsulates the author\u2019s thought process in citing related works: (A) Every sentence captures the author\u2019s interpretation and emphasis on original methodology, critique of prior work, corrections to previous research, or acknowledgment of pioneers. This encompasses summarizing these aspects briefly and concisely. (B) The cited work in the related work section is either incidental or important to current work (Valenzuela et al., 2015). REASONS is inspired by previously constructed s2ORC and UnarXive datasets containing academic papers (see Table 4 in Appendix A); however, we diverge on the following points: (A) We provide sentence-level annotation of citations on major computational domains on arXiv. (B) Each sentence is accompanied by its metadata, which includes the paper title, abstract, and author names of the paper it cites. It also contains the title of the paper from which it was taken. (C) The dataset structure allows for an easy examination of LLMs using indirect and direct queries. Crawling Process: The web crawler employs the Oxylabs2 SERP Scraper API as its methodology, enabling real-time data extraction from major search engines. This API offers a proxy chaining platform for efficient data extraction. The dataset is meticulously organized in JSON format with a detailed outline (see \u201cJSON Structure\u201d). A complete GitHub repository is provided, containing the dataset and the code for reproducibility (see details in subsection A.3). We plan to keep updating the repository with more articles and metadata. The 1https://github.com/PierreSenellart/theoremkb 2https://oxylabs.io/ associated costs are provided in (subsection A.2). JSON Structure {\"Computer Vision\": { \"http://arXiv.org/abs/2012.05435v2\": { \"Paper Title\": \"Optimization-Inspired..\", \"Sentences\": [ {\"Sentence ID\": 32, \"Sentence\": \"... For GM, ... \", \"Citation Text\": \"C. Ledig,...\", \"Citation\": { \"Citation Paper ID\": \"arXiv:1609.04802\", \"Citation Paper Title\": \"Title:Photo..\", \"Citation Paper Abstract\": \"Abstract:.\", \"Citation Paper Authors\": \"Authors:...\" }}]}}} 3.1 Problem Formulation We define two tasks for LLMs over the REASONS dataset R: (a) Direct Querying and (b) Indirect Querying. For experimentation, we segment R into RS and RM. RS represents sentences and paper titles for which references are to be generated with or without the support from metadata RM. Direct Querying Task: Given a title ti \u2208RS, the LLM should generate the author list. For the task of direct querying with metadata, the LLM is given the following input: ti \u2208RS, the Advance RAG model retrieves top-40 chunks of information ai1, ..., ai40 \u2208RM, and generates the names. Indirect Querying Task: Given a sentence si \u2208RS, the LLM should generate a paper title in zero-shot setting. 
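To make the querying setup concrete, the sketch below turns one record of the "JSON Structure" above into the two query types; the template wording is abridged from the prompt boxes above, and the helper names are illustrative rather than part of the released code.

# Illustrative sketch: building direct and indirect queries from one REASONS record.
# Field names follow the "JSON Structure" excerpt above; helper names are made up.

def direct_query(citation):
    title = citation["Citation Paper Title"].removeprefix("Title:")
    return ('Who were the authors of the research paper "{}"? '
            'List only author names, formatted as <first name><last name>, separated by comma. '
            "If you don't know, write 'pass'.").format(title)

def indirect_query(paper_title, sentence):
    return ('I have taken a sentence from the research paper titled "{}", give me the '
            'research paper that this sentence is citing. If you cannot come up with the '
            "paper titles, write 'pass'. Sentence: \"{}\"").format(paper_title, sentence["Sentence"])

def iter_queries(reasons):
    # reasons: {domain: {arxiv_url: {"Paper Title": ..., "Sentences": [...]}}}
    for papers in reasons.values():
        for paper in papers.values():
            for sent in paper["Sentences"]:
                yield direct_query(sent["Citation"]), indirect_query(paper["Paper Title"], sent)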
For the task of indirect querying with metadata called Sequential Indirect and Direct Prompting (SID Prompting), the LLM is given the following input: si \u2208RS and ground truth abstract abss \u2208RM as well as the authors aus \u2208RM, and the model is asked to generate the citation paper title. Examples of direct and indirect queries are: Direct Prompt Prompt: Who were the authors of the research paper \"Research Paper Title\"? Instruction: List only author names, formatted as < firstname >< lastname >, separated by comma. Do not mention the paper in the title, also, if you don\u2019t know, write \u2019pass\u2019. Response: Author Names. \fIndirect Prompt Prompt: I have taken a sentence from the research paper titled \u201cResearch Paper Title\u201d, give me the research paper that this sentence is citing. If you cannot come up with the paper titles, write \u2018pass.\u2019 Don\u2019t write anything else. Instruction: Sentence \"uses fractional max-pooling to randomly specify non-integer ratios between the spatial dimension sizes of the input and the output to pooling layers.\" Response: Citation Paper Title. Implementation of Direct and Indirect Querying: Direct querying is executed using zero-shot prompting for scenarios without metadata and chain-of-thoughts prompting for metadata situations. We modify the chain-of-thoughts prompting with SID Prompting. It begins with an indirect query. Following an incorrect response or a \u2018pass,\u2019 more details about the cited paper are given (i.e., direct query), including its abstract and authors\u2019 names. This is an iterative approach to generate the correct citation. Following are the two examples of these prompting strategies: Direct Query with Metadata Prompting Prompt: Who were the authors of the research paper \u201cResearch Paper Title\"? Let me give you some more context by providing the abstract of the research paper. Abstract:\u2019....\u2019. Instruction: List only author names, formatted as <first name><last name>, separated by comma. Do not mention the paper in the title. Also, if you don\u2019t know, write \u2018pass.\u2019 Response: Author Names. SID Prompting Prompt: I have taken a sentence from the research paper titled \"Research Paper Title.\" give me the title of the possible research paper that this sentence is citing. If you cannot come up with the paper titles, write \u2019pass\u2019. Don\u2019t write anything else. Instruction: Sentence:\"......\". Let me give you some more context by providing the authors and the abstract of the paper the sentence is citing. Authors:\"......\", Abstract:\".......\" Response: Citation Paper Title. 3.2 Models and Evaluation Our research has focused on a diverse array of LLMs, carefully chosen to provide a broad perspective on the capabilities and limitations inherent in current language model technologies. Proprietary Models: Our selection of proprietary models includes those from OpenAI and Preplexity.ai. While OpenAI is known for its cutting-edge NLP models, driving significant advancements in the field, Preplexity.ai focuses on models with unique functionalities, such as recommending citations and utilizing natural language prediction for innovative search experiences. Public Models: We choose LLAMA 2 (Touvron et al., 2023) and Mistral (Jiang et al., 2023a) as the two publicly available LLMs that have demonstrated competitive performance compared to proprietary LLMs. 
We evaluate their effectiveness on the REASONS dataset under the standard state and retrieval-augmentation conditions. This analysis goes beyond simply comparing proprietary and public models, extending to evaluating models based on their size, particularly those with 7B parameters. 3.3 Evaluation Metrics Our evaluation uses four key metrics: 1) The BLEU Score assesses the structural alignment through clipped n-gram matching. 2) The F-1 Score evaluates the balance between precision and recall, reflecting the models\u2019 effectiveness in capturing key information. 3) Hallucination rate (HR), which we estimate by averaging over incorrect and partially correct generated citations. HR = 1 QD P I[\u02c6 c \u0338= c] + 1 |Uw| P|Uw| w=1 I[\u02c6 cw \u0338= cw], where QD: queries within a domain, and |Uw|: total number of unique words in generated citation (\u02c6 c) and true citation (c). 4) Pass Percentage (PP) measures the tendency of an LLM to either respond or abstain from giving a response. It is calculated as follows: 1 QD P I[\u02c6 c = Pass]. It is crucial to emphasize that PP serves as a safeguard to prevent LLMs from generating hallucinatory responses but also reduces engagement. Additionally, even with a high PP, the HR can be high. This implies that the model struggles to discern whether it offers correct or incorrect citations in the remaining instances. 3.4 Retrieval Augmented Generation (RAG) RAG combines a retriever and a generator to create better answers. RAG can access external knowledge, unlike methods that feed the model prompts. This lets it craft more accurate, relevant, and informative responses than models that rely solely on what they were pre-trained. We investigate RAG\u2019s ability to improve LLMs\u2019 accuracy. Ideally, RAG would help LLMs avoid giving wrong answers (low PP) and making things up (HR). We also investigate whether RAG works consistently with direct and indirect questions across different scientific fields (12 domains). We experiment with two forms of RAG architecture: \f(a) Na\u00efve RAG and (b) Advance RAG. Both architectures leverage the same bi-encoder-based retriever architecture (Karpukhin et al., 2020). Given a corpus of documents RM and a sentence s \u2208RS, the document encoder maps d \u2208RM to an embedding E\u03b8(c) and the query encoder maps s to an embedding E\u03b8(s). The top-k relevant documents for s are retrieved based on the sentence-document embedding similarity, which is often computed via dot product: z(s, d) = exp(E\u03b8(s)T E\u03b8(d)). We start with a bi-encoder retriever using an embedding model from OpenAI (subsection A.4). Other ways to set up a bi-encoder retriever, such as DRAGON+ (Lin et al., 2023), are possible. However, those are more useful when involving large-scale data augmentation. The retrieved documents are ranked in two ways, which separates Na\u00efve RAG from Advance RAG. Under the Na\u00efve RAG, we use BM25 relevance scoring to rank the documents, whereas, in Advance RAG, we fine-tune a cross-encoder on REASONS document index RM to better align it with our task of citation generation with LLM. For the fine-tuning of the cross-encoder, we use localized contrastive loss (LCL) for two reasons: (a) In RM, we do not have labeled positive and negative documents, and (b) for a sentence s there is a possibility for more than one true positive documents (Pradeep et al., 2022). 
LCL is formally defined as follows: LLCLs := \u2212log exp(zs,{d+}) P d\u2208Gs exp(zs,d) LLCL := 1 |S| X s\u2208Rs,Gs\u2208Rs M LLCLs where Gs represents a set of documents for a sentence s, which consist of a set of relevant documents ({d+}) and n-1 non-relevant documents {d\u2212} sampled from Rs M using biencoder. The training of Advance RAG happens through the standard cross entropy loss: LCE(\u02c6 c|s, \u03d5) = Pb i=1 I(\u02c6 cw i = cw i ) \u00b7 log Pr(\u02c6 cw i |\u03d5) where, \u03d5 is parameter of the generator LLM and b is the minibatch fine-tuning in Advance RAG. \u02c6 ci represents ith citation generation, and I(\u02c6 cw i = cw i ) represents word level comparison with ground truth citation (direct query: author names; indirect query: paper titles). For the Na\u00efve and Advance RAG, we employ LLAMA-2 7B and Mistral 7B as competitive models against proprietary LLMs. 4 Results We conducted experiments encompassing four distinct prompting styles applied to twelve scientific domains. This extensive analysis involved 12,723 sentences, resulting in a substantial dataset rigorously evaluated using ten different models. This equates to 508920 instance assessments involving 4 (prompting styles) \u00d7 12,723 (sentences for all domains) \u00d7 10 (models). The total duration required to execute all experiments on the GPU is 238 days, 6 hours, and 59 minutes. For detailed information regarding the time spent on experiments across various domains, please refer to the appendix (see subsection A.6 and Table 5). Zero-Shot Indirect Prompting: In Figure 4, a majority of the models exhibited high HR. As expected for a huge model GPT-4-1106-preview (1 Trillion Parameters) shows a relatively lower HR of 67.73% and a higher PP of 89% averaged across 12 domains. Perplexity-7b-Chat showed an exceptionally high PP of 97.5%, which is surprising, as this LLM is designed specifically for citation generation. RAG Mistral was a competitive model with GPT-4 with a lower PP of 21% and HR of 72.49% in comparison to other LLMs. Analysis shows RAG Mistral is competitive because of the high variance in HR compared to GPT-4-1106-preview. Generation quality measured by F-1 and BLEU scores were predominantly low across the board, with GPT-4 (not the preview, G1) comparatively better scores. RAG Mistral and RAG LLAMA 2 rank second and third best respectively. SID Prompting In Figure 5, showed improvement across all the LLMs in citation generation over indirect queries. An average improvement of 21% was measured, with a reduction in variance. Even though some models like Perplexity-7b-Chat and LLAMA 2 still had high HR rates, the PP dropped significantly, especially for GPT-4-1106-preview. The results of this experiment indicate that SID prompting in LLMs can balance the trade-off between PP and HR, significantly enhancing generation quality with an (8%\u2191) increase in BLEU and a (13%\u2191) in F-1 (The Appendix B provides examples for visual inspection.). Zero-Shot Direct Prompting presents a very idealistic scenario where the LLMs have access to context through direct query. This leads to both lower PP and HR. The citation generation quality significantly improves from zero-shot in\fG1 G2 G3 P RMM RL L AL AM 0 50 100 Hallucination Rate G1 G2 G3 P RMM RL L AL AM 0 0.2 0.4 0.6 0.8 F-1 Score G1 G2 G3 P RMM RL L AL AM 0 0.2 0.4 0.6 0.8 BLEU Score G1 G2 G3 P RMM RL L AL AM 0 50 100 Pass Percentage Figure 2: Averaged Zero-Shot Direct Prompting results of different LLMs across all 12 domains. 
G1 shows notably lower HR and higher F-1 and BLEU scores, indicating superior performance in generating citations. In contrast, model P exhibits the highest HR and the lowest scores in F-1 and BLEU, suggesting challenges in generating accurate and contextually relevant citations. The RAG models (RM and RL) demonstrate varied results, with RM showing a better accuracy and coherence balance than RL. G1: gpt-4-1106-preview, G2: gpt-4, G3: gpt-3.5-turbo, P: pplx-7b-chat, RM: Na\u00efve RAG mistral-7b-instruct, M: mistral-7b-instruct, RL: Na\u00efve RAG llama-2-7b-chat, L: llama-2-7b-chat, AL: Advance RAG llama-2-7b-chat, AM: Advance RAG mistral-7b-instruct. For the purposes of clarity and saving space, the terms AL and AM are used in the figures to denote Advance RAG llama-2-7b-chat and Advance RAG mistral-7b-instruct, respectively. In the main text of the paper, these are referred to as AdvRAG(L) and AdvRAG(M). G1 G2 G3 P RMM RL L AL AM 0 50 100 Hallucination Rate G1 G2 G3 P RMM RL L AL AM 0 0.5 1 F-1 Score G1 G2 G3 P RMM RL L AL AM 0 0.5 1 BLEU Score G1 G2 G3 P RMM RL L AL AM 0 0.5 1 Pass Percentage Figure 3: Averaged Direct Prompting with Metadata results of different LLMs across all 12 domains. The plot indicates that models G1, G2, and G3 stand out with their low HR and impressive F-1 and BLEU scores, in contrast to other models that face challenges. All models except RM reach a 0% PP, suggesting that including metadata significantly enhances their contextual understanding. G1 G2 G3 P RM M RL L 0 50 100 Hallucination Rate G1 G2 G3 P RM M RL L 0 0.2 0.4 0.6 F-1 Score G1 G2 G3 P RM M RL L 0 0.2 0.4 0.6 BLEU Score G1 G2 G3 P RM M RL L 0 50 100 Pass Percentage Figure 4: Averaged Zero-Shot Indirect Prompting across 12 domains. This prompting method led to elevated HR among the models. There was also a notable variance in PP, with models G3, P, and L exhibiting higher scores. Both conditions indicate challenges in understanding context and generating accurate citations when using indirect prompts. G1 G2 G3 P RMM RL L AL AM 0 50 100 Hallucination Rate G1 G2 G3 P RMM RL L AL AM 0 0.2 0.4 0.6 0.8 F-1 Score G1 G2 G3 P RMM RL L AL AM 0 0.2 0.4 0.6 0.8 BLEU Score G1 G2 G3 P RMM RL L AL AM 0 50 100 Pass Percentage Figure 5: Averaged SID Prompting results of different LLMs across all 12 domains. Models G1, G2, and G3 exhibit relatively better outcomes with lower HR and higher F-1 and BLEU scores, suggesting more contextual understanding. Other models demonstrated high HR, indicating difficulties in accurate citation generation with SID Prompting. Notably, while models G1 and G3 have high PPs, indicating some difficulties with SID, their overall performance still reflects a more advanced level of language processing and contextual comprehension compared to the other models. direct and SID promptings, achieving high F-1 and BLEU scores (see Figure Figure 4). However, Perplexity-7b-Chat, oddly, had high PP and HR, suggesting a need for more research on such \fspecialized LLM search engines. We observed that Perplexity-7b-Chat expands its search queries and adds references to the broader content it finds. The issue is that the expanded versions drift too far in meaning from the original. In Direct Prompting with Metadata, when metadata such as abstracts and titles were used with indirect questions, all the LLMs got better at generating citations and had low HR and PP. 
This shows that having more information helps LLMs create more accurate and related citations, proving the importance of enough data for good language processing. Note that PP dropped to zero for almost all models when direct promoting includes metadata. All GPT LLMs achieved F-1 and BLEU scores close to 1.0 and showed more consistent results overall. Two main points from this experiment are: First, adding metadata to LLMs is effective for all of them, especially RAG models that integrate this augmentation in their learning process. Second, smaller models with advance RAG (Mistral and LLAMA-2) adjust better to metadata than GPT-4-Preview/4/3.5 (see Figure 3). Overall: Advance RAG Mistral 7b outperformed other competitive proprietary and public LLMs in all prompting styles. This superior performance was notably marked by reduced HR, suggesting this model is more adept at generating accurate and relevant responses when adding metadata. Furthermore, improvements in F-1 scores reinforce its reliability in retrieving information. Higher BLEU scores were observed, signifying that the language output of the model aligns closely with human-like text in terms of fluency & coherence. 5 Adversarial Examination The analysis of LLMs using the REASONS dataset highlights significant variability in their performance across different domains. While they perform moderately better in areas like AI and CV with lower HR and higher F-1/BLEU scores, they struggle in complex domains such as QC, Biomolecules, and Cryptography, likely due to limited training data and the complexity of these subjects. This variability in performance indicates that LLMs have varying degrees of contextual understanding, with a tendency to perform better in domains with more extensive training data and less complex structures (e.g., maths and numerics). Motivation and Setup: We conducted adversarial experiments across all models to better assess their contextual understanding. The core concept Group PP(%) BLEU F1 HR Changing Paper Title G1 96.23 0.6210 0.8470 17.99 G2 31.45 0.0524 0.2640 83.66 G3 68.55 0.0389 0.1828 87.35 RM 3.14 0.0796 0.1584 86.78 M 0.00 0.0003 0.0221 94.95 RL 5.03 0.0628 0.1448 87.56 L 0.00 0.0066 0.0254 98.30 AdvRAG(L) 0.00 0.1322 0.4763 85.72 AdvRAG(M) 0.00 0.1569 0.5839 75.41 Changing Paper Abstract G1 95.60 0.4595 0.6451 38.49 G2 32.70 0.0396 0.2186 86.22 G3 76.10 0.0034 0.1013 91.64 RM 7.55 0.0520 0.1216 89.44 M 0.00 0.0074 0.0161 90.20 RL 2.52 0.0445 0.1112 90.16 L 0.00 0.0017 0.0146 99.01 AdvRAG(L) 0.00 0.4101 0.5780 39.67 AdvRAG(M) 0.00 0.4904 0.6954 39.57 Table 2: Performance of various LLMs on adversarial set, designed by swapping titles and abstracts. Models G1, G2, and G3, possibly exposed to similar data during training, struggled with the adversarial sets, resulting in high HR and PP. Conversely, models like AdvRAG(L) and AdvRAG(M) showed better performance, suggesting that these models attempt to understand the context before generating the citations. behind these experiments was to provide the models with incorrect yet similar metadata about the sentences in the prompts. The aim was to discern whether the models generated citations based on the contextual grasp of the provided metadata or if the metadata had minimal influence on the citation generation process. These adversarial experiments comprised two types: 1) Providing inaccurate paper titles related to the sentences. 2) Providing incorrect paper abstracts associated with the sentences. Both experiments were conducted using the SID prompting. 
To facilitate these experiments, we curated a subsample of 200 sentences from the REASONS dataset spanning all the domains. We extracted each sentence\u2019s most similar paper title or abstract from this dataset and replaced the original metadata. For similarity calculation, we use the RatcliffObershelp metric, which is calculated as twice the length of the longest common substring plus recursively the number of matching characters in the non-matching regions on both sides of the longest common substring (Tang et al., 2023). According to this metric, for the following example title \u201cDiffusion models for counterfactual explanations,\u201d the best replacement is \u201cOctet: Object-aware models for counterfactual explanations (0.736)\u201d as opposed \fto \u201cAdversarial counterfactual visual explanations (0.638)\u201d. We considered a threshold of 0.70 effective in preparing the adversarial set. Findings: We found that incorrect paper titles and abstracts easily fool most LLMs if it is similar to accurate information. In Table 2, G1 is displayed at 17.99%, and its pairing with a high PP of 96.23% indicates a defensive mechanism. This means the LLMs are not very good at understanding the true meaning of what they are given. On such a small adversarial set, we expect LLMs like GPT-4-1106-preview and GPT-4 to perform exceedingly well because of their extensive knowledge; however, we observed counterintuitive results in Table 2, all models show the effect. We do see promising direction with AdvRAG(M) and AdvRAG(L); however, further investigation is required into how rich graphical metadata (e.g., knowledge graph) and graph-theoretic approaches to information retrieval can improve LLM effectiveness (He et al., 2024). 6 Conclusion We have developed a new resource called REASONS (REtrieval and Automated citationS Of scieNtific Sentences), a benchmark designed to assess the ability of LLMs to understand context and generate appropriate citations. This benchmark includes sentences from the related work sections of papers, along with citations and metadata across 12 scientific and computational fields. We evaluated proprietary and public LLMs\u2019 ability to correctly provide author names and paper titles under two conditions: direct and indirect citation. Surprisingly, none of the LLMs demonstrated the readiness to annotate draft reports in various professional settings, such as market analysis, misinformation prevention, defense strategy, and healthcare reporting. We observed a trade-off between PP and HR, where GPT-4 and GPT-3.5 achieved higher accuracy at the cost of a lower HR. In contrast, though smaller with only 7B parameters, the Advance RAG model showed reasonable efficiency. Unlike other models, in adversarial tests where abstracts or paper titles were swapped, Advance RAG unexpectedly outperformed GPT-4, suggesting it does capture context before generating citations. Future Work: Through reasoning and explanation, we plan to explore and mitigate the noted shortcomings in citation generation (trade-off between HR and PP, high variance in BLEU scores, sub-par scores on adversarial set). One approach is to employ the Toulmin model (Naveed et al., 2018)) within Advance RAG. We believe these improvements will improve the quality of citation generation and better equip the models to manage complex reasoning (e.g., hypothesis generation and verification (Tyagin and Safro, 2023)) challenges confidently. 
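The adversarial replacement step described earlier in this section (picking the most similar but incorrect title or abstract, with a 0.70 threshold) can be approximated with Python's standard library: difflib.SequenceMatcher.ratio() is based on the Ratcliff-Obershelp matching idea, so the sketch below reproduces the spirit of the selection, though its scores need not match the ones reported above exactly.

# Sketch: selecting the most similar *wrong* title (or abstract) as an adversarial
# replacement; difflib's ratio() follows the Ratcliff-Obershelp matching idea.
from difflib import SequenceMatcher

def most_similar_wrong(true_text, candidates, threshold=0.70):
    best, best_score = None, 0.0
    for cand in candidates:
        if cand == true_text:
            continue  # the replacement must be incorrect metadata
        score = SequenceMatcher(None, true_text, cand).ratio()
        if score > best_score:
            best, best_score = cand, score
    return best if best_score >= threshold else None

For example, for "Diffusion models for counterfactual explanations" this kind of scoring prefers "Octet: Object-aware models for counterfactual explanations" over "Adversarial counterfactual visual explanations", in line with the example quoted above.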
Limitations Several factors constrain our study on applying LLMs for citation generation. (a) Primarily, integrating high-parameter-size models (>13B; refer to Table 5 for computation time) with RAG is not feasible, limiting our ability to leverage more complex models. (b) Additionally, the high computational resources required for such models are often inaccessible in academic settings. (c) One constraint in our study was the dataset creation, where we confined ourselves to predominantly IEEE format papers, particularly with domains with a high count of submissions. (d) Another significant limitation is the current inability of LLMs to effectively process and interpret mathematical expressions, a crucial aspect in many academic papers. (e) Due to the latest version of Google API (time stamp: December 04, 2023) lacking the citation generation feature, we have limited our experiments to OpenAI only. (f) While cross-encoders can be more powerful in understanding text relationships, they tend to be more computationally intensive. This is because they need to process every possible pair of inputs together, which can be a significant workload, especially in cases where there are many potential pairs to consider (like in large-scale retrieval tasks in our REASONS dataset). These constraints highlight the need for advancements in model adaptability, computational resource accessibility, dataset diversity, and specialized content processing for more robust and wide-ranging applications. Ethical Considerations We followed the Oxylabs Acceptable Use Policy3 and worked alongside some Oxylabs developers to ensure we respected the terms of services on arXiv. arXiv\u2019s terms of service place restrictions on automated crawling of their site for articles marked by \u201carxiv.org perpetual, non-exclusive license and CC BY-NC-ND\u201d. We paid attention to the following key ethical issues: (a) Privacy and Consent: The content on arXiv is publicly available, but the authors who upload their work there may not have 3https://oxylabs.io/legal/ oxylabs-acceptable-use-policy \fconsented to having their preprints crawled and used for other purposes. It\u2019s important to respect the privacy and intellectual property rights of the researchers who contribute to arXiv. We only crawled articles marked as CC Zero, CC BY, and CC BYSA. (b) Potential misuse: We prepared REASONS only to test the citation generation capability of LLMs for subsequent future downstream applications, such as annotating draft analytic reports. Our focus on HR and PP for citation generation and its quality using BLEU and F-1 shows that the data scraped is not for malicious purposes, such as fine-tuning LLMs to generate misinformation or infringe on copyrights. (c) Transparency and Accountability: We have been mindful of our crawling process, and to the best of our knowledge, we have enumerated sufficient details regarding the process. This would help build trust regarding reproducibility, extend REASONS, and ensure that the crawling process was not abused. (d) Author Identity and Contact: No authors of the crawled papers were contacted through their provided information in the publicly available arXiv papers. This user study was duly approved by the authors\u2019 organization\u2019s Institutional Review Board (IRB)."
18
+ }
title_10K/test_title_short_2405.02235v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02235v1",
3
+ "title": "Learning Optimal Deterministic Policies with Stochastic Policy Gradients",
4
+ "abstract": "Policy gradient (PG) methods are successful approaches to deal with\ncontinuous reinforcement learning (RL) problems. They learn stochastic\nparametric (hyper)policies by either exploring in the space of actions or in\nthe space of parameters. Stochastic controllers, however, are often undesirable\nfrom a practical perspective because of their lack of robustness, safety, and\ntraceability. In common practice, stochastic (hyper)policies are learned only\nto deploy their deterministic version. In this paper, we make a step towards\nthe theoretical understanding of this practice. After introducing a novel\nframework for modeling this scenario, we study the global convergence to the\nbest deterministic policy, under (weak) gradient domination assumptions. Then,\nwe illustrate how to tune the exploration level used for learning to optimize\nthe trade-off between the sample complexity and the performance of the deployed\ndeterministic policy. Finally, we quantitatively compare action-based and\nparameter-based exploration, giving a formal guise to intuitive results.",
5
+ "authors": "Alessandro Montenegro, Marco Mussi, Alberto Maria Metelli, Matteo Papini",
6
+ "published": "2024-05-03",
7
+ "updated": "2024-05-03",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Model AND Based AND Reinforcement AND Learning",
14
+ "gt": "Learning Optimal Deterministic Policies with Stochastic Policy Gradients",
15
+ "main_content": "Introduction Within reinforcement learning (RL, Sutton & Barto, 2018) approaches, policy gradients (PGs, Deisenroth et al., 2013) algorithms have proved very effective in dealing with realworld control problems. Their advantages include the applicability to continuous state and action spaces (Peters & Schaal, 2006), resilience to sensor and actuator noise (Gravell et al., 2020), robustness to partial observability (Azizzadenesheli et al., 2018), and the possibility of incorporating prior knowledge in the policy design phase (Ghavamzadeh & Engel, 2006), improving explainability (Likmeta et al., 2020). PG algorithms search directly in a space of parametric policies for the one that maximizes a performance 1Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133, Milan, Italy. Correspondence to: Alessandro Montenegro <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). function. Nonetheless, as always in RL, the exploration problem has to be addressed, and practical methods involve injecting noise in the actions or in the parameters. This limits the application of PG methods in many real-world scenarios, such as autonomous driving, industrial plants, and robotic controllers. This is because stochastic policies typically do not meet the reliability, safety, and traceability standards of this kind of applications. The problem of learning deterministic policies has been explicitly addressed in the PG literature by Silver et al. (2014) with their deterministic policy gradient, which spawned very successful deep RL algorithms (Lillicrap et al., 2016; Fujimoto et al., 2018). This approach, however, is affected by several drawbacks, mostly due to its inherent off-policy nature. First, this makes DPG hard to analyze from a theoretical perspective: local convergence guarantees have been established only recently, and only under assumptions that are very demanding for deterministic policies (Xiong et al., 2022). Furthermore, its practical versions are known to be very susceptible hyperparameter tuning. We study here a simpler and fairly common approach: that of learning stochastic policies with PG algorithms, then deploying the corresponding deterministic version, \u201cswitching off\u201d the noise.1 Intuitively, the amount of exploration (e.g., the variance of a Gaussian policy) should be selected wisely. Indeed, the smaller the exploration level, the closer the optimized objective is to that of a deterministic policy. At the same time, with a small exploration, learning can severely slow down and get stuck on bad local optima. Policy gradient methods can be partitioned based on the space on which the exploration is carried out, distinguishing between: action-based (AB) and parameter-based (PB, Sehnke et al., 2010) exploration. The first, of which REINFORCE (Williams, 1992) and GPOMDP (Baxter & Bartlett, 2001; Sutton et al., 1999) are the progenitor algorithms, performs exploration in the action space, with a stochastic (e.g., Gaussian) policy. On the other hand, PB exploration, introduced by Parameter-Exploring Policy Gradients (PGPE, Sehnke et al., 2010), implements the exploration at the level of policy parameters by means of a stochastic hyperpolicy. The latter performs perturbations of the parameters of a (typ1This can be observed in several libraries (e.g., Raffin et al., 2021b) and benchmarks (e.g., Duan et al., 2016). 
1 arXiv:2405.02235v1 [cs.LG] 3 May 2024 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients ically deterministic) action policy. Of course, this dualism only considers the simplest form of noise-based, undirected exploration. Efficient exploration in large-scale MDPs is a very active area of research, with a large gap between theory and practice (Ghavamzadeh et al., 2020) placing the matter well beyond the scope of this paper. Also, we consider noise magnitudes that are fixed during the learning process, as the common practice of learning the exploration parameters themselves breaks all known sample complexity guarantees of vanilla PG (cf. Appendix C). To this day, a large effort has been put into providing convergence guarantees and sample complexity analyses for AB exploration algorithms (e.g., Papini et al., 2018; Yuan et al., 2022; Fatkhullin et al., 2023a), while the theoretical analysis of PB exploration has been taking a back seat since (Zhao et al., 2011). We are not aware of any global convergence results for parameter-based PGs. Furthermore, even for AB exploration, current studies focus on the convergence to the best stochastic policy. Original Contributions. In this paper, we make a step towards the theoretical understanding of the practice of deploying a deterministic policy learned with PG methods: \u2022 We introduce a framework for modeling the practice of deploying a deterministic policy, by formalizing the notion of white noise-based exploration, allowing for a unified treatment of both AB and PB exploration. \u2022 We study the convergence to the best deterministic policy for both AB and PB exploration. For this reason, we focus on the global convergence, rather than on the first-order stationary point (FOSP) convergence, and we leverage on commonly used (weak) gradient domination assumptions. \u2022 We quantitatively show how the exploration level (i.e., noise) generates a trade-off between the sample complexity and the performance of the deployed deterministic policy. Then, we illustrate how it can be tuned to optimize such a trade-off, delivering sample complexity guarantees. In light of these results, we compare the advantages and disadvantages of AB and PB exploration in terms of samplecomplexity and requested assumptions, giving a formal guise to intuitive results. We also elaborate on how the assumptions used in the convergence analysis can be reconnected to basic characteristics of the MDP and the policy classes. We conclude with a numerical validation to empirically illustrate the discussed trade-offs. The proofs of the results presented in the main paper are reported in Appendix D. The related works are discussed in Appendix B. 2. Preliminaries Notation. For a measurable set X, we denote with \u2206pXq the set of probability measures over X. For P P\u2206pXq, we denote with p its density function. With little abuse of notation, we will interchangeably use x\u201eP or x\u201ep to denote that random variable x is sampled from the P. For nPN, we denote by JnK:\u201ct1, ..., nu. Lipschitz Continuous and Smooth Functions. A function f :X \u010eRd \u00d1R is L-Lipschitz continuous (L-LC) if |fpxq\u00b4fpx1q|\u010fL}x\u00b4x1}2 for every x,x1 PX. f is L2Lipschitz smooth (L2-LS) if it is continuously differentiable and its gradient \u2207xf is L2-LC, i.e., }\u2207xfpxq\u00b4 \u2207xfpx1q}2 \u010fL2}x\u00b4x1}2 for every x,x1 PX. Markov Decision Processes. 
A Markov Decision Process (MDP, Puterman, 1990) is represented by M:\u201c pS,A,p,r,\u03c10,\u03b3q, where S \u010eRdS and A\u010eRdA are the measurable state and action spaces, p:S \u02c6A\u00dd \u00d1\u2206pSq is the transition model, where pps1|s,aq specifies the probability density of landing in state s1 PS by playing action aPA in state sPS, r:S \u02c6A\u00dd \u00d1r\u00b4Rmax,Rmaxs is the reward function, where rps,aq specifies the reward the agent gets by playing action a in state s, \u03c10 P\u2206pSq is the initial-state distribution, and \u03b3 Pr0,1s is the discount factor. A trajectory \u03c4 \u201cps\u03c4,0,a\u03c4,0,...,s\u03c4,T \u00b41,a\u03c4,T \u00b41q of length T PNYt`8u is a sequence of T state-action pairs. The discounted return of a trajectory \u03c4 is Rp\u03c4q:\u201c\u0159T \u00b41 t\u201c0 \u03b3trps\u03c4,t,a\u03c4,tq. Deterministic Parametric Policies. We consider a parametric deterministic policy \u00b5\u03b8 :S \u00d1A, where \u03b8P\u0398\u010eRd\u0398 is the parameter vector belonging to the parameter space \u0398. The performance of \u00b5\u03b8 is assessed via the expected return JD :\u0398\u00d1R, defined as: JDp\u03b8q:\u201c E \u03c4\u201epDp\u00a8|\u03b8qrRp\u03c4qs, (1) where pDp\u03c4;\u03b8q:\u201c\u03c10ps\u03c4,0q\u015bT \u00b41 t\u201c0 pps\u03c4,t`1|s\u03c4,t,\u00b5\u03b8ps\u03c4,tqq is the density of trajectory \u03c4 induced by policy \u00b5\u03b8.2 The agent\u2019s goal consists of finding an optimal parameter \u03b8\u02da D P argmax\u03b8P\u0398 JDp\u03b8q and we denote J\u02da D :\u201cJDp\u03b8\u02da Dq. Action-Based (AB) Exploration. In AB exploration, we consider a parametric stochastic policy \u03c0\u03c1 :S \u00d1\u2206pAq, where \u03c1PP is the parameter vector belonging to the parameter space P \u010eRdP. The policy is used to sample actions at \u201e\u03c0\u03c1p\u00a8|stq to be played in state st for every step t of interaction. The performance of \u03c0\u03c1 is assessed via the expected return JA :P \u00d1R, defined as: JAp\u03c1q:\u201c E \u03c4\u201epAp\u00a8|\u03c1qrRp\u03c4qs, where (2) pAp\u03c4;\u03c1q:\u201c\u03c10ps\u03c4,0q\u015bT \u00b41 t\u201c0 \u03c0\u03c1pa\u03c4,t|s\u03c4,tqpps\u03c4,t`1|s\u03c4,t,a\u03c4,tq is the density of trajectory \u03c4 induced by policy \u03c0\u03c1.2 In AB exploration, we aim at learning \u03c1\u02da A Pargmax\u03c1PP JAp\u03c1q and we denote JA\u02da :\u201cJAp\u03c1\u02da Aq. If JAp\u03c1q is differentiable w.r.t. \u03c1, PG methods (Peters & Schaal, 2008) update the 2For both JD (resp. JA, JP) and pD (resp. pA, pP), we use the D (resp. A, P) subscript to denote that the dependence on \u03b8 (resp. \u03c1) is through a Deterministic policy (resp. Action-based exploration policy, Parameter-based exploration hyperpolicy). 2 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients parameter \u03c1 via gradient ascent: \u03c1t`1 \u00d0 \u00dd\u03c1t `\u03b6t p \u2207\u03c1JAp\u03c1tq, where \u03b6t \u01050 is the step size and p \u2207\u03c1JAp\u03c1q is an estimator of \u2207\u03c1JAp\u03c1q. In particular, the GPOMDP estimator is:3 p \u2207\u03c1JAp\u03c1q:\u201c 1 N N \u00ff i\u201c1 T \u00b41 \u00ff t\u201c0 \u02dc t \u00ff k\u201c0 \u2207\u03c1log\u03c0\u03c1pa\u03c4i,k|s\u03c4i,kq \u00b8 \u03b3trps\u03c4i,t,a\u03c4i,tq, where N is the number of independent trajectories t\u03c4iuN i\u201c1 collected with policy \u03c0\u03c1 (\u03c4i \u201epAp\u00a8;\u03c1q), called batch size. Parameter-Based (PB) Exploration. 
In PB exploration, we use a parametric stochastic hyperpolicy \u03bd\u03c1 \u010e\u2206p\u0398q, where \u03c1PRdP is the parameter vector. The hyperpolicy is used to sample parameters \u03b8\u201e\u03bd\u03c1 to be plugged in the deterministic policy \u00b5\u03b8 at the beginning of every trajectory. The performance index of \u03bd\u03c1 is JP :Rd\u03c1 \u00dd \u00d1R, that is the expectation over \u03b8 of JDp\u03b8q defined as:2 JPp\u03c1q:\u201c E \u03b8\u201e\u03bd\u03c1 rJDp\u03b8qs. PB exploration aims at learning \u03c1\u02da P Pargmax\u03c1PP JPp\u03c1q and we denote JP\u02da :\u201cJPp\u03c1\u02da Pq. If JDp\u03c1q is differentiable w.r.t. \u03c1, PGPE (Sehnke et al., 2010) updates the hyperparameter \u03c1 via gradient accent: \u03c1t`1 \u00d0 \u00dd\u03c1t `\u03b6t p \u2207\u03c1JPp\u03c1tq. In particular, PGPE uses an estimator of \u2207\u03c1JPp\u03c1q defined as: p \u2207\u03c1JPp\u03c1q\u201c 1 N N \u00ff i\u201c1 \u2207\u03c1 log\u03bd\u03c1p\u03b8iqRp\u03c4iq, where N is the number of independent parameterstrajectories pairs tp\u03b8i,\u03c4iquN i\u201c1, collected with hyperpolicy \u03bd\u03c1 (\u03b8i \u201e\u03bd\u03c1 and \u03c4i \u201epDp\u00a8;\u03b8iq), called batch size. 3. White-Noise Exploration We formalize a class of stochastic (hyper)policies widely employed in the practice of AB and PB exploration, namely white noise-based (hyper)policies. These policies \u03c0\u03b8p\u00a8|sq (resp. hyperpolicies \u03bd\u03b8) are obtained by adding a white noise \u03f5 to the deterministic action a\u201c\u00b5\u03b8psq (resp. to the parameter \u03b8) independent of the state s (resp. parameter \u03b8). Definition 3.1 (White Noise). Let dPN and \u03c3\u01050. A probability distribution \u03a6d P\u2206pRdq is a white-noise if: E \u03f5\u201e\u03a6dr\u03f5s\u201c0d, E \u03f5\u201e\u03a6dr}\u03f5}2 2s\u010fd\u03c32. (3) This definition complies with the zero-mean Gaussian distribution \u03f5\u201eNp0d,\u03a3q, where E\u03f5\u201eN p0d,\u03a3qr}\u03f5}2 2s\u201ctrp\u03a3q\u010f d\u03bbmaxp\u03a3q. In particular, for an isotropic Gaussian \u03a3\u201c \u03c32Id, we have that trp\u03a3q\u201cd\u03c32. We now formalize the notion of white noise-based (hyper)policy. Definition 3.2 (White noise-based policies). Let \u03b8P\u0398 and \u00b5\u03b8 :S \u00d1A be a parametric deterministic policy and let \u03a6dA be a white noise (Definition 3.1). A white noise-based pol3We limit our analysis to the GPOMDP estimator (Baxter & Bartlett, 2001), neglecting the REINFORCE (Williams, 1992) since it is known that the latter suffers from larger variance. icy \u03c0\u03b8 :S \u00d1\u2206pAq is such that, for every state sPS, action a\u201e\u03c0\u03b8p\u00a8|sq satisfies a\u201c\u00b5\u03b8psq`\u03f5 where \u03f5\u201e\u03a6dA independently at every step. This definition considers stochastic policies \u03c0\u03b8p\u00a8|sq that are obtained by adding noise \u03f5 fulfilling Definition 3.1, sampled independently at every step, to the action \u00b5\u03b8psq prescribed by the deterministic policy (i.e., AB exploration), resulting in playing action \u00b5\u03b8psq`\u03f5. An analogous definition can be formulated for hyperpolicies. Definition 3.3 (White noise-based hyperpolicies). Let \u03b8P\u0398 and \u00b5\u03b8 :S \u00d1A be a parametric deterministic policy and let \u03a6d\u0398 be a white-noise (Definition 3.1). 
A white noisebased hyperpolicy \u03bd\u03b8 P\u2206p\u0398q is such that, for every parameter \u03b8P\u0398, parameter \u03b81 \u201e\u03bd\u03b8 satisfies \u03b81 \u201c\u03b8`\u03f5 where \u03f5\u201e\u03a6d\u0398 independently in every trajectory. This definition considers stochastic hyperpolicies \u03bd\u03b8 obtained by adding noise \u03f5 fulfilling Definition 3.1, sampled independently at the beginning of each trajectory, to the parameter \u03b8 defining the deterministic policy \u00b5\u03b8, resulting in playing deterministic policy \u00b5\u03b8`\u03f5 (i.e., PB exploration). Definitions 3.2 and 3.3 allow to represent a class of widelyused (hyper)policies, like Gaussian hyperpolicies and Gaussian policies with state-independent variance. Furthermore, once the parameter \u03b8 is learned with either AB and PB exploration, deploying the corresponding deterministic policy (i.e., \u201cswitching off\u201d the noise) is straightforward.4 4. Fundamental Assumptions In this section, we present the fundamental assumptions on the MDP (p and r), deterministic policy \u00b5\u03b8, and white noise \u03a6. For the sake of generality, we will consider abstract assumptions in the next sections and, then, show their relation to the fundamental ones (see Appendix A for details). Assumptions on the MDP. We start with the assumptions on the regularity of the MDP, i.e., on transition model p and reward function r, w.r.t. variations of the played action a. Assumption 4.1 (Lipschitz MDP (logp, r) w.r.t. actions). The log transition model logpps1|s,\u00a8q and the reward function rps,\u00a8q are Lp-LC and Lr-LC, respectively, w.r.t. the action for every s,s1 PS, i.e., for every a,aPA: |logpps1|s,aq\u00b4logpps1|s,aq|\u010fLp}a\u00b4a}2, (4) |rps,aq\u00b4rps,aq|\u010fLr}a\u00b4a}2. (5) Assumption 4.2 (Smooth MDP (logp, r) w.r.t. actions). The log transition model logpps1|s,\u00a8q and the reward function rps,\u00a8q are L2,p-LS and L2,r-LS, respectively, w.r.t. the 4For white noise-based (hyper)policies there exists a one-toone mapping between the parameter space of (hyper)policies and that of deterministic policies (P \u201c\u0398). For simplicity, we assume \u0398\u201cRd\u0398 and A\u201cRdA (see Appendix C). 3 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients action for every s,s1 PS, i.e., for every a,aPA: }\u2207a logpps1|s,aq\u00b4\u2207a logpps1|s,aq}2 \u010fL2,p}a\u00b4a}2, }\u2207arps,aq\u00b4\u2207arps,aq}2 \u010fL2,r}a\u00b4a}2. Intuitively, these assumptions ensure that when we perform AB and/or PB exploration altering the played action w.r.t. a deterministic policy, the effect on the environment dynamics and on reward (and on their gradients) is controllable. Assumptions on the deterministic policy. We now move to the assumptions on the regularity of the deterministic policy \u00b5\u03b8 w.r.t. the parameter \u03b8. Assumption 4.3 (Lipschitz deterministic policy \u00b5\u03b8 w.r.t. parameters \u03b8). The deterministic policy \u00b5\u03b8psq is L\u00b5-LC w.r.t. parameter for every sPS, i.e., for every \u03b8,\u03b8P\u0398: }\u00b5\u03b8psq\u00b4\u00b5\u03b8psq}2 \u010fL\u00b5}\u03b8\u00b4\u03b8}2. (6) Assumption 4.4 (Smooth deterministic policy \u00b5\u03b8 w.r.t. parameters \u03b8). The deterministic policy \u00b5\u03b8psq is L2,\u00b5-LS w.r.t. parameter for every sPS, i.e., for every \u03b8,\u03b8P\u0398: }\u2207\u03b8\u00b5\u03b8psq\u00b4\u2207\u03b8\u00b5\u03b8psq}2 \u010fL2,\u00b5}\u03b8\u00b4\u03b8}2. 
(7) Similarly, these assumptions ensure that if we deploy an altered parameter \u03b8, like in PB exploration, the effect on the played action (and on its gradient) is bounded. Assumptions 4.1 and 4.3 are standard in the DPG literature (Silver et al., 2014). Assumption 4.2, instead, can be interpreted as the counterpart of the Q-function smoothness used in the DPG analysis (Kumar et al., 2020; Xiong et al., 2022), while Assumption 4.4 has been used to study the convergence of DPG (Xiong et al., 2022). Similar conditions to our Assumption 4.1 were adopted by Pirotta et al. (2015), but measuring the continuity of p in the Kantorovich metric, a weaker requirement that, unfortunately, does not come with a corresponding smoothness condition. Assumptions on the (hyper)policies. We introduce the assumptions on the score functions of the white noise \u03a6. Assumption 4.5 (Bounded Scores of \u03a6). Let \u03a6P\u2206pRdq be a white noise with variance bound \u03c3\u01050 (Definition 3.1) and density \u03d5. \u03d5 is differentiable in its argument and there exists a universal constant c\u01050 s.t.: (i) E\u03f5\u201e\u03a6r}\u2207\u03f5 log\u03d5p\u03f5q}2 2s\u010fcd\u03c3\u00b42; (ii) E\u03f5\u201e\u03a6r}\u22072 \u03f5 log\u03d5p\u03f5q}2s\u010fc\u03c3\u00b42. Intuitively, this assumption is equivalent to the more common ones requiring the boundedness of the expected norms of the score function (and its gradient) (Papini et al., 2022; Yuan et al., 2022, cf. Appendix E). Note that a zero-mean Gaussian \u03a6\u201cNp0d,\u03a3q fulfills Assumption 4.5. Indeed, one has \u2207\u03f5 log\u03d5p\u03f5q\u201c\u03a3\u00b41\u03f5 and \u22072 \u03f5 log\u03d5p\u03f5q\u201c \u03a3\u00b41. Thus, Er}\u2207\u03f5 log\u03d5p\u03f5q}2 2s\u201ctrp\u03a3\u00b41q\u010fd\u03bbminp\u03a3q\u00b41 and Er}\u22072 \u03f5 log\u03d5p\u03f5q}2s\u201c\u03bbminp\u03a3q\u00b41. In particular, for an isotropic Gaussian \u03a3\u201c\u03c32I, we have \u03bbminp\u03a3q\u201c\u03c32, fulfilling Assumption 4.5 with c\u201c1. 5. Deploying Deterministic Policies In this section, we study the performance JD of the deterministic policy \u00b5\u03b8, when the parameter \u03b8 is learned via AB or PB white noise-based exploration (Section 3). We will refer to this scenario as deploying the parameters, which reflects the common practice of \u201cswitching off the noise\u201d once the learning process is over. PB Exploration. Let us start with PB exploration by observing that for white noise-based hyperpolicies (Definition 3.3), we can express the expected return JP as a function of JD and of the noise \u03f5 for every \u03b8P\u0398: JPp\u03b8q\u201c E \u03f5\u201e\u03a6d\u0398 rJDp\u03b8`\u03f5qs. (8) This illustrates that PB exploration can be obtained by perturbing the parameter \u03b8 of a deterministic policy \u00b5\u03b8 via the noise \u03f5\u201e\u03a6d\u0398. To achieve guarantees on the deterministic performance JD of a parameter \u03b8 learned with PB exploration, we enforce the following regularity condition. Assumption 5.1 (Lipschitz JD w.r.t. \u03b8). JD is LJ-LC in the parameter \u03b8, i.e., for every \u03b8,\u03b81 P\u0398: |JDp\u03b8q\u00b4JDp\u03b81q|\u010fLJ}\u03b8\u00b4\u03b81}2. (9) When the MDP and the deterministic policy are LC as in Assumptions 4.1 and 4.3, LJ is Opp1\u00b4\u03b3q\u00b42q (see Table 2 in Appendix A for the full expression). 
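To make the role of Assumption 5.1 concrete, the following is a minimal numpy sketch (not from the paper) that Monte Carlo-estimates $J_P(\theta) = \mathbb{E}_{\epsilon}[J_D(\theta+\epsilon)]$ from Eq. (8) on a toy 1-Lipschitz objective standing in for $J_D$, and compares the gap against the quantity $L_J\sqrt{d_\Theta}\,\sigma_P$ that appears in Theorem 5.1 below; the objective, the dimension and the noise levels are illustrative choices only.

```python
# Minimal sketch, assuming a toy 1-Lipschitz objective in place of an MDP return.
import numpy as np

rng = np.random.default_rng(0)
d_theta = 5                                   # stands in for d_Theta
theta_star = rng.normal(size=d_theta)

def J_D(theta):
    # 1-Lipschitz "deterministic performance": |J_D(x) - J_D(y)| <= ||x - y||_2, so L_J = 1.
    return -np.linalg.norm(theta - theta_star)

def J_P(theta, sigma_P, n_samples=20_000):
    # Monte Carlo estimate of J_P(theta) = E_{eps ~ N(0, sigma_P^2 I)}[J_D(theta + eps)], cf. Eq. (8).
    eps = sigma_P * rng.normal(size=(n_samples, d_theta))
    return np.mean([J_D(theta + e) for e in eps])

theta, L_J = rng.normal(size=d_theta), 1.0
for sigma_P in (0.01, 0.1, 1.0):
    gap = abs(J_D(theta) - J_P(theta, sigma_P))
    bound = L_J * np.sqrt(d_theta) * sigma_P  # uniform bound of Theorem 5.1
    print(f"sigma_P={sigma_P:5.2f}   |J_D - J_P| = {gap:.4f}   L_J*sqrt(d)*sigma_P = {bound:.4f}")
```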
This way, we guarantee that perturbation \u03f5 on the parameter \u03b8 determines a variation on function JD depending on the magnitude of \u03f5, which allows obtaining the following result. Theorem 5.1 (Deterministic deployment of parameters learned with PB white-noise exploration). If the hyperpolicy complies with Definition 3.3, under Assumption 5.1: (i) (Uniform bound) for every \u03b8P\u0398, it holds that |JDp\u03b8q\u00b4JPp\u03b8q|\u010fLJ ?d\u0398\u03c3P; (ii) (JD upper bound) Let \u03b8\u02da P Pargmax\u03b8P\u0398 JPp\u03b8q, it holds that: J\u02da D \u00b4JDp\u03b8\u02da Pq\u010f2LJ ?d\u0398\u03c3P; (iii) (JD lower bound) There exists an MDP, a deterministic policy class \u00b5\u03b8 fulfilling Assumption 5.1, and a noise complying with Definition 3.1, such that J\u02da D \u00b4JDp\u03b8\u02da Pq\u011b0.28LJ ?d\u0398\u03c3P. Some observations are in order. (i) shows that the performance of the hyperpolicy JPp\u03b8q is representative of the deterministic performance JDp\u03b8q up to an additive term depending on LJ ?d\u0398\u03c3P. As expected, this term grows with the Lipschitz constant LJ of the function JD, with the standard deviation \u03c3P of the additive noise, and with the dimensionality of the parameter space d\u0398. In particular, this implies that lim\u03c3P\u00d10` JPp\u03b8q\u201cJDp\u03b8q. (ii) is a consequence of (i) and provides an upper bound between the optimal performance obtained if we were able to directly optimize the deterministic policy max\u03b8P\u0398 JDp\u03b8q and the performance of the parameter \u03b8\u02da P learned by optimizing JPp\u03b8q, i.e., via 4 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients PB exploration, when deployed on the deterministic policy. Finally, (iii) provides a lower bound to the same quantity on a specific instance of MDP and hyperpolicy, proving that the dependence on LJ ?d\u0398\u03c3P is tight up to constant terms. AB Exploration. Let us move to the AB exploration case where understanding the effect of the noise is more complex since it is applied to every action independently at every step. To this end, we introduce the notion of non-stationary deterministic policy \u00b5\u201cp\u00b5tqT \u00b41 t\u201c0 , where at time step t the deterministic policy \u00b5t :S \u00d1A is played, and its expected return (with abuse of notation) is JDp\u00b5q\u201cE\u03c4\u201epDp\u00a8|\u00b5qrRp\u03c4qs where pDp\u00a8|\u00b5q:\u201c\u03c10ps\u03c4,0q\u015bT \u00b41 t\u201c0 pps\u03c4,t`1|s\u03c4,t,\u00b5tps\u03c4,tqq. Let \u03f5\u201c p\u03f5tqT \u00b41 t\u201c0 \u201e\u03a6T dA be a sequence of noises sampled independently, we denote with \u00b5\u03b8 `\u03f5\u201cp\u00b5\u03b8 `\u03f5tqT \u00b41 t\u201c0 the nonstationary policy that, at time t, perturbs the action as \u00b5\u03b8pstq`\u03f5t. Since the noise is independent on the state, we express JA as a function of JD for every \u03b8P\u0398 as follows: JAp\u03b8q\u201c E \u03f5\u201e\u03a6T dA \u201d JDp\u00b5\u03b8 `\u03f5q \u0131 . (10) Thus, to ensure that the parameter learned by AB exploration achieves performance guarantees when evaluated as a deterministic policy, we need to enforce some regularity condition on JD as a function of \u00b5. Assumption 5.2 (Lipschitz JD w.r.t. \u00b5). 
JD of the nonstationary deterministic policy \u00b5 is pLtqT \u00b41 t\u201c0 -LC in the nonstationary policy, i.e., for every \u00b5,\u00b51: |JDp\u00b5q\u00b4JDp\u00b51q|\u010f T \u00b41 \u00ff t\u201c0 Lt sup sPS \u203a \u203a\u00b5tpsq\u00b4\u00b51 tpsq \u203a \u203a 2 . (11) Furthermore, we denote L:\u201c\u0159T \u00b41 t\u201c0 Lt. When the MDP is LC as in Assumptions 4.1, L is Opp1\u00b4 \u03b3q\u00b42q (see Table 2 in Appendix A for the full expression). The assumption enforces that changing the deterministic policy at step t from \u00b5t to \u00b51 t, the variation of JD is controlled by the action distance (in the worst state s) multiplied by a time-dependent Lipschitz constant. This form of condition allows us to show the following result. Theorem 5.2 (Deterministic deployment of parameters learned with AB white-noise exploration). If the policy complies with Definition 3.2 and under Assumption 5.2: (i) (Uniform bound) for every \u03b8P\u0398, it holds that: |JDp\u03b8q\u00b4JAp\u03b8q|\u010fL?dA\u03c3A; (ii) (JD upper bound) Letting \u03b8\u02da A Pargmax\u03b8P\u0398 JAp\u03b8q, it holds that J\u02da D \u00b4JDp\u03b8\u02da Aq\u010f2L?dA\u03c3A; (iii) (JD lower bound) There exists an MDP, a deterministic policy class \u00b5\u03b8 fulfilling Assumption 5.1, and a noise complying with Definition 3.1, such that J\u02da D \u00b4JDp\u03b8\u02da Aq\u011b0.28L?dA\u03c3A. Similarly to Theorem 5.1, (i) and (ii) provide an upper bound on the difference between the policy performance JAp\u03b8q and the corresponding deterministic policy JDp\u03b8q and on the performance of \u03b8\u02da A when deployed on a deterministic policy. Clearly, also in the AB exploration, we have that lim\u03c3A\u00d10` JAp\u03b8q\u201cJDp\u03b8q. As in the PB case, (iii) shows that the upper bound (ii) is tight up to constant terms. Finally, let us note that our bounds for PB exploration depend on the dimension of the parameter space d\u0398 that is replaced by that of the action space dA in AB exploration. 6. Global Convergence Analysis In this section, we present our main results about the convergence of AB and PB white noise-based exploration to global optimal parameter \u03b8\u02da D of the performance of the deterministic policy JD. Let K PN be the number of iterations and N the batch size; given an accuracy threshold \u03f5\u01050, our goal is to bound the sample complexity NK to fulfill the following last-iterate global convergence condition: J\u02da D \u00b4ErJDp\u03b8Kqs\u010f\u03f5, (12) where \u03b8K is the (hyper)parameter at the end of learning. 6.1. General Global Convergence Analysis In this section, we provide a global convergence analysis for a generic stochastic first-order algorithm optimizing the differentiable objective function J: on the parameters space \u0398\u010eRd, that can be instanced for both AB (setting J: \u201cJA) and PB (setting J: \u201cJP) exploration, when optimizing the corresponding objective. At every iteration kPJKK, the algorithm performs the gradient ascent update: \u03b8k`1 \u00d0 \u00dd\u03b8k `\u03b6k p \u2207\u03b8J:p\u03b8kq, (13) where \u03b6k \u01050 is the step size and p \u2207\u03b8J:p\u03b8kq is an unbiased estimate of \u2207\u03b8J:p\u03b8kq and denote J\u02da : \u201cmax\u03b8P\u0398 J:p\u03b8q. We enforce the following standard assumptions. Assumption 6.1 (Weak gradient domination for J:). There exist \u03b1\u01050 and \u03b2 \u011b0 such that for every \u03b8P\u0398 it holds that J\u02da : \u00b4J:p\u03b8q\u010f\u03b1}\u2207\u03b8J:p\u03b8q}2 `\u03b2. 
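As an illustration of the generic update rule (13), here is a small self-contained sketch of PB exploration in the style of PGPE with a Gaussian white-noise hyperpolicy: parameters are perturbed, the returns of the induced deterministic policies are collected, and the hyperparameter is updated by stochastic gradient ascent. The quadratic "return" replaces an actual MDP rollout, a batch-mean baseline is added for variance reduction even though the plain estimator recalled in Section 2 does not use one, and all constants are arbitrary.

```python
# Minimal PGPE-style sketch of update rule (13); toy return, not the paper's code.
import numpy as np

rng = np.random.default_rng(1)
d_theta = 8
theta_opt = rng.normal(size=d_theta)

def J_D(theta):
    # Toy deterministic "return" of deploying parameter theta (stand-in for an MDP rollout).
    return -np.sum((theta - theta_opt) ** 2)

def pgpe_step(rho, sigma_P, batch_size, step_size):
    # Sample theta_i ~ nu_rho = N(rho, sigma_P^2 I), evaluate returns, and form
    # (1/N) sum_i grad_rho log nu_rho(theta_i) * R_i, with grad_rho log nu_rho(theta) = (theta - rho) / sigma_P^2.
    thetas = rho + sigma_P * rng.normal(size=(batch_size, d_theta))
    returns = np.array([J_D(th) for th in thetas])
    baseline = returns.mean()                       # optional variance reduction, not in the plain estimator
    scores = (thetas - rho) / sigma_P**2
    grad_hat = (scores * (returns - baseline)[:, None]).mean(axis=0)
    return rho + step_size * grad_hat               # update rule (13): rho <- rho + zeta * grad_hat

rho = np.zeros(d_theta)
for k in range(500):
    rho = pgpe_step(rho, sigma_P=0.1, batch_size=50, step_size=0.05)

# "Switch off the noise": deploy the deterministic policy with the learned parameter.
print("J_D at learned parameter:", J_D(rho), "  optimum:", J_D(theta_opt))
```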
Assumption 6.1 is the gold standard for the global convergence of stochastic optimization (Yuan et al., 2022; Masiha et al., 2022; Fatkhullin et al., 2023a). Note that, when \u03b2 \u201c0, we recover the (strong) gradient domination (GD) property: J\u02da : \u00b4J:p\u03b8q\u010f\u03b1}\u2207\u03b8Jp:\u03b8q}2 for all \u03b8P\u0398. GD is stricter than WGD, and requires that J: has no local optima. Instead, WGD admits local maxima as long as their performance is \u03b2-close to the globally optimal one.5 Assumption 6.2 (Smooth J: w.r.t. parameters \u03b8). J: is 5In this section, we will assume that J: (i.e., either JA or JA) is already endowed with the WGD property. In Section 7, we illustrate how it can be obtained in several common scenarios. 5 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients L2,:-LS w.r.t. parameters \u03b8, i.e., for every \u03b8,\u03b81 P\u0398: }\u2207\u03b8J:p\u03b81q\u00b4\u2207\u03b8J:p\u03b8q}2 \u010fL2,:}\u03b81 \u00b4\u03b8}2. (14) Assumption 6.2 is ubiquitous in the convergence analysis of policy gradient algorithms (Papini et al., 2018; Agarwal et al., 2021; Yuan et al., 2022; Bhandari & Russo, 2024), which is usually studied as an instance of (nonconvex) smooth stochastic optimization. The smoothness of J: PtJA,JPu can be: (i) inherited from the deterministic objective JD (originating, in turn, from the regularity of the MDP) and of the deterministic policy \u00b5\u03b8 (Assumptions 4.14.4); or (ii) enforced through the properties on the white noise \u03a6 (Assumption 4.5). The first result was observed in a similar form by Pirotta et al. (2015, Theorem 3), while a generalization of the second was established by Papini et al. (2022) and refined by Yuan et al. (2022). Assumption 6.3 (Bounded estimator variance p \u2207\u03b8J:p\u03b8q). The estimator p \u2207\u03b8J:p\u03b8q computed with batch size N has a bounded variance, i.e., there exists V: \u011b0 such that, for every \u03b8P\u0398, we have: Varrp \u2207\u03b8J:p\u03b8qs\u010fV:{N. Assumption 6.3 guarantees that the gradient estimator is characterized by a bounded variance V: which scales with the batch size N. Under Assumptions 4.5 (and 4.4 for GPOMDP), the term V: can be further characterized (see Table 2 in Appendix A). We are now ready to state the global convergence result. Theorem 6.1. Consider an algorithm running the update rule of Equation (13). Under Assumptions 6.1, 6.2, and 6.3, with a suitable constant step size, to guarantee J\u02da : \u00b4ErJ:p\u03b8Kqs\u010f\u03f5`\u03b2 the sample complexity is at most: NK \u201c 16\u03b14L2,:V: \u03f53 log maxt0,J\u02da : \u00b4J:p\u03b80q\u00b4\u03b2u \u03f5 . (15) This result establishes a convergence of order r Op\u03f5\u00b43q to the global optimum J\u02da : of the general objective J:. Recalling that J: PtJA,JPu, Theorem 6.1 provides: (i) the first global convergence guarantee for PGPE for PB exploration (setting J: \u201cJP) and (ii) a global convergence guarantee for PG (e.g., GPOMDP) for AB exploration of the same order (up to logarithmic terms in \u03f5\u00b41) of the state-of-the-art one of Yuan et al. (2022) (setting J: \u201cJA). Note that our guarantee is obtained for a constant step size and holds for the last parameter \u03b8K, delivering a last-iterate result, rather than a best-iterate one as in (Yuan et al., 2022, Corollary 3.7). 
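For intuition on how the bound in Eq. (15) behaves, the small helper below evaluates it for made-up constants; only the $\tilde{O}(\epsilon^{-3})$ scaling in the accuracy $\epsilon$ is meaningful, the constants are placeholders.

```python
# Illustrative evaluation of the sample-complexity bound of Theorem 6.1 (Eq. (15));
# alpha, L2, V, gap and beta are placeholder values, not task-dependent constants.
import math

def sample_complexity(eps, alpha=1.0, L2=1.0, V=1.0, gap=10.0, beta=0.0):
    # gap plays the role of J*_: - J_:(theta_0); beta is the offset in Assumption 6.1.
    effective_gap = max(0.0, gap - beta)
    if effective_gap == 0.0:
        return 0.0                      # the log factor vanishes in this degenerate case
    return 16 * alpha**4 * L2 * V / eps**3 * math.log(effective_gap / eps)

for eps in (1.0, 0.5, 0.1, 0.05):
    print(f"eps = {eps:4.2f}  ->  NK <= {sample_complexity(eps):.3e}")
```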
Clearly, this result is not yet our ultimate goal since, we need to assess how far the performance of the learned parameter \u03b8K is from that of the optimal deterministic objective J\u02da D. 6.2. Global Convergence of PGPE and GPOMDP In this section, we provide results on the global convergence of PGPE and GPOMDP with white-noise exploration. The sample complexity bounds are summarized in Table 1 and presented extensively in Appendix D. They all follow from our general Theorem 6.1 and our results on the deployment of deterministic policies from Section 5. PGPE. We start by commenting on the sample complexity of PGPE for a constant, generic hyperpolicy variance \u03c3P , shown in the first column. First, the guarantee on J\u02da D \u00b4ErJDp\u03b8Kqs contains the additional variancedependent term 3LP ?d\u0398\u03c3P originating from the deterministic deployment. Second, the sample complexity scales with r Op\u03f5\u00b43q. Third, by enforcing the smoothness of the MDP and of the deterministic policy (Assumptions 4.2 and 4.4), we improve the dependence on d\u0398 and on \u03c3P at the price of an additional p1\u00b4\u03b3q\u00b41 factor. A choice of \u03c3P which adapts to \u03f5 allows us to achieve the global convergence on the deterministic objective JD, up to \u03f5`\u03b2 only. Moving to the second column, we observe that the convergence rate becomes r Op\u03f5\u00b47q, which reduces to r Op\u03f5\u00b45q with the additional smoothness assumptions, which also improve the dependence on both p1\u00b4\u03b3q\u00b41 and d\u0398. The slower rate \u03f5\u00b45 or \u03f5\u00b47, compared to the \u03f5\u00b43 of the fixedvariance case, is easily explained by the more challenging requirement of converging to the optimal deterministic policy rather than the optimal stochastic hyperpolicy, as for standard PGPE. Note that we have set the standard deviation equal to \u03c3P \u201c \u03f5 6LP ?d\u0398 \u201cOp\u03f5p1\u00b4\u03b3q2d\u00b41{2 \u0398 q that, as expected, decreases with the desired accuracy \u03f5.6 GPOMDP. We now consider the global convergence of GPOMDP, starting again with a generic policy variance \u03c3A (third column). The result is similar to that of PGPE with three notable exceptions. First, an additional p1\u00b4\u03b3q\u00b41 factor appears in the sample complexity due the variance bound of GPOMDP (Papini et al., 2022). This suggests that GPOMDP struggles more than PGPE in long-horizon environments, as already observed by Zhao et al. (2011). Second, the dependence on the dimensionality of the parameter space d\u0398 is replaced with the dimensionality of the action space dA. This is expected and derives from the nature of exploration that is performed in the parameter space for PGPE and in the action space for GPOMPD. Finally, the smoothness of the deterministic policy (Asm. 4.4) is always needed. Adding also the smoothness of the MDP (Asm. 4.2), we can trade a dA factor for a p1\u00b4\u03b3q\u00b41 one. Again, a careful \u03f5-dependent choice of \u03c3A allows us to achieve global convergence on the deterministic objective JD. In the last column, we can notice that the convergence rates display the same dependence on \u03f5 as in PGPE. How6These results should be interpreted as a demonstration that global convergence to deterministic policies is possible rather than a practical recipe to set the value of \u03c3P. We do hope that our theory can guide the design of practical solutions in future works. 
6 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients Table 1. Sample complexity NK \u201c r Op\u00a8q of GPOMDP and PGPE to converge to a deterministic optimal policy, retaining only dependencies on \u03f5, p1\u00b4\u03b3q\u00b41, \u03c3A, \u03c3P, d\u0398, dA, and \u03b1. Task-dependent constants LP and LA are Opp1\u00b4\u03b3q\u00b42q\u2014see Table 2 in Appendix A. ever, the dependence on the effective horizon p1\u00b4\u03b3q\u00b41 is worse. In this case, the additional smoothness assumption improves the dependency on dA and p1\u00b4\u03b3q\u00b41. 7. About the Weak Gradient Domination So far, we have assumed WGD for the AB JA and PB JP (Assumption 6.1). In this section, we discuss several scenarios in which such an assumption holds. 7.1. Inherited Weak Gradient Domination We start by discussing the case in which the deterministic policy objective JD already enjoys the (W)GD property. Assumption 7.1 (Weak gradient domination for JD). There exist \u03b1D \u01050 and \u03b2D \u011b0 such that for every \u03b8P\u0398 it holds that J\u02da D \u00b4JDp\u03b8q\u010f\u03b1D}\u2207\u03b8JDp\u03b8q}2 `\u03b2D. Although the notion of WGD has been mostly applied to stochastic policies in the literature (Liu et al., 2020; Yuan et al., 2022), there is no reason why it should not be plausible for deterministic policies. Bhandari & Russo (2024) provide sufficient conditions for the performance function not to have any local optima, which is a stronger condition, without discriminating between deterministic and stochastic policies (cf. their Remark 1). Moreover, one of their examples is linear-quadratic regulators with deterministic linear policies. We show that, under Lipschiztianity and smoothness of the MDP and deterministic policy (Assumptions 4.1-4.4), this is sufficient to enforce the WGD property for both the PB JP and the AB JA objectives. Let us start with JP. Theorem 7.1 (Inherited weak gradient domination for JP). Under Assumptions 7.1, 4.1, 4.3, 4.2, 4.4, for every \u03b8P\u0398: JP \u02da \u00b4JPp\u03b8q\u010f\u03b1D}\u2207\u03b8JPp\u03b8q}2 `\u03b2D `p\u03b1DL2 `LP q\u03c3P a d\u0398, where L2 \u201cOpp1\u00b4\u03b3q\u00b43q (full expression in Lemma E.2). The result shows that the WGD property of JD entails that of JP with the same \u03b1D coefficient, but a different \u03b2 \u201c \u03b2Dp\u03b1DL2 `LP q\u03c3P ?d\u0398 that accounts for the gap between the two objectives encoded in \u03c3P. Note that even if JD enjoys a (strong) GD (i.e., \u03b2D \u201c0), in general, JP inherits a WGD property. In the setting of Theorem 7.1, convergence in the sense of J\u02da D \u00b4ErJDp\u03b8Kqs\u010f\u03f5`\u03b2D can be achieved with r Op\u03b16 D\u03f5\u00b45d2 \u0398p1\u00b4\u03b3q\u00b411q samples by carefully setting the hyperpolicy variance (see Theorem D.12 for details). An analogous result can be obtained for AB exploration. Theorem 7.2 (Inherited weak gradient domination on JA). Under Assumptions 7.1, 4.1, 4.3, 4.2, 4.4, for every \u03b8P\u0398: JA \u02da \u00b4JAp\u03b8q\u010f\u03b1D}\u2207\u03b8JAp\u03b8q}2 `\u03b2D `p\u03b1D\u03c8`LAq\u03c3A a dA, where \u03c8\u201cOpp1\u00b4\u03b3q\u00b44q (full expression in the proof). The sample complexity, in this case, is r Op\u03b16 D\u03f5\u00b45d2 Ap1\u00b4 \u03b3q\u00b414q (see Theorem D.13 for details). 7.2. 
Policy-induced Weak Gradient Domination When the the objective function does not enjoy weak gradient domination in the space of deterministic policies, we can still have WGD with respect to stochastic policies if they satisfy a condition known as Fisher-non-degeneracy (Liu et al., 2020; Ding et al., 2022). As far as we know, WGD by Fishernon-degeneracy is a peculiar property of AB exploration that has no equivalent in PB exploration. White-noise policies satisfying Assumption 4.5 are Fisher-non-degenerate under the following standard assumption (Liu et al., 2020): Assumption 7.2 (Explorability). There exists \u03bbE \u01050 s.t. E\u03c0\u03b8r\u2207\u03b8\u00b5\u03b8psq\u2207\u03b8\u00b5\u03b8psqJs\u013e\u03bbEI for all \u03b8P\u0398, where the expectation over states is induced by the stochastic policy. We can use this fact to prove WGD for white-noise policies: Theorem 7.3 (Policy-induced weak gradient domination). Under Assumptions 4.5, 7.2 and D.1, we have: JA \u02da \u00b4JAp\u03b8q\u010fC ?dA\u03c3A \u03bbE }\u2207\u03b8JAp\u03b8q}2 ` ?\u03f5bias 1\u00b4\u03b3 , for some numerical constant C \u01050, that is, Assumption 6.1 (:=A) is satisfied with \u03b1\u201cC ?dA\u03c3A \u03bbE and \u03b2 \u201c ?\u03f5bias 1\u00b4\u03b3 . 7 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients Here \u03f5bias is the compatible-critic error, which can be very small for rich policy classes (Ding et al., 2022). We can leverage this to prove the global convergence of GPOMDP as in Section 7.1, this time to JD \u00b4ErJDp\u03b8qs\u010f\u03f5` ?\u03f5bias 1\u00b4\u03b3 . Tuning \u03c3A, we can achieve a sample complexity of r Op\u03f5\u00b41\u03bb\u00b44 E d4 Ap1\u00b4\u03b3q\u00b410q (see Theorem D.16 for details) This seems to violate the \u2126p\u03f5\u00b42q lower bound by Azar et al. (2013). However, the factor \u03bbE can depend on \u03c3A \u201cOp\u03f5q in highly non-trivial ways, and, thus, can hide additional factors of \u03f5. For this reason, the results granted by the Fishernon-degeneracy of white-noise policies are not compared with the ones granted by inherited WGD from Section 7.1. Intuitively, \u03bbE encodes some difficulties of exploration that are absent in \u201cnice\u201d MDPs satisfying Assumption 7.1. See Appendix D.4 for further discussion and omitted proofs. 8. Numerical Validation In this section, we empirically validate some of the theoretical results presented in the paper. We conduct a study on the gap in performance between the deterministic objective JD and the ones of GPOMDP and PGPE (respectively JA and JP) by varying the value of their exploration parameters (\u03c3A and \u03c3P, respectively). Details on the employed versions of PGPE and GPOMDP can be found in Appendix G. Additional experimental results can be found in Appendix H. We run PGPE and GPOMDP for K \u201c2000 iterations with batch size N \u201c100 on three environments from the MuJoCo (Todorov et al., 2012) suite: Swimmer-v4 (T \u201c200), Hopper-v4 (T \u201c100), and HalfCheetah-v4 (T \u201c100). For all the environments the deterministic policy is linear in the state and the noise is Gaussian. We consider \u03c32 : P t0.01,0.1,1,10,100u. More details in Appendix H.1. From Figure 1, we note that as the exploration parameter grows, the distance of JPp\u03b8Kq and JAp\u03b8Kq from JDp\u03b8Kq increases, coherently with Theorems 5.1 and 5.2. Among the tested values for \u03c3P and \u03c3A, some lead to the highest values of JDp\u03b8Kq. 
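The qualitative effect reported in this section can be reproduced at toy scale. The sketch below (a 1-D linear-quadratic stand-in for the MuJoCo tasks, with a fixed linear gain rather than a learned $\theta_K$) evaluates the same policy with the action noise switched on ($J_A$) and switched off ($J_D$) for several values of $\sigma_A$, showing how the gap grows with the exploration variance.

```python
# Toy illustration (not MuJoCo) of the gap between J_A and J_D for a fixed linear policy
# a = theta * s under white action noise of level sigma_A; environment is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(2)

def rollout_return(theta, sigma_A, T=100, gamma=0.99, s0=1.0):
    # Deterministic 1-D dynamics s' = s + a with quadratic cost; white Gaussian action noise.
    s, ret = s0, 0.0
    for t in range(T):
        a = theta * s + sigma_A * rng.normal()
        ret += gamma**t * -(s**2 + 0.1 * a**2)
        s = s + a
    return ret

theta = -0.8                                   # a fixed, stabilizing gain
J_D = rollout_return(theta, sigma_A=0.0)       # noise "switched off"
for sigma_A in (0.0, 0.1, 0.3, 1.0, 3.0):
    J_A = np.mean([rollout_return(theta, sigma_A) for _ in range(1000)])
    print(f"sigma_A = {sigma_A:4.1f}   J_A = {J_A:9.3f}   |J_D - J_A| = {abs(J_D - J_A):8.3f}")
```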
Empirically, we note that PGPE delivers the best deterministic policy with $\sigma_P^2 = 10$ for Swimmer and with $\sigma_P^2 = 1$ for the other environments. GPOMDP performs best with $\sigma_A^2 = 1$ for Swimmer and with $\sigma_A^2 = 10$ in the other cases. These outcomes agree with the theoretical results in showing that there exists an optimal value for $\sigma_:$. We can also appreciate the trade-off between GPOMDP and PGPE w.r.t. the parameter dimensionality $d_\Theta$ and the horizon $T$ by comparing the best values of $J_D$ found by the two algorithms in each environment. GPOMDP is better than PGPE in Hopper and HalfCheetah, which can be explained by the higher values of $d_\Theta$ characterizing these environments. In Swimmer, instead, PGPE performs better than GPOMDP, which can be explained by the higher value of $T$ and the lower value of $d_\Theta$. [Figure 1: six panels plotting $J_D(\theta_K)$ together with $J_P(\theta_K)$ or $J_A(\theta_K)$ as a function of $\sigma_P^2$ or $\sigma_A^2$ on a logarithmic scale: (a) PGPE on HalfCheetah, (b) GPOMDP on HalfCheetah, (c) PGPE on Hopper, (d) GPOMDP on Hopper, (e) PGPE on Swimmer, (f) GPOMDP on Swimmer.] Figure 1. Variance study on MuJoCo (5 runs, mean ± 95% C.I.). 9. Conclusions We have perfected recent theoretical results on the global convergence of policy gradient algorithms to address the practical problem of finding a good deterministic parametric policy. We have studied the effects of noise on the learning process and identified a theoretical value of the variance of the (hyper)policy that allows finding a good deterministic policy using a polynomial number of samples. We have compared the two common forms of noisy exploration, action-based and parameter-based, both from a theoretical and an empirical perspective. Our work paves the way for several exciting research directions. First, our theoretical selection of the policy variance is not practical, but our theoretical findings should guide the design of sound and efficient adaptive-variance schedules. We have shown how white-noise exploration preserves weak gradient domination; the natural next question is whether a sufficient amount of noise can smooth or even eliminate the local optima of the objective function. Finally, we have focused on “vanilla” policy gradient methods, but our ideas could be applied to more advanced algorithms, such as the ones recently proposed by Fatkhullin et al. (2023a), to find optimal deterministic policies with $\tilde{O}(\epsilon^{-2})$ samples. Impact Statement This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
16
+ }
title_10K/test_title_short_2405.02384v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02384v1",
3
+ "title": "CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding",
4
+ "abstract": "Predictive Coding (PC) is a theoretical framework in cognitive science\nsuggesting that the human brain processes cognition through spatiotemporal\nprediction of the visual world. Existing studies have developed spatiotemporal\nprediction neural networks based on the PC theory, emulating its two core\nmechanisms: Correcting predictions from residuals and hierarchical learning.\nHowever, these models do not show the enhancement of prediction skills on\nreal-world forecasting tasks and ignore the Precision Weighting mechanism of PC\ntheory. The precision weighting mechanism posits that the brain allocates more\nattention to signals with lower precision, contributing to the cognitive\nability of human brains. This work introduces the Cognitive Diffusion\nProbabilistic Models (CogDPM), which demonstrate the connection between\ndiffusion probabilistic models and PC theory. CogDPM features a precision\nestimation method based on the hierarchical sampling capabilities of diffusion\nmodels and weight the guidance with precision weights estimated by the inherent\nproperty of diffusion models. We experimentally show that the precision weights\neffectively estimate the data predictability. We apply CogDPM to real-world\nprediction tasks using the United Kindom precipitation and ERA surface wind\ndatasets. Our results demonstrate that CogDPM outperforms both existing\ndomain-specific operational models and general deep prediction models by\nproviding more proficient forecasting.",
5
+ "authors": "Kaiyuan Chen, Xingzhuo Guo, Yu Zhang, Jianmin Wang, Mingsheng Long",
6
+ "published": "2024-05-03",
7
+ "updated": "2024-05-03",
8
+ "primary_cat": "cs.NE",
9
+ "cats": [
10
+ "cs.NE",
11
+ "cs.AI",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding",
17
+ "main_content": "Introduction Predictive Coding (PC) is a theoretical construct in cognitive science, positing that the human brain cognizes the vi*Equal contribution 1School of Software, BNRist, Tsinghua University. Kaiyuan Chen <[email protected]>. Correspondence to: Mingsheng Long <[email protected]>. Preliminary work. sual world through predictive mechanisms (Spratling, 2017; Hohwy, 2020). The PC theory elucidates that the brain hierarchically amends its perception of the environment by anticipating changes in the visual world. Researchers have developed computational models based on the PC theory to simulate the brain\u2019s predictive mechanisms (Keller & Mrsic-Flogel, 2018). Neuroscientists employ these models to empirically validate the efficacy of the PC theory and to find new characteristics. Precision weighting, a pivotal feature of the PC theory, suggests that the brain assigns more attention to signals with lower precision by using precision as a filter in weighting prediction errors. With the advancement of deep learning, predictive learning has emerged as one of the principal learning methods (Rane et al., 2020; Bi et al., 2023). Neural networks are now capable of making effective predictions in video data (Shi et al., 2015; Wang et al., 2017; Ho et al., 2022c). Deep video prediction models have rich applications, such as weather forecasting (Ravuri et al., 2021; Zhang et al., 2023) and autonomous driving simulation (Wang et al., 2018; Wen et al., 2023). Researchers design cognitively inspired video prediction models utilizing the PC theory. PredNet (Lotter et al., 2020), which employs multi-layer ConvLSTM (Shi et al., 2015) networks to predict the next frame in a video sequence, is responsible for predicting the residual between the outcomes of a network layer and the ground truth values. However, the predictive capability of PredNet does not show significant improvement over non-hierarchical video prediction models and has not been validated in real-world video prediction tasks. We posit that the hierarchical modeling mechanism in PredNet is not effectively implemented. PredNet directly targets low signal-to-noise ratio residuals as learning objectives, which complicates the learning process, and fails to extract fundamentally distinct features between layers. Additionally, PredNet lacks the capability to model precision, leading to uniform weighting in learning residuals across different regions. This results in redundant noise information becoming a supervisory signal and hinders the model\u2019s ability to learn from important information. In this study, we propose PC-inspired Cognitive Diffusion Probabilistic Models (CogDPM), which align the main features of PC theory with Diffusion Probabilistic Models 1 arXiv:2405.02384v1 [cs.NE] 3 May 2024 \fCogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding (DPMs), a specialized branch of deep generative models. The CogDPM framework innovatively abstracts the multistep inference process characteristic of Diffusion Probabilistic Models into a hierarchically structured model, where each layer is responsible for processing signals at distinct spatiotemporal scales. This hierarchical approach allows for a progressive enhancement in the model\u2019s interpretation of sensory inputs, actively working to reduce prediction errors through iterative refinement. A key feature of the CogDPM framework is its ability to estimate spatiotemporal precision weights based on the variance of states in each hierarchical layer. 
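As a preview of how such precision weights enter the prediction process, here is a schematic numpy sketch of precision-weighted error correction on placeholder arrays: a "generative" estimate is pulled toward an observation-conditioned "perceptual" estimate more strongly where the estimated precision is low. Names and shapes are illustrative only; the paper's actual formulation is given by Eqs. (2), (3) and (8) in Section 3.

```python
# Schematic sketch with placeholder arrays standing in for real model outputs.
import numpy as np

rng = np.random.default_rng(3)
k, N, H, W = 4, 16, 64, 64                      # denoising-step window and (frames, height, width)

x0_estimates = rng.normal(size=(k, N, H, W))    # stand-in for k successive direct x0-estimates
inv_precision = x0_estimates.var(axis=0)        # low precision = high variance across denoising steps

def to_weights(w, lam=1.0):
    # Standardize, clip to [0, 1], and shift so that every weight is at least 1 (cf. Eq. (8)).
    z = (w - w.mean()) / (w.std() + 1e-8)
    return lam * np.clip(z, 0.0, 1.0) + 1.0

g = rng.normal(size=(N, H, W))                  # stand-in for the generative prediction G
p = rng.normal(size=(N, H, W))                  # stand-in for the perceptual prediction P
x_next = g + to_weights(inv_precision) * (p - g)    # precision-weighted guidance update (cf. Eq. (3))
print(x_next.shape)
```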
This methodology plays a crucial role in optimizing the overall precision of predictions, and represents a novel advancement in predictability modeling. We verify the effectiveness of precision weights as well as the predictions skills of CogDPM on real-world spatiotemporal forecasting tasks. To verify precision weights, we use synthetic motion datasets of both rigid body and fluid. Results show precision weights get higher salience on the hard-to-predict region. To validate the prediction capabilities of CogDPM, we apply CogDPM to real-world tasks including precipitation nowcasting (Shi et al., 2015; Ravuri et al., 2021) and high wind forecasting (Barbounis et al., 2006; Soman et al., 2010). We evaluate CogDPM through case studies focusing on extreme weather events and scientific numerical metrics. CogDPM outperforms operational domain-specific models FourCastNet (Pathak et al., 2022) and DGMR (Ravuri et al., 2021) as well as the general deep predictive models. We demonstrate that CogDPM has strong extreme event prediction capabilities and verify the effectiveness of precision estimations of CogDPM which provide useful information for weather-driven decision-making. In summary, we identify the following advantages of CogDPM: \u2022 CogDPM aligns diffusion probabilistic models with Predictive Coding theory, which inherently integrates hierarchy prediction error minimization with precisionweighting mechanics. \u2022 CogDPM delivers skillful and distinct prediction results, particularly in scientific spatiotemporal forecasting, demonstrating a marked improvement in probabilistic forecasting metrics. \u2022 CogDPM presents a novel method for predictability estimation, providing index of confidence modeling for probabilistic forecasting. 2. Related Work Predictive Learning. Predictive learning is a subfield of machine learning that utilizes historical data to make predictions about future events or outcomes. As an important aspect of human cognition that plays a crucial role in our ability to perceive and understand the world, spatiotemporal predictive learning has triggered a substantial amount of research efforts, such as ConvLSTM (Shi et al., 2015), PredRNN (Wang et al., 2017), and ModeRNN (Yao et al., 2023). Recently, diffusion models (Ho et al., 2020) have been successfully applied in video generation (Ho et al., 2022a) so as to capture spatiotemporal correlations, showing a promising trend as a spatiotemporal predictive learning framework. Predictive Coding. In neuroscience, predictive coding is a theory of brain function about how brains create predictions about the sensory input. Rao & Ballard translates the idea of predictive coding into a computational model based on extra-classical receptive-field effects, and shows the brain mechanism of trying to efficiently encode sensory data using prediction. Further research in neuroscience (Friston, 2009; Clark, 2013; Emberson et al., 2015; Spratling, 2017) presents different interpretations of predictive coding theory. Predictive Coding Neural Networks. The development of deep learning has arisen plenty of deep predictive networks with cognition-inspired mechanisms. PredNet (Lotter et al., 2016) implements hierarchical predictive error with ConvLSTM for spatiotemporal prediction using principles of predictive coding. CPC (Oord et al., 2018; Henaff, 2020) and MemDPC (Han et al., 2020) incorporate contrastive learning in the latent space via a predictive-coding-based probabilistic loss. 
PCN (Wen et al., 2018; Han et al., 2018) proposes a bi-directional and recurrent network to learn hierarchical image features for recognition. Such models introduce the motivation of predictive coding in their task-specific manners. However, these works ignore precision weighting, a pivotal mechanism in PC theory. Besides, these works have not explored a proper PC-based framework of diffusion models. 3. Method Spatiotemporal forecasting involves extracting patterns from a sequence of vector fields $c^{-N_0:0}$ and providing the future evolution $x^{1:N}$. We give a brief introduction to the framework of predictive coding and propose our CogDPM for implementing Predictive Coding into spatiotemporal forecasting. To avoid confusion, we use the superscript $N$ to denote different moments in time, and the subscript $t$ to denote the ordinal number of the inference steps of the diffusion model. 3.1. CogDPM via Predictive Coding Figure 1a presents a conceptual demonstration of a predictive coding (PC) system. Based on PC theory, we propose Cognitive Diffusion Probabilistic Models (CogDPM) for spatiotemporal forecasting based on multi-step denoising (Ho et al., 2020), which realizes the core mechanisms of hierarchical inference and prediction error minimization. [Figure 1 (schematic, panels a-c): a, A general predictive coding framework; the system recognizes the sensation fields with hierarchical error units and expectation units and generates the predictions and precision maps during the process. b, The Cognitive Diffusion Probabilistic Models (CogDPM) framework, providing predictions and precision weights through a multi-step denoising process with a generative DPM $G_\theta$ and a perceptual DPM $P_\theta$. c, Updates of latent states with precision-weighted predictive error.] Fig.
1b shows the framework of CogDPM, which takes past observations as input to forecast the evolution of future fields and estimate corresponding prediction error. Hierarchical Inference. Predictive coding theory describes that the brain makes spatiotemporal predictions of the sensations through hierarchical inference with multilayer organized estimators (Walsh et al., 2020). While different layers of the PC system are responsible for processing features at different spatial scales, the hierarchical system gradually performs prediction error minimization and converges on a final consistent predictions (Wiese & Metzinger, 2017). CogDPM aligns the multi-step inference of DPM with the hierarchical inference of the PC system. In the inference phase of CogDPM, the forecast is gradually generated in the hidden states evolution process from xT , xT \u22121, . . . to x0, where xT is a Gaussian prior and x0 indicates the generated target distribution of forecast. CogDPM inherits the properties of DPM that the different inference steps have varying spatial and temporal scales of feature expression capabilities (Zheng et al., 2022). In the initial stages of inference, the model yields holistic and vague results. As it approaches the final steps, the model shifts its focus towards supplementing with detailed information, which is also aligned with the hierarchical property of the PC system. In each internal inference step, the guidance of the diffusion model plays a similar role with the error units of the PC system, taking observation sequence as input and strengthen the correlation between generated results and observations (Dhariwal & Nichol, 2021). Prediction Error Minimization. Each layer in the PC system outputs two key components: predictions for future sensations and estimations of prediction errors (van Elk, 2021). This process is enabled by interactions between two functionally distinct neural sub-components in the layer: expectation units and error units (Walsh et al., 2020). The expectation unit updates expected sensory states from the previous level to the error units, without directly receiving sensory-driven signals as input. The error unit receives and analyzes the discrepancies between perceptual and expected sensory states to compute the error, which is then fed back to the expectation unit in the next layer. The goal of the information transfer between multiple layers is to minimize 3 \fCogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding prediction errors, ultimately resulting in more accurate environmental perceptions. CogDPM couples a generative DPM G\u03b8 with a perceptual DPM P\u03b8, where \u03b8 represents their sharing parameters. The previous state xt is the sharing input of both models, while observations c can only be attached by the perceptual DPM. With the previous state as observation, the perceptual DPM acts as sensory stimuli and thus aligns with the bottom-up process in the PC system. The generative DPM, as a comparison, performs as the top-down prediction based on conceptual knowledge. Fig. 1c provides detailed schematic diagram of a single step in CogDPM. Given the outputs G\u03b8(xt) and P\u03b8(xt, c) separately for each step t, the guidance for predictive error minimization can be expressed by: Guidance[xt] = P\u03b8(xt, c) \u2212G\u03b8(xt), (1) i.e., the difference between sensations and predictions. 3.2. 
Precision Weighting in CogDPM Precision weighting stands as the pivotal mechanism for filtering information transmitted between adjacent layers. It posits that the brain expends more effort in comprehending imprecise information, recognizing that sensory input often contains a substantial proportion of redundant information, which does not necessitate repetitive processing (Hohwy, 2020). During each error minimization phase of the predictive coding (PC) approach, the error unit generates precision maps. These maps selectively filter the signal transmitted to the subsequent layer, assigning greater weight to signals characterized by higher imprecision. Following precision weighting in PC theory, our goal is to design a modeling of imprecision for each denoising process of CogDPM. We therefore delve into the progressive denoising mechanism in the backward process of DPMs. In each denoising step for xt, the model predicts a noise towards the corresponding groundtruth x0 (Song et al., 2020). The model usually shifts xt into xt\u22121 within a tiny step and recursively performs the process to get x0, but can either directly obtain x0 within a single larger step. If the direct predictions from step t and from step t + 1 with generative DPM G\u03b8 differ in a significant manner for a certain spatiotemporal region, the single step produces inconsistent signal from previous steps, indicating the imprecision of the generative model at such region of the current state. Hence, we use the fluctuation field of direct predictions x0 from {xt, . . . , xt+k\u22121} to estimate such imprecision of state xt for each coordinate, formulated by Eq. (2): U[xt] = Var [EG\u03b8 [x0 | xt] , . . . , EG\u03b8 [x0 | xt+k\u22121]] , (2) where Var stands for the variance field along the denoising step, and k is the hyperparameter for window length. In this way, CogDPM provides a modeling of the inverse precision field for multiscale spatiotemporal coordinates in the inference steps. Since only the past observation is given in the forecasting tasks, this precision is a good substitution for the actual precision to weight the minimization. We implement precision weighting in the CogDPM framework, which can be formulated as Eq. (3), xt\u22121 = G\u03b8(xt) + f(U[xt]) \u00b7 Guidance[xt], (3) where f is a parameter-free normalization function shown in Eq. (8). Precision weighting helps to control the balance between diversity and the alignments with the observation, with larger guidance increasing the alignments and decreasing the diversity or the quality of generations. Through this precision weighting mechanism, CogDPM strategically allocates greater guidance intensity to regions with lower predictability, thereby enhancing local precision in a focused manner. Computational details. The framework of a standard DPM starts with x0 sampled from data distribution, and latent states {x1, x2, . . . , xT } following the forward process along a Markov chain as Eq. (4). q(xt+1 | xt) = N \u0000\u221a\u03b1txt, \u221a 1 \u2212\u03b1tI \u0001 , (4) where {\u03b1t}t=1,2,...,T are constant parameters. Each latent state is a corrupted estimation for the future inputs with the three-dimensional shape of N \u00d7 H \u00d7 W. In each step of the backward process, we update the latent state with the denoising network \u03f5\u03b8. We denote the sensation input as c, which has a shape of N0 \u00d7 H \u00d7 W. The perceptual model P\u03b8 and generative model G\u03b8 can be preformed separately as Eq. (5) and (6). 
$P_\theta(x_t, c) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, c)\right)$, (5)
$G_\theta(x_t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, \emptyset)\right)$, (6)
where $\bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s$ and $\epsilon_\theta$ is the denoising network of the DPM. CogDPM provides the inverse precision estimation with Eq. (2), and $\mathbb{E}_{G_\theta}[x_0 \mid x_t]$ can be computed as Eq. (7):
$\mathbb{E}_{G_\theta}[x_0 \mid x_t] = \frac{1}{\sqrt{\bar{\alpha}_t}}\left(x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, \emptyset)\right)$. (7)
For implementation, we push $G_\theta(x_t)$ into the estimation queue with a maximal queue length of $k$, and estimate the precision with Eq. (2). Thus, we can merge $G_\theta(x_t)$ and $P_\theta(x_t, c)$ with respect to the control of precision with Eq. (3). For numerical stability, we normalize the inverse precision field $U(x_t)$ and clip its values to a fixed range. The formulation of $f$ is the following:
$f(w) = \lambda \cdot \mathrm{clip}\left(\frac{w - \bar{w}}{\sigma(w)},\, 0,\, 1\right) + 1$, (8)
where $\bar{w}$ and $\sigma(w)$ are the mean and standard deviation of $w$, and $\lambda$ is a constant that controls the guidance strength. Finally, we merge $G_\theta(x_t)$ and $P_\theta(x_t, c)$ with the guidance weighted by the inverse precision as in Eq. (3). The pseudo code of the inference process of the CogDPM framework is shown in Algorithm 1.
Objective function. CogDPM follows the training scheme of diffusion probabilistic models (Ho et al., 2020), predicting the injected noise from the corrupted inputs. We denote the loss term as $\mathcal{L}(\theta)$. The denoising U-Net $\epsilon_\theta$ has parameters $\theta$ and takes the corrupted future observations $x_t$, the context $c$, and the scalar diffusion step $t$ as input. We adopt the L1 loss to minimize the error between the injected noise and the prediction of the denoising U-Net:
$\mathcal{L}(\theta) = \mathbb{E}_{t, x_0, \epsilon, c}\left[\left\lVert \epsilon - \epsilon_\theta\left(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\; c,\; t\right)\right\rVert_1\right]$ (9)
To jointly train the conditional and unconditional models, $c$ is replaced by $Z \sim \mathcal{N}(0, I)$ with 10% probability.
Algorithm 1: Inference Process of the CogDPM framework
Input: context input $c$, denoising model $\epsilon_\theta$, maximal queue length $L$
$x_T \sim \mathcal{N}(0, I)$; define the free estimation queue $Q_\text{free}$
for $t = T$ to $1$ do
  $\epsilon_c \sim \mathcal{N}(0, I)$
  $\epsilon_t^\text{cond} = \epsilon_\theta(x_t, c)$  {network output with condition $c$}
  $\epsilon_t^\text{free} = \epsilon_\theta(x_t, \epsilon_c)$  {network output without condition}
  $P_\theta(x_t, c) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_t^\text{cond}\right)$
  $G_\theta(x_t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_t^\text{free}\right)$
  $\hat{x}_{t\to 0} = \frac{1}{\sqrt{\bar{\alpha}_t}}\left(x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_t^\text{free}\right)$  {estimate $x_0$ from $x_t$}
  push $\hat{x}_{t\to 0}$ into $Q_\text{free}$; if the length of $Q_\text{free}$ exceeds $L$, drop the last term from $Q_\text{free}$
  get the inverse precision estimation $w = f(\mathrm{Var}(Q_\text{free}))$
  $x_{t-1} = G_\theta(x_t) + w \cdot \left(P_\theta(x_t, c) - G_\theta(x_t)\right)$  {prediction error minimization with precision weighting}
end for
Output: $x_0$
4. Experiments We demonstrate that, by incorporating the novel design inspired by the cognitive predictive process, CogDPM can deliver more skillful and improved results in tasks of scientific spatiotemporal field prediction. 4.1.
Synthesis Data Experiments In this section, we compare the predictive performance of CogDPM with other mainstream deep predictive networks and investigate the interpretability of Precision weighting within the CogDPM framework in the context of spatiotemporal prediction. We expect high correlation between the precision estimation and the predictability of CogDPM. The inverse precision estimator should allocate more attention to the region with higher prediction difficulty. Benchmarks. We conduct experiments on the MovingMNIST dataset (Wu et al., 2021), which simulates the motion of rigid bodies, and the Turbulence flow dataset, which models fluid dynamics. The Moving MNIST dataset is generated with the same method as (Wu et al., 2021). We create sequences with 20 frames, and each frame contains three handwriting digits. The motion of digits consists of transition, reflection, and rotation. Models predict the next 16 frames with 4 continuous context frames. The turbulent flow dataset is proposed by (Rui et al., 2020). We follow the same dataset parameters as Rui et al. and generate a sequence with 15 frames and 64 x 64 grids on each frame. Four frames are taken to predict the next 11 frames. We have selected a diverse array of deep spatiotemporal forecasting models as baselines for our study. These include the Transformer-based spatiotemporal forecasting model FourCastNet (Pathak et al., 2022) , RNN-type networks such as MotionRNN (Wu et al., 2021) and PredRNN-v2 (Wang et al., 2022), the physics-inspired predictive model PhyDNet (Guen & Thome, 2020), and a predictive DPM model that employs naive Classifier-free Guidance (Ho & Salimans, 2021) and utilizes the same network architecture as CogDPM. For the evaluation metrics, we have chosen the Neighborhood-based CRPS (Continuous Ranked Probability Score), CSI (Critical Success Index), and FSS (Fractional Skill Score), which are commonly used in scientific forecasting tasks. The CRPS metric emphasizes the ensemble forecasting capabilities of the model, with lower values indicating better predictive performance. On the other hand, the CSI and FSS metrics focus on assessing the accuracy of the model\u2019s predictions in peak regions, with higher values denoting stronger predictive capabilities. The implementation details of these metrics are provided in the appendix D, and we will continue to employ them in subsequent experiments on real-world datasets. Numerical Results Table 4 presents the numerical evaluation results for two datasets. Here, w denotes the window size employed in the Neighborhood-based assessment method, while avg and max represent the average and maximum values obtained from this method, respectively. 5 \fCogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding Methods / Metrics MovingMNIST Turbulence CRPS \u2193 CSI \u2191 (w5) FSS \u2191 (w5) CRPS \u2193 CSI \u2191 (w5) FSS \u2191 (w5) (w8, avg) (w8, max) (w8, avg) (w8, max) FourCastNet 0.0619 0.2288 0.1915 0.3261 0.0098 0.0119 0.3761 0.6558 MotionRNN 0.0377 0.1232 0.4859 0.6758 0.0037 0.0046 0.7235 0.9354 PhyDNet 0.0325 0.0983 0.6161 0.7969 0.0079 0.009 0.5456 0.8254 PredRNN-v2 0.027 0.0774 0.688 0.8471 0.0033 0.0042 0.7529 0.9507 DPM 0.0323 0.082 0.6959 0.822 0.0023 0.0096 0.6725 0.9668 CogDPM (ours) 0.027 0.0697 0.7365 0.8588 0.0023 0.0034 0.7962 0.9722 Table 1. 
Numerical Evaluation of Prediction Skills on MovingMNIST and Turbulence Datasets. Figure 2. Predictions and inverse precision of CogDPM on the rigid-body MovingMNIST dataset (left) and the Turbulence flow dataset (right); panel rows include CogDPM predictions, prediction/temporal residuals, inverse precision, and MC-sampling estimates. The CogDPM model demonstrates consistent improvements over the baseline models in terms of CRPS, which measures the average ensemble forecasting capability, and in terms of the CSI and FSS indicators, which assess the accuracy of the model's predictions in the peak regions. When compared to the DPM model based on naive classifier-free guidance, CogDPM also exhibits superior performance, underscoring the beneficial impact of the precision weighting mechanism on the model's predictive efficacy. Interpretability of precision weights. Figure 2 presents the outcomes of the CogDPM model. The first two rows show the ground-truth images alongside the corresponding predictions generated by CogDPM. The third row illustrates the prediction residuals, i.e., the discrepancies between the actual and predicted data depicted in the preceding rows. The fourth row overlays the inverse precision map, with the top 20% of values highlighted by a black contour line, on the residual map. The fifth row shows the precision map estimated by Monte Carlo sampling, which estimates prediction confidence from the variation among multiple independent predictions with different noise priors (Zhang, 2021). CogDPM provides reasonable predictions on both datasets. In the prediction of rigid-body motion, the estimated inverse precision effectively encompasses the prediction residuals, which are primarily located at the edges of objects. Object edges are harder to predict than blank areas or object interiors, so this outcome aligns with our expectations for the estimated precision map. Precision estimated with MC sampling behaves similarly but produces more false-positive regions in frames 12 and 14. In the prediction of fluid motion, regions with large temporal residuals exhibit higher accelerations, indicating increased predictive difficulty, and the estimated inverse precision covers the temporal residuals well, meeting our expectations. We observe that in both the fluid and rigid-body motion prediction tasks, the precision weights of CogDPM exhibit varying styles, yet consistently depict the model's confidence in the current case.
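To make the precision-weighted guidance step concrete, the following is a minimal NumPy sketch of the merge used in Algorithm 1: the inverse-precision weight w is obtained by standardizing and clipping the variance of the queued free-running x0 estimates (Eq. 8) and is then used to interpolate between the unconditional and conditional denoised states. This is an illustrative reimplementation rather than the authors' code; the array shapes, queue length, and guidance constant lam are assumptions, and the sample standard deviation stands in for sigma(w).

```python
import numpy as np

def inverse_precision_weight(free_queue, lam=1.0):
    """Eq. (8): turn the variance of queued free-running x0 estimates into a
    per-pixel guidance weight; the sample standard deviation stands in for sigma(w)."""
    w = np.var(np.stack(free_queue, axis=0), axis=0)  # variance across the queue
    spread = w.std()
    if spread == 0.0:                                 # flat field: no extra guidance anywhere
        return np.ones_like(w)
    z = (w - w.mean()) / spread                       # standardize the inverse-precision field
    return lam * np.clip(z, 0.0, 1.0) + 1.0           # clip to a fixed range, shift so the weight is >= 1

def precision_weighted_merge(p_cond, g_free, free_queue, lam=1.0):
    """Algorithm 1 merge step: x_{t-1} = G + w * (P - G), with w from the queue."""
    w = inverse_precision_weight(free_queue, lam)
    return g_free + w * (p_cond - g_free)

# toy usage with hypothetical 2-frame, 64x64 fields and a queue of 4 free estimates
rng = np.random.default_rng(0)
queue = [rng.normal(size=(2, 64, 64)) for _ in range(4)]
p_cond = rng.normal(size=(2, 64, 64))   # conditional denoised state P_theta(x_t, c)
g_free = rng.normal(size=(2, 64, 64))   # unconditional denoised state G_theta(x_t)
print(precision_weighted_merge(p_cond, g_free, queue, lam=0.5).shape)
```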
In comparison, the MC sampling method almost fails in this case due to the over-confidence of the prediction results: the differences among multiple predictions carry no significant signal beyond random noise. CogDPM is not affected, because its precision describes the continuously sharpening confidence of the model during the hierarchical inference. Figure 3. Experiments on high wind forecasting. a, Case study of the ERA5 wind forecast from 2017-03-04 18:00; high wind and tornadoes attacked the Midwest USA at 2017-03-06 18:00 (T=48h) (Twin Cities, 2017). CogDPM provides alarming forecasts covering the states with the most severe weather reports, Iowa and Missouri, and its precision indicates the credibility of the predictions, helping forecasters identify missing and false-positive regions. Panels compare the reanalysis, CogDPM predictions with inverse precision, and FourCastNet predictions with MC uncertainty at T = 0, 12, 24, 36, and 48 h, with wind speed binned from <4 m/s to >14 m/s. b, Numerical scores on the ERA5 wind dataset from 2017-01-01 to 2019-12-31: CSI at 12 m/s and 16 m/s thresholds, RMSE, and CRPS across four ensembles, as functions of the 6-hour prediction interval. 4.2. Surface Wind Forecasting Experiments Benchmarks. We first evaluate our model on the task of surface wind forecasting, using the ERA5 reanalysis dataset (Hersbach et al., 2023). Accurate wind field forecasting is crucial for various applications in the energy and weather domains. Ensemble forecasting is a key technique for providing forecasters with more useful information, since it supplies multiple predictions together with their confidence. We show that CogDPM not only provides better ensemble forecast results but also estimates prediction confidence with its precision weights. We choose real-world operational metrics for evaluation. In the meteorology domain, forecasters focus on evaluating the risk of high wind and deciding when to issue extreme weather warnings; for this purpose, we use the Critical Success Index (CSI) to measure the consistency between heavy-wind regions in forecasts and ground truths. In the energy domain, accurate wind field forecasting supports the prediction of wind power, which is essential for controlling fluctuations in clean energy generation (Marugán et al., 2018). Absolute wind speed is the dominant factor affecting the power production of a wind turbine (Porté-Agel et al., 2013); thus, we consider pixel-wise Root Mean
Square Error (RMSE) and the Radially Continuous Ranked Probability Score (CRPS) on wind speed for the evaluation of this scenario (Barbounis et al., 2006); Appendix D gives the detailed implementation of these metrics. Figure 4. Experiments on precipitation nowcasting: case study of an extreme precipitation event starting on 2019-07-24 at 03:15 (UK time), showing observations and CogDPM and DGMR predictions at T = 0, 30, 60, and 90 min (precipitation in mm/h). CogDPM successfully predicts the movement and intensity variation of the squall front, while DGMR produces results with early dissipation. Results. We use the ERA5 reanalysis surface wind data and crop patches centered on the US spanning 1979 to 2021. We evaluate predictions for the next 48 hours at 6-hour intervals using the observations from the past 24 hours. We compare the proposed method with FourCastNet (Pathak et al., 2022), a domain-specialized network for reanalysis field forecasting, and with predictive recurrent networks for deterministic video prediction. FourCastNet provides ensemble forecasts based on Gaussian disturbances of the initial states following (Evensen, 2003). Figure 3a shows a case starting from 2017-03-04 18:00. The results from FourCastNet fail to accurately forecast the growing high-wind region, and the high-wind region is underestimated in the 48-hour forecast. In contrast, the results from CogDPM not only locate the high-wind region more accurately but also provide intensity estimates much closer to the ground truth, supporting the need for 48-hour-ahead precautions. CogDPM is capable of providing alarming forecasts around 2017-03-06 18:00, when high wind and tornadoes attacked the Midwest USA1. Figure 5. Experiments on precipitation nowcasting. Numerical verification scores on the sampled United Kingdom precipitation dataset in 2019: CRPS computed with four ensembles for spatial pooling sizes of 1 km x 1 km (top left) and 2 km x 2 km (top right); economic value with a 20 mm/h accumulative rain threshold (bottom left); and radially averaged power spectral density of predictions at 90 minutes (bottom right). CogDPM surpasses the operational forecast model DGMR in ensemble forecasting precision and forecast skillfulness. We also visualize the inverse precision fields corresponding to the forecasts, since confidence estimation provides key information for decision-making. In the forecast for the first 24 hours, the uncertainty fields given by FourCastNet are relatively dispersed and not closely related to the evolution of the associated wind field. In the period out to 48 hours, FourCastNet produces unreasonable estimates for the windless area in the upper right corner. The inverse precision fields given by CogDPM correlate much more closely with the weather process. In the 48-hour forecast, CogDPM underestimated the forecast intensity in Wyoming and Colorado, but allocated lower precision to that region. Figure 3b shows that CogDPM outperforms baseline methods on CSI, particularly for heavier wind thresholds. For the measurement of RMSE, we take the mean across eight ensemble forecasts for all methods.
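For readers who want a concrete reference point for the verification metrics used in these experiments, below is a small sketch of a sample-based CRPS estimator for an ensemble and a pixel-wise CSI at a wind-speed threshold. The energy-form CRPS estimator and the toy fields are assumptions chosen for illustration; the paper's neighborhood and pooled variants (described in its Appendix D) aggregate such scores over spatial windows.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS for one scalar observation and an ensemble forecast.
    Energy form: E|X - y| - 0.5 * E|X - X'| over ensemble members X, X'."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
    return term1 - term2

def csi(forecast, observed, threshold):
    """Critical Success Index: hits / (hits + misses + false alarms) at a threshold."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.logical_and(f, o).sum()
    misses = np.logical_and(~f, o).sum()
    false_alarms = np.logical_and(f, ~o).sum()
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# toy example: four ensemble members at one pixel, plus a small synthetic wind field
print(crps_ensemble([11.0, 12.5, 13.0, 12.0], obs=12.3))
rng = np.random.default_rng(1)
fc, ob = 10 + 4 * rng.random((32, 32)), 10 + 4 * rng.random((32, 32))
print(csi(fc, ob, threshold=12.0))
```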
Although DPMs are not directly optimized by the Mean Squared Error (MSE) loss, the mean ensemble results are competitive with predictive models trained with MSE losses. The CogDPM exhibits a lower CRPS across all prediction times, indicating its ability 1Summary of March 06 2017 Severe Weather Outbreak Earliest Known Tornado in Minnesota\u2019s History, https://www. weather.gov/mpx/SevereWeather_06March2017 8 \fCogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding to effectively generate ensemble forecasts. Our results demonstrate that CogDPM is capable of making predictions under severe conditions, supported by the probabilistic forecast ability of the PEM process, while deterministic models avoid predicting severe cases to reduce mistake-making risk. 4.3. Precipitation Nowcasting Experiments Benchmarks. We evaluate our model on the precipitation nowcasting task using the United Kingdom precipitation dataset (Ravuri et al., 2021). Precipitation nowcasting aims to predict high-resolution precipitation fields up to two hours ahead, which provides socioeconomic value on weather-dependent decision-making (Ravuri et al., 2021). Precipitation data is extremely unbalanced on spatiotemporal scales, demanding nowcasting models to focus on vital parts of the field. Fig. 4a shows a case study selected by the chief meteorologist from MetOffice (Ravuri et al., 2021), which involves a squall line sweeping across the United Kingdom. We choose DGMR as a strong baseline on skillful nowcasting (Ravuri et al., 2021), which is datadriven method that forecast precipitation with a generative adversarial network. DGMR is also the operational method deployed by Met Office of the United Kingdom. Results. In Figure 4, our results accurately forecast both the trajectory and intensity fluctuations of the squall line, as depicted by the red precipitation line in the top right segment. CogDPM\u2019s forecasts consistently show the squall line progressing over 30 and 60 minutes, followed by dissipation at the 90-minute mark, mirroring actual events. Conversely, predictions from DMGR indicate a rapid dissipation of the squall line within 30 minutes, and significantly weaker outcomes are projected for the 60-minute mark. We posit that the suboptimal performance of the DGMR model is attributable to the simultaneous use of generative loss and pixel-wise alignment loss functions during its training phase, which leads to unstable training process and still keeps the drawback of dissipation of deterministic alignments. While the generative loss alone is capable of simulating realistic meteorological processes, it falls short in accurately predicting the extent of precipitation and is abandoned in DGMR. On the contrary, CogDPM does not require additional deterministic alignment during training but enhances precision with precision-weighted guidance during inference steps. We present additional case studies in Appendix F. We further explore the numerical evaluations in Fig 5 with metrics on different forecast properties focusing on the accuracy, reality and diversity. Radially Continuous ranked probability score (CRPS) measures the alignment between probabilistic forecast and the ground truth. We also report the spatially aggregated CRPS (Ravuri et al., 2021) to test prediction performance across different spatial scales. Details of these metrics can be found in Extended Data. The first row in Fig 4 shows CogDPM consistently outperforms baseline models for the whole time period. 
We adopt the decision-analytic model to evaluate the Economic value of ensemble predictions (Ravuri et al., 2021). Curves in Figure 5 with greater under-curve area provide better economic value, and CogDPM outperforms baseline models in this regard. Radially averaged power spectral density (PSD) evaluates the variations of spectral characteristics on different spatial scale. CogDPM achieves the minimal gap with ground truth characteristics. The superior performance metrics of CogDPM stem from its diffusion models\u2019 ability to emulate the hierarchical inference of predictive coding, resulting in smaller prediction errors compared to single-step forecasting models. Furthermore, the integration of precision weighting allows the model to dynamically assess the precision of inputs and adjust the intensity of conditional control accordingly. This targeted approach effectively reduces errors in areas that are challenging to predict, thereby enhancing the accuracy of the model in delineating boundaries and extreme regions. 5. Discussion CogDPM is related to classifier-free diffusion models (Ho & Salimans, 2021), which enhance the class guidance with a conditional DPM and an unconditional DPM. CogDPM framework builds the connection between classifier-free diffusion models and predictive coding. We also introduce the precision estimation method with the reverse diffusion process and use precision to control the guidance strength in spatiotemporal scales. We adopt the ablation study to show the enhancement in prediction skills of the CogDPM framework compared with the vanilla CFG method in appendix E. Active inference (Parr et al., 2019) is also a widely discussed theory of the predictive coding framework, which states that cognition system actively interact with the environment to minimize the prediction error. Active inference is omitted in this work. We take a computational predictive coding model with both active inference and precision weighting as the future work. 6. Conclusion We propose CogDPM, a novel spatiotemporal forecasting framework based on diffusion probabilistic models. CogDPM shares main properties with predictive coding and is adapted for field prediction tasks. The multi-step reverse diffusion process models the hierarchy of predictive error minimization. The precision of a latent expectation can be estimated from the variance of states in the neighboring levels. The CogDPM framework has demonstrated its ability to provide skillful spatiotemporal predictions in precipitation 9 \fCogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding nowcasting and wind forecasting. Case studies and numeric evaluations demonstrate that CogDPM provides competitive forecasting skills. Impact Statements This paper presents work whose goal is to advance the deep learning research for a PC-based spatiotemporal forecasting framework. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
18
+ }
title_10K/test_title_short_2405.02426v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02426v1",
3
+ "title": "Generalized Solution for Double-Porosity Flow through a Graded Excavation Damaged Zone",
4
+ "abstract": "Prediction of flow to boreholes or excavations in fractured low-permeability\nrocks is important for resource extraction and disposal or sequestration\nactivities. Analytical solutions for fluid pressure and flowrate, when\navailable, are powerful, insightful, and efficient tools enabling parameter\nestimation and uncertainty quantification. A flexible porous media flow\nsolution for arbitrary physical dimension is derived and extended to double\nporosity for converging radial flow when permeability and porosity decrease\nradially as a power law away from a borehole or opening. This distribution can\narise from damage accumulation due to stress relief associated with drilling or\nmining. The single-porosity graded conductivity solution was initially found\nfor heat conduction, the arbitrary dimension flow solution comes from\nhydrology, and the solution with both arbitrary dimension and graded\npermeability distribution appeared in reservoir engineering. These existing\nsolutions are here combined and extended to two implementations of the\ndouble-porosity conceptual model, for both a simpler thin-film mass transfer\nand more physically realistic diffusion between fracture and matrix. This work\npresents a new specified-flowrate solution with wellbore storage for the\nsimpler double-porosity model, and a new more physically realistic solution for\nany wellbore boundary condition. A new closed-form expression is derived for\nthe matrix diffusion solution (applicable to both homogeneous and graded\nproblems), improving on previous infinite series expressions.",
5
+ "authors": "Kristopher L. Kuhlman",
6
+ "published": "2024-05-03",
7
+ "updated": "2024-05-03",
8
+ "primary_cat": "physics.flu-dyn",
9
+ "cats": [
10
+ "physics.flu-dyn",
11
+ "physics.geo-ph",
12
+ "86A05"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "Generalized Solution for Double-Porosity Flow through a Graded Excavation Damaged Zone",
17
+ "main_content": "Introduction Fluid flow through damage-induced fracture networks in otherwise low-permeability crystalline rocks (e.g., granite, argillite or halite) is of interest to geothermal energy production (Tao et al, 2021), radioactive waste disposal (Tsang et al, 2005), hydrogen storage (AbuAisha and Billiotte, 2021), and compressed air energy storage (Kim et al, 2012). Rock damage around an excavation (i.e., the Excavation Damaged Zone, EDZ; Davies and Bernier (2005)) increases the connected porosity, and leads to increased permeability. Fractured rock often has higher porosity and permeability than intact rock. Damage near a borehole or excavation will decrease the relative contribution from flow in the lower-permeability farfield, and will confound the estimation of hydrologic properties using approaches that assume uniform homogeneous distributions of permeability and porosity. There is a need for a flexible analytical solution for flow to a borehole or excavation in the presence of damage, that includes wellbore storage, doubleporosity flow, and variable flow dimension. This is most evident in a mechanically weak, low-permeability medium like salt, but should also apply to other low-permeability fractured rocks like granite or shale. 1 arXiv:2405.02426v1 [physics.flu-dyn] 3 May 2024 \fIn salt, the far-field (i.e., undamaged) permeability is unmeasurably low (Beauheim and Roberts, 2002) due to salt\u2019s tendency to creep shut any unsupported openings. The permeability around a borehole in salt is derived from accumulated damage due to stress redistribution around the excavation itself (Wallace et al, 1990; Stormont et al, 1991; Cosenza, 1996; Hou, 2003; Kuhlman, 2014). Stormont et al (1991) presented brine and gas permeability data measured in salt for packer-isolated intervals of small boreholes before and after a central 1-meter diameter borehole was drilled (i.e., a mineby experiment). Figure 1 shows these data support the conceptual model of permeability and porosity decaying away from an excavation. Cosenza (1996) proposed the power-law model for permeability and porosity plotted in the figure. These data show porosity and permeability decrease with distance from the central excavation. Two lines are shown with to the data; one is a monomial power-law, the other includes an additive background term. The two curves differ primarily away from the excavation (r/rw \u22653), where larger uncertainties in estimated porosity and permeability exist, for three reasons. First, the access drift EDZ (test conducted in the floor of a 5-m wide room) is superimposed on the 1-m borehole EDZ. Second, the small-diameter (2.5-cm) measurement boreholes themselves each have a small EDZ overprinted on the 1-m borehole EDZ. Lastly, the apparent background permeability may represent the measurement limit of the packer system used (i.e., compliance of the packer inflation elements and working fluid). Especially in salt, the undisturbed background permeability is near zero, and is difficult to measure consistently in the field (Beauheim and Roberts, 2002). The power-law distribution of permeability matches the more certain near-field permeability distribution, and is conceptually more elegant than a finite domain or a flow domain with piece-wise heterogeneous properties (i.e., a higher-permeability EDZ adjacent to lowerpermeability intact rock). 
Other investigations have also shown porosity and permeability decaying away with distance from an excavation in crystalline rocks (Shen et al, 2011; Cho et al, 2013; Ghazvinian, 2015) and sedimentary rocks (Perras et al, 2010; Perras and Diederichs, 2016). Fig. 1 Permeability and porosity observations around a 1-m borehole (radial distance scaled by excavation radius) in salt from small-scale mine-by experiment (data from Stormont et al (1991)) Salt permeability has been related to both the confining and shear stresses (Reynolds and Gloyna, 1960; Lai, 1971; Stormont and Fuenkajorn, 1994; Alkan, 2009). Confining stresses reduce fracture aperture and bulk permeability, while shear stresses are associated with increased bulk permeability. Aydan et al (1993) present solutions for radial and tangential plane stress and strain (i.e., dilatation or a change in porosity) around a circular excavation. Strain is proportional to r\u22122 D or r\u22123 D (where rD is radial distance 2 \finto the formation scaled by the excavation size), depending on whether the region is experiencing elastic (exponent 2) or plastic (exponent \u22483) deformation. These relationships illustrate a possible behavior of rock in the EDZ. The true extent of the EDZ depends on drilling or excavation method, borehole or tunnel geometry, state of stress, and rock mechanical properties (Hudson et al, 2009). Softer or weaker sedimentary rocks like argillite or halite typically have a larger EDZ than stiffer or stronger rocks like granite. There are several well-known empirical power-law relationships between porosity and permeability in fractured or granular media (e.g., Kozeny, 1927; Carman, 1937) and many studies have discussed their applicability (David et al, 1994; Kuhlman and Matteo, 2018). Permeability in fractured rocks is more sensitive to small changes in porosity than granular rocks (i.e., fractured rocks have higher pore compressibility resulting in larger exponents in porosity-permeability relationships). Based on evidence from these observations, graded dimensionless porosity is assumed to follow n(r) = n0 \u0012 r rw \u0013\u2212\u03b7 , (1) where rw is the borehole or excavation radius [m], n0 = n(rw) is maximum porosity at the borehole wall, and \u03b7 is a dimensionless exponent (see Table 1 for a list of physical variables and notation). Using the same form, the graded permeability can be represented with the form k(r) = k0 \u0012 r rw \u0013\u2212\u03ba , (2) where k0 = k(rw) is the maximum permeability [m2] at the borehole wall and \u03ba is another dimensionless exponent. Based on lab measurements on fractured granite, the empirical relationship \u03ba \u22483\u03b7 has been proposed (Kranz et al, 1979; David et al, 1994). The Stormont et al (1991) salt data (Figure 1) support \u03b7 = 4.5 and \u03ba = 17, which shows a somewhat faster-decaying permeability (\u03ba = 3.8\u03b7) than seen in granitic rocks. The power-law permeability and porosity distribution conceptual model presented here is an alternative to flow models using wellbore skin (Streltsova, 1988; Pasandi et al, 2008), finite domain (Gelbard, 1992; Lin et al, 2016), or low-permeability non-Darcy flow with a threshold gradient (Liu, 2014, 2017). These three conceptualizations all lead to reduced contributions of flow from the far field, but only borehole skin can account for observed distributions of higher porosity or permeability near the excavation, which are important when analyzing pressure or flowrate data at early time. 
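As a small illustration of the graded-property model in Eqs. (1) and (2), the sketch below evaluates n(r) and k(r) with the exponents quoted above for the salt mine-by data (eta = 4.5, kappa = 17). The wall values n0 and k0 and the 0.5 m radius are placeholder numbers chosen only for the example, not values from the paper.

```python
import numpy as np

def graded_porosity(r, rw, n0, eta):
    """Eq. (1): n(r) = n0 * (r / rw)**(-eta)."""
    return n0 * (r / rw) ** (-eta)

def graded_permeability(r, rw, k0, kappa):
    """Eq. (2): k(r) = k0 * (r / rw)**(-kappa)."""
    return k0 * (r / rw) ** (-kappa)

rw = 0.5                                  # assumed radius [m] of the 1-m mine-by borehole
r = np.linspace(1.0, 4.0, 7) * rw         # radial distances, 1 to 4 borehole radii
n = graded_porosity(r, rw, n0=0.01, eta=4.5)         # placeholder wall porosity
k = graded_permeability(r, rw, k0=1e-18, kappa=17)   # placeholder wall permeability [m^2]
for ri, ni, ki in zip(r / rw, n, k):
    print(f"r/rw = {ri:4.1f}   n = {ni:.2e}   k = {ki:.2e} m^2")
```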
The contribution from lower permeability in the far field are more important at late time. Finite domains and skin can have analytical flow solutions, but low-permeability non-Darcy flow does not typically lend itself to analytical solutions. Barker (1988) developed a generalized solution for converging flow to a borehole with variable noninteger dimension, D. This conceptualization has been used to characterize flow in fractured systems, where lower-dimension (i.e., D < 3) results associated with discrete fractures are more common than higher dimension results (Beauheim et al, 2004; Le Borgne et al, 2004; Bowman et al, 2013; Ferroud et al, 2018). Doe (1991) extended the solution of Barker (1988) to the conceptualization where permeability varies with radial distance, through analogy with the heat conduction literature (Carslaw and Jaeger, 1959). A single-porosity flow solution is derived here with power-law variable properties, like the approach of Doe (1991) (who did not present a derivation). The single-porosity solution is then readily extended to a double-porosity conceptualization, using first the approach of Warren and Root (1963) for thin-film mass transfer between fractures and matrix, then the more physically realistic matrix diffusion approach of Kazemi (1969). Double-porosity flow is a common and efficient conceptualization in fractured rocks (Aguilera, 1980; van Golf-Racht, 1982; Da Prat, 1990). The medium is conceptualized as two communicating physically overlapping continua including fractures with high permeability (but little to no storage) and matrix or intact rock with significant storage (but little to no flow) (Barenblatt and Zheltov, 1960; Barenblatt et al, 1960). Many extensions to the basic double-porosity conceptual model exist, including multiple matrix or fracture porosities, and different assumptions about the geometry or underlying physics governing flow in the fractures or matrix (Chen, 1989; Kuhlman and Heath, 2021). The Warren and Root (1963) 3 \fsolution simplifies the exchange between matrix and fractures to a mass-transfer thin-film approximation, leading to numerous analytical solutions (Aguilera, 1980; Chen, 1989). It is commonly used for this reason, even though it is well-known that spatial pressure gradients in matrix blocks are important, as the matrix is low-permeability and would therefore be expected to experience steep, slow-changing gradients. A series representation of the Kazemi (1969) solution is used here, an extension of the multirate mass transfer model to double-porosity flow (Kuhlman et al, 2015). The more physically correct (but more difficult to solve) solution can be represented by an infinite series of porosities, which can either represent an infinite number of Warren-Root type matrix porosities, or if the coefficients are chosen specifically, a single Kazemi-type matrix diffusion porosity. More recently, Wang et al (2021) has developed a semi-analytical solution for flow in a double-porosity formation, for the case when non-Darcian flow is significant. Moutsopoulos et al (2022) have provided analytical and semi-analytical solutions for two classical problems in flow of unconfined double-porosity aquifers, based on Moutsopoulos (2021). De-Smedt (2022) presented an analytical solution for flow in double-porosity media for fractional flow dimensions, which is a generalization of De-Smedt (2011). 
Hayek et al (2018) presented a semi-analytical solution for flow due to pumping a double-porosity aquifer via a constant-pressure boundary condition (without wellbore storage) where permeability varied as a power law. The fractal reservoir flow problem (Chang and Yortsos, 1990) is also analogous to the radially variable properties approach presented here, but the governing equations of the two problems are only equivalent when the spectral exponent (\u03b8 in Chang and Yortsos (1990)) in the fractal problem is zero. The fractal reservoir governing equation is typically solved approximately, since the additional terms due to non-zero spectral exponent in the governing equation do not readily allow closed-form analytical solution. In the next section, the governing equations and boundary conditions are developed for the variabledimension single-porosity flow problem (Doe, 1991). This solution is mapped onto the modified Bessel equation, allowing solution for flow to both specified pressure (type-I) and specified flowrate with wellbore storage (type-III). These more general single-porosity solutions are shown to degenerate down to several well-known cases. The single-porosity solutions are then extended to a simpler Warren-Root type doubleporosity model for type-I (Hayek et al, 2018) and type-III (new) and then a new Kazemi type doubleporosity model. The Kazemi series solution approach is then summed analytically to arrive at a new closed-form expression for the response in Laplace space, a solution that is new for both graded and homogeneous domains. Finally, a summary and discussion of limitations is given for the new solutions. The approach taken here, representing the porosity and permeability of fractured rocks as power-law distributions, was first developed by Delay et al (2007), and first pursued by the author for applications in deep (> 3 km) borehole disposal of radioactive waste in basement rock (Brady et al, 2017; Kuhlman et al, 2019). The approach is also applicable to flow in salt surrounding excavations, like those in mine-by experiments (Stormont et al, 1991). 2 Development of Flow Problem To introduce and contrast with the dual-porosity solution, the single-porosity solution is developed first. To make a single solution for Cartesian linear, cylindrical, and spherical geometries, a variable-dimension approach like Barker (1988) is used, including variable permeability and porosity, like Doe (1991). The governing equation for slightly compressible time-dependent change in pressure p [Pa] in a general 1D coordinate (Barker, 1988) is n(r)c\u2202p \u2202t = 1 rm \u2202 \u2202r \u0014k(r)rm \u00b5 \u2202p \u2202r \u0015 , (3) where c is bulk compressibility [1/Pa] and the dimensionless parameter m is 0 for a Cartesian strip, 1 for a cylinder, and 2 for a sphere (i.e., m = D \u22121, where D is the dimension). The derivative of the bracketed term in (3) is expanded via chain rule; starting from (2), dk dr = \u2212\u03bak(r)/r is substituted with the definitions of k(r) and n(r), to get n0c \u0012 r rw \u0013\u2212\u03b7 \u2202p \u2202t = k0 \u00b5 \u0012 r rw \u0013\u2212\u03ba \u0014m \u2212\u03ba r \u2202p \u2202r + \u22022p \u2202r2 \u0015 . (4) For converging radial flow in a semi-infinite domain, the relevant wellbore boundary conditions are constant-pressure (type-I), constant-flux (type-II), or constant-flux with wellbore storage (type-III in 4 \fLaplace space). 
The initial, far-field, and source borehole boundary conditions for a borehole in an infinite symmetric domain are initial p(r, t = 0) = 0 far \u2212field p(r \u2192\u221e, t) < \u221e wellbore type \u2212I pI(r = rw, t) = p1(t); or (5) wellbore type \u2212II Amk0 \u00b5 \u2202pII(t) \u2202r \f \f \f \f r=rw = Q(t); or wellbore type \u2212III Amk0 \u00b5 \u2202pIII(t) \u2202r \f \f \f \f r=rw = Q(t) + Ac \u03c1g \u2202pw(t) \u2202t , respectively. See Appendix A for definition of source borehole boundary condition terms. These boundary conditions represent a homogeneous uniform initial condition, a requirement that the solution remains finite at large distance, and a specified pressure or pressure gradient at the source (r = rw). The Type-II boundary condition (specified flowrate) is a special case (\u03c3 = 0) of the wellbore storage boundary condition (flowrate linearly proportional to change in pressure), so it is not developed further. 2.1 Dimensional Analysis A solution is derived for equation (4), using the approach of Doe (1991), which was based on analogy with the heat conduction literature (Carslaw and Jaeger, 1959). Reducing the governing equation (4) to dimensionless form using characteristic time, Tc = n0cL2 c\u00b5/k0, and characteristic length, Lc = rw, leads to r\u03ba\u2212\u03b7 D \u2202pD \u2202tD = m \u2212\u03ba rD \u2202pD \u2202rD + \u22022pD \u2202r2 D , (6) where the dimensionless quantities rD = r/Lc, tD = t/Tc, and p{I,III} D = p/p{I,III} c are used (see Table 2 for a summary of dimensionless quantities). The characteristic pressure change is given by pI c = \u02c6 p1, where p1(t) = \u02c6 p1ft separates the timedependent specified pressure into a constant characteristic pressure and a dimensionless variable time behavior (for a constant specified pressure, ft = 1). The dimensionless type-I initial and boundary conditions are pD(rD, tD = 0) = 0 pD(rD \u2192\u221e, tD) < \u221e (7) pI D(rD = 1, tD) = ft. Using pIII c = rw \u02c6 Q\u00b5 Amk0 , where Q(t) = \u02c6 Qft similarly separates the time-dependent volumetric flowrate into a constant characteristic flowrate and a dimensionless time behavior. The dimensionless type-III source borehole boundary condition is \u2202pIII D \u2202rD \f \f \f \f rD=1 = ft + \u03c3 \u2202pIII D \u2202t , (8) where \u03c3 is a dimensionless wellbore storage coefficient (see Appendix A) and the same initial and far-field conditions apply as the type-I case. 2.2 Laplace Transform Taking the dimensionless Laplace transform \u0000 \u00af f(s) = R \u221e 0 e\u2212stDf(tD) dtD \u0001 of the governing partial differential equation (6) (without loss of generality assuming zero initial condition) leads to the ordinary differential equation d2\u00af pD dr2 D + m \u2212\u03ba rD d\u00af pD drD \u2212s\u00af pDr\u03ba\u2212\u03b7 D = 0, (9) 5 \fassuming \u03ba, \u03b7, and m are not functions of time, and s is the dimensionless Laplace transform parameter. The transformed type-I and far-field boundary conditions (7) are \u00af pD(rD \u2192\u221e) < \u221e (10) \u00af pI D(rD = 1) = \u00af ft, where \u00af ft represents the Laplace transform of the boundary condition\u2019s time behavior. For a unit step change at t = 0 (where ft = 1, a typical assumption), \u00af ft = 1 s. Other temporal behaviors are simply handled, including a step change at a non-zero time, an exponentially decaying source term, an arbitrary piecewise-constant or piecewise-linear behavior, or a sinusoidal source term (Kruseman and de Ridder, 1994; Mishra et al, 2013). 
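For reference, the time behaviors mentioned above enter the Laplace-space solutions only through the transformed factor f̄_t. A minimal sketch of a few such factors (unit step, delayed step, exponential decay) is given below; these are standard transform pairs, not code from the paper.

```python
from mpmath import exp

def fbar_step(s):
    """Unit step at t_D = 0 (f_t = 1): fbar_t = 1/s."""
    return 1 / s

def fbar_delayed_step(s, t0):
    """Unit step switched on at dimensionless time t0: fbar_t = exp(-s*t0)/s."""
    return exp(-s * t0) / s

def fbar_exp_decay(s, a):
    """Exponentially decaying source f_t = exp(-a*t_D): fbar_t = 1/(s + a)."""
    return 1 / (s + a)

# any of these can replace the constant-rate unit step in the Laplace-space
# solutions before numerical inversion
print(fbar_step(2.0), fbar_delayed_step(2.0, 0.5), fbar_exp_decay(2.0, 1.0))
```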
The transformed wellbore-storage boundary condition is d\u00af pIII D drD \f \f \f \f rD=1 = \u00af ft + \u03c3s\u00af pIII D , (11) which now more clearly resembles a Type-III boundary condition. 2.3 Numerical Inverse Laplace Transform The governing equations and associated boundary conditions are solved exactly in Laplace space, then numerically inverted back to the time domain using one of several viable approaches (Kuhlman, 2013). The equations were rapidly prototyped and inverted using the Python library mpmath (Johansson et al, 2017), which provides arbitrary precision special functions and numerical inverse Laplace transform algorithms. A Fortran program was also developed to facilitate plotting and parameter estimation, implementing the inversion algorithm of de Hoog et al (1982). Python and Fortran implementations of the solution are available at https://github.com/klkuhlm/graded. 3 Solution of Flow Problem 3.1 Mapping onto Modified Bessel Equation The governing ordinary differential equation (9) can be made equivalent to a form of the modified Bessel equation after a change of variables first used by Lommel (1868) for the standard Bessel equation. Appendix B illustrates an analogous change of variables to the modified Bessel equation. Comparing (9) to this scaled version of the modified Bessel equation (41), they are equivalent given the following correspondences \u03b1 =1 2 (\u03ba \u2212m + 1) \u03b3 =1 2 (\u03ba \u2212\u03b7 + 2) (12) \u03bd = s \u03b12 \u03b32 = \u03ba \u2212m + 1 \u03ba \u2212\u03b7 + 2 \u03b2 = r s \u03b32 = s 4s (\u03ba \u2212\u03b7 + 2)2 . The transformed modified Bessel equation has the general solution (37) y = z\u03b1 [AI\u03bd (\u03b2z\u03b3) + BK\u03bd (\u03b2z\u03b3)] , (\u03b3 \u0338= 0) , (13) where A and B are constants determined by the boundary conditions and I\u03bd(z) and K\u03bd(z) are the firstand second-kind modified Bessel functions of non-integer order and real argument (McLachlan, 1955; Bowman, 1958; Spanier and Oldham, 1987; DLMF, 2023). The finiteness boundary condition (10) requires A = 0 to keep the solution finite as rD \u2192\u221e, since the first-kind modified Bessel function grows exponentially with increasing real argument, leaving \u00af pD (rD) = r\u03b1 DBK\u03bd (\u03b2r\u03b3 D) , (14) 6 \fwhich is not defined for \u03b3 = 0 (i.e., \u03ba\u2212\u03b7 = \u22122, which is unrealistic because \u03ba is larger than \u03b7 for physical reasons), and B is determined by the Laplace-space source borehole boundary conditions. 3.2 Constant-Pressure (Type-I) at Borehole The borehole boundary condition (rD = 1) for specified change in pressure leads to the solution (the Warren and Root (1963) double porosity solution for this wellbore boundary condition is equivalent to Hayek et al (2018)) \u00af pI D(rD) = \u00af ftr\u03b1 D K\u03bd (\u03b2r\u03b3 D) K\u03bd (\u03b2) (15) and its radial gradient (i.e., proportional to flow of fluid into the borehole) d\u00af pI D drD = \u00af ftr\u03b1\u22121 D \u0014 (\u03b1 \u2212\u03b3\u03bd) K\u03bd (\u03b2r\u03b3 D) K\u03bd (\u03b2) + \u03b2\u03b3r\u03b3 D K\u03bd\u22121 (\u03b2r\u03b3 D) K\u03bd (\u03b2) \u0015 , (16) using a recurrence relationship for the derivative of the Bessel function in terms of Bessel functions of adjacent orders (DLMF, 2023, \u00a710.29.2). Restricting \u03ba \u2265\u03b7 (i.e., permeability decreases as fast or faster than porosity), then \u03b3 > 0 and \u03b1 = \u03b3\u03bd (for \u03b3 < 0, \u03b1 \u2212\u03b3\u03bd = 2\u03b1). 
This physically motivated restriction on parameters simplifies (16) to d\u00af pI D drD = \u221as \u00af ftr\u03b1+\u03b3\u22121 D K\u03bd\u22121 (\u03b2r\u03b3 D) K\u03bd (\u03b2) , (17) since \u03b2\u03b3 = \u221as for \u03b3 > 0. When evaluated in the source borehole (rD = 1), the solution simplifies further. Figure 2 shows plots of the predicted pressure gradient at rD = 1 due to a constant-pressure condition there (top row) and the predicted decrease in pressure radially away from the boundary (values of \u03b7, \u03ba, and m for each simulation are listed in the caption and title of each figure). Both rows of plots show the variability with the porosity exponent (\u03b7, given by the line color) and the permeability exponent (\u03ba = \u03b7\u03c4, given by the line type). The same results are shown for Cartesian linear (m = 0), cylindrical (m = 1), and spherical (m = 2) geometries in three columns. For a given set of parameters, a higher-dimensional domain (larger m) leads to a slower drop in produced fluids at any time. The highest sustained flowrate for all dimensions is achieved with constant properties in space (i.e., the red curve \u03b7 = \u03ba = 0). More negative exponents in the porosity and permeability power-laws lead to more rapid decrease in flowrate, as the contribution to flow from large radius vanishes when the exponent increases in magnitude. These types of responses might be mis-interpreted as being associated with lower permeability (which would also lead to a faster decrease in flowrate) using a model with constant properties and a fixed dimension. In the source well (top row of subplots), the effect of \u03ba is different and are predicted to reverse between dimensions. For \u03b7 = 3 (black lines), the \u03ba = {3, 6, 9} cases are swapped between m = 1 and m = 2. For \u03b7 = 2 (blue lines), the \u03ba cases are swapped between m = 0 and m = 1. The bottom row of figures shows the predicted pressure with distance at tD = 10. At locations away from the source well (rD > 1), changes in the porosity exponent, \u03b7, have relatively less impact than changes in the permeability exponent, \u03ba (different colored solid lines are close together, while colored lines of different line type are widely separated). The dimensionality (m) has a smaller effect at locations away from the source borehole than it had on the gradient predicted at the source borehole. 3.3 Constant-Flowrate with Wellbore Storage (Type-III) The wellbore-storage boundary condition for the specified flowrate solution at rD = 1 results in the general solution (that is new for any double-porosity solution with power-law variation in material properties) \u00af pIII D (rD) = \u00af ftr\u03b1 D K\u03bd (\u03b2r\u03b3 D) (\u03b1 \u2212\u03b3\u03bd + \u03c3s) K\u03bd (\u03b2) + \u03b2\u03b3K\u03bd\u22121 (\u03b2), (18) which can be simplified using \u03b1 = \u03b3\u03bd and \u03b2\u03b3 = \u221as to \u00af pIII D (rD) = \u00af ftr\u03b1 D K\u03bd (\u03b2r\u03b3 D) \u221asK\u03bd\u22121 (\u03b2) + \u03c3sK\u03bd (\u03b2). (19) 7 \fFig. 2 Type-I flowrate (top row at rD = 1) and pressure (bottom row at rD > 1 and tD = 10) solution at borehole for m = 0, 1, 2 (Cartesian, cylindrical, and spherical) and at different radial distances. Line color indicates \u03b7; line type indicates \u03ba/\u03b7. Line segments in top row illustrate slopes of 1/2, 1, and 3/2. 
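The Laplace-space expressions above are intended to be evaluated with a numerical inverse Laplace transform (the paper mentions mpmath for prototyping and a de Hoog-based Fortran code). The sketch below shows one way that evaluation could look for the type-I wellbore flowrate, Eq. (17) at rD = 1, using the parameter map of Eq. (12) and a unit-step source (f̄_t = 1/s). It is an independent illustration, not the released code at https://github.com/klkuhlm/graded, and the exponents kappa = 6, eta = 3, m = 1 are arbitrary choices.

```python
from mpmath import mp, sqrt, besselk, invertlaplace

mp.dps = 30  # working precision for the Bessel functions and the inversion

def type1_wellbore_flowrate(t, kappa=6.0, eta=3.0, m=1.0):
    """Dimensionless type-I flowrate at r_D = 1 (Eq. 17) inverted from Laplace space."""
    alpha = 0.5 * (kappa - m + 1.0)          # Eq. (12)
    gamma = 0.5 * (kappa - eta + 2.0)
    nu = alpha / gamma
    def fbar(s):
        beta = sqrt(4.0 * s / (kappa - eta + 2.0) ** 2)
        # unit-step source time behavior: fbar_t = 1/s
        return (1.0 / s) * sqrt(s) * besselk(nu - 1, beta) / besselk(nu, beta)
    return invertlaplace(fbar, t, method='dehoog')

for tD in (0.01, 0.1, 1.0, 10.0):
    print(tD, type1_wellbore_flowrate(tD))
```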
Analogous to the results for the Type-I solution but only showing the m = 1 and m = 2 cases, Figure 3 shows the predicted pressure through time at the boundary for a specified flowrate at the boundary. Figure 3 results are for no wellbore storage (\u03c3 = 0), while Figure 4 shows the same results with nonzero wellbore storage (all model parameters listed in caption or title of each figure). Wellbore storage is important at early time, leading to a smaller predicted change in pressure, with the predicted response giving a characteristic 1 : 1 slope on log-log plots before formation storage contributes significantly to the flow (i.e., pumping in a bathtub). Wellbore storage makes more of a difference (i.e., shows a larger deviation from \u03c3 = 0 case) for larger \u03b7 (and \u03ba, since \u03ba = 2\u03b7). 3.4 Parameter Combinations Yielding Simpler Solutions When \u03b7 = \u03ba = 0, permeability and porosity are constant in space; in this case (9) simplifies to d2\u00af pD dr2 D + m rD d\u00af pD drD \u2212s\u00af pD = 0, (20) 8 \fFig. 3 Type-II solution (Type-III with \u03c3 = 0) at borehole for m = 1, 2 (cylindrical and spherical). Line color indicates \u03b7; line type indicates \u03ba/\u03b7. which is the dimensionless form of the equation solved by Barker (1988). In this case \u03b3 = 1, \u03b1 = (1\u2212m)/2, \u03bd = \u03b1, and \u03b2 = \u221as. The solution in Laplace-space under these conditions becomes \u00af pD (rD) = r\u03bd DBK\u03bd \u0000\u221asrD \u0001 , (21) which was found by Barker (1988, Eqn. 15). When \u03b7 = \u03ba = m = 0 the time-domain solution simplifies to pD(t) = 1/ \u221a \u03c0t, because \u03bd = 1/2 and \u03bd \u22121 = \u22121/2, the numerator and denominator of (17) are equal since K\u03bd(z) \u2261K\u2212\u03bd(z). Another simplification occurs when m = \u03ba = \u03b7, not necessarily zero. In this case, the permeability and porosity decrease at the same rate radially that the surface area of the domain grows in size (A0 \u221d1, A1 \u221drD, A2 \u221dr2 D), resulting in an equivalent Cartesian coordinate system, d2\u00af pD dr2 D \u2212s\u00af pD = 0, (22) which has a solution in terms of sin(\u221asrD) and cos(\u221asrD) or exp(\u00b1\u221asrD) and typically has an explicit inverse Laplace transform. In this case \u03b1 = \u03bd = 1/2, \u03b3 = 0, and \u03b2 = \u221as. When \u03bd = n \u00b1 1 2 (for n integer), the modified Bessel functions become modified spherical Bessel functions (DLMF, 2023, \u00a710.47), and when \u03bd = \u00b1 1 3, they become Airy functions (DLMF, 2023, \u00a79.6). These additional special cases are not handled differently here (i.e., the more general solution in terms of modified Bessel functions is still valid), since in the case given here \u03bd varies with \u03ba, \u03b7, and m (12). 4 Extension of Solution to Double Porosity 4.1 Mass-Transfer Coefficient Approximation Beginning with the Warren and Root (1963) formulation for double-porosity (i.e., high-conductance fractures and high-capacity matrix), the power-law permeability and porosity distributions are incorporated. 9 \fFig. 4 Type-III solution at borehole (rD = 1), for m = 1, 2 (cylindrical and spherical). Line color indicates \u03b7; line type indicates \u03c3. All curves for \u03ba/\u03b7 = 2. 
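A quick consistency check on the special case noted above (eta = kappa = m = 0, where the type-I wellbore flowrate reduces to pD(t) = 1/sqrt(pi*t)) is sketched below: it numerically inverts Eq. (17) at rD = 1 and compares with the closed form. The de Hoog inversion and the mpmath usage are implementation choices, not prescribed by the paper.

```python
from mpmath import mp, sqrt, pi, besselk, invertlaplace

mp.dps = 30

def fbar(s):
    # Eq. (17) at r_D = 1 with kappa = eta = m = 0: nu = 1/2, beta = sqrt(s),
    # unit-step source, so fbar = (1/s) * sqrt(s) * K_{-1/2}(sqrt(s)) / K_{1/2}(sqrt(s))
    beta = sqrt(s)
    return sqrt(s) / s * besselk(-0.5, beta) / besselk(0.5, beta)

for tD in (0.1, 1.0, 10.0):
    numeric = invertlaplace(fbar, tD, method='dehoog')
    exact = 1 / sqrt(pi * tD)
    print(f"tD={tD}: inverted={numeric}, closed form={exact}")
```

The two columns should agree to several digits, which is a useful smoke test before trusting the same inversion machinery for the graded and double-porosity cases.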
The equations for double-porosity flow in the fractures and matrix are 1 rm \u2202 \u2202r \u0014kf \u00b5 \u2202pf \u2202r \u0015 = nrcr \u2202pr \u2202t + nfcf \u2202pf \u2202t \u02c6 \u03b1kr \u00b5 (pf \u2212pr) = nrcr \u2202pr \u2202t (23) where \u02c6 \u03b1 is the shape factor [1/m2] of Warren and Root (1963), subscript f indicates fracture, and subscript r indicates matrix (rock). The matrix equation does not involve a spatial gradient of pressure, nor a matching of pressure and flux at the boundary, but simply a difference between the fracture and matrix pressure (i.e., the mass transfer coefficient approximation often used for heat transfer across thin films). This behavior is sometimes referred to in the petroleum engineering literature as \u201csteady-state\u201d flow between the fracture and matrix (Da Prat, 1990), but it also represents one-dimensional diffusion in the matrix with a thin-film mass-transfer approximation between the fracture and matrix reservoirs, analogous to Newton\u2019s law of cooling. Substituting the permeability ki = ki0 \u0010 r rw \u0011\u2212\u03bai and porosity ni = ni0 \u0010 r rw \u0011\u2212\u03b7i (i \u2208{f, r}), then converting to dimensionless form using an analogous approach to Warren and Root (1963), where \u03c9 = nf0cf/ (nr0cr + nf0cf) is the dimensionless fracture storage coefficient and \u03bb = \u02c6 \u03b1krr2 w/kf is the dimensionless interporosity exchange coefficient. Finally, taking the Laplace transform of both equations results in the pair of ordinary differential equations \u0014d2\u00af pfD dr2 D + m \u2212\u03baf rD d\u00af pfD drD \u0015 r\u2212\u03baf = (1 \u2212\u03c9)r\u2212\u03b7r D \u00af pmDs + \u03c9r\u2212\u03b7f D \u00af pfDs \u03bb (\u00af pfD \u2212\u00af prD) r\u2212\u03bar D = (1 \u2212\u03c9)r\u2212\u03b7r D \u00af prDs. (24) Solving for matrix pressure in the matrix equation, \u00af prD = \u00af pfD\u03bbr\u2212\u03bar D / \u0002 (1 \u2212\u03c9)sr\u2212\u03b7r D + \u03bbr\u2212\u03bar D \u0003 , and substituting this into the fracture equation leads to a single equation solely in terms of dimensionless 10 \fFig. 5 Type-I flowrate solution at borehole (left) and Type-II solution for pressure (\u03c3 = 0, right), for m = 1 (cylindrical). Line color indicates \u03bb; line type indicates \u03c9. Laplace-domain fracture pressure \u0014d2\u00af pfD dr2 D + m \u2212\u03baf rD d\u00af pfD drD \u0015 r\u2212\u03baf = r\u2212\u03b7r D \u00af pfD ( (1 \u2212\u03c9)sr\u2212\u03bar D \u03bb (1 \u2212\u03c9)sr\u2212\u03b7r D + \u03bbr\u2212\u03bar D ) + \u03c9r\u2212\u03b7f D \u00af pfDs. (25) To force the term in curly brackets in (25) to be independent of rD, \u03bar = \u03b7r is assumed. Setting \u03bar and \u03b7r equal to \u03b7f allows rD and \u00af pfD to be similar form to previous solutions. Simplifying the subsequent notation \u03baf \u2192\u03ba, \u03b7r \u2192\u03b7, and \u00af pfD \u2192\u00af pD results in d2\u00af pD dr2 D + m \u2212\u03ba rD d\u00af pD drD = r\u03ba\u2212\u03b7 D \u00af pD \u0014 (1 \u2212\u03c9)s\u03bb (1 \u2212\u03c9)s + \u03bb + \u03c9s \u0015 , (26) which is the same form as (9). This solution corresponds to the same scaled Bessel equation, with only the definition of \u03b2 changing to \u03b2W R = s\u0014 \u03bb \u03bb/(1 \u2212\u03c9) + s + \u03c9 \u0015 s \u03b32 . (27) Any more general spatial behavior of matrix properties (e.g., \u03b7r \u0338= \u03bar) would not be solvable with the same approach. 
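Relative to the single-porosity case, the Warren-Root double-porosity variant only redefines beta, so a sketch of Eq. (27) dropped into the same wellbore-flowrate expression (Eq. (16) with alpha = gamma*nu, evaluated at rD = 1) is short. The lambda, omega, and exponent values below are placeholders, and this is an illustrative evaluation rather than the author's released implementation.

```python
from mpmath import mp, sqrt, besselk, invertlaplace

mp.dps = 30

def beta_warren_root(s, lam, omega, gamma):
    """Eq. (27): double-porosity (thin-film mass-transfer) replacement for beta."""
    return sqrt((lam / (lam / (1 - omega) + s) + omega) * s / gamma ** 2)

def type1_flowrate_wr(t, kappa=6.0, eta=3.0, m=1.0, lam=1e-5, omega=1e-4):
    """Type-I wellbore flowrate (Eq. 16 with alpha = gamma*nu at r_D = 1),
    using the Warren-Root beta and a unit-step source."""
    alpha = 0.5 * (kappa - m + 1.0)      # Eq. (12)
    gamma = 0.5 * (kappa - eta + 2.0)
    nu = alpha / gamma
    def fbar(s):
        beta = beta_warren_root(s, lam, omega, gamma)
        return (1 / s) * gamma * beta * besselk(nu - 1, beta) / besselk(nu, beta)
    return invertlaplace(fbar, t, method='dehoog')

for tD in (0.1, 1.0, 10.0, 100.0):
    print(tD, type1_flowrate_wr(tD))
```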
This limitation still makes physical sense, as the the most important terms to vary with space are the fracture permeability and the matrix storage. Setting \u03ba = \u03b7 = 0 and m = 1 results in the Warren and Root (1963) solution. Figure 5 shows typical solution behaviors for the cylindrical (m = 1) case for Type-I and Type-II wellbore boundary conditions, for \u03b7 = 3 and \u03ba = 6. Figure 6 shows behavior from the \u201cmiddle\u201d curve in Figure 5 (\u03bb = 10\u22125 and \u03c9 = 10\u22124), for a range of porosity and permeability exponents similar to those shown in Warren and Root (1963), listed in the figure caption. 11 \fFig. 6 Type-I flowrate solution at borehole (left) and Type-II solution for pressure (\u03c3 = 0, right), for m = 1 (cylindrical). All curves are for \u03bb = 10\u22125 and \u03c9 = 10\u22124 (middle curves shown in Figure 5). Line color indicates \u03b7; line type indicates \u03ba/\u03b7. 4.2 Matrix Diffusion The matrix diffusion problem of Kazemi (1969) is more physically realistic (Aguilera, 1980; Da Prat, 1990), but it is typically solved numerically or via late-time approximations (De Swaan, 1976), rather than analytically like Warren and Root (1963). The series approach of Kuhlman et al (2015) is used here to represent matrix diffusion in a single matrix continuum through the sum of an infinite series of Warren-Root matrix continua, and the infinite sum is then analytically summed. The generalization of (23) to multiple matrix continua starts with 1 rm \u2202 \u2202r \u0014kf \u00b5 \u2202pf \u2202r \u0015 = N X j=1 njcj \u2202pj \u2202t + nfcf \u2202pf \u2202t \u02c6 \u03b1jkj \u00b5 (pf \u2212pj) = njcj \u2202pj \u2202t j = 1, 2, . . . N, (28) where N is the number of matrix continua (one additional equation for each continuum). Similarly taking the Laplace transform of this set of equations, solving for \u00af pf, substituting the matrix equations into the fracture equation, and simplifying the notation leads to d2\u00af pD dr2 D + m \u2212\u03ba rD d\u00af pD drD = r\u03ba\u2212\u03b7 D \u00af pD\u03c9s(1 + \u00af g), (29) where \u00af g = N X j=1 \u02c6 \u03bejuj s + uj (30) is a matrix memory kernel (Haggerty and Gorelick, 1995), \u02c6 \u03bej is related to the storage properties of each matrix continuum (analogous to \u03c9 of Warren and Root (1963)), and uj is related to the interporosity flow coefficient of each matrix continuum (analogous to \u03bb of Warren and Root (1963)). The Laplacespace memory kernel approach is flexible, and is used elsewhere in hydrology and reservoir engineering (Herrera and Yates, 1977; Haggerty et al, 2000; Schumer et al, 2003). Equation (29) can be simplified to 12 \fWarren and Root (1963) with a particular choice of \u00af g and N = 1, and to the solution for a triple-porosity reservoir (Clossman, 1975) with a different choice of \u00af g and N = 2 (Kuhlman et al, 2015). When N \u2192\u221ein (30), the it is more convenient to specify the mean and variance of the parameter distributions than the individual parameters associated with each porosity. Several different distributions are possible (Haggerty and Gorelick, 1995). In the form presented by Kuhlman et al (2015), the parameters are specified as the infinite series uj = (2j \u22121)2\u03c02\u03bb 4(1 \u2212\u03c9) \u02c6 \u03bej = 8(1 \u2212\u03c9) (2j \u22121)2\u03c9\u03c02 j = 1, 2, . . . N \u2192\u221e (31) which leads to the Kazemi (1969) solution for matrix diffusion. 
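The multirate series of Eqs. (30)-(31) is straightforward to tabulate, which is also a useful sanity check before adopting any closed-form replacement for the sum. The sketch below builds a truncated memory kernel ḡ(s); the truncation levels and the lambda, omega values are placeholders.

```python
from mpmath import mp, mpf, pi

mp.dps = 30

def memory_kernel(s, lam, omega, N=200):
    """Truncated Eq. (30) with the Kazemi-type coefficients of Eq. (31):
    g_bar(s) = sum_j xi_j * u_j / (s + u_j)."""
    total = mpf(0)
    for j in range(1, N + 1):
        wj = (2 * j - 1) ** 2 * pi ** 2
        u_j = wj * lam / (4 * (1 - omega))       # Eq. (31), interporosity term
        xi_j = 8 * (1 - omega) / (wj * omega)    # Eq. (31), storage term
        total += xi_j * u_j / (s + u_j)
    return total

lam, omega = mpf('1e-5'), mpf('1e-4')
for s in (mpf('1e-3'), mpf('1e-1'), mpf('10')):
    for N in (1, 10, 100, 200):
        print(f"s={s}, N={N}: g_bar={memory_kernel(s, lam, omega, N)}")
```

In the governing equation (29), the combination omega*s*(1 + ḡ) then plays the role that s plays in the single-porosity solution.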
The parameters \u03bb and \u03c9 have the same definitions as in Warren and Root (1963). Setting \u03ba = \u03b7 = 0 results in the solution of Kuhlman et al (2015). The new governing equation is the same form and the modified Bessel function solution, only requiring re-definition of \u03b2 as \u03b2KZ = v u u t \" N X j=1 \u03c9\u02c6 \u03bejuj uj + s + \u03c9 # s \u03b32 , N \u2192\u221e. (32) Substituting the definitions of u and \u02c6 \u03be from (31) and simplifying leads to \u03b2KZ = v u u t \" N X j=1 2\u03bb W 2 j \u03bb/(1 \u2212\u03c9) + s + \u03c9 # s \u03b32 , N \u2192\u221e, (33) where Wj = \u03c0(2j \u22121)/2. This is similar in form to (27) but the term in the denominator grows as the index increases, illustrating how the series solution approximates the Kazemi (1969) solution through an infinite series of modified Warren and Root (1963) matrix porosities. Further simplifying the approach of Kuhlman et al (2015), the infinite series in (33) can be evaluated in closed form using residue methods (Wolfram Research, Inc., 2021), resulting in \u03b2KZ = v u u t \"r \u03bb(1 \u2212\u03c9) s tanh r s(1 \u2212\u03c9) \u03bb ! + \u03c9 # s \u03b32 , (34) where tanh(\u00b7) is the hyperbolic tangent. This closed-form expression derived here is more accurate and numerically more efficient than truncating or accelerating the infinite series in (32), which is an improvement over the series presented in Kuhlman et al (2015) for graded or homogeneous domains. Figure 7 illustrates the transition from the Warren and Root (1963) (N = 1) to the Kazemi (1969) series approximation for increasing terms (N = {2, 10, 100, 1000}, heavy colored solid lines) and the expression for the infinite sum (34) (heavy black dashed line) for flow to a specified flux (type-II, \u03c3 = 0) cylindrical (m = 1) borehole of constant material properties (\u03ba = \u03b7 = 0). The bounding Theis (1935) behavior is shown for the fracture and matrix compressibilities (thin red dashed lines). 5 Applications and Limitations A general converging radial flow solution for specified flowrate or specified wellhead pressure was derived for domains with power-law variability in porosity and permeability due to damage. The single-porosity version has already been presented by Doe (1991), and a solution for constant-pressure condition without wellbore storage was derived by Hayek et al (2018), but the specified-flowrate double-porosity solution with wellbore storage presented here is new. The infinite series approximation to Kazemi was summed analytically, resulting in a new closed-form expression of the series presented in Kuhlman et al (2015), which is an improvement for both graded and homogeneous properties. The newly developed analytical solutions are more general (i.e., several existing solutions are special cases of the new solution) and include more behaviors typical in well-test solutions (i.e., wellbore storage, positive skin, double porosity), 13 \fFig. 7 Type-II solution for pressure at source borehole (\u03c3 = 0), for m = 1 (cylindrical) for different number of terms. All curves are for \u03bb = 10\u22125, \u03c9 = 10\u22124, \u03ba = \u03b7 = 0. while still being straightforward and parsimonious (i.e., as few free parameters as possible) in their implementation. The basic flow solution assumes linear single-phase flow of a fluid in a slightly compressible formation. 
The double-porosity solution assumes the fractures are high permeability, with low storage capacity, while the matrix (i.e., intact rock between fractures) is high storage capacity with low permeability. These assumptions are representative for analytical solutions to subsurface porous media flow problems in the hydrology and petroleum engineering literature, and are shared by the solutions of Barker (1988), Doe (1991), Warren and Root (1963), Kazemi (1969), and Kuhlman et al (2015). To apply this analytical solution to observed data, either observed data would be transformed into dimensionless space, or the analytical solution could be transformed to dimensional space, then a parameter estimation routine would be used to minimize the model-data misfit, and possibly explore the uncertainty or uniqueness of the solution. The solution method developed to solve these solutions uses numerical inverse Laplace transforms and runs quickly enough to be used in parameter estimation (e.g., Monte Carlo methods that require hundreds of thousands of evaluations). The analytical solution might be of most use with parameter estimation to fit observations, but the non-uniqueness of the curves may make estimation of unique physical parameters difficult, without further physical or site-specific constraints. Realistically, the parameters in the Bessel equation may be estimable (i.e., \u03b1, \u03b2, \u03b3, and \u03bd defined in (12)), but without defining the flow dimension (m) or the relationship between the porosity and permeability exponents (\u03c4 = \u03ba/\u03b7), it may be difficult to identify all the parameters from data alone, since many the curves have similar shapes, unlike classical Type curves (Bourdet et al, 1989). 14 \fAc borehole cross-sectional area m2 Am borehole cylindrical surface area m2 c bulk compressibility 1/Pa ft time variability \u2212 g gravitational acceleration m/s2 h hydraulic head m k permeability m2 Lc characteristic length (rw) m m dimension (D \u22121) \u2212 n porosity \u2212 p change in pressure Pa s Laplace transform parameter \u2212 Q volumetric flowrate m3/s r distance coordinate m rw borehole or excavation radius m \u02c6 \u03b1 Warren and Root (1963) shape factor 1/m2 \u03b7 porosity power-law exponent \u2212 \u03ba permeability power-law exponent \u2212 \u03c1 fluid density kg/m3 \u00b5 fluid viscosity Pa \u00b7 s Table 1 Physical Properties and Parameters pD scaled pressure p/pc tD scaled time tk0/n0cL2 c\u00b5 rD scaled distance r/Lc \u03bb interporosity exchange coefficient \u02c6 \u03b1krr2 w/kf \u03c3 wellbore storage coefficient Ac/(rwn0c\u03c1gAm) \u03c9 fracture storage coefficient nf0cf/(nr0cr + nf0cf) Table 2 Dimensionless Quantities Statements and Declarations Funding The author thanks the U.S. Department of Energy Office of Nuclear Energy\u2019s Spent Fuel and Waste Science and Technology program for funding. Conflicts of Interest The author has no competing interests to declare. Availability of Data and Material No data or materials were used by the author in the preparation of the manuscript. Code Availability The source code of Fortran and Python implementations of the program are available from the author upon request. Acknowledgments This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. 
This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish 15 \for reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan https://www.energy.gov/downloads/doe-public-access-plan. The author thanks Tara LaForce from Sandia for technically reviewing the manuscript. 6 Appendix A: Wellbore Storage Boundary Condition The wellbore-storage boundary condition accounts for the storage in the finite borehole arising from the mass balance Qin \u2212Qout = Ac \u2202hw \u2202t . Qin [m3/s] is volumetric flow into the borehole from the formation, Qout is possibly time-variable flow out of the well through the pump (Q(t) [m3/s]), and \u2202hw \u2202t is the change in hydraulic head [m] (hw = pw \u03c1g + z) of water standing in the borehole through time, pw is change in pressure [Pa] of water in the borehole, \u03c1 is fluid density [kg/m3], z is an elevation datum [m], and g is gravitational acceleration [m/s2]. Ac is the cross-sectional surface area of the pipe, sphere or box providing storage (it may be a constant or a function of elevation); for a typical pipe, it becomes Ac = \u03c0r2 c, where rc is the radius of the casing where the water level is changing. The mass balance is then Amk0 \u00b5 \u2202p \u2202r \f \f \f \f r=rw \u2212Q(t) = Ac \u03c1g \u2202pw \u2202t , (35) where Am is the area of the borehole communicating with the formation. For the integer m considered here these are A0 = b2, A1 = 2\u03c0rwb, A2 = 4\u03c0r2 w (b is a length independent of the borehole radius). Assuming the change in water level in the borehole (hw = pw/ (\u03c1g)) is equal to the change in formation water level (h = p/ (\u03c1g)), this can be converted into dimensionless form as \u2202pD \u2202rD \f \f \f \f rD=1 \u2212ft = \u03c3 \u2202pD \u2202t , (36) where \u03c3 = Ac/ (rwn0c\u03c1gAm) is a dimensionless ratio of formation to wellbore storage; \u03c3 \u21920 is an infinitesimally small well with only formation response, while \u03c3 \u2192\u221eis a well with no formation response (i.e., a bathtub). 7 Appendix B: Transformation of Modified Bessel Equation Following the approach of Bowman (1958), alternative forms of the Bessel equation are found, this approach is a simplification of the original approach of Lommel (1868). An analogous approach is applied here to \u201cback into\u201d the desired modified Bessel equation. The equation satisfied by the pair of functions y1 = x\u03b1I\u03bd (\u03b2x\u03b3) , y2 = x\u03b1K\u03bd (\u03b2x\u03b3) (37) is sought, where \u03b1, \u03b2, \u03b3, and \u03bd are constants. Using the substitutions \u03b6 = yx\u2212\u03b1 and \u03be = \u03b2x\u03b3 gives \u03b61 = I\u03bd (\u03be) and \u03b62 = K\u03bd (\u03be), which are the two solutions to the modified Bessel equation (DLMF, 2023, \u00a710.25.1), \u03be d d\u03be \u0012 \u03be d\u03b6 d\u03be \u0013 \u2212(\u03be2 + \u03bd)\u03b6 = 0. 
(38) Given ξ (d/dξ)(ξ dζ/dξ) = (x/γ²) (d/dx)(x dζ/dx), (39) and x (d/dx)(x dζ/dx) = y''/x^(α−2) − (2α − 1) y'/x^(α−1) + α²y/x^α, (40) the standard-form equation satisfied by y is y'' + (1 − 2α) y'/x − [β²γ²x^(2γ−2) − (α² − ν²γ²)/x²] y = 0. (41) This equation can be compared to the Laplace-space ordinary differential equation (9), allowing direct use of the product of powers and modified Bessel functions (37) as solutions (13)."
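As a quick sanity check on the transformation above (again illustrative, not from the paper), one can verify numerically that y = x^alpha * I_nu(beta*x^gamma) satisfies the reconstructed standard form (41). The snippet uses mpmath for the numerical derivatives; the constants are arbitrary.

```python
# Spot-check: y = x**alpha * I_nu(beta * x**gamma) should satisfy
#   y'' + (1 - 2*alpha)/x * y'
#      - (beta**2*gamma**2*x**(2*gamma-2) - (alpha**2 - nu**2*gamma**2)/x**2) * y = 0,
# i.e. the standard form (41).  Test constants are arbitrary.
import mpmath as mp

alpha, beta, gamma, nu = mp.mpf("0.3"), mp.mpf("1.7"), mp.mpf("0.8"), mp.mpf("0.25")

def y(x):
    return x**alpha * mp.besseli(nu, beta * x**gamma)

def residual(x):
    d1 = mp.diff(y, x)        # numerical first derivative
    d2 = mp.diff(y, x, 2)     # numerical second derivative
    coeff = beta**2 * gamma**2 * x**(2*gamma - 2) - (alpha**2 - nu**2 * gamma**2) / x**2
    return d2 + (1 - 2*alpha) / x * d1 - coeff * y(x)

for xv in (mp.mpf("0.5"), mp.mpf("1.0"), mp.mpf("2.0")):
    print(xv, residual(xv))   # ~0 to working precision
```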
18
+ }
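For concreteness, the dimensionless wellbore-storage coefficient sigma = Ac/(rw·n0·c·rho·g·Am) defined in Appendix A of the preceding main_content can be evaluated directly for a cylindrical (m = 1) borehole. The numbers below are made-up illustrative values, not taken from the paper.

```python
# Illustrative only: sigma = Ac / (rw * n0 * c * rho * g * Am) from Appendix A,
# with Ac = pi*rc**2 (casing cross-section) and Am = 2*pi*rw*b (screened surface).
# All parameter values are arbitrary placeholders.
import math

rc  = 0.05      # casing radius where the water level moves [m]
rw  = 0.10      # borehole radius [m]
b   = 10.0      # screened length [m]
n0  = 0.01      # porosity at r = rw [-]
c   = 1.0e-9    # bulk compressibility [1/Pa]
rho = 1000.0    # fluid density [kg/m^3]
g   = 9.81      # gravitational acceleration [m/s^2]

Ac = math.pi * rc**2          # m^2
Am = 2.0 * math.pi * rw * b   # m^2
sigma = Ac / (rw * n0 * c * rho * g * Am)
print(f"sigma = {sigma:.3g}  (dimensionless; sigma -> 0 means negligible wellbore storage)")
```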
title_10K/test_title_short_2405.02478v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02478v1",
3
+ "title": "Continuous Learned Primal Dual",
4
+ "abstract": "Neural ordinary differential equations (Neural ODEs) propose the idea that a\nsequence of layers in a neural network is just a discretisation of an ODE, and\nthus can instead be directly modelled by a parameterised ODE. This idea has had\nresounding success in the deep learning literature, with direct or indirect\ninfluence in many state of the art ideas, such as diffusion models or time\ndependant models. Recently, a continuous version of the U-net architecture has\nbeen proposed, showing increased performance over its discrete counterpart in\nmany imaging applications and wrapped with theoretical guarantees around its\nperformance and robustness. In this work, we explore the use of Neural ODEs for\nlearned inverse problems, in particular with the well-known Learned Primal Dual\nalgorithm, and apply it to computed tomography (CT) reconstruction.",
5
+ "authors": "Christina Runkel, Ander Biguri, Carola-Bibiane Sch\u00f6nlieb",
6
+ "published": "2024-05-03",
7
+ "updated": "2024-05-03",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "eess.IV"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Continuous Learned Primal Dual",
16
+ "main_content": "Introduction Computed Tomography (CT) is an ubiquitous imaging technique in modern medicine that allows for imaging of patients using X-rays. In brief, CT relies on measuring a series of images corresponding to the attenuation of X-rays by the object of interest (a human, in medicine), by rotating the X-ray source and detector around the patient, typically around a full circle. CT reconstruction thus refers to the problem of obtaining the image that produced the measurements, often called sinograms. This problem, mathematically modelled by the Radon transform (line integrals over a domain), is ill-posed, as, in general, breaks the three conditions that define a well-posed problem: Firstly the solution is not continuously dependant on the measurement, as small changes in the measured sinogram will represent large changes in the image. Secondly it has no unique solution in the general Preprint. 1 arXiv:2405.02478v1 [cs.LG] 3 May 2024 \fsense, particularly in undersampled or limited view tomography. Finally, under sufficient measurement noise, there may be no solution. This theoretical analysis has direct implication in medicine, as signal noise is directly related to X-ray dose, which ideally is desired to be reduced as much as possible, as X-rays are ionizing-radiation, which leads to cell dead and can increase likelihood of cancer arising in living tissue. Similarly, the non-uniqueness can be an issue, as albeit most standard CT is performed using full circular trajectories, clinical applications like breast tomosynthesis or image guided surgery often have physical limitations on the scanning range of the CT machine, and thus inherently cannot acquire all the required data to ensure uniqueness of solutions. In practice, it is thus rare to reduce the noise and the scanning range, as the reconstruction is often unusable. But, if a robust enough reconstruction method can be found, dose reduction becomes feasible. Classically (and very often, clinically) the method that solves the CT reconstruction is the Filtered Backprojection (FBP) algorithm, an approximation of the inverse of the aforementioned Radon transform. As this method assumes a continuous sampling with no noise, it performs sufficiently well under those conditions, but rapidly degrades with increased noise and undersampling. Other methods have been proposed, based on the variational regularization approach [1] [2] that, by using the physics of CT, can iteratively solve the CT reconstruction problem, generally with much better performance against noise, particularly under appropriate choices of regularization, such as Total Variation. In recent years, these methods have been enhanced by using datadriven methods, i.e. Machine Learning (ML). A variety of methods have been proposed for data driven CT reconstructions, but in this work we will focus on the Learned Primal Dual (LPD). The goal of this work is a robustness enhancement of learned methods, and showcasing a proof of concept using LPD. The motivation of this work is driven by Neural Ordinary Differential Equations (Neural ODEs), a way to interpret the typical convolutions and layers that convolutions neural networks (CNNs) are made of as a discretization of a continuous ODE. This continuation of the discrete layers produces better performing networks that are provably robust to noise, and have been shown to outperform their discrete counterparts in practical scenarios (see e.g., [3,4]). 
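A minimal sketch of the measurement model and FBP baseline described above, using scikit-image (an assumption; the paper does not name this toolkit). With sparse angles and added noise, the FBP reconstruction degrades in exactly the regime the learned methods target.

```python
# Minimal sketch (not the paper's code): simulate a sparse-angle sinogram with the
# Radon transform and reconstruct it with filtered backprojection (FBP).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)           # ground-truth phantom
theta = np.linspace(0.0, 180.0, 60, endpoint=False)   # sparse angular sampling
sinogram = radon(image, theta=theta)                  # forward projection (A x)
sinogram += 0.5 * np.random.randn(*sinogram.shape)    # simulated measurement noise
fbp = iradon(sinogram, theta=theta)                   # approximate inverse (FBP)
print("FBP RMSE:", np.sqrt(np.mean((fbp - image) ** 2)))
```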
Given that noise rejection is a key feature of a good CT reconstruction solver, this work proposes to put together data-driven models and Neural ODEs, to further enhance their performance. We propose the Continous LPD (cLPD), an idea that however is feasible to implement in any other datadriven inverse problem, in principle. 2 Methods In this section we first introduce CT reconstruction, then the LPD algorithm and Neural ODEs. This leads to the novelty in this work, the cLPD and its architecture. 2 \f2.1 Variational formulation of reconstruction Mathematically, one can write the CT reconstruction problem as seeking to recover a function (the image) x : R3 \u2192R from the measured projections, described by the Radon transform as y(\u2113) = R \u2113x(z) dz, \u2113\u2208L, where L represents the lines in R3 from the X-ray source to each detector, defined by the scanner geometry and rotation. This is often linearized and discretized as Ax = y + \u02dc e (1) where A represents the integral over the lines (often referred as the forward operator), x is a vector representing the pixel values, y is a vector representing the measured sinogram values and \u02dc e is the noise or error, either from measurement of from the linearization. To solve 1 in a robust manner, the variational regularization approach has found significant success in the literature, proposing the following optimization: \u02c6 x = arg min x D(y, Ax) + R(x) (2) where D measures the data fidelity between the measurement and the image estimate (most commonly the l2 distance in CT, due to the characteristics of the noise) and R is a regularization function that promotes images of desired properties, also called a prior. The optimization literature has proposed many methods to solve 2, given particular choices of D and R. These methods have been shown to outperform the FBP algorithm under most conditions, given appropriate choice of functions and parameters. 2.2 Data-driven methods: Learned Primal Dual In recent years, NN have been proposed to solve problems like CT in 1. While many methods can be proposed as a post-processing of a reconstruction, generally FBP as \u00af x = N\u03b8(\u02c6 x) (3) being N\u03b8 a NN parametrized by \u03b8. While these produce solutions \u00af x of high quality, they solutions are not guaranteed to be of small D(y, A\u00af x) (i.e. fitting to the measured data), as there is no such constraint in N\u03b8. Thus data-driven model-based methods where proposed in the literature, attempting to mix data driven methods with algorithms that use explicit use of the physics knowledge of the model, A. While several methods exist, in this work we focus on the LPD [5]. LPD was formulated starting from the Primal Dual Hybrid Gradient (PDHG) algorithm [6], that solves 2 using classical methods, and can be expressed as in algorithm 1, with an appropriate initialization of x0 (e.g. FBP) and z0 (often zero). This algorithm uses proximal operators, defined as prox\u03c4F(x) = arg min u F(u) + \u03c4 2\u2225u \u2212x\u22252 2. (4) 3 \fAlgorithm 1 Primal Dual Hybrid Gradient Input: x0, z0, \u03c3 > 0, \u03c4 > 0, \u03c1 \u2208[0, 1] 1: for i = 1, ... 
do 2: zi+1 \u2190prox\u03c3D(zi + \u03c3A\u00af xi) 3: xi+1 \u2190prox\u03c4R(xi \u2212\u03c4AT zi+1) 4: \u00af xi+1 \u2190xi+1 + \u03c1(xi+1 \u2212xi) 5: end for LPD thus proposes to replace these proximal operators, and also the update step for \u00af xi+1 for NNs, leading to: Algorithm 2 Learned Primal Dual Input: x0, z0 1: for i = 1, ..., I do 2: zi+1 \u2190\u0393\u03b8d i (zi, A\u00af xi, y) 3: xi+1 \u2190\u039b\u03b8p i (xi, AT zi+1) 4: end for In algorithm 2, the number of iterations I is predefined (therefore the common name of unrolled method), and networks \u0393\u03b8d i and \u039b\u03b8p i therefore are defined by a different set of parameters \u03b8i in each iteration. In practice often zi+1 and xi+1 are composed of several channels, but only one of them is used to update the respective variable. Interestingly, these primal and dual networks require small parametrizations, as the intuition of replacing a proximal suggest, they do not need to represent a complex transform, only a small step change. In comparison to a typical NN, LPD uses the operator A, thus limiting the results to the space of valid images. It has been shown that in simulated studies, LPD outperforms most well known classical variational methods and post-processing NN methods of the form of 3, leading to many variations being proposed [7\u20139]. It is important to note that while LPD has the form of a classical optimizer with convergence guarantees, such properties are lost once parametrized with a network [10]. It is more appropriate to see the entirety of algorithm 2 as a single network LPD\u03b8(y) The LPD is finally trained given a set of training data Tj = (xj, yj), j \u2208[1, J] with a loss function L(\u03b8) minimizing the empirical loss L(\u03b8) = 1 J J X j=0 \u2225LPD\u03b8(yj) \u2212xj\u2225, (5) and using the resulting \u03b8, employing typical minimization algorithms from machine learning literature, such as Adam. For the purpose of this work, however, we are interested in how \u0393\u03b8d i and \u039b\u03b8p i are constructed. As its standard in imaging applications, these are constructed as a series of discrete convolutions. While this method of constructing NNs 4 \fis overwhelmingly the standard, there is evidence that one can obtain better results if these convolutions are modelled by a continuous function, rather than a discrete operation. This continuous representation was proposed and named Neural Ordinary Differential Equations or, Neural ODEs [11]. 2.3 Neural Ordinary Differential Equations Neural ordinary differential equations (NeuralODEs) as introduced in [11] are based on the fact that neural networks like ResNet [12] can be seen as an Euler discretisation of a continuous transformation [13\u201315] Every discrete layer thus computes xt+1 = xt + f\u03b8t(xt) for a parametrised function f\u03b8t and an input xt. By reducing the size of the steps, i.e., adding more layers to the network, in the limit the network f\u03b8 describes the dynamics of hidden units as the following ordinary differential equation (ODE): \u2202x(t) \u2202t = f\u03b8(x(t), t). (6) The output of the network x(T) thus can be computed by solving the ODE initial value problem at time T via standard ODE solvers. Computing the backward step to compute gradients during training of the network requires backpropagating through the solver. As this is memory-inefficient due to the solver possibly needing hundret of function evaluations, Chen et al [11] introduced the adjoint method. 
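A compact sketch of the unrolled structure of Algorithm 2 is given below. It is illustrative only: the channel counts, the residual-style updates and the generic `fwd`/`adj` operator callables are assumptions, not the exact configuration of the original LPD paper.

```python
# Sketch of the unrolled Learned Primal-Dual loop of Algorithm 2 (illustrative).
import torch
import torch.nn as nn

def small_cnn(in_ch, out_ch, hidden=32):
    # Plays the role of one learned update (Gamma or Lambda) in Algorithm 2.
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden, 3, padding=1), nn.PReLU(),
        nn.Conv2d(hidden, hidden, 3, padding=1), nn.PReLU(),
        nn.Conv2d(hidden, out_ch, 3, padding=1),
    )

class LearnedPrimalDual(nn.Module):
    def __init__(self, fwd, adj, n_iter=10, primal_ch=5, dual_ch=5):
        super().__init__()
        self.fwd, self.adj, self.n_iter = fwd, adj, n_iter
        self.primal_ch, self.dual_ch = primal_ch, dual_ch
        # One (Gamma_i, Lambda_i) pair of small CNNs per unrolled iteration.
        self.dual_nets = nn.ModuleList(
            [small_cnn(dual_ch + 2, dual_ch) for _ in range(n_iter)])
        self.primal_nets = nn.ModuleList(
            [small_cnn(primal_ch + 1, primal_ch) for _ in range(n_iter)])

    def forward(self, y, x_init):
        b = y.shape[0]
        x = x_init.repeat(1, self.primal_ch, 1, 1)           # primal memory channels
        z = torch.zeros(b, self.dual_ch, *y.shape[-2:], device=y.device)
        for gamma, lam in zip(self.dual_nets, self.primal_nets):
            Ax = self.fwd(x[:, :1])                           # A applied to the image channel
            z = z + gamma(torch.cat([z, Ax, y], dim=1))       # dual update (residual style)
            ATz = self.adj(z[:, :1])                          # A^T applied to the dual channel
            x = x + lam(torch.cat([x, ATz], dim=1))           # primal update (residual style)
        return x[:, :1]                                       # first channel is the image
```

The operator callables `fwd` and `adj` would wrap the CT projector and backprojector; only the first channel of each memory block is pushed through the physics operator, mirroring the description above.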
The adjoint method treats the solver as a black box and uses a second ODE going backward in time, starting with the gradients of the original output with respect to the loss function. Using automatic differentiation, the gradients with respect to the parameters can be calculated in a memory efficient way. Neural ODEs are known to be memory and parameter efficient and robust to noise while providing theoretical underpinnings from the theory of ordinary differential equations. 2.4 The Continuous Learned Primal Dual The aim of the continuous learned primal dual algorithm (cLPD) is to combine both the advantages of the classical learned primal dual algorithm with those of neural ODEs. Continuous learned primal dual therefore replaces the discrete convolutional blocks in both networks \u0393\u03b8d i and \u039b\u03b8p i by continuous neural ODE blocks \u0393c \u03b8d i and \u039bc \u03b8p i (see Algorithm 3). As neural ODEs have proven to be more robust to noise, a better handling of noise that is inherent in the data can be achieved a feature that is particularly useful for CT reconstruction. 2.5 Network Architecture The network architecture for both the dual and primal iterates of the continuous learned primal dual algorithm is highlighted in Figure 1. We define the ODE by using five convolutional layers with parametric ReLU (PReLU) activation functions for primal and dual iterates. 5 \fAlgorithm 3 Continuous Learned Primal Dual Input: x0, z0 1: for i = 1, ..., I do 2: zi+1 \u2190\u0393c \u03b8d i (zi, A\u00af xi, y) 3: xi+1 \u2190\u039bc \u03b8p i (xi, AT zi+1) 4: end for (a) Dual iterates, \u0393\u03b8d i . (b) Primal iterates, \u039b\u03b8p i . Figure 1: Network architecture for both the dual and primal iterates of the (continuous) learned primal dual algorithm. Each of the rectangles describes a convolution and ODE for the LPD and cLPD, respectively. The number of input channels is denoted below the box and the kernel size specified in the middle of the rectangle. 3 Experimental Setup To emphasise the advantages of our continuous learned primal dual algorithm, we conduct experiments on the following different radiation doses and geometries: 1. Clinical setting: We firstly test the clinical setting, i.e., a clinical radiation dose on a full circle. 2. Reduced dose setting: An ongoing challenge in CT reconstruction is minimising the radiation dose per patient. This can be achieved by either reducing the X-ray dose or decreasing the number of angles that get measured. We thus test the following experimental settings: a) Extreme low dose, full circle: Reducing the X-ray dose by measuring over the full circle. b) Sparse angle, clinical dose: Reducing the number of angles to measure while keeping the clinical X-ray dose. c) Sparse angle, extreme low dose: Reducing both the number of angles to measure and the X-ray dose. 3. Restricted setting: Clinicians additionally are also interested in a restricted setting. In this setting, it is not possible to measure the full circle but 6 \fjust up to a very limited angle increasing the difficulty of reconstructing images drastically. a) Limited angle, clinical dose: We firstly test the restricted setting on a clinical X-ray dose. b) Limited angle, extreme low dose: Additionally, we then try the limited angle setting on an extreme low X-ray dose. 
In the following, we will analyse the results for the experimental settings above for our continuous learned primal dual algorithm, the standard learned primal dual with discrete layers and filtered backprojection as comparison to a classical method. We train both the cLPD and LPD with a batch size of 2, learning rate of 10\u22124 and the original LPD parameters used in [5] for 100 epochs on the LIDC-IDRI dataset [16] using the Adam optimiser [17]. 4 Experimental results This section details the results of the experiments. 1. Clinical setting: For the clinical setting, the continuous learned primal dual performs on par with the classical learned primal dual algorithm. The structural similarity index measure (SSIM) for the standard LPD algorithm is slightly higher than for the continuous version while in terms of the peak signalto-noise ratio (PSNR) cLPD outperforms LPD. The cLPD and LPD perform significantly better than FBP, both in terms of image quality metrics as well as visual results (see Subfigure 2a). 2. Reduced dose setting: To analyse the effect that a reduced dose has on the proposed algorithm, we additionally test an extreme low dose and sparse angle geometry. a) Extreme low dose, full circle: When decreasing the dose while measuring over the full circle, similarly as for the clinical setting, cLPD and LPD perform on par, while the average SSIM and PSNR decrease from 0.61 to 0.58 and 34 to 32, respectively. Comparing both algorithms to FBP, FBP is not able to handle the increased noise level (see visual results in Subfigure 2b) while both cLPD and LPD reconstruct denoised images. b) Sparse angle, clinical dose: Reducing the number of angles to measure from while keeping the X-ray dose at a clinical dose, the continuous version of the learned primal dual outperforms the classical LPD and FBP both in terms of SSIM and PSNR (see Table 1). Visually, the FBP is not able to reconstruct any details of the image while cLPD and LPD are able to preserve most of the features. c) Sparse angle, extreme low dose: Further reducing the dose by decreasing the X-ray dose and the number of angles to measure on, cLPD outperforms both the classical learned primal dual and FBP algorithm in terms of 7 \fSSIM, PSNR and visual results. Whith increasing amounts of noise, the reconstructions of both cLPD and LPD get more blury and less detailed while the FBP algorithm produces noisy results without any high-level features. 3. Restricted setting: Analysing a restricted setting, we obtain the following results: a) Limited angle, clinical dose: Firstly testing on a clinical dose, in the restricted setting our proposed continuous learned primal dual outperforms the classical learned primal dual and FBP to an even greater extend. While the average SSIM and PSNR of the reconstructions produced by cLPD compared to 2.c) dropped by 0.09 and 5.26, respectively, the average SSIM and PSNR of the LPD reconstructions decreased by 0.13 and 7.01, respectively \u2013 highlighting the robustness of the cLPD algorithm to noise. The visual results highlighted in Subfigure 2e further highlight these advantages of the cLPD. Even for a restricted setting our method is able to preserve low-level features like the shape of the lungs and introducing barely any artifcats. The LPD and FBP algorithm however reconstruct artifact heavy images that do not resemble the target reconstructions. 
b) Limited angle, extreme low dose: Secondly testing on an extreme low dose, the performance gap between our cLPD algorithm and both the standard LPD and FBP persists. Similiarly to the previous setting, the visual results (see Subfigure 2f) highlight the robustness to noise of the continuous version of the learned primal dual algorithm. 8 \f(a) Visual results for clinical setting (1.). (b) Visual results for extreme low dose, full circle setting (2.a)). (c) Visual results for sparse angle, clinical dose setting (2.b)). (d) Visual results for sparse angle, extreme low dose setting (2.c)). (e) Visual results for limited angle, clinical dose setting (3.a)). (f) Visual results for limited angle, extreme low dose setting (3.b)). Figure 2: Visual results for a randomly picked image of the test set for all experimental settings. We highlight the results of our cLPD, standard LPD, FBP and the target reconstruction from left to right. With increasing noise levels, our approach (cLPD) is able to outperform the LPD more and more significantly. Both cLPD and LPD outperform FBP in all experimental settings. In the case of a limited angle geometry, cLPD reconstructs artifact free results while the standard LPD starts to blur. For high noise levels and the restricted setting especially, FBP is unsuitable as it introduces artifacts. 9 \fTable 1: Overview of mean structural similarity index measure (SSIM), peak signal-to-noise ration (PSNR) and their standard deviations for the experimental settings highlighted in Section 3. For experimental settings in which the noise level is comparatively low (1. and 2.a)), our proposed algorithm (cLPD) performs as well as its standard version (LPD). In these cases, both the cLPD and LPD outperform the classical FBP. With increasing noise levels, the advantages of NeuralODEs come into play and the cLPD outperforms both LPD and FBP (2.c)-3.b)). Experimental setting Algorithm Mean SSIM (\u2191) Mean PSNR (\u2191) 1.) Full angle, clinical dose cLPD 0.6140 \u00b1 0.1263 34.1787 \u00b1 3.1489 LPD 0.6157 \u00b1 0.1245 34.1159 \u00b1 3.1387 FBP 0.0602 \u00b1 0.0207 16.8117 \u00b1 1.8200 2.a) Full angle, extremely low dose cLPD 0.5773 \u00b1 0.1287 32.3713 \u00b1 2.5793 LPD 0.5790 \u00b1 0.1251 32.2299 \u00b1 2.5228 FBP 0.0213 \u00b1 0.0086 11.2341 \u00b1 1.8579 2.b) Sparse angle, clinical dose cLPD 0.5627 \u00b1 0.1269 31.2625 \u00b1 2.2977 LPD 0.5571 \u00b1 0.1169 30.8406 \u00b1 2.1796 FBP 0.0108 \u00b1 0.0044 8.1548 \u00b1 1.7939 2.c) Sparse angle, extremely low dose cLPD 0.5316 \u00b1 0.1232 29.6664 \u00b1 1.9520 LPD 0.5265 \u00b1 0.1185 29.2588 \u00b1 1.8851 FBP 0.0024 \u00b1 0.0012 2.6769 \u00b1 1.8622 3.a) Limited angle, clinical dose cLPD 0.4465 \u00b1 0.1099 24.4042 \u00b1 1.6947 LPD 0.3951 \u00b1 0.0937 22.4654 \u00b1 1.7108 FBP 0.0103 \u00b1 0.0045 7.7079 \u00b1 1.8264 3.b) Limited angle, extremely low dose cLPD 0.4371 \u00b1 0.1081 24.0181 \u00b1 1.6651 LPD 0.3823 \u00b1 0.0933 22.2501 \u00b1 1.6938 FBP 0.0037 \u00b1 0.0018 3.4242 \u00b1 2.0003 10 \f5 Discussion and Conclusions In this work, we introduced a continuous version of the learned primal dual algorithm for CT reconstruction. We showed that for a clinical, i.e., low noise setting, our approach performs as good as the vanilla learned primal dual. The more reduced the dose, i.e., the noisier the measurements, the bigger the gap in performance between cLPD and LPD gets with cLPD outperforming the discrete LPD. In comparison to FBP, cLPD significantly outperforms FBP in all experimental settings tested. 
Our approach has furthermore shown to be especially powerful for restricted settings with a limited angle geometry. In contrast to both LPD and FBP which fail to reconstruct any features, cLPD did not introduce artifacts. As NeuralODEs are provably robust to noise, introducing continuous blocks into the standard LPD algorithm showed to be successful in the experiments conducted. Interestingly, continuous LPD requires normalisation at every layer for a stable training whereas the standard LPD achieves best results without any form of normalisation. It would be interesting to further investigate the reasons for this difference. As cLPD uses the adjoint method, i.e., solving an ODE going backward in time, for backpropagation, normalisation might be required to stabilise the backward pass. Future work also includes exploring continous representations for other algorithms in CT reconstruction and inverse problems in general."
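To make the "layers as an ODE discretisation" idea concrete, the sketch below replaces a stack of discrete convolutions by a fixed-step RK4 integration of a parameterised vector field. This is only an illustration of the principle; the cLPD blocks described above use proper ODE solvers with adjoint backpropagation rather than this hand-rolled integrator.

```python
# Minimal "continuous" convolutional block: the same conv network f_theta is
# integrated over pseudo-time with fixed-step RK4, so depth becomes a
# discretisation of dx/dt = f_theta(x, t).
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    def __init__(self, channels, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, hidden, 3, padding=1), nn.PReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x, t):
        # Append the scalar time t as an extra constant channel.
        t_map = torch.full_like(x[:, :1], float(t))
        return self.net(torch.cat([x, t_map], dim=1))

class ContinuousBlock(nn.Module):
    """Replaces a stack of discrete conv layers by integrating an ODE from t=0 to t=1."""
    def __init__(self, channels, n_steps=4):
        super().__init__()
        self.func, self.n_steps = ODEFunc(channels), n_steps

    def forward(self, x):
        h, t = 1.0 / self.n_steps, 0.0
        for _ in range(self.n_steps):          # classic RK4 steps
            k1 = self.func(x, t)
            k2 = self.func(x + 0.5 * h * k1, t + 0.5 * h)
            k3 = self.func(x + 0.5 * h * k2, t + 0.5 * h)
            k4 = self.func(x + h * k3, t + h)
            x = x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        return x

block = ContinuousBlock(channels=5)
print(block(torch.randn(1, 5, 64, 64)).shape)   # torch.Size([1, 5, 64, 64])
```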
17
+ }
title_10K/test_title_short_2405.02696v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02696v1",
3
+ "title": "DiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model",
4
+ "abstract": "Latent Diffusion Models (LDMs) enable a wide range of applications but raise\nethical concerns regarding illegal utilization.Adding watermarks to generative\nmodel outputs is a vital technique employed for copyright tracking and\nmitigating potential risks associated with AI-generated content. However,\npost-hoc watermarking techniques are susceptible to evasion. Existing\nwatermarking methods for LDMs can only embed fixed messages. Watermark message\nalteration requires model retraining. The stability of the watermark is\ninfluenced by model updates and iterations. Furthermore, the current\nreconstruction-based watermark removal techniques utilizing variational\nautoencoders (VAE) and diffusion models have the capability to remove a\nsignificant portion of watermarks. Therefore, we propose a novel technique\ncalled DiffuseTrace. The goal is to embed invisible watermarks in all generated\nimages for future detection semantically. The method establishes a unified\nrepresentation of the initial latent variables and the watermark information\nthrough training an encoder-decoder model. The watermark information is\nembedded into the initial latent variables through the encoder and integrated\ninto the sampling process. The watermark information is extracted by reversing\nthe diffusion process and utilizing the decoder. DiffuseTrace does not rely on\nfine-tuning of the diffusion model components. The watermark is embedded into\nthe image space semantically without compromising image quality. The\nencoder-decoder can be utilized as a plug-in in arbitrary diffusion models. We\nvalidate through experiments the effectiveness and flexibility of DiffuseTrace.\nDiffuseTrace holds an unprecedented advantage in combating the latest attacks\nbased on variational autoencoders and Diffusion Models.",
5
+ "authors": "Liangqi Lei, Keke Gai, Jing Yu, Liehuang Zhu",
6
+ "published": "2024-05-04",
7
+ "updated": "2024-05-04",
8
+ "primary_cat": "cs.CR",
9
+ "cats": [
10
+ "cs.CR",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "DiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model",
16
+ "main_content": "INTRODUCTION The strides made in latent diffusion models [10, 17, 28, 35] have substantially elevated the capacity for synthesizing photorealistic content in image generation and profoundly impact text-to-image [32, 46], image editing [5, 24], in-painting [21, 31], super-resolution [12, 33], content creation [26, 27] and video synthesis [4, 16]. Relevant commercial applications are becoming mainstream creative tools for designers, artists, and the general public. However, contemporary text-to-image generation models, such as Stable Diffusion and Midjourney, can generate a multitude of novel images as well as convincing depictions of fabricated events for malicious purposes. Criminals might utilize LDMs to produce insulting or offensive images, which shall be disseminated to spread rumors and pose a substantial threat to societal security. The hazards of deepfakes, impersonation and copyright infringement are also prevalent issues associated with current generative models. The potential illicit use of text-to-image models has spurred research for embedding watermarks in model outputs. Watermarked images contain signals imperceptible to humans but are marked as machine-generated. Copyright information of the model and the identity information of the model users will be embedded into images. Extracting watermarks from AI-generated images enables the detection of model copyrights and tracing unauthorized users. False and harmful images can be promptly identified and removed from platforms and unauthorized users of the model can be traced through the extraction of image information, which mitigates the potential harm caused by AI-generated content. Existing research on image watermarking tended towards postprocessing solutions. The core concept involves embedding the watermark into the image with minimal adjustments, emphasizing subtlety and intricacy. For instance, the watermark implemented arXiv:2405.02696v1 [cs.CR] 4 May 2024 \fConference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu in Stable Diffusion [8] operates by altering a particular Fourier frequency within the generated image. This type of watermark faces a key trade-off between watermark robustness and image quality. For diffusion model watermarks, Some researchers have proposed embedding fixed messages into generated images by fine-tuning diffusion models like U-Net [30] or variational autoencoders. However, this approach only allows embedding fixed information into the generated images, requiring re-finetuning of the diffusion model when the embedding information needs to be changed. Moreover, if the model owner distributes the diffusion model to a large number of users, each distributed model must be fine-tuned separately, resulting in significant consumption of computational resources and time. Additionally, when the model requires iterative updates, the stability of the watermark becomes unreliable due to adjustments in model parameters. Recent studies [48] have demonstrated that methods involving the random addition of noise to images to disrupt watermarks, followed by image reconstruction using diffusion models, can effectively remove a significant portion of post-processing watermarking schemes. This poses new challenges to the robustness of watermarking. To address the aforementioned challenges and achieve high extraction accuracy, robustness and image quality, we propose a new watermarking scheme called DiffuseTrace. 
DiffuseTrace differs fundamentally from previous watermarking methods. DiffuseTrace embeds the watermark into the latent variables of the model, subtly influencing the sampling phase of the model. The watermark is embedded at the semantic level prior to image generation, without any post-processing of the generated images. We specialize in a watermarking scheme that can be seamlessly integrated into a wide range of latent diffusion models. DiffuseTrace can serve as a plug-and-play solution across various diffusion models. Taking practical application scenarios into account, we categorize the roles involved in model usage into two types: model producers and model users. Model producers train and possess all pre-trained models, including diffusion models, watermark encoders, watermark decoders. Model producers assign specific binary identity information to each user. By providing APIs, model producers offer generative model services to users. When malicious images resembling model-generated content or images suspected of copyright infringement appear on art platforms, news outlets or other sharing platforms, model producers can trace illegal usage or infringement-involved users by extracting watermark information from the generated images. For watermark modules, we control the distribution of the watermark through an encoder and dynamically allocate a watermark close to the standard normal distribution for each user. Since the data distribution and sampling process remain consistent with the original model, the generated images can achieve transparent watermark embedding with semantic consistency. Human inspection cannot distinguish watermark samples from random samples. Through transforming images into latent variables and inversely diffusing them to obtain the initial latent variables, the watermark can be decoded through a decoder. Considering the diverse processing stages in the flow of image data as well as the potential bias introduced by the inverse diffusion of the diffusion model, we employ adversarial training and fine-tuned the watermark decoder to enhance the robustness of watermark extraction. The primary contributions of this work are outlined as follows: (1) Among diffusion watermarking schemes based on initial hidden variables, DiffuseTrace is the first scheme that embeds robust multi-bit watermarks. DiffuseTrace is embedded at the semantic level of diffusion-model-generated images without relying on the trade-off between image quality and watermark robustness. It exhibits evident advantages over post-processing methods in terms of image quality. (2) Compared to the state-of-the-art post-processing watermarking and diffusion model watermarking schemes, DiffuseTrace not only exhibits significant performance in common image processing but also shows remarkable robustness against attacks based on variational autoencoders and diffusion models. The paper provides a thorough analysis at the theoretical level regarding the superior watermark robustness of DiffuseTrace. (3) The proposed universal watermark module for latent diffusion models can be seamlessly integrated across different versions of diffusion models. The watermark message of DiffuseTrace can be flexibly modified without being affected by model fine-tuning or model update iterations. Our code is open source: https://anonymous.4open.science/r/DiffuseTrace6DED. Paper Organization. The overview of this paper is organized as follows: The basic introduction of DiffuseTrace is shown in Section 1. 
The background of DiffuseTrace are summarized in Section 2. In Section 3, we introduce the problem formulation for DiffuseTrace. In Section 4, we demonstrate DiffuseTrace in detail. In Section 5, We have provided a detailed theoretical exposition and security analysis of the proposed scheme. In Section 6, we summarize and analyze the experimental results. In Section 7, we present the realted work of watermarking for LDMs. In Section 8, we summarize the DiffuseTrace watermarking scheme. 2 BACKGROUND 2.1 Diffusion Model based Image Generation Diffusion models progressively transitions the sample x from the true data distribution \ud835\udc5d(\ud835\udc65) to stochastic noise and adeptly reverses this process through iterative denoising of the noisy data [17]. A typical diffusion model framework involves a forward process that progressively diffuses the data distribution \ud835\udc5d(\ud835\udc65,\ud835\udc50) towards the noise distribution \ud835\udc5d\ud835\udc61(\ud835\udc67\ud835\udc61,\ud835\udc50) for \ud835\udc61\u2208(0,\ud835\udc47], where c denotes the conditional context. The conditional gaussian distribution of the diffusion process can be formulated as: \ud835\udc5d\ud835\udc61(\ud835\udc67\ud835\udc61|\ud835\udc65) = \ud835\udc5d\ud835\udc61(\ud835\udc67\ud835\udc61|\ud835\udefc\ud835\udc61\ud835\udc65, \ud835\udf0e2 \ud835\udc61\ud835\udc3c), (1) where \ud835\udefc\ud835\udc61, \ud835\udf0e\ud835\udc61\u2208R+. \ud835\udefc\ud835\udc61and \ud835\udf0e\ud835\udc61are the strengths of signal and noise respectively decided by a noise scheduler. \ud835\udc67\ud835\udc61= \ud835\udefc\ud835\udc61\ud835\udc65+ \ud835\udf0e\ud835\udc61\ud835\udf16is the noisy data. It has been proved that there exists a denoising process with the same marginal distribution as the forward process [35]. The estimation of the only variable can be derived as: \u25bd\ud835\udc67\ud835\udc61log\ud835\udc5d\ud835\udc61(\ud835\udc67\ud835\udc61,\ud835\udc50) \u2248 \ud835\udefc\ud835\udc61\ud835\udc65\ud835\udc61 \ud835\udf03(\ud835\udc67\ud835\udc61,\ud835\udc50) \u2212\ud835\udc67\ud835\udc61 \ud835\udf0e2 \ud835\udc61 . (2) Specifically, given a noise-predicting diffusion model parameterized by \ud835\udf03, which is typically structured as a U-Net [30], training can be \fDiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model Conference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY T T M Step1: DiffuseTrace Enc./Dec. Pretraining M\u2019 Step2: DiffuseTrace Decoder Finetuning \u201c a cute cat \u201d Step3: Sematic Watermarked Image Generation Initial Latents Distribute Sample Watermark Region M Sample Distribute Initial Latents Attack ... Iterative Denoising \ufffd0 VAE Reconstruct T Locate M\u2019 Watermark Region W Sample Distribute Initial Latents \u201c a cute cat \u201d Diffusion Model Step4: Watermark Message Extraction Diffusion Inversion Reconstruct Locate Watermark Region W Diffusion Model DiffuseTrace Enc./Dec. U-Net of Diffusion Model Variational Autoencoder VAE Rec. Loss Distri. Loss Figure 1: Methods of DiffuseTrace. (Step1) Train the DiffuseTrace Encoder through resampling methods to generate latent variables approximate to a standard normal distribution and jointly train the decoder to decode the information. M: Random n-bit messages. (Step2) Keep the encoder fixed and train the decoder. Randomly select prompts for the diffusion model denoising process to generate images. 
Decode the images after passing through the attack layer to obtain latent variables and execute diffusion inversion to extract the initial latent variables. Compare the decoded message from the initial latent variables with the initial message to build a reconstruction loss for fine-tuning the decoder. (Step3) Assign watermark message w and generate initial watermarked latent variables by the encoder to generate images. (Step4) Extract watermark message after inverting the images and trace the source through statistical testing. formulated as the following noise prediction problem: \ud835\udc5a\ud835\udc56\ud835\udc5b \ud835\udf03 E\ud835\udc65,\ud835\udc61,\ud835\udf0e|| \u02c6 \ud835\udf16\ud835\udf03(\ud835\udefc\ud835\udc61\ud835\udc65+ \ud835\udf0e\ud835\udc61\ud835\udf16,\ud835\udc61) \u2212\ud835\udf16||2 2, (3) where \ud835\udc61refers to the time step; \ud835\udf16is the ground-truth noise; the noise \ud835\udf16\u223cN (\ud835\udf16|0, \ud835\udc3c) is a standard Gaussian. Recently, LDMs [28] streamlines inference processes by incorporating denoising process within the encoded latent space derived from a pre-trained variational autoencoder (VAE) [6]. Diffusion models reconstructs images through the latent state. During the inference phase, stable diffusion models take both a latent seed and a text prompt as an input. The U-Net progressively removes noise from random latent image representations guided by text embeddings. The noise residual from the U-Net is utilized in conjunction with a scheduler algorithm to generate a denoised latent. When synthesizing images, a crucial technique, classifierfree guidance is adopted to enhance the quality of generated images. \u02dc \ud835\udf16\ud835\udc61\u210e\ud835\udc52\ud835\udc61\ud835\udc4e(\ud835\udc61,\ud835\udc67\ud835\udc61,\ud835\udc50) = \ud835\udc64\u02c6 \ud835\udf16\ud835\udc61\u210e\ud835\udc52\ud835\udc61\ud835\udc4e(\ud835\udc61,\ud835\udc67\ud835\udc61,\ud835\udc50) + (\ud835\udc64\u22121) \u02c6 \ud835\udf16\ud835\udc61\u210e\ud835\udc52\ud835\udc61\ud835\udc4e(\ud835\udc61,\ud835\udc67\ud835\udc61,\ud835\udf19) (4) where The guidance scale \ud835\udc64can be modified to regulate the influence of conditional information on the produced images, aiming to strike a balance between quality and diversity. \u02c6 \ud835\udf16\ud835\udc61\u210e\ud835\udc52\ud835\udc61\ud835\udc4e(\ud835\udc61,\ud835\udc67\ud835\udc61,\ud835\udf19) denotes the unconditional diffusion obtained by empty prompt. 2.2 Diffusion Denoising and Inversion The well-trained diffusion model leverages a diverse range of samplers to generate samples from noise and execute denoising procedures. A notable denoising method is the Denoising Diffusion Implicit Model (DDIM) [34] which stands out for its efficiency and deterministic output. DDIM accomplishes denoising with significantly fewer steps. The image \ud835\udc650 will be reproduced with 50 inference steps to the standard 1000-step process. Formally, for each denoising step \ud835\udc61, DDIM utilizes a learned noise predictor \ud835\udf16\ud835\udf03to estimate the noise \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61) added to \ud835\udc650, which leads to the estimation of \ud835\udc650 as follows: \u02c6 \ud835\udc650 = \ud835\udc65\ud835\udc61\u2212\u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61) \u221a\u00af \ud835\udefc\ud835\udc61 . 
(5) the estimated noise \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61) is recombined with the approximated \u02c6 \ud835\udc650 to compute \ud835\udc65\ud835\udc61\u22121: \ud835\udc65\ud835\udc61\u22121 = \u221a\u00af \ud835\udefc\ud835\udc61\u22121 \u02c6 \ud835\udc650 + \u221a1 \u2212\u00af \ud835\udefc\ud835\udc61\u22121\ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61) . (6) DDIM also incorporates an inversion mechanism [10], which facilitates the reconstruction of the noise representation \ud835\udc65\ud835\udc47from an image \ud835\udc650. The recovered \ud835\udc65\ud835\udc47should be mappable to an image approximate to \ud835\udc650. Based on the assumption that \ud835\udc65\ud835\udc61\u22121 \u2212\ud835\udc65\ud835\udc61\u2248\ud835\udc65\ud835\udc61+1 \u2212\ud835\udc65\ud835\udc61, The DDIM inversion shall be formulated as: \u02c6 \ud835\udc65\ud835\udc61+1 = \u221a\u00af \ud835\udefc\ud835\udc61+1\ud835\udc650 + \u221a1 \u2212\u00af \ud835\udefc\ud835\udc61+1\ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61) (7) Essentially, this process follows the forward diffusion process as described in Equation 6. Diffusion inversion, even in zero-text inversion within conditional diffusion, can still achieve decent accuracy. Meanwhile, the method is applicable to deterministic sampling methods like DPM++ [20]. Our watermarking scheme leverages this property of diffusion inversion. \fConference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu 3 PROBLEM FORMULATION 3.1 Threat Model In this paper, we consider two parties: the defender and the adversary. The defender is the owner of the generative model. Latent diffusion model is deployed as an online service. The core objectives are protecting the copyright of the model and tracing the illegal usage through model outputs. Conversely, the adversary\u2019s objective is to disrupt the watermark information in the model output and circumvent the copyright protection and tracing mechanisms of the model. Adversary\u2019s Motivation. The adversary\u2019s motivation stems from two aspects: Firstly, training a latent diffusion model requires gathering a significant amount of data, expertise in architecture or algorithms and numerous failed experiments, all of which are expensive. As a result, the model parameters are considered proprietary information for businesses. Generative model services are deployed as online services. Adversaries may manipulate images to destroy watermark information and redistribute the outputs of online services to cloud platforms, effectively becoming commercial competitors. Secondly, Attackers may exploit online generative services to generate insulting or offensive images for malicious purposes such as fabricating fake news or spreading rumors and remove watermarks from the images to evade tracing. Adversary\u2019s Background Knowledge. We assume that adversaries can access the victim\u2019s latent diffusion model in a black-box manner. Attackers can query the victim\u2019s latent diffusion model with data samples and obtain corresponding responses. Specifically, we categorize adversary background knowledge into two dimensions: the architecture of the victim\u2019s diffusion model and the watermark removal capability. For the architecture of the diffusion model, we assume adversaries can access it since such information is typically publicly accessible. 
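A sketch of the DDIM inversion used for watermark recovery, combining the x0 estimate of Eq. (5) with the re-noising step of Eq. (7). The noise predictor `eps_theta`, the `alpha_bar` schedule (a 1-D tensor of cumulative products) and the unconditional, zero-text call are assumed placeholders; real diffusion schedulers index timesteps differently.

```python
# Illustrative DDIM inversion: step the clean latent "forward" towards noise using
# the predicted noise, under the approximation x_{t-1} - x_t ~ x_{t+1} - x_t.
import torch

@torch.no_grad()
def ddim_invert(x0_latent, eps_theta, alpha_bar, num_steps):
    """Map an image latent back towards its initial noise latent (Eqs. 5 and 7)."""
    x = x0_latent
    for t in range(num_steps - 1):                 # walk t -> t+1 (noise-adding direction)
        eps = eps_theta(x, t)                      # zero-text / unconditional prediction
        a_t, a_next = alpha_bar[t], alpha_bar[t + 1]
        # Estimate x0 from the current latent (Eq. 5), then re-noise at level t+1 (Eq. 7).
        x0_hat = (x - torch.sqrt(1 - a_t) * eps) / torch.sqrt(a_t)
        x = torch.sqrt(a_next) * x0_hat + torch.sqrt(1 - a_next) * eps
    return x                                       # approximation of the initial latent x_T
```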
Regarding watermark removal capability, we assume adversaries can manipulate images using techniques such as Gaussian blur, color jittering and image compression. Meanwhile, we consider adversaries who possess the capability to perform state-of-the-art watermark removal attacks using variational autoencoders and diffusion models. 3.2 Image Watermarking and Verification Formally, the validation scheme for generative image watermarking in Diffusers is defined as follows: The generative image watermarking verification scheme is a tuple Verification = \u27e8\ud835\udc47\ud835\udc5f\ud835\udc5b, \ud835\udc38\ud835\udc5a\ud835\udc4f, \ud835\udc38\ud835\udc63\ud835\udc4e,\ud835\udc49\ud835\udc5f\ud835\udc53\u27e9of processes: A Train process \ud835\udc47\ud835\udc5f\ud835\udc5b(\ud835\udc37,\ud835\udc34\ud835\udc5f\ud835\udc50[\u00b7], \ud835\udc38\ud835\udc5b\ud835\udc50[\u00b7], \ud835\udc3f) = {\ud835\udc38\ud835\udc5b\ud835\udc50[\ud835\udc4a], \ud835\udc37\ud835\udc52\ud835\udc50[\ud835\udc4a]}, is a fine-tuning or training process that takes training data \ud835\udc37= {\ud835\udc65\ud835\udc51,\ud835\udc66\ud835\udc51} as inputs and outputs the models \ud835\udc38\ud835\udc5b\ud835\udc50[\ud835\udc4a] and \ud835\udc37\ud835\udc52\ud835\udc50[\ud835\udc4a] by minimizeing a given loss L. An embedding process \ud835\udc38\ud835\udc5a\ud835\udc4f(\ud835\udc5d\ud835\udc5f\ud835\udc5a,\ud835\udc34\ud835\udc5f\ud835\udc50[\u00b7], \ud835\udc38\ud835\udc5b\ud835\udc50[\u00b7], \ud835\udc3f\ud835\udc4e\ud835\udc61,\ud835\udc46\ud835\udc56\ud835\udc54) = \ud835\udc43\ud835\udc56\ud835\udc50 [\ud835\udc46\ud835\udc56\ud835\udc54] is an inference process that embeds the signature \ud835\udc46\ud835\udc56\ud835\udc54into latent variables through an encoder and performs inference through the model \ud835\udc34\ud835\udc5f\ud835\udc50[\u00b7] to output the watermarked image \ud835\udc43\ud835\udc56\ud835\udc50[\ud835\udc46\ud835\udc56\ud835\udc54]. An quality evaluation process \ud835\udc38\ud835\udc63\ud835\udc4e\ud835\udc59(\ud835\udc34\ud835\udc5f\ud835\udc50[\u00b7], \ud835\udc40, \ud835\udc3f\ud835\udc4e\ud835\udc61,\ud835\udf16) = {\ud835\udc47\ud835\udc5f\ud835\udc62\ud835\udc52, \ud835\udc39\ud835\udc4e\ud835\udc59\ud835\udc60\ud835\udc52} is to evaluate whether or not the discrepency is less than a predefined threshold i.e. |\ud835\udc40(\ud835\udc34\ud835\udc5f\ud835\udc50[\ud835\udc4a,\ud835\udc46\ud835\udc56\ud835\udc54], \ud835\udc3f\ud835\udc4e\ud835\udc61, \ud835\udc38\ud835\udc5b\ud835\udc50[\u00b7]) \u2212\ud835\udc40| \u2264\ud835\udf16, where \ud835\udc40(\ud835\udc34\ud835\udc5f\ud835\udc50[\ud835\udc4a,\ud835\udc46\ud835\udc56\ud835\udc54], \ud835\udc3f\ud835\udc4e\ud835\udc61, \ud835\udc38\ud835\udc5b\ud835\udc50[\u00b7]) denotes the image fidelity or semantic consistency tested against a set of watermarked latents. \ud835\udc40is the target generation performance. A verification process \ud835\udc38\ud835\udc63\ud835\udc4e(\ud835\udc3c\ud835\udc5a\ud835\udc54, \ud835\udc46\ud835\udc56\ud835\udc54, \ud835\udc34\ud835\udc61\ud835\udc58, \ud835\udc37\ud835\udc52\ud835\udc50[\u00b7],\ud835\udf16) = {\ud835\udc47\ud835\udc5f\ud835\udc62\ud835\udc52, \ud835\udc39\ud835\udc4e\ud835\udc59\ud835\udc60\ud835\udc52} checks whether the expected signature \ud835\udc46\ud835\udc56\ud835\udc54of a given generative image can be successfully verified by Decoder \ud835\udc37\ud835\udc52\ud835\udc50[\u00b7] when facing image attacks. Watermark Detaction. 
DiffuseTrace embed a k-bit secret message \ud835\udc5a\u2208{0, 1}\ud835\udc58into the watermark image. The watermark detection algorithm includes an extractor that can extract the hidden signal \ud835\udc5a\u2032 from the watermarked image. It uses statistical testing to set a threshold \ud835\udf0f\u2208{0, 1, 2...\ud835\udc58} for the extracted bits . If the number of matching bits \ud835\udc38(\ud835\udc5a,\ud835\udc5a\u2032) \u2265\ud835\udf0f, the image is marked as watermarked. Formally, We establish the hypothesis H1: The image pic is generated by DiffuseTrace against the null hypothesis H0: The image is not generated by DiffuseTrace. Under \ud835\udc3b0, we assume that the extracted bits \ud835\udc5a\u2032 1,\ud835\udc5a\u2032 2...\ud835\udc5a\u2032 \ud835\udc58are independent and identically distributed Bernoulli random variables with a probability of 0.5. \ud835\udc38(\ud835\udc5a,\ud835\udc5a\u2032) follows a binomial distribution \ud835\udc35(\ud835\udc58, 0.5). Type I error (false positive rate (FPR) , \ud835\udf0e) equals the probability of \ud835\udc38(\ud835\udc5a,\ud835\udc5a\u2032) exceeding \ud835\udf0f, derived from the binomial cumulative distribution function. It has a closed form using the regularized incomplete beta function \ud835\udc3c\ud835\udc65(\ud835\udc4e;\ud835\udc4f). \ud835\udf161(\ud835\udf0f) = P(\ud835\udc38(\ud835\udc5a,\ud835\udc5a\u2032) > \ud835\udf0f| \ud835\udc3b0) = 1 2\ud835\udc58 \ud835\udc58 \u2211\ufe01 \ud835\udc56=\ud835\udf0f+1 ( \ud835\udc58 \ud835\udc56 ) = \ud835\udc3c1/2(\ud835\udf0f+ 1,\ud835\udc58\u2212\ud835\udf0f). (8) If we reject the null hypothesis \ud835\udc3b0 with a p-value less than 0.01, we consider the image to be without a watermark. In practice, for a watermark of 48 bits (\ud835\udc58= 48), at least 34 bits should be extracted to confirm the presence of the watermark. This provides a reasonable balance between detecting genuine watermarks and avoiding false positives. 3.3 Objective for Watermarking DiffuseTrace should have the following propeties: Robust against Watermarking Attacking: Images with watermarks may undergo various image processing operations. Even after post-processing, the watermark can still be fully recovered. DiffuseTrace should withstand watermark removal attacks, such as Gaussian noise, color jittering, Gaussian blur and others. Meanwhile, DiffuseTrace should be able to defend against the latest watermark attacks based on the state-of-the-art variational autoencoder and diffusion model techniques. Generalizability: Considering the cost of embedding fixed information into fine-tuned models, DiffuseTrace can adjust embedded message flexibly and should be compatible with various versions of diffusion models and remain unaffected by model fine-tuning or model update iterations. Fidelity: Minimizing the impact on the model\u2019s output before and after watermarking to the greatest extent possible. The images generated by the DiffuseTrace maintain consistency with the original model in terms of semantic consistency and image quality. The watermark samples generated by DiffuseTrace should exhibit no significant differences in visual and semantic quality compared to normal samples. \fDiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model Conference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY The goal is to design a watermark that is flexible, robust to postprocessing, generalizable and does not compromise the quality or semantic consistency of the image. 
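The detection threshold can be reproduced directly from Eq. (8). The snippet below (illustrative) computes the false-positive rate both from the binomial tail and from the regularized incomplete beta form, for the k = 48 case discussed above.

```python
# False-positive rate of the watermark detector under H0 (bits i.i.d. Bernoulli(1/2)).
# Both lines implement Eq. (8): the binomial tail P(E(m, m') > tau) and its
# regularized-incomplete-beta expression I_{1/2}(tau + 1, k - tau).
from scipy.stats import binom
from scipy.special import betainc

k = 48
for matched in (30, 32, 34, 36):
    tau = matched - 1                        # detect when more than tau bits match
    fpr_binom = binom.sf(tau, k, 0.5)        # P(X > tau) for X ~ Binomial(k, 1/2)
    fpr_beta = betainc(tau + 1, k - tau, 0.5)
    print(f"k={k}, matched bits >= {matched}: FPR = {fpr_binom:.4g} (beta form {fpr_beta:.4g})")
# For k = 48, requiring at least 34 matching bits gives an FPR well below 0.01,
# consistent with the working point quoted in the text.
```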
Additionally, it should remain unaffected by model fine-tuning or update iterations. 4 PROPOSED WATERMARKING SCHEME 4.1 Overview The overview of our method is in figure 1. As described in the first section, we have three objectives: \u2022 The watermark is embedded into the initial latent variables at the semantic level without altering semantic consistency and image quality. \u2022 Watermark messages can be modified flexibly without retraining or fine-tuning the model. \u2022 The watermark is robust against various image processing techniques and state-of-art watermark removal methods. The core idea of DiffuseTrace is to embed the watermark into latent variables. The initial latent variables of the latent space are divided into multiple watermark regions, with each region corresponding to a portion of the watermark information. To ensure both lossless quality and semantic consistency of the image, the embedded watermark should approximate a standard normal distribution and be extractable by the decoder. Specifically, DiffuseTrace consists of a watermark encoder and watermark decoder. The model owner encodes the initial latent variables through the watermark encoder. The latent variables are then processed through a scheduler guided by prompts and denoised through a U-Net. Afterward, latent variables are decoded by a variational autoencoder into watermarked images. The watermarked images are subjected to an attack layer and decoded back into the latent space. Through diffusion inversion, the original latent variables are restored. The watermark is then extracted from the decoded latent variables through the decoder. 4.2 Pre-training Watermark Encoder-Decoder The paper pretrains the encoder-decoder structure for watermark embedding and extraction. The training objective of the encoder is to construct a unified representation of watermark information and latent variables under a standard Gaussian distribution based on the watermark information and the embedded watermark region. Specifically, when binary identity information of the user is inputted into the encoder, it will produce watermark-embedded latent variables which adhere to a standard Gaussian distribution. The following explains the reason for latent variables with watermark conforming to a standard normal distribution. When the latent generator of the LDM samples latent variables \ud835\udc4dfrom noise, the function of the U-Net is to iteratively denoise Gaussian noise matrices within the diffusion cycle guided by text and timesteps. By subtracting the predicted noise from random Gaussian noise matrices, the random Gaussian noise matrices are eventually transformed into the latent variables of the image. Since the noise introduced during the training process of the U-Net follows a normal distribution, the initial latent variables of the LDM inference process should ideally approximate a standard normal distribution. When training a variational autoencoder to encode images into latent variables, one of the training objectives is to approximate the latent variables to adhere roughly to a standard normal distribution. More precisely, the training set for U-net involves repeatedly adding noise from a standard normal distribution to images. With a sufficient number of iterations, the original images will converge close to a standard normal distribution. Hence, during the denoising image generation phase, the initial noise is selected to conform to a standard normal distribution. 
If there is a significant deviation of the initial noise from the standard normal distribution, it may lead to inconsistencies between image quality and semantics. Watermarked latent variables that are closer to a standard normal distribution better conform to the standard denoising process. Due to the non-differentiability and gradient descent limitations of distributions, the encoder\u2019s model architecture employs the reparameterization technique to generate watermark-embedded latent variables. Considering the difficulty of explicitly distributing watermark regions at the trillion level, we have adopted an implicit partitioning of the watermark regions. Sampled latent variables are constrained by Kullback-Leibler divergence to approximate a standard normal distribution. Each watermark information independently maps to a portion of the probability distribution. The specific principles are detailed in our theoretical analysis 5. The decoder network is the inverse of the encoder. The training objective of the decoder is to extract watermark information from the initial latent variables. The encoder and decoder are jointly trained to ultimately produce watermark-embedded latent variables that conform to a standard normal distribution. The decoder then outputs the corresponding watermark information based on these latent variables. According to the analysis of 5.2, the reconstruction of watermark refers to maximizing the expected probability distribution of the watermark \ud835\udc64given the latent variable. The loss for reconstructing the message L\ud835\udc64is calculated as the Mean Square Error (MSE) loss between the original watermark message and the decoded message: L\ud835\udc64= \ud835\udc40\ud835\udc46\ud835\udc38(\ud835\udc5a\ud835\udc52\ud835\udc60\ud835\udc60\ud835\udc4e\ud835\udc54\ud835\udc52,\ud835\udc51\ud835\udc52\ud835\udc50(\ud835\udc5a\ud835\udc52\ud835\udc60\ud835\udc60\ud835\udc4e\ud835\udc54\ud835\udc52\u2032)) (9) For the loss of the initial latent variable distribution, We compute the Kullback-Leibler (KL) divergence between the distribution of the latent variables and the standard normal distribution as the distribution loss. KL divergence is a measure of how one probability distribution diverges from the expected probability distribution. Suppose we have two probability distributions P and Q for a random variable \ud835\udf09. If \ud835\udf09is a discrete random variable, the KL divergence from P to Q is defined as: DKL(\ud835\udc43\u2225\ud835\udc44) = \u2211\ufe01 \ud835\udc56 \ud835\udc43(\ud835\udc56) ln \u0012 \ud835\udc43(\ud835\udc56) \ud835\udc44(\ud835\udc56) \u0013 (10) According to the analysis of 5.1, we assume that the output follows a normal distribution, denoted as \ud835\udc5d1 \u223cN (\ud835\udf071, \ud835\udf0e2 1) and the standard normal distribution is denoted as \ud835\udc5d2 \u223cN (0, 1). 
The distribution loss L\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61used in this paper is as follows: L\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61= \ud835\udc3e\ud835\udc3f(\ud835\udc5d1\u2225\ud835\udc5d2) = \u22121 2 \u00d7 [2 log\ud835\udf0e1 + 1 \u2212\ud835\udf0e2 1 \u2212\ud835\udf072 1] (11) In the above two loss functions, L\ud835\udc64ensures the correct decoding of watermark information, while L\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61guarantees the initial distribution of latent variables, thereby ensuring the quality and semantic \fConference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu consistency of the images. The encoder and decoder are jointly trained by minimizing the following loss function: L = \ud835\udf061L\ud835\udc64+ \ud835\udf062L\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61 (12) \ud835\udf061 and \ud835\udf062 represent the proportion constant parameter. Therefore, the trained encoder is capable of embedding information into latent variables that approximately adhere to a standard normal distribution, while the decoder can be seen as the inverse process of the encoder to extract the watermark. 4.3 Decoder Fine-Tuning According to the analysis of 5.3, throughout the entire watermark embedding-extraction process, following factors contribute to decoding imprecision: (1) The diffusion inversion process approximates the differences between adjacent step latent variables; (2) Since the prompt is not available in the practical scenarios during the decoding stage, we utilize zero-text inversion; (3) Potential alterations and manipulations to the image occur through various image processing techniques. These factors contribute to inevitable deviations in the inferred initial latent variables. In essence, these processes result in a global shift of the initial latent variables in the semantic space of the images. The decoder can accurately extract most of the watermark information, but samples located at the edges of the watermark region exhibit significant inaccuracies. During the fine-tuning phase of the decoder, the objectives are to adapt to the shift occurring in the watermark region and accurately extract the watermark from the attacked samples. Specifically, we fix the encoder of the watermark model and diffusion model. To simulate the image processing procedures in real-world scenarios, the attack layer employs an image perturbation technique after generating various images with randomly prompted words. The perturbation layer includes randomly adding Gaussian noise, applying Gaussian blur, color jittering and image compression to the images. Adversarial training will enhance the robustness of watermark detectors against image processing. After inverting the images subjected to image processing, we obtain the modified initial latent variables. We fine-tune the decoder by computing the mean squared error between the decoded messages and the original watermark messages as the loss function. 4.4 Error correction mechanism The scheme clearly delineates the watermark region, but during watermark detection, the effects of inversion and image processing due to adversarial training on the decoder can lead to overlap in watermark detection areas. This results in bit errors for samples at the edges of the watermark region where overlap occurs during adversarial training. 
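Before detailing the error-correction mechanism, the joint training objective of Eqs. (9)-(12) and the decoder-only fine-tuning of Section 4.3 can be illustrated with a minimal PyTorch-style sketch. The module interfaces, tensor shapes and the helpers generate_image and invert_to_latents (standing in for prompt-guided denoising plus VAE decoding, and for zero-text diffusion inversion) are assumptions for illustration, not the released implementation.

import torch
import torch.nn.functional as F
from torchvision import transforms

def pretrain_step(encoder, decoder, message, lambda_w=1.0, lambda_dist=1.0):
    # message: float tensor of 0/1 bits, shape (batch, k).
    # The encoder maps the message to a Gaussian over the initial latents and
    # samples from it with the reparameterization trick.
    mu, logvar = encoder(message)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    # L_w (Eq. 9): MSE between the original and the decoded message.
    loss_w = F.mse_loss(decoder(z), message)
    # L_dist (Eq. 11): KL(N(mu, sigma^2) || N(0, 1)), keeping the latents close
    # to a standard normal so image quality and semantics are preserved.
    loss_dist = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return lambda_w * loss_w + lambda_dist * loss_dist  # Eq. (12)

# Attack layer simulating post-processing (blur, color jitter, Gaussian noise, ...).
attack_layer = transforms.RandomChoice([
    transforms.GaussianBlur(kernel_size=7),
    transforms.ColorJitter(brightness=0.5, contrast=0.5, hue=0.1),
    transforms.Lambda(lambda img: img + 0.05 * torch.randn_like(img)),
])

def finetune_decoder_step(decoder, optimizer, encoder, message,
                          generate_image, invert_to_latents):
    with torch.no_grad():  # the encoder and the diffusion model stay frozen
        mu, logvar = encoder(message)
        z0 = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        image = generate_image(z0)                        # denoising with a random prompt + VAE decoding
        z0_hat = invert_to_latents(attack_layer(image))   # zero-text diffusion inversion
    loss = F.mse_loss(decoder(z0_hat), message)  # only the decoder receives gradients
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()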
We have provided detailed reasons and explanations in the security analysis 5.4 and elucidated the reasons and necessity for employing error correction codes. Recursive Systematic Convolutional (RSC) Codes: RSC codes provide a systematic approach to encoding and decoding bitstreams, allowing for error correction of data and adaptive recovery of the original message from corrupted data. Concretely, Given an input bitstream m, the RSC encoder transforms it into another bitstream \ud835\udc5a+\ud835\udc501 +\ud835\udc502...\ud835\udc50\ud835\udc58. where each \ud835\udc50\ud835\udc56is a bitstream that has the same length as the bitstream \ud835\udc5aand the symbol + indicates the concatenation of bitstreams. A higher encoding ratio can withstand a greater proportion of errors but results in a lengthier encoded bitstream. When decoding, if the modified bit string \ud835\udc5a\u2032 +\ud835\udc501...\ud835\udc50\ud835\udc56is input to the RSC decoder and the error rate of the encoded stream is less than a certain threshold, the original information m can be recovered. We can utilize this property to make corrections to watermark encoding. Turbo Codes [3]: Turbo codes can tolerate more bit errors compared to other codes at the same bit rate. A typical Turbo code consists of two convolutional codes and an interleaver. The primary function of the interleaver is to shuffle the outputs of the two convolutional codes, increasing the independence of each code and thus enhancing error correction performance. During the decoding process, an iterative algorithm is utilized to estimate and rectify errors iteratively, thereby enhancing the error correction performance. In our experiments, we utilize Turbo codes as error correction codes to further enhance the stability of watermark extraction. The specific process involves the model owner assigning identity information to the model user, which is then encoded into identity information codes with redundancy using Turbo codes. These identity information codes undergo encoding by the encoder, denoising of latent variables, inversion of latent variables, extraction of watermark information and error correction of the extracted identity information redundancy codes to restore the initial identity information. The mechanism of error correction codes combines partial watermark regions into a unified part, correcting the initial latent variables located at the boundaries of watermark detection regions, thereby enhancing the robustness of watermark detection. 5 THEORETICAL ANALYSIS OF THE PROPOSED SCHEME 5.1 Unified Representations of Watermark Regions and Latent Variables Based on the initial requirements, we aim to establish a unified representation for watermark information and latent variable regions. For each watermark \ud835\udc4a, specific distributions of latent variables are distributed. These settings ensure that all images generated by the model can be attributed to the initial distribution of latent variables. Formally, We set both the diffusion model and the watermark model to share the same latent space. For specific parts of this latent space, we can sample and extract watermark features based on a probability function \ud835\udc43(\ud835\udc67). We assume a series of deterministic functions \ud835\udc53(\ud835\udc67;\ud835\udf03) parameterized by a vector \ud835\udf03in some space \u03a6, where \ud835\udc53: \ud835\udc4d\u00d7 \u03a6 \u2192X . 
When \ud835\udf03is fixed and \ud835\udc67\u223cN (1, 0), \ud835\udc53(\ud835\udc67;\ud835\udf03) can generate latent variables that conform to a standard Gaussian distribution. By adopting this approach, we can construct watermark distribution regions corresponding to specific watermark information. These regions coincide with the latent variables of the diffusion model, achieving a unified representation of both. Additionally, the distribution of the watermark conforms to a standard normal distribution. This embedding process solely alters the selection of initial latent variables, preserving semantic consistency and image quality. \fDiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model Conference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY We aim to optimize \ud835\udf03such that \ud835\udc43(\ud835\udc67) can be sampled \ud835\udc67from while ensuring it closely matches the watermark \ud835\udc4a. To formalize this concept mathematically, the objective of DiffuseTrace is to maximize the probability of each \ud835\udc4athroughout the entire watermark extraction process. This objective stems from the principle of maximum likelihood. If the decoder is capable of reconstructing the watermark from the latent variables, it is also likely to reconstruct watermark from similar samples and unlikely to reconstruct watermark from dissimilar ones. To illustrate the dependence of \ud835\udc43(\ud835\udc67) on\ud835\udc4a, we transform \ud835\udc53(\ud835\udc67,\ud835\udf03) into \ud835\udc43(\ud835\udc4a|\ud835\udc67;\ud835\udf03). The probability density function can be formalized as follows: \ud835\udc43(\ud835\udc4a) = \u2211\ufe01 \ud835\udc67 \ud835\udc43(\ud835\udc4a|\ud835\udc67;\ud835\udf03)\ud835\udc43(\ud835\udc67) (13) The output distribution conforms to a Gaussian distribution after watermark embedding. Therefore, \ud835\udc43(\ud835\udc4a|\ud835\udc67;\ud835\udf03) satisfies the following distribution: \ud835\udc43(\ud835\udc4a|\ud835\udc67;\ud835\udf03) = N (\ud835\udc4a|\ud835\udc53(\ud835\udc67;\ud835\udf03), \ud835\udf0e2\ud835\udc3c) (14) After embedding the watermark, the latent variables have a mean of \ud835\udc53(\ud835\udc67;\ud835\udf03) and a covariance equal to the identity matrix \ud835\udc3cmultiplied by the scalar \ud835\udf0ewhich is a hyperparameter. 5.2 The Implicit Allocation of Watermarks Essentially, we need to partition the standard normal distribution, with each partition capable of accurately reconstructing the original watermark. For a 48-bit watermark, dividing into over two hundred eighty-one trillion regions presents a challenge in manually determining the watermark encoding regions given the complexity of explicitly partitioning watermark regions under the standard normal distribution. This implicit partitioning problem is analogous to the challenges faced by variational autoencoders in fitting distributions to data. As outlined in the paper [11], any distribution in \ud835\udc51dimensions can be generated using a set of \ud835\udc51variables drawn from a normal distribution and mapped through a sufficiently complex function. For \ud835\udc43(\ud835\udc4a), within the partitioning of the watermark into over two hundred eighty-one trillion blocks, most sampled \ud835\udc67contribute minimally to \ud835\udc43(\ud835\udc4a), since \ud835\udc43(\ud835\udc4a|\ud835\udc67) is close to zero for most \ud835\udc67. 
The approximation of the prior distribution can be simplified by introducing the posterior distribution \ud835\udc5e(\ud835\udc67|\ud835\udc65). By computing the KL divergence between the posterior and prior distributions, we obtain: \ud835\udc37[\ud835\udc5e(\ud835\udc67|\ud835\udc64)||\ud835\udc5d(\ud835\udc67|\ud835\udc64)] = E\ud835\udc67\u223c\ud835\udc5e[(\ud835\udc67|\ud835\udc64) \u2212log\ud835\udc5d(\ud835\udc67|\ud835\udc64)] (15) The same to solving the variational evidence lower bound, we derive the watermark reconstruction evidence lower bound through Bayesian transformation: log\ud835\udc5d(\ud835\udc64) \u2265E\ud835\udc67\u223c\ud835\udc5e[log\ud835\udc5d(\ud835\udc64|\ud835\udc67)] \u2212\ud835\udc37[\ud835\udc5e(\ud835\udc67|\ud835\udc64)||\ud835\udc5d(\ud835\udc67)] (16) The first term in the equation represents maximizing the expected probability distribution of the watermark \ud835\udc64given the latent variable \ud835\udc67, i.e., the loss incurred by the watermark decoder in reconstructing the watermark. The second term is for the approximate posterior distribution of the latent space \ud835\udc67to closely resemble the prior distribution, i.e., the watermark information generated by the encoder and the standard normal distribution should be as similar as possible. 5.3 Offset of Watermark Detection Region As stated in Equation 7, diffusion inversion attributes the generated image to the initial latent variables. The assumption of diffusion inversion approximates \ud835\udc4b\ud835\udc61\u22121\u2212\ud835\udc4b\ud835\udc61to \ud835\udc4b\ud835\udc61+1\u2212\ud835\udc4b\ud835\udc61. While unconditional diffusion inversion can yield accurate results, excessive guidance scale in conditional diffusion amplifies the errors introduced by null-text diffusion inversion [23]. In fact, after extracting the semantic embeddings of the images, conducting a forward pass after each inversion and applying gradient descent can enhance the effectiveness of the inversion process. Let the current latent variable be \ud835\udc4d\ud835\udc61\ud835\udc56. \ud835\udc4d\ud835\udc61\ud835\udc56+1 is obtained after performing inversion on \ud835\udc4d\ud835\udc61\ud835\udc56. The process of solving \ud835\udc4d\ud835\udc61\ud835\udc56+1 under the guidance of extracting semantic embeddings can be mathematically expressed as follows: \u2207\ud835\udc9b\ud835\udc61\ud835\udc56+1 \u2225\ud835\udc9b\ud835\udc61\ud835\udc56\u2212\ud835\udc9b\u2032 \ud835\udc61\ud835\udc56\u22252 2 (17) Theoretically, refining the decoder by restoring the initial latent variables through the aforementioned approach would yield better results. However, considering the computational overhead of the gradient descent process, the approach adopted in the paper accepts the inaccuracy of diffusion inversion under zero text and defines this inaccuracy as the offset to the watermark detection region. The purpose of fine-tuning is to learn the offset vector as \ud835\udc5d. The watermark encoder trained with maximum likelihood exhibits the following properties, similar samples have more similar latent variables, while dissimilar samples have greater distances in the latent space. The distance between the similar latent variables obtained after inversion and the initial latent variables should be less than a certain threshold \ud835\udf16to guarantee the accuracy of detection. 
After the watermark region is segmented, the watermark detection area is offset due to diffusion inversion, the refinement target at this point transforms into: min \ud835\udf03 E(\ud835\udc65\ud835\udc56,\ud835\udc66\ud835\udc56)\u223cD [max \ud835\udc3f(\ud835\udf03,\ud835\udc56\ud835\udc5b\ud835\udc63(\ud835\udc51\ud835\udc52\ud835\udc5b\ud835\udc5c(\ud835\udc65\ud835\udc56)) + \ud835\udc5d\ud835\udc56,\ud835\udc66\ud835\udc56)]. (18) \ud835\udc65\ud835\udc56denotes specific latent variables and \ud835\udc66\ud835\udc56denotes the watermark region \ud835\udc65\ud835\udc56belong. In the formula 19, \ud835\udc51\ud835\udc52\ud835\udc5b\ud835\udc5crepresents the process of diffusion denoising 6, \ud835\udc56\ud835\udc5b\ud835\udc63represents the precise inversion process, and \ud835\udc5d\ud835\udc56denotes the offset of the watermark detection area caused by approximation 7. After fine-tuning the watermark decoder, it should satisfy \ud835\udc5d< \ud835\udf16to ensure detection accuracy. Taking into account that images may undergo various treatments including image blurring, Gaussian noise, color transformations, etc., such attacks can affect samples on the edges of the watermark region, leading to decreased detection accuracy. Essentially, this process does not alter the watermark\u2019s region, but it notably aids in repairing the evasion of edge samples. Adversarial training can appropriately expand the range of the watermark detection region for various attacks. Therefore, the refinement target can be further transformed into: min \ud835\udf03 E(\ud835\udc65\ud835\udc56,\ud835\udc66\ud835\udc56)\u223cD [max \ud835\udc3f(\ud835\udf03,\ud835\udc56\ud835\udc5b\ud835\udc63(\ud835\udc51\ud835\udc52\ud835\udc5b\ud835\udc5c(\ud835\udc65\ud835\udc56) + \ud835\udeff) + \ud835\udc5d\ud835\udc56,\ud835\udc66\ud835\udc56)]. (19) The variable \ud835\udeffcan be expressed as the deviations caused by various attacks on the image, where images generated after such attacks are semantically similar but deviate from the original image in semantic space. Correcting this step further enhances the watermark decoder\u2019s detection accuracy for edge samples. \fConference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu 5.4 Security Analysis Based on the above analysis, DiffuseTrace divides the watermark region into multiple contiguous areas. Assuming the image undergoes image processing resulting in changes compared to the original image, this change is assumed to be \ud835\udf16\ud835\udc5dwithin the latent variable space. The initial latent variable corresponding to the original image is \ud835\udc4d0 \u2208X. As long as \ud835\udc4d\ud835\udc47+\ud835\udf16\ud835\udc5d\u2208X, the watermark verification is successful. For the initial latent variables close to the center of the watermark region, the distance from the latent variables to other watermark regions is \ud835\udc37\u226b\ud835\udf16\ud835\udc5d. In this case, watermark verification is straightforward. However, for samples at the edges of the watermark region, \ud835\udc4d\ud835\udc47+ \ud835\udf16\ud835\udc5d\u2209X. In the detection phase, we effectively expanded the detection area for each watermark, considering the outer radius \ud835\udc5fof each watermark region as part of the region itself. 
This process can be formalized as follows: \ud835\udc51\ud835\udc52\ud835\udc61\ud835\udc52\ud835\udc50\ud835\udc61(\ud835\udc67\ud835\udc47+ \ud835\udf16\ud835\udc5d) = \ud835\udc57\ud835\udc62\ud835\udc51\ud835\udc54\ud835\udc52(\ud835\udc670 + \ud835\udf16\ud835\udc5d\u2208(X + r)) (20) The size of \ud835\udc5fdepends on the magnitude of the perturbations in adversarial samples used during adversarial training. Expanding the partition of watermark regions actually increases the risk of overlap to some extent between different watermark regions. We set the distance \ud835\udc51between the two watermark regions. Since the encoder remains fixed, the region of the watermark itself won\u2019t change. However, due to inversion-induced overall shifts and image processing, the detection area post-inversion corresponds to a deviated initial region. If \ud835\udc5f\u2264\ud835\udc51, adversarial training enhances the robustness of the watermark, ensuring that even edge latent variables can still extract the watermark. Security Analysis Without Attack. If the magnitude of adversarial training \ud835\udc5fexceeds \ud835\udc51, it causes the watermark from one edge sample to fall within the detection range of another, leading to bit errors. Indeed, during training, adversarial samples at the boundary regions steer the model in the wrong direction, while correct samples in these regions guide the model back on track. As a result, the accuracy of samples in these areas remains above fifty percent but unstable, leading to a fluctuating state. To correct such errors, we employ error correction codes. As mentioned by 4.4, if the error rate of samples in the boundary region is within an acceptable range, error correction codes can restore the original information. Essentially, this approach uses a larger amount of information to rectify errors and merges multiple regions into one. Security Analysis of Image Processing. In our scheme, we consider image manipulation where the same image undergoes a certain offset in the latent space, but within an acceptable range smaller than a certain threshold. If the corresponding change in the latent space is less than \ud835\udc51, adversarial training ensures that both central and marginal latent variables can successfully decode the information. Common image manipulations such as Gaussian transformations, color jittering, brightness variations and image compression all keep the image\u2019s position in the latent space within an acceptable range. Therefore, DiffuseTrace effectively defends against such attacks. Even with significant image manipulations such as JPEG compression to 10 percent, contrast increase to 8, and brightness increase to 6, DiffuseTrace maintains a certain level of accuracy. Security Analysis of VAE-based Attacks and Diffusionbased Attacks. The core idea behind attacks such as VAE-based attacks and Diffusion-based attacks in the proposed scheme is to disrupt and reconstruct. Disruption involves adding noise to the image, while reconstruction involves removing the noise through a diffusion model. The reason why such attacks can succeed is that the primary objective of most watermarking schemes is to add minimal watermark noise to the image while still being able to extract the watermark information. These methods often utilize the LPIPS loss [47] or differences in the color channels of the image as the loss function, aiming to minimize the SSIM and PSNR metrics of the final image. 
This allows reconstruction attacks to exploit this vulnerability by continuously adding noise to gradually degrade the stability of the watermark. Eventually, the reconstruction process generates an image that is indistinguishable from the watermark. While some watermarking schemes, such as Stegastamp, sacrifice image quality and significantly increase adversarial training to enhance their stability, there is no defense against reconstruction attacks when constructive steps become sufficiently numerous. In fact, reconstruction attacks can even produce images that are clearer than the watermark samples. The watermark based on the initial latent variables primarily operates at the semantic level, allowing for stable watermark extraction as long as there are no significant changes in the image within the latent space. Attacks on watermarks based on diffusion models by adding noise do not alter the original semantic content of the image actually. The initial hidden space positions of the image can still be discerned which makes it resistant to such reconstruction attacks. This is exactly the advantage of DiffuseTrace in combating attacks. 6 EXPERIMENTS 6.1 Experimental settings Datasets. In the experiment, we utilized the following datasets: \u2022 Real Photos: 500 images were randomly selected from MSCOCO [18], which contains over 328K images along with their annotations. \u2022 AI-Generated Images Prompts: 500 prompts were randomly sampled from Diffusion Prompts, a database of approximately 80,000 prompts filtered and extracted from image finders. \u2022 AI-Generated Images: 500 images and prompts are randomly chosen from StableDiffusionDB [41]. This dataset contains images generated by Stable Diffusion based on prompts and hyperparameters provided by actual user interactions. Watermark Baselines. For traditional watermarking schemes, we selected DcTDwt [1] and DcTDwtSvD [8] which is deployed in Stable Diffusion as a watermark with an embedding capacity of 48. For post-processing watermarking schemes based on EncoderDecoder/GAN structures, we chose RivaGAN [44], Hidden [51] and StegaStamp [37] with embedding capacities of 32, 48, 48, and 48 respectively. For watermarking schemes based on Variational Autoencoders, we chose Stable Signature [13] and and SSLWatermark [14] with an embedding capacity of 48. Additionally, for watermarking schemes based on latent variables, we chose Tree-Ring with a watermark radius of 10. Given that tree-ring is a zero-bit watermark scheme, we utilized p-values as the detection metric. The \fDiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model Conference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY Table 1: Bit Accuracy/Detection Accuracy Under Image Processing Method Brightness Noise Contrast Hue JPEG Blur Resize BM3D Value 2.0 0.05 2.0 0.25 50 7*7 0.3 30 Traditional Wm. DwtDct 0.601/0.000 0.801/0.642 0.497/0.000 0.479/0.000 0.488/0.000 0.582/0.092 0.493/0.000 0.498/0.000 D.Svd 0.612/0.042 0.850/0.999 0.718/0.118 0.485/0.000 0.498/0.000 0.989/0.999 0.506/0.000 0.632/0.084 Enc.-Dec. Wm. RivaGan 0.975/0.999 0.960/0.994 0.832/0.992 0.984/0.999 0.773/0.801 0.867/0.924 0.504/0.000 0.858/0.873 Hidden 0.964/0.999 0.971/0.994 0.979/0.999 0.992/0.999 0.849/0.823 0.816/0.852 0.825/0.873 0.626/0.168 S.Stamp 0.937/0.999 0.979/0.999 0.972/0.999 0.995/0.999 0.952/0.999 0.981/0.999 0.972/0.999 0.980/0.999 VAE-Based Wm. 
S.Signa 0.971/0.999 0.976/0.996 0.965/0.999 0.954/0.994 0.806/0.809 0.781/0.822 0.513/0.011 0.604/0.013 Latent-Based Wm. SSLWm. 0.927/0.999 0.627/0.124 0.975/0.999 0.942/0.997 0.547/0.000 0.997/0.999 0.844/0.901 0.620/0.224 Ours 0.942/0.999 0.915/0.999 0.959/0.999 0.982/0.999 0.912/0.999 0.966/0.999 0.922/0.999 0.902/0.999 Table 2: Image Sematic Quality and Undetectability Evaluation. The table demonstrates the impact of adding semantic watermarks on image quality through two No-inference Metrics, NIQE and PIQE. The semantic consistency before and after adding the DiffuseWatermark is evaluated through the Clip metric. Dataset Method NIQE\u2193PIQE\u2193Clip\u2191Bit/Detect DiffusionDB No-Watermark 4.91 28.21 0.342 0.511/0.000 Tree-ring(rad10) 5.32 30.28 0.332 -/0.999 Tree-ring(rad20) 6.64 37.33 0.301 -/0.999 DiffuseTrace(16) 4.22 29.08 0.344 0.999/0.999 DiffuseTrace(32) 5.04 29.77 0.339 0.992/0.999 DiffuseTrace(48) 4.72 28.41 0.340 0.984/0.999 MS-COCO Prompts No-Watermark 3.85 33.28 0.335 0.504/0.000 Tree-ring(rad10) 4.32 34.28 0.324 -/0.999 Tree-ring(rad20) 5.64 38.33 0.291 -/0.999 DiffuseTrace(16) 4.12 33.25 0.333 0.999/0.999 DiffuseTrace(32) 3.81 30.21 0.326 0.994/0.999 DiffuseTrace(48) 4.17 32.34 0.330 0.990/0.999 Diffusion Prompts No-Watermark 4.88 29.72 0.326 0.488/0.999 Tree-ring(rad10) 5.32 30.28 0.327 -/0.999 Tree-ring(rad20) 5.94 37.33 0.303 -/0.999 DiffuseTrace(16) 4.93 28.42 0.358 0.999/0.999 DiffuseTrace(32) 5.11 30.18 0.353 0.999/0.999 DiffuseTrace(48) 4.70 26.33 0.328 0.984/0.999 corresponding bit capacity of DiffuseTrace is 48 bits. Considering the additional overhead of redundancy codes, no error correction codes were used in the comparative experiments. Attack Baselines. To thoroughly evaluate the robustness of DiffuseTrace, we test it against a comprehensive set of baseline attacks that represent common image processing, VAE-based attack and diffusion-based attack. Specially, The set of attacks employed in our testing includes: \u2022 brightness and contrast change of 2.0 \u2022 addition of Gaussian noise with standard deviation of 0.05 \u2022 Adjustment of the hue by 0.25. \u2022 JPEG compression with a quality setting of 50. \u2022 BM3D denoising algorithm with Peak Signal-to-Noise Ratio Standard Deviation of 30. \u2022 Gaussian blur with kernel size 7 and standard deviation of 1 \u2022 Two Variational AutoEncoder (VAE) based image compression models, Bmshj18 [2] and Cheng20 [7], both with compression factors of 3. \u2022 A stable diffusion-based image regeneration model for watermark attack, Zhao23 [49] with 40 denoising steps. Evaluation Metrics. The two main objectives of incorporating watermarks are copyright protection and user tracing. Therefore, we utilize \ud835\udc5d\ud835\udc63\ud835\udc4e\ud835\udc59\ud835\udc62\ud835\udc52as the standard for copyright tracing and utilize bit accuracy rate as the standard for user tracing. We set a decision threshold to reject the null hypothesis for \ud835\udc5d< 0.01, requiring detection of the corresponding method-corrected 24/32 and 34/48 bits. Otherwise, the image is deemed to be without a watermark. Semantic consistency. Since our images have watermarks added before image generation, the watermark is reflected at the semantic level. Therefore, we choose the CLIP score metric [25] to evaluate the semantic consistency between generated images and prompt words. 
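The paper cites the CLIP score [25] without fixing an implementation; one common way to compute it, assuming the Hugging Face transformers library and the ViT-B/32 checkpoint, is sketched below.

import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image, prompt) -> float:
    # image: a PIL image; prompt: the text used to generate it.
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # image_embeds and text_embeds are the projected CLIP embeddings; their cosine
    # similarity serves as the semantic-consistency score between image and prompt.
    return torch.nn.functional.cosine_similarity(out.image_embeds, out.text_embeds).item()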
The reference metric will be used to evaluate the semantic quality difference between images generated with and without watermark embedding in DiffuseTrace, in order to assess the fidelity of the watermark embedding. Image Quality. We evaluate image quality with two no-reference metrics, the Natural Image Quality Evaluator (NIQE) score [22] and the Perceptual Image Quality Evaluator (PIQE) score [38]. These indicators are used to compare unwatermarked images with watermarked ones, quantifying the quality loss introduced by watermark embedding and the invisibility of the watermark. 6.2 Semantic and image quality evaluation Table 2 reports the impact of embedding the watermark on image quality and semantic consistency. The experiment used the stable-diffusion-2-1-base model [29] with 25 inference steps at a guidance scale of 5. The results indicate no significant differences in the NIQE and PIQE quality metrics across different watermark bit lengths. In addition, the semantic alignment of the generated images, as assessed by CLIP scores, remains similar to that of the original model. This suggests that DiffuseTrace does not rely on the trade-off between image quality and watermark robustness that is typical of post-processing watermarks. Figure 2: The figure illustrates the performance of DiffuseTrace in response to various attacks of different intensities (brightness, noise, contrast, JPEG, blur, resize, BM3D, the VAE-based attack of Cheng 20 and the diffusion-based attack of Zhao 23), measured by bit accuracy and TPR@1%FPR. It also compares the TPR@1%FPR of baseline watermarks such as DwtDctSvd, SSL Watermark, Stable Signature and StegaStamp under the corresponding attacks. The watermark capacity for each scheme in the figure is 48 bits.
Since images are generated entirely through the correct sampling process and the latent variables are properly distributed, with no subsequent modification of the images, DiffuseTrace has a clear advantage in image quality over post-processing solutions. Among other methods that embed watermarks in the latent space, the SSL watermark is still essentially a post-processing solution, while Tree-Ring alters the distribution of the initial latent variables by embedding the watermark in the frequency domain of the latent space through a Fourier transform. The U-Net cannot recover from the losses incurred in this process, so as the watermark radius increases the generated images suffer significant losses in both quality and semantic consistency. 6.3 Robustness evaluation against image processing Table 1 reports the robustness of the watermark against various image-processing attacks. DiffuseTrace remains stable and robust under common image-processing attacks, achieving a watermark detection rate close to 100 percent and an average bit accuracy above 90 percent. Compared with post-processing and VAE-based watermarking schemes, DiffuseTrace is notably stable under heavy image compression and resizing. StegaStamp remains highly robust in this comparison because it sacrifices image quality and uses error-correction codes to compress 96 bits into 48 bits, leaving a relatively large error-correction margin. Stable Signature, a watermark designed specifically for diffusion models, remains stable under most attacks but is vulnerable to denoising algorithms and heavy processing. Furthermore, we conducted detailed experiments over a range of perturbation amplitudes and compared our method with the latent-based SSL watermark and the VAE-based Stable Signature under maximum attack intensity. The results show that DiffuseTrace has a significant advantage in stability under image processing compared to other methods. 6.4 Robustness evaluation against VAE-based attacks and diffusion-based attacks. In Table 3, we evaluate the accuracy of various watermarking schemes under deep-learning-based attacks, consisting of VAE-based attacks and the latest diffusion-based attacks. The table reveals that the majority of schemes cannot withstand this type of reconstruction attack, and Stable Signature in particular proves fragile. The results show that DiffuseTrace has a significant advantage in countering VAE-based attacks and attacks utilizing diffusion models: SSL Watermark and Stable Signature both exhibit low watermark detection rates, indicating that they resist neither VAE-based nor diffusion-based attacks, and under diffusion-based attacks every scheme except DiffuseTrace suffers a significant drop in bit accuracy. In subsequent experiments, we increased the intensity of the diffusion attack; the results show that DiffuseTrace exhibits significantly higher resilience against reconstruction attacks than other methods.
Even the VAE-attack with a quality coefficient of 1 or the diffusion-based attack with 150 denoise steps do not fundamentally affect the stability of the watermark and only the DiffuseTrace was able to maintain accuracy in high-intensity reconstruction. Reconstruction attacks are achieved by maintaining semantic accuracy, continuously adding noise to destroy watermarks and continuously reconstructing and restoring images to obtain images without watermarks. However, this process essentially does not alter the semantic consistency of the image nor does it significantly alter the initial latent variables of image inversion. Therefore, DiffuseTrace can remain stable under reconstruction attacks. Table 3: Bit Accuracy/Detection Accuracy Under Deeplearning-based Attack Method VAE A. Diffusion A. Bmshj18 [2] Cheng20 [7] Zhao23 [49] Traditional Wm. DwtDct 0.524/0.000 0.517/0.012 0.489/0.000 D.Svd 0.504/0.000 0.512/0.013 0.523/0.014 Enc.-Dec. Wm. RivaGan 0.611/0.063 0.632/0.070 0.588/0.070 Hidden 0.621/0.170 0.641/0.198 0.497/0.009 S.Stamp 0.979/1.000 0.965/1.000 0.852/0.927 VAE-based Wm. Stable Signature 0.616/0.224 0.682/0.409 0.541/0.014 Latent-based Wm. SSL Wm. 0.623/0.123 0.631/0.144 0.655/0.149 Tree-ring /0.993 /0.991 /0.997 Ours 0.972/1.000 0.967/1.000 0.970/1.000 6.5 Ablation Experiments This section, we experimentally quantify the impact of several key hyperparameters mentioned in the theoretical analysis 5 on the inaccuracy. we consider the impact of the guidance scale used during the generation phase, the inference steps employed during the inversion phase, and the version of the model on watermark detection in order to demonstrate the effectiveness of DiffuseTrace. Ablation on Guidance Scale. In the theoretical analysis 5.3, we elaborate on the reasons why the guidance scale introduces errors into the experiments. In the following experiments, we quantify the impact of the guidance scale on the DiffuseTrace watermarking scheme through experimentation. For the ablation experiment on the guidance scale, the scheduler is set the dpm++ [20] scheduler. The experimental setup includes setting both the inference steps and reverse inference steps to 20. We adjust the guidance scale to assess its influence on the experimental results. The specific experimental results depicted in the graph 3 show that as the guidance scale increases during the inference stage, the bit accuracy gradually decreases, while the detection accuracy remains relatively stable within the guidance scale range of 0 to 20. This indicates that the watermark detection accuracy is maintained, demonstrating the robustness of the watermark. Thus, users can freely adjust the guidance scale during the image generation stage while still ensuring traceability of the watermark. Deploying the diffusion model as a service can provide users with the option to adjust the guidance scale hyperparameter, which will not significantly affect watermark detection. Ablation on Inference Steps. The ablation experiment for reverse inference steps employed the DPM++ scheduler, with an inference step setting of 20 and a guidance scale set to 5. The evaluation of the experiment\u2019s results involves adjusting the reverse inference steps to assess their impact. The experimental results, as depicted in the figure 3, indicate that after 5 inference steps of inversion, the watermark detection rate stabilizes. Even with only 2 inference steps, a good detection rate can still be maintained. 
This suggests that the number of inference steps does not significantly affect the accuracy of detection. Therefore, during the detection phase, to increase efficiency, a small number of reverse inference steps can be employed to extract the image watermark. 7 RELATED WORK 7.1 Detection of AI-Generated Images It is difficult for humans to distinguish between real and fake images. Realistic fake images intensify concerns about the disinformation dissemination. To tackle this problem, various fake image detection approaches have been proposed. A typical approach [15, 36, 39] involves extracting temporal, frequency, and texture features from images. Subsequently, a feature extraction network is constructed to train a binary classifier to distinguish between AI-generated images and real images. However, this image detection method exhibits noticeable performance degradation when applied to diffusion models. For AI-generated image detection based on diffusion models [40], leveraging a pre-trained diffusion model allows for a more accurate reconstruction of the characteristics of images generated through the diffusion process. By reconstructing the diffusion process, differences between real images and images generated by the diffusion model can be detected thereby enabling the detection of AI-generated images. Adding a watermark to generated images is also a method for identifying AI-generated images. 7.2 Image Watermarking The strategy of adding watermarks to images for protecting intellectual property rights has a long history in the field of computer vision. Traditional image watermarking methods typically involve embedding watermarks into appropriate frequency components of the image, utilizing techniques such as Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) [1] or Singular Value Decomposition (SVD) [19]. Deep learning-based approaches, such as HiDDeN [51] , StegaStamp [37], have demonstrated competitive results in terms of robustness against various geometric \fConference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY Liangqi Lei, Keke Gai, Jing Yu, and Liehuang Zhu 0 5 10 15 20 Guidance Scale 0.0 0.2 0.4 0.6 0.8 1.0 Bit accuracy/TPR@1%FPR Ablation on Guidance Scales Stable-diffusion-v1-4 Stable-diffusion-2-1-base TPR@1%FPR Bit accuracy 5 10 15 20 25 30 Reverse Inference Steps 0.0 0.2 0.4 0.6 0.8 1.0 Bit accuracy/TPR@1%FPR Ablation on Reverse Inference Steps Stable-diffusion-v1-4 Stable-diffusion-2-1-base TPR@1%FPR Bit accuracy Figure 3: The figure (left) illustrates the ablation experiment concerning the guidance scale, where adjusting the guidance scale leads to a gradual decrease in the watermark\u2019s bit accuracy, while the watermark detection rate remains stable. The figure (right) shows the results of the ablation study on reverse inference steps, where the bit rate detected stabilizes after two inference steps. transformations. These methods often employ deep learning encoders and extractors to embed and extract watermarks respectively. the aforementioned watermarking methods primarily focus on postprocessing existing images. The core idea is to achieve robustness against various attacks while minimizing the impact on the visual quality of the image. Therefore, post-processing methods are confronted with a trade-off between watermark stability, watermark capacity and image quality. For diffusion model watermarking, it can mainly be categorized into three types: Watermark embedding during training phase. 
In methods incorporating watermarks during the training phase, watermarks are embedded into the training data. The data is encoded with the watermark during training and a decoder is trained to extract the watermark. During the detection phase, all images generated by diffusion models will carry encoded binary strings. Watermark [50] is a representative approach. Methods of this kind typically have stringent requirements for watermark embedding, involving the incorporation of watermarks into a substantial dataset of images followed by training the entire model. Fine-tuning phase with watermark incorporation. The main purpose of such watermark embedding methods is to integrate the watermark component into the model component, making it inseparable during distribution. Watermarks are incorporated into model components during fine-tuning. For instance, methods like Stable Signature [13] and FSwatermark [43] fine-tune the variational autoencoders to ensure that all generated images carry the watermark. It\u2019s approximate to integrating the watermark into the final generation stage. Watermark embedding into latent space during inference. During inference steps, watermarks are added to the latent variable space of the model. Methods like Tree-ring [42] and ZoDiac [45] achieve this by diffusing inversion and applying frequency domain transformations to latent variables, ensuring that all generated images carry the watermark. DiffuseTrace also falls into this category of methods. The watermark is embedded in the image prior to its generation. 7.3 Image Watermarking Attack The goal of image watermark attacks is to assess the robustness of image detection after practical modifications. These attacks mainly fall into two categories: image processing attacks and deep learningbased attacks. image processing attacks. Common image processing techniques include adding noise, color jitter, image compression, image scaling and Gaussian blur. Image processing or compression methods may utilize frequency-domain or 3D transformation-based approaches including BM3D denoising algorithm [9]. Deep learning-based attack. Deep learning-based attack methods, including methods based on variational autoencoders such as [2] and [7] can disrupt watermarks embedded in images. In recent research, diffusion based attacks [49] are used to encode the semantic features of images, add noise to disrupt watermark and regenerate images. Reconstruction models exhibit prominent performance and can eliminate most watermarks injected by most existing methods. 8 CONCLUSION In this paper we propose DiffuseTrace, a plug-in multibit watermarking module to protect copyright of diffusion models and trace generated images sematically. DiffuseTrace extends semantic image watermarking of latent diffusion models further into multi-bit scenarios. DiffuseTrace does not rely on the balance between image quality and watermark robustness and has significant advantages in \fDiffuseTrace: A Transparent and Flexible Watermarking Scheme for Latent Diffusion Model Conference acronym \u2019XX, June 03\u201305, 2024, Woodstock, NY image quality compared to previous watermarking schemes. Compared to state-of-the-art schemes, DiffuseTrace demonstrates prominent performance against variational autoencoders and diffusionbased watermark attacks. 
Due to its flexibility and generalizability, DiffuseTrace can be seamlessly applied to copyright protection in diffusion models, recognition of machine-generated images, and user traceability in machine-generated image services. We assess the security of the watermarking scheme through theoretical analysis and demonstrate its robustness against image processing attacks and state-of-the-art image watermarking attack schemes through experiments."
17
+ }
title_10K/test_title_short_2405.02710v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02710v1",
3
+ "title": "Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning",
4
+ "abstract": "With the deluge of information delivered by the daily news cycle, there is a\ngrowing need to effectively and efficiently summarize news feeds for quick\nconsumption. We leverage large language models (LLMs), with their advanced\nlearning and generative abilities as compared to conventional language models,\nto generate concise and coherent summaries for news articles from the XSum\ndataset. Our paper focuses on two key aspects of LLMs: Efficient in-context\nLearning (ELearn) and Parameter Efficient Fine-tuning (EFit). Under ELearn, we\nfind that increasing the number of shots in prompts and utilizing simple\ntemplates generally improve the quality of summaries. We also find that\nutilizing relevant examples in few-shot learning for ELearn does not improve\nmodel performance. In addition, we studied EFit using different methods and\ndemonstrate that fine-tuning the first layer of LLMs produces better outcomes\nas compared to fine-tuning other layers or utilizing LoRA. We also find that\nleveraging more relevant training samples using selective layers does not\nresult in better performance. By combining ELearn and EFit, we create a new\nmodel (ELearnFit) that leverages the benefits of both few-shot learning and\nfine-tuning and produces superior performance to either model alone. We also\nuse ELearnFit to highlight the trade-offs between prompting and fine-tuning,\nespecially for situations where only a limited number of annotated samples are\navailable. Ultimately, our research provides practical techniques to optimize\nnews summarization during the prompting and fine-tuning stages and enhances the\nsynthesis of news articles.",
5
+ "authors": "Che Guan, Andrew Chin, Puya Vahabi",
6
+ "published": "2024-05-04",
7
+ "updated": "2024-05-04",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
14
+ "gt": "Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning",
15
+ "main_content": "Introduction There has been an overload of information with each passing day \u2013 data is more voluminous, comes in more varieties and arrives at higher velocity. The news cycle is a good example of this trend, making it more difficult to read and synthesize the vast amount of information coming our way. The advent of large language models (LLMs) has led to a substantial improvement in the effectiveness and comprehensibility of news summarization. LLMs present two ways to address downstream tasks \u2013 through prompt engineering and fine-tuning. In our research, we explore various techniques to improve model performance through better prompts and finetuning methods. First, we study efficient in-context learning, which we call ELearn to denote the process of the model learning through prompts. We examine the impact of LLM size, the number of shots, and various templates during the in-context learning . We also select relevant samples in prompting in an attempt to improve performance. We then explore efficient methods to fine-tune LLMs. Calling this technique EFit, we test the performance of selective layer fine-tuning and LoRA in news summarization. We also utilize selective samples to improve the training set for the fine-tuning process. Finally, we combine ELearn and EFit to create ELearnFit and find that this model achieves superior performance versus either model alone. We make various contributions to existing research on news summarization 1. Through ELearn, we find that using larger models, increasing the number of shots during prompting, and leveraging simple templates can all enhance model performance. We also show that utilizing selective relevant examples during prompting does not meaningfully impact performance. Through EFit, we find that fine-tuning the first layer of LLMs produces better outcomes as compared to fine-tuning other layers or utilizing LoRA, and leveraging more relevant training samples using selective samples does not result in better performance. The combined model, ELearnFit, leverages the best of both worlds and suggests practical implementations for practitioners, especially when using a limited number of annotated samples. 2 Related Work The evolution of news summarization techniques has been driven by advancements in NLP and the increasing availability of largescale datasets. Early news summarization techniques relied on statistical methods, such as frequency analysis and clustering, to extract important information from news articles. These methods were limited in their ability to capture the semantics and context of the news content. With the advent of deep learning, news summarization techniques have undergone a significant transformation. Deep learning models, particularly transformer-based architectures such as BERT and GPT-3[3, 5, 17], have demonstrated remarkable performance in various NLP tasks, including news summarization [7]. These models are able to learn complex representations of news articles and generate summaries that are both informative and coherent. Recent research in news summarization has focused on developing techniques that can handle diverse types of news articles, including 1The codes used in this study were derived from the class \"Deep Multi-Task and Meta Learning\" offered by Stanford School of Engineering. We implemented and adapted these foundational project codes to meet the specific requirements of our study. 
arXiv:2405.02710v1 [cs.CL] 4 May 2024 \flong and complex articles, and generate summaries that are tailored to specific user needs and preferences. Additionally, there has been growing interest in explainable news summarization [8, 13], which aims to provide users with insights into how summaries are generated and the rationale behind the selection of specific sentences or phrases. Fine-tuning LLMs and in-context learning are two powerful techniques that have been successfully applied to summarization [2, 6, 19]. Fine-tuning LLMs [15] involves adapting the pre-trained LLM to the specific task of news summarization by fine-tuning its parameters on a smaller, task-specific dataset. This allows the LLM to leverage its learned knowledge and adapt it to the task of generating informative and coherent summaries. In-context learning is a technique where a pre-trained LM utilizes text input to define a task. By providing the model with an instruction and/or a few task demonstrations, it gains the ability to predict subsequent steps and complete additional instances of the task [3]. Furthermore, in-context learning can be viewed as a form of implicit Bayesian inference [18]. The model learns to infer a latent concept from the context and uses it to generate a response. The pretraining distribution can be seen as a mixture of hidden Markov models (HMMs), where each HMM represents a different concept. When prompted with a specific context, the model implicitly infers the latent concept that is most relevant to the context and generates a response based on that concept. To facilitate the training and evaluation of news summarization models, large-scale datasets such as CNN/Daily Mail and XSum [4, 14] have proven invaluable. These datasets provide a diverse collection of news articles and human-generated summaries, enabling researchers to benchmark different summarization techniques and track progress in the field. 3 Approach In this study, we utilize the XSum dataset, a large-scale collection of news articles with annotated summarizations, as analyzed in Subsection 3.1 , to explore various methods for enhancing prompting (ELearn) and fine-tuning (EFit), which will be explained in Subsections 3.2 and 3.3, respectively, in a more efficient manner. Furthermore, we investigate the advantages of combining these techniques through our proposed ELearnFit approach, which will be described in Subsection 3.4. To run each model, the input consists of the testing article, which may or may not be accompanied by support article-summary pair samples in the prompt. The output is the generated summary. To generate the summary, we sample from a pre-trained language model using greedy decoding, producing tokens one by one until a stop token is encountered or the maximum token limit of 100 is reached. To evaluate the performance of the model, the ROUGE-1 F1 score is employed. This metric measures the overlap between the generated summary and the reference summary Although the main emphasis of this paper is on fine-tuning LLaMa2 models, it is worth highlighting that the strategies and techniques discussed in the following sections can be adapted to optimize the performance of other transformer-based models. 3.1 Analysis of Data and Performance of Existing Models on Leaderboard We conduct our research using the XSum dataset, which consists of a training set comprising 204,045 article-summary samples meticulously curated by the original researchers. 
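For reference, the scoring protocol described above, greedy decoding (e.g. do_sample=False with at most 100 new tokens in a typical Hugging Face generate call, an assumption about tooling) evaluated with ROUGE-1 F1, can be sketched with a simple unigram-overlap implementation; this is an illustration, not the exact scorer used in the experiments.

from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    # ROUGE-1 F1: harmonic mean of unigram precision and recall between the
    # generated summary (candidate) and the annotated summary (reference).
    ref, cand = Counter(reference.lower().split()), Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)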
In Figure 1, the distributions of article and summary lengths in the training set are displayed. Due to limited resources, we face constraints (refer to Section 4.6 for more details) in using powerful GPU machines to fine-tune models using the complete training dataset. Additionally, the size and input token limits for several representative open-source GPT models, as indicated in Table 1, could pose restrictions on testing few-shot learning scenarios. Consequently, we create a smaller dataset consisting of 17,806 samples from the training set. This subset is obtained by filtering out rows from the training dataset where the combined word count of the article and summary exceeded 100. The length distributions of the filtered articles and summaries are displayed in Figure 2. Based on numerical testing observations, it has been determined that even the filtered dataset is still too large to adequately explore optimal parameters in experiments. To ensure a fair comparison across all experiments, we further reduce the dataset by selecting the initial 256 article-summary pairs as the fine-tuning set. The remaining 125 pairs are reserved for testing purposes. It\u2019s important to note that while the testing set consists of only 125 pairs, the entire filtered dataset (excluding the testing pairs) is utilized to assist the model in selecting relevant support samples for prompting and fine-tuning, as explained in subsections 4.2 and 4.4, respectively. According to the leaderboard ranking [1], the top-performing papers in news summarization achieve impressive results by assigning probability mass to candidate summaries [20] or by aligning model-generated sequences with reference sequences [12]. These approaches consistently yield Rouge-1 scores close to 0.5 across the entire testing dataset. However, in our work, we simply sample from a pre-trained LLM using greedy decoding, generating tokens iteratively until either a stop token is encountered or the maximum token limit of 100 is reached. It is worth noting that our primary focus is on optimizing efficient techniques for in-context learning and fine-tuning in news summarization, with the specific choice of dataset and token adjustment not being crucial to the outcome of our work. Figure 1: Length Distribution of Articles and Summaries in Training Set 2 \fFigure 2: Length Distribution of Filtered Articles and Summaries (Combined Word Count \u2264100) Table 1: Number of Parameters and Input Token Limits for GPT Models (Approximately 1.5 Tokens per Word) Models Parameters Input Tokens GPT2-Medium 345 million 1,024 Eleuther-Neo 2.7 billion 2,048 LLaMa2-7B 7 billion 2,048 LLaMa2-13B 13 billion 4,096 3.2 ELearn Efficient In-Context Learning We use two simple templates to investigate the impact of templates on few-shot learning. Figure 3 illustrates the case for one-shot learning. The first template, called \"NONE,\" utilizes a single space to separate the support article, support summary, and the test article. The second template, known as \"TL;DR\" (Too Long;Didn\u2019t Read), utilizes \" TL;DR: \" to differentiate between the article and summary (Please note that there intentionally exists a space before and after \"TL;DR:\" in most occurrences, while in the last occurrence of \"TL;DR:\", there is only one space before \"TL;DR:\" and no space after the colon. This formatting choice has been made to facilitate word generation using a language model.). Additionally, a single space is used to separate the support sample from the test sample. 
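A sketch of the data preparation described above, assuming the Hugging Face datasets release of XSum (with "document" and "summary" fields) and assuming the fine-tuning and test splits are simply the first 256 and the following 125 filtered pairs; the exact split indices and loader are assumptions, not the authors' released pipeline.

```python
from datasets import load_dataset  # assumes the Hugging Face "xsum" release

xsum = load_dataset("xsum", split="train")

def short_enough(example, max_words=100):
    """Keep pairs whose combined article+summary word count is at most 100."""
    n_words = len(example["document"].split()) + len(example["summary"].split())
    return n_words <= max_words

filtered = xsum.filter(short_enough)                # ~17.8K pairs in the paper's setup
finetune_set = filtered.select(range(256))          # first 256 pairs for fine-tuning
test_set = filtered.select(range(256, 256 + 125))   # 125 held-out pairs for testing (assumed indices)
```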
For clarity, these separators are highlighted in green in the figure. Figure 3: Templates for One-Shot Learning: \"none\" vs \"TL;DR\" Figure 3 presents an example of a one-shot learning template. An interesting aspect to explore is the impact of different numbers of support examples in the prompt. When selecting examples, one approach is to randomly choose article-summary pairs from the training set, which generally provides diversified support examples. Another approach is to use retrieve similar pairs to a given testing article in the prompt, which may result in examples concentrated around specific content or topics. Furthermore, it is important to consider the size of language models, as it directly relates to memory usage and can potentially influence in-context learning. 3.3 EFit Efficient Fine-Tuning In LLaMa2 [16], the transformer block plays a crucial role in the transformer architecture. It comprises of two main sub-layers: a self-attention layer and a feed-forward network. To construct a Transformer model, multiple Transformer blocks are repeated (32 for LLaMa2-7b and 40 for LLaMa2-13b) and stacked together. Each block processes the output of the previous block, allowing the LLaMa2 model to capture both local and global dependencies in the input sequence. However, due to the large size of the model and limited GPU resources, one approach for parameter-efficient fine-tuning is to selectively choose a specific transformer block layer, such as the first layer, to fine-tune the pre-trained weight matrix \ud835\udc4a0 \u2113\u2208R\ud835\udc511\u00d7\ud835\udc512 to a new arbitrary weight matrix \ud835\udc4a\ud835\udc53\ud835\udc61 \u2113 while freezing the remaining block layers. Another approach for parameter-efficient fine-tuning is to employ LoRA (Low-Rank Adaptation). This technique freezes the pretrained model weights and introduces trainable rank decomposition matrices into each layer of the Transformer architecture. By doing so, the number of trainable parameters for downstream tasks is significantly reduced [9]. Mathematically, LoRA imposes constraints on the fine-tuned parameter space:\ud835\udc4a\ud835\udc53\ud835\udc61 \u2113 = \ud835\udc4a0 \u2113+\ud835\udc34\ud835\udc35\u22a4, where \ud835\udc34\u2208R\ud835\udc511\u00d7\ud835\udc5dand \ud835\udc35\u2208R\ud835\udc512\u00d7\ud835\udc5dare low rank matrices and \ud835\udc5d<< \ud835\udc511,\ud835\udc512. With LoRA, the number of parameters being fine-tuned for a single layer is (\ud835\udc511 + \ud835\udc512) \u00d7 \ud835\udc5d. The original number of parameters for the single layer is \ud835\udc511 \u00d7\ud835\udc512. Therefore, the ratio of parameters fine-tuned by LoRA to the original parameters is: (\ud835\udc511 + \ud835\udc512) (\ud835\udc511 \u00d7 \ud835\udc512) \u00d7 \ud835\udc5d= ( 1 \ud835\udc511 + 1 \ud835\udc512 ) \u00d7 \ud835\udc5d Let\u2019s take the query projection matrix (q_proj) of the self-attention layer of LLaMa2-7b as an example. The matrix has dimensions of 4,096 x 4,096, with \ud835\udc511 = 4, 096 and \ud835\udc512 = 4, 096. By applying LoRA with a rank parameter of \ud835\udc5d= 16 (where \ud835\udc5d<< \ud835\udc511 and \ud835\udc5d<< \ud835\udc512), we achieve a reduction ratio of 0.0078, indicating significant parameter reduction. LoRA proves to be most effective in saving parameters when \ud835\udc5dis much smaller than both \ud835\udc511 and \ud835\udc512. 
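The LoRA bookkeeping above did not survive extraction cleanly. Restated in plain terms: LoRA freezes the pre-trained matrix W_0 of shape d1 x d2 and learns W_ft = W_0 + A B^T with A of shape d1 x p and B of shape d2 x p, so a single matrix trains (d1 + d2) * p parameters instead of d1 * d2, a ratio of (1/d1 + 1/d2) * p. A small sketch reproducing the paper's q_proj example:

```python
def lora_param_ratio(d1: int, d2: int, p: int) -> float:
    """Ratio of LoRA-trainable parameters ((d1 + d2) * p) to the original
    d1 * d2 parameters of one weight matrix."""
    return (d1 + d2) * p / (d1 * d2)

# q_proj of LLaMa2-7b: a 4096 x 4096 matrix with LoRA rank p = 16
print(round(lora_param_ratio(4096, 4096, 16), 4))  # 0.0078, matching the text
```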
Furthermore, inspired by the idea of retrieval augmented generation (RAG) [10], we incorporate the selection of relevant support examples during the prompting and fine-tuning stages. This is accomplished by performing semantic search to retrieve top pairs that are similar to each individual testing article. By adopting this approach, the fine-tuning examples can be more targeted and aligned with the specific content or topics covered in the testing articles. Another alternative is to randomly select article-summary pairs, which introduces a broader range of examples for the fine-tuning process. This random selection provides diverse instances, enhancing the fine-tuned model\u2019s robustness and adaptability. 3.4 ELearnFit Combine ELearn and EFit Both the ELearn and EFit approaches, discussed earlier, have the potential to independently improve model performance. ELearn is preferable when there are few annotated examples available, whereas EFit may be more suitable when numerous examples are accessible. In practice, annotations are costly and often limited to a small number of examples per task. Moreover, training models 3 \fwith a large amount of data necessitates substantial GPU resources and time. To address these issues, we propose an approach called ELearnFit, which combines ELearn and EFit by first fine-tuning and then prompting the model. Since both ELearn and EFit have multiple parameters to optimize independently, we employ a heuristic approach. This involves selecting optimal parameters from the ELearn optimization process and then incorporating the optimal parameters from the EFit optimization process. By doing this, we effectively manage computational resources and time constraints while striving for the best parameter settings. For these experiments, prompting is conducted via random sampling of support examples in the prompt to be fed to the pre-trained model. On the other hand, fine-tuning is performed over ten iterations, with data randomly sampled from the training set without replacement in each iteration. Both the randomly sampled examples in the prompt for ELearn and the fine-tuning process for EFit introduce variability in the fine-tuning process. To comprehensively evaluate and analyze the robustness and performance of ELearn, EFit, and ELearnFit, we investigate which component contributes more to the variation. This investigation is crucial for understanding the stability and reliability of each approach across different trials and conditions. Ultimately, it will help us identify the most robust strategy that consistently delivers strong performance in the presence of variability. 4 Experiments All experiments are run on the Azure ML platform, harnessing the computational capabilities of A100 GPUs equipped with 80 gigabytes of high-bandwidth memory. This technological foundation provide the ideal setting for a series of groundbreaking investigations. In order to ensure a systematic exploration and refinement of the parameters, we conduct all experiments in a sequential manner. Through employing a heuristic sequential approach, we efficiently manage computational resources and time constraints while striving for optimal parameter settings. Subsection 4.1 focuses on ELearn, and compares the results of varying LLM model size, prompt templates, and few-shot learning paradigms. Subsection 4.2 delves into the impact of selective samples for prompting on ELearn. 
Subsection 4.3 shifts to EFit, and explores the effectiveness of parameter-efficient fine-tuning through two distinct approaches: selective layer finetuning and LoRA algorithms. Subsection 4.4 sheds light on the insights gleaned from selective training samples for EFit. Subsection 4.5 analyzes the impact of combining the capabilities of ELearn and EFit, resulting in ELearnFit, and highlighting the potential for synergy between these techniques. Lastly, Subsection 4.6 compares the robustness of the various models. 4.1 Investigate ELearn by Analyzing the Influence of Model Size, Templates, and Few-shot Learning In this experiment, we compare four representative open-source GPT models: Eleuther-Neo, GPT2-medium, LLaMa2-7b, and LLaMa213b, explore the influence of two prompt templates (none and TL;DR), and vary the number of examples in the prompt. The results are illustrated in Figure 4, where the x-axis represents the number of examples in the prompt and the y-axis represents the Rouge-1 score. Our findings suggest that increasing the number of examples in the prompt leads to improved model performance. Notably, in the case of GPT-2 models, the zero-shot performance exceeds the one-shot performance. These findings align with previous studies conducted by [3, 18] on datasets such as LAMBADA, HellaSwag, PhysicalQA, and RACE-m, which reported similar observations in relation to GPT-3. Additionally, we observe that utilizing a straightforward prompt structure, specifically \"TL;DR\" (depicted in red), facilitates the model\u2019s learning process. This simplified format enables faster pattern recognition in comparison to the none template (depicted in black). Furthermore, focusing on the four models and examining their performance with the \"TL;DR\" template (depicted in red) in Figure 4, it becomes evident that LLaMa2-7b and LLaMa213b outperform gp2-medium and Eleuther-Neo. This finding suggests that the larger models, LLaMa2-7b and LLaMa2-13b, possess superior capabilities in handling the summarization task, signifying their suitability for this specific application. Figure 4: Comparison of Four Language Models with Fewshot Learning using Two Templates 4.2 Enhance ELearn via Selective Samples during Prompting To further improve the perforamcne of ELearn, inspired by the idea of RAG for prompting, we utilize semantic search to retrieve support article-summary samples that are contextually relevant to each testing article, which enables ELearn to learn from these samples in prompts and potentially generate more accurate responses. In this experiment, we broaden the range of support samples used in the prompt by including the entire filtered dataset, excluding the samples designated for testing. The outcomes obtained using this expanded scope align closely with those achieved using the original support samples from training set, so we solely showcase the results obtained from the latter (the complete filtered dataset, excluding the 125 testing samples) in this paper. Note that the order of prompting may potentially lead to different performance results as compared to random prompt ordering. Research conducted by 4 \f[11] demonstrates that in the QA problem, the location of relevant information within the language model\u2019s input context follows a U-shaped performance curve. Moreover, the 7B Llama-2 models are biased towards recent information, performing best when it is located at the end of the input context. 
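One way the selective (RAG-style) retrieval of support examples and the "TL;DR" prompt template described earlier could be implemented is sketched below. The sentence-transformers encoder, the model name, and the document/summary field names are illustrative assumptions, not the authors' exact setup.

```python
from sentence_transformers import SentenceTransformer, util  # assumed embedding backend

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

def select_support_examples(test_article, candidate_pairs, k=4):
    """Return the k article-summary pairs most semantically similar to the test article."""
    corpus = [pair["document"] for pair in candidate_pairs]
    corpus_emb = encoder.encode(corpus, convert_to_tensor=True)
    query_emb = encoder.encode(test_article, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, corpus_emb, top_k=k)[0]
    return [candidate_pairs[h["corpus_id"]] for h in hits]

def build_prompt(support_pairs, test_article):
    """'TL;DR' template: ' TL;DR: ' separates each support article from its summary,
    a single space separates samples, and the final ' TL;DR:' has no trailing space."""
    parts = [f"{p['document']} TL;DR: {p['summary']}" for p in support_pairs]
    parts.append(f"{test_article} TL;DR:")
    return " ".join(parts)
```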
However, exploring the impact of prompt order for news summerization is beyond the scope of this research paper. Figure 5 depicts that the utilization of selective samples during few-shot learning does not significantly affect the performance of the model. One potential explanation for this outcome could be that our straightforward implementation is incapable of capturing the extensive range of topics encompassed in news articles. As a result, the support samples may not adequately represent the diverse range of subjects covered by the articles in the test dataset. Figure 5: An Evaluation of In-Context Learning Methods: Comparing Random Samples vs. Selective Samples in Prompts 4.3 Investigate EFit We explore the effectiveness of parameter-efficient fine-tuning using two approaches: LoRA (LoRA4, LoRA16, and LoRA32 algorithms) and selective layers. Figure 6 shows the results of the various fine-tuned models with LoRA as well as the models fine-tuned on specific layers (while freezing the remaining layers). The results suggest that increasing the number of training examples for fine-tuning generally leads to improved performance. When there is only one support example, all algorithms perform similarly. However, with a larger number of support examples (e.g., 8 and 64), fine-tuning the first layer and fine-tuning with LoRA16 results in significantly better performance. Furthermore, when the number of support examples is limited (e.g., 8), fine-tuning the first layer of a LLaMa2-7b often yields weaker results compared to fine-tuning with LoRA16. This is because LoRA16 makes slight modifications to each layer of the LLaMa2-7b, allowing it to adapt more effectively to a small number of examples. However, as the number of support examples increases (e.g., 64), fine-tuning the first layer of the LLaMa2-7b shows improved performance compared to fine-tuning with LoRA16. This is because fine-tuning the first layer allows the LLaMa2-7b to learn task-specific patterns and relationships more directly, leveraging the increased amount of training data. Additionally, fine-tuning with LoRA16 outperforms both LoRA4 and LoRA32. This suggests that the decomposed weight matrix with a rank of 16 is better suited for representing features learned from news articles compared to ranks 4 and 32. Finally, the model where only the last layer is fine-tuned performs the worst, suggesting that the pre-trained and fine-tuned data sets do not fully overlap. As a result, fine-tuning the lower-level, granular features proves more effective in improving performance as compared to focusing on high-level features, given an adequate number of support examples. These findings suggest that fine-tuning the first layer of LLMs has the most impact. Figure 6: Parameter-Efficient Fine-tuning using LoRA and Selective Layer Approaches (Please note that the x-axis is logarithmically scaled for values of the number of support examples greater than 4). In practice, annotated examples may not be readily available so we investigate the impact of sample size on model performance. In Figure 7, we observe that the Rouge-1 score reaches a local maximum around 64 training examples. Beyond that point, the performance exhibits fluctuations as the number of examples continues to increase. This finding suggests that 64 training examples could potentially represent a \"sweet spot\" for fine-tuning. 
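A minimal sketch of the two EFit variants compared above, using the Hugging Face transformers/peft interfaces as assumed tooling: freezing everything except the first of LLaMa2-7b's 32 transformer blocks, and attaching rank-16 LoRA adapters. The checkpoint name, LoRA target modules, and alpha are assumptions rather than the paper's exact configuration; in practice you would pick one option, they are shown together only for brevity.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # checkpoint name assumed

# Option 1: selective-layer fine-tuning -- train only the first transformer block.
for p in model.parameters():
    p.requires_grad = False
for p in model.model.layers[0].parameters():   # block index 0 = "first layer"
    p.requires_grad = True
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable params")

# Option 2: LoRA16 -- rank-16 adapters on the attention projections
# (target modules and lora_alpha below are assumptions).
lora_cfg = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.0,
                      target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                      task_type="CAUSAL_LM")
lora_model = get_peft_model(model, lora_cfg)
lora_model.print_trainable_parameters()
```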
4.4 Enhance EFit via Selective Training Samples during Fine-tuning To enhance the performance of EFit, we draw inspiration from the concepts of selecting relevant samples in the prompting phase. In this experiment, for each testing sample, we select the top 1 or top 2 most similar training samples from the entire filtered dataset, excluding the 125 testing samples. We then fine-tune the model using these selected samples. Table 2 shows the results. When fine-tuning the first layer of LLaMa2-7b, using the more similar samples during fine-tuning did not impact model performance. On the other hand, when the model is fine-tuned LoRA16, using the more similar samples led to slightly improved performance. Interestingly, the improved results under LoRA16 are comparable to the results under the model with the 5 \fFigure 7: Impact of Number of Training Examples on Finetuning the First Layer of LLaMa2-7b (note that the x-axis has been logarithmically scaled). Table 2: Comparison of Rouge-1 (%) between EFit with Random Samples and Selective Samples EFit Sampling LLaMa2-7b, Finetuned First layer LLaMa2-7b, LoRA16 Random Sample 36.32 32.43 Top 1 Selective Sample 35.36 36.16 Top 2 Selective Samples 36.62 34.38 fine-tuned first layer. This suggests that the LoRA16 model may benefit from having more relevant samples during fine tuning. 4.5 ELearnFit Optimize LLM by Combining ELearn and EFit We now look to combine the ELearn and EFit approaches to gain the benefits of both better prompting and fine-tuning. In this experiment, we focus on the TL;DR template in the prompt and two finetuned models (fine-tune the first layer of LLaMa2-7b or LLaMa2-7b with LoRA16). During each testing phase, we first fine-tune LLaMa2-7b and then apply few-shot in-context learning using different numbers (referred to as shots) of support examples (e.g., 0, 1, 2, 4, and 8 shots). The examples for in-context learning were randomly selected from the training set and incorporated into the prompts. The results, as depicted in Figure 8 and Figure 9, indicate that when there are limited annotations available for fine-tuning LLaMa27b, 4-shot learning leads to superior performance when compared to the results using less shots. Interestingly, both 4-shot and 8shot learning exhibit similar performance levels. However, this performance gap disappears when there are enough examples for fine-tuning and the results with different shot learnings converge. This suggests that few-shot learning has a lesser impact when a model is effectively fine-tuned with an adequate number of examples. Said another way, having more examples in the prompt can compensate for smaller sample sizes during the fine-tuning process. Similar to our investigation of selecting relevant samples in fewshot learning for ELearn, we now test whether this approach would Figure 8: Fine-tuning LLaMa2-7b with LoRA16 and Applying Few-shot In-context Learning Figure 9: Fine-tuning the First Layer of LLaMa2-7b and Applying Few-shot In-context Learning benefit ELearnFit during its few-shot learning phase. Figures 10 and 11 show the results of applying for ELearnFit after fine-tuning the first layer of LLaMa2-7b and fine-tuning with LoRA16, respectively. It is worth mentioning that when the number of training examples for fine-tuning is zero, it signifies pure in-context learning. Consistent with our findings in Section 3.2, these results suggest that randomly sampled examples offer a wider range of styles for the LLMs to effectively learn the summarization task. 
On the other hand, selective sampling faces challenges in capturing the desired diversity. Furthermore, it is worthwhile noting that when the model undergoes fine-tuning with LoRA16 and has an adequate number of examples (e.g., 64 examples), selective sampling demonstrates a slight improvement in overall model performance. We now use semantic search to identify the most similar training samples to fine-tune the model. Figure 12 shows that fine-tuning the first layer, using selective samples in training and 4-shot learning during incontext learning exhibits slightly inferior performance as compared to the proposed combined approach, which involves 64 examples for 6 \fFigure 10: Comparing In-Context Learning Approaches: Random Sampling vs. Selective Sampling during Prompting, Following Fine-tuning the First Layer of LLaMa2-7b Figure 11: Comparing In-Context Learning Approaches: Random Sampling vs. Selective Sampling during Prompting, Following Fine-tuning LLaMa2-7b with LoRA16 fine tuning and 4 examples in prompting. However, it outperforms the ELearnFit approach with 1or 2-shot learning. Additionally, as depicted in Figure 13, when fine-tuning with LoRA16, the combination of selective samples in training, and few-shot learning with four selective samples during in-context learning yields the overall best result. One possible explanation is that the use of selective samples for finetuning for prompting together could potentially enhance the effectiveness of finetuning LLaMA2-7b with LoRA16. This proposition finds support in the comparison between Figure 8 and Figure 9. Specifically, when evaluating the performance of the 4-shot learning scenarios, an increase in the number of examples for finetuning from 8 to 64 results in a degradation in performance for the former, as depicted in Figure 8. In contrast, the latter exhibits a stable performance, as illustrated in Figure 9. Figure 12: Comparing Fine-tuning the First Layer of LLaMa27b: Random Sampling vs. Selective Sampling in Training Set, and Random Sampling vs. Selective Sampling during Prompting Figure 13: Comparing Fine-tuning LLaMa2-7b with LoRA16: Random Sampling vs. Selective Sampling in Training Set, and Random Sampling vs. Selective Sampling during Prompting 4.6 Robustness Checks In our experiment, fine-tuning is performed over ten iterations. In each iteration, data are randomly sampled from the training set without replacement, introducing variability in the fine-tuning process. We now assess the robustness of three approaches: ELearn, EFit, and ELearnFit. The descriptions for each model are detailed in Table 3. Figure 14 presents the results obtained from five repeated trials for each approach. The x-axis represents the nth trial, while the y-axis displays the Rouge-1 score. While we were limited to five trials due to computational constraints, additional trials could be conducted to further assess the robustness of these approaches. This experimental setup allowed us to gain insights into the performance of each approach under varying conditions and to compare their effectiveness in different scenarios. 
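A sketch of the ELearnFit procedure at inference time, as evaluated above: the already fine-tuned model is prompted with k support examples in the "TL;DR" template and decoded greedily until a stop token or 100 new tokens. The tokenizer/checkpoint name and the generate-based interface are assumptions; this is illustrative, not the authors' script.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # checkpoint assumed

@torch.no_grad()
def elearnfit_summarize(finetuned_model, support_pairs, test_article, max_new_tokens=100):
    """Prompt the fine-tuned model with k (article, summary) shots and decode greedily."""
    shots = " ".join(f"{article} TL;DR: {summary}" for article, summary in support_pairs)
    prompt = f"{shots} {test_article} TL;DR:"
    inputs = tokenizer(prompt, return_tensors="pt").to(finetuned_model.device)
    out = finetuned_model.generate(**inputs, max_new_tokens=max_new_tokens,
                                   do_sample=False)  # greedy decoding
    return tokenizer.decode(out[0, inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```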
7 \fTable 3: Model Description for Robustness Comparison Model In-context Learning Fine-tuning ELearn 4 Shots EFit_first First Layer w/ 64 Examples EFit_LoRA6 LoRA16 w/ 64 Examples ELearnFit_first 4 Shots First Layer w/ 64 Examples ELearnFit_LoRA16 4 Shots LoRA16 w/ 64 Examples Table 4: Performance Details for Robustness Comparison Model Mean Standard Deviation ELearn 0.2962 0.0303 EFit_first 0.3465 0.0039 EFit_LoRA16 0.3274 0.0029 ELearnFit_first 0.3441 0.0086 ELearnFit_LoRA16 0.3273 0.0053 Table 4 reveals that in-context learning exhibits greater variability across trials compared to the other two approaches. This is evident from the higher standard deviation observed in the ELearn results. In contrast, both EFit_first and ELearnFit_first demonstrated similar performance, although ELearnFit_first had twice the standard deviation of EFit_first. A similar observation can be made for ELearnFit_LoRA16 and EFit_LoRA16. These findings further suggest that fine-tuning offers more stable performance than incontext learning. Additionally, when the number of samples for fine-tuning is limited, the combined approach ELearnFit yields consistent and reliable performance across different trials, highlighting its potential for enhancing robustness. Figure 14: Robustness Comparison of ELearn, EFit and ELearnFit Limitations In this paper, we primarily directed our attention to the LLaMa2-7b model, a formidable language model consisting of 7 billion parameters. Assuming that each parameter occupies a modest 4 bytes of memory, the estimated total memory requirement for this model is approximately 27.34 gigabytes, calculated as follows: Total Memory Size = 7 \u00d7 109 \u00d7 4 bytes/(10242) \u224827.34 gigabytes (1) where: 1 kilobyte (KB) = 1024 bytes 1 megabyte (MB) = 1024 kilobytes Similarly, the total memory requirements for the LLaMa2-13b and LLaMa2-70b models are approximately 51 gigabytes and 274 gigabytes, respectively. Due to limited resources on A100 GPUs, which offer up to 80 gigabytes of high-bandwidth memory, and the substantial computation time required for each experiment, we primarily focus on optimizing ELearn and EFiT with the LLaMa2-7b model in this paper. However, we believe that the insights gained from this research work can be readily extended to larger language models such as LLaMa2-70b, especially when coupled with more powerful GPU resources. Conclusion News summarization has become increasingly important as the volume of information has exploded. In our research, we explore different techniques to enhance news summaries. Under prompting (ELearn), we demonstrate that using larger models, adding more prompts, and utilizing simple templates improve performance. We also show that fine-tuning (EFit) enhances performance, especially when the first layer of models is fine-tuned. Surprisingly, for both prompt engineering and fine-tuning, leveraging more relevant samples does not improve performance. This is likely due to the fact that news articles are very diverse, and retrieving highly relevant samples during prompting or fine-tuning may result in over-learning, resulting in the model\u2019s failure to adequately capture the wide range of topics covered in the test dataset. Finally, we show that our combined model (ELearnFit) produces the best performance, particularly for situations where there are few annotated samples. 
In practice, our research suggests that a fine-tuned model (especially one fine-tuned on the first layer), coupled with diverse examples during prompting, yields optimal performance for news summarization."
16
+ }
title_10K/test_title_short_2405.02730v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02730v1",
3
+ "title": "U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers",
4
+ "abstract": "Diffusion Transformers (DiTs) introduce the transformer architecture to\ndiffusion tasks for latent-space image generation. With an isotropic\narchitecture that chains a series of transformer blocks, DiTs demonstrate\ncompetitive performance and good scalability; but meanwhile, the abandonment of\nU-Net by DiTs and their following improvements is worth rethinking. To this\nend, we conduct a simple toy experiment by comparing a U-Net architectured DiT\nwith an isotropic one. It turns out that the U-Net architecture only gain a\nslight advantage amid the U-Net inductive bias, indicating potential\nredundancies within the U-Net-style DiT. Inspired by the discovery that U-Net\nbackbone features are low-frequency-dominated, we perform token downsampling on\nthe query-key-value tuple for self-attention and bring further improvements\ndespite a considerable amount of reduction in computation. Based on\nself-attention with downsampled tokens, we propose a series of U-shaped DiTs\n(U-DiTs) in the paper and conduct extensive experiments to demonstrate the\nextraordinary performance of U-DiT models. The proposed U-DiT could outperform\nDiT-XL/2 with only 1/6 of its computation cost. Codes are available at\nhttps://github.com/YuchuanTian/U-DiT.",
5
+ "authors": "Yuchuan Tian, Zhijun Tu, Hanting Chen, Jie Hu, Chao Xu, Yunhe Wang",
6
+ "published": "2024-05-04",
7
+ "updated": "2024-05-04",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers",
15
+ "main_content": "Introduction Thanks to the attention mechanism that establishes long-range spatial dependencies, Transformers [32] are proved highly effective on various vision tasks including image classification [13], object detection [5], segmentation [37], and image restoration [6]. DiTs [24] introduce full transformer backbones to diffusion, which demonstrate outstanding performance and scalability on image-space and latent-space generation tasks. Recent follow-up works have demonstrated the promising prospect of diffusion transformers by extending their applications to flexible-resolution image generation [22], realistic video generation [2], et cetera. Interestingly, DiTs have discarded the U-Net architecture [26] that is universally applied in manifold previous works, either in pixel [17; 11] or latent space [25]. The use of isotropic architectures in DiTs is indeed successful, as scaled-up DiT models achieve supreme performance. However, the abandonment of the widely-applied U-Net architecture by DiTs and their improvements [16; 8; 22] on latent-space image generation tasks triggers our curiosity, because the U-Net inductive bias is always believed to help denoising. Hence, we rethink deploying DiTs on a canonical U-Net architecture. In order to experiment with the combination of U-Net with DiT, we first propose a naive DiT in U-Net style (DiT-UNet) and compare it with an isotropic DiT of similar size. Results turn out that DiT-UNets are merely comparable to DiTs at similar computation costs. From this toy experiment, it \u2217Equal Contribution. \u2020Corresponding Author. Preprint. Under review. arXiv:2405.02730v1 [cs.CV] 4 May 2024 \f101 102 Transformer GFLOPs 10 20 30 40 50 60 70 FID-50K DiT SiT SiT-LLAMA U-DiT (Ours) Figure 1: Comparing U-DiTs with DiTs and their improvements. We plot FID-50K versus denoiser GFLOPs (in log scale) after 400K training steps. U-DiTs could achieve better performance than its counterparts. 200 400 600 800 Training Iterations (K) 0 10 20 30 40 50 60 FID-50K DiT-B/2 DiT-L/2 DiT-XL/2 U-DiT-B U-DiT-L Figure 2: The performance of U-DiTs and DiTs of various size. U-DiTs perform consistently better than DiTs with the increase of training steps. The marker size represents the computation cost of the model qualitatively. is inferred that the inductive bias of U-Net is not fully leveraged when U-Nets and plain transformer blocks are simply combined. Hence, we rethink the self-attention mechanism in DiT-UNet. The backbone in a latent U-Net denoiser provides a feature where low-frequency components dominate [27]. The discovery implies the existence of redundancies in backbone features: the attention module in the U-Net diffuser should highlight low-frequency domains. As previous theories praised downsampling for filtering high-frequency noises in diffusion [35], we seek to leverage this natural low-pass filter by performing token downsampling on the features for self-attention. Unlike previous transformer works [15; 38; 28] that downsample key-value pairs only, we radically downsample the query-key-value tuple altogether, such that self-attention is performed among downsampled latent tokens. It is surprising that when we incorporate self-attention with downsampled tokens into DiT-UNet, better results are achieved on latent U-Net diffusers with a significant reduction of computation. Based on this discovery, we scale U-Nets with downsampled self-attention up and propose a series of State-of-the-Art U-shaped Diffusion Transformers (U-DiTs). 
We conduct manifold experiments to verify the outstanding performance and scalability of our U-DiT models over isotropic DiTs. As shown in Fig. 1 & Fig. 2, U-DiTs could outperform DiTs by large margins. Amazingly, the proposed U-DiT model could perform better than DiT-XL/2 which is 6 times larger in terms of FLOPs. 2 Preliminaries Vision Transformers. ViTs [13] have introduced a transformer backbone to vision tasks by patchifying the input and viewing an image as a sequence of patch tokens and have proved its effectiveness on large-scale image classification tasks. While ViTs adopt an isotropic architecture, some following works on vision transformers [33; 21] propose a pyramid-like hierarchical architecture that gradually downsamples the feature. The pyramid architecture has proved highly effective in classification and other downstream tasks. Vision transformers are also mainstream backbones for denoising models. IPT [6] introduces an isotropic transformer backbone for denoising and other low-level tasks. Some later works [19; 18; 7] follow the isotropic convention, but other denoising works [34; 36] shift to U-Net backbones as their design. The pioneering work of U-ViT [1] and DiT [24] introduces full-transformer backbones to diffusion as denoisers. Recent Advancements in Diffusion Transformers. Following DiTs, some works investigate the training and diffusion [14; 23] strategies of Diffusion Transformers. Other works focus on the design of the DiT backbone. DiffiT [16] introduces a new fusion method for conditions; FiT [22] and VisionLLaMA [8] strengthens DiT by introducing LLM tricks including RoPE2D [30] and SwishGLU. These transformer-based diffusion works agree on adopting isotropic architectures on latents, i.e. the latent feature space is not downsampled throughout the whole diffusion model. The authors of DiT [24] even regard the inductive bias of U-Net as \u201cnot crucial\u201d. 2 \fNoised Latent 32\u00d732\u00d74 Embed Transformer Block Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block .... (a) DiT Noised Latent 32\u00d732\u00d74 Embed Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block Transformer Block Transformer Block (b) DiT-UNet Layer Norm MHSA Layer Norm FFN Noised Latent 32\u00d732\u00d74 Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block Transformer Block Transformer Block (c) U-DiT (Ours) Layer Norm MHSA Layer Norm Embed Downsampler FFN Figure 3: The evolution from the DiT to the proposed U-DiT. Left (a): the original DiT, which uses an isotropic architecture. Middle (b): DiT-UNet, which is a plain U-Net-style DiT. We try this as a simple combination of DiT and U-Net in the toy experiment. Right (c): the proposed U-DiT. We propose to downsample the input features for self-attention. The downsampling operation could amazingly improve DiT-UNet with a huge cut on the amount of computation. U-Nets for Diffusion. From canonical works [17; 29; 11; 25], the design philosophy of U-Net [26] is generally accepted in diffusion. Specifically, Stable Diffusion [25] uses a U-Net-based denoiser on the compressed latent space for high-resolution image synthesis, which is highly successful in manifold generative tasks. 
Some previous trials on diffusion transformers [4; 16; 9] also adopt U-Net on pixel-space generation tasks; but strangely, they shifted to isotropic DiT-like structures for latent-space diffusion. Despite its popularity in pixel-space diffusion, the U-Net architecture is not widely accepted in recent transformer-oriented works on latent-space diffusion. Motivated by this, we are dedicated to investigating the potential of Transformer-backboned U-Net on latent-space diffusion. 3 Investigating U-Net DiTs in Latent As is recapped, the U-Net architecture is widely adopted in diffusion applications; theoretical evaluations on U-Net denoisers also reveal their advantage, as downsampling U-Net stage transitions could filter noises that dominate high frequencies [35]. The unprecedented desertion of isotropic architectures for latent diffusion transformers is thus counter-intuitive. We are rethinking and elucidating the potentials of transformer-backboned U-Net denoisers in latent diffusion via a toy experiment. A canonical U-Net-style DiT. To start with, we propose a naive Transformer-backboned U-Net denoiser named DiT-UNet by embedding DiT blocks into a canonical U-Net architecture. Following previous U-Net designs, The DiT-UNet consists of an encoder and a decoder with an equal number of stages. When the encoder processes the input image by downsampling the image as stage-level amounts, the decoder scales up the encoded image from the most compressed stage to input size. At each encoder stage transition, spatial downsampling by the factor of 2 is performed while the feature dimension is doubled as well. Skip connections are provided at each stage transition. The skipped feature is concatenated and fused with the upsampled output from the previous decoder stage, replenishing information loss to decoders brought by feature downsampling. Considering the small, cramped latent space (32\u00d7 32 for 256\u00d7256-sized generation), we designate 3 stages in total, i.e. the feature is downsampled two times and subsequently recovered to its original size. In order to fit time and condition embeddings for various feature dimensions across multiscale stages, we use independent embedders for respective stages. In addition, we avoid patchifying the latent, as the U-Net architecture itself downsamples the latent space and there is no need for further spatial compression. 3 \fVia toy experiments, we compare the proposed U-Net-style DiT with the original DiT that adopts an isotropic architecture. In order to align the model with the DiT design, we repeatedly use plain DiT blocks in each stage. Each DiT block includes a self-attention module as the token mixer and a two-layer feed-forward network as the channel mixer. We conduct the experiment by training the U-Net-Style DiT for 400K iterations and compare it with DiT-S/4 which is comparable in size. All training hyperparameters are kept unchanged. It occurs that the U-Net style DiT only gains a limited advantage over the original isotropic DiT. The inductive bias of U-Net is insufficiently utilized. ImageNet 256\u00d7256 Model GFLOPs FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-S/4 1.41 97.85 21.19 13.27 0.26 0.41 DiT-UNet 1.40 93.48 20.41 14.20 0.27 0.42 + Token Downsampling 0.90 89.43 21.36 15.13 0.29 0.44 Table 1: Toy experiments on U-Net-style DiTs. The naive DiT-UNet performs slightly better than the isotropic DiT-S/4; but interestingly, when we apply token downsampling for self-attention, the DiT-UNet performs better with fewer costs. 
Improved U-Net-style DiT via token downsampling. In seeking to incorporate attention in transformers to diffusion U-Nets better, we review the role of the U-Net backbone as the diffusion denoiser. A recent work on latent diffusion models [27] conducted frequency analysis on intermediate features from the U-Net backbone, and concluded that energy concentrates at the low-frequency domain. This frequency-domain discovery hints at potential redundancies in the backbone: the U-Net backbone should highlight the coarse object from a global perspective rather than the high-frequency details. Naturally, we resort to attention with downsampled tokens. The operation of downsampling is a natural low-pass filter that discards high-frequency components. The low-pass feature of downsampling has been investigated under the diffusion scenario, which concludes that downsampling helps denoisers in diffusion as it automatically \u201cdiscards those higher-frequency subspaces which are dominated by noise\u201d [35]. Hence, we opt to downsample tokens for attention. In fact, attention to downsampled tokens is not new. Previous works regarding vision transformers [15; 38] have proposed methods to downsample key-value pairs for computation cost reduction. Recent work on training-free acceleration of diffusion [28] also applies key-value downsampling on Stable Diffusion models. But these works maintain the number of queries, and thus the downsampling operation is not completely performed. Besides, these downsampling measures usually involves a reduction of tensor size, which could result in a significant loss in information. Different from these works, we propose a simple yet radical token downsampling method for DiTUNets: we downsample queries, keys, and values at the same time for diffusion-friendly self-attention, but meanwhile we keep the overall tensor size to avoid information loss. The procedure is detailed as follows: the feature-map input is first converted into four 2\u00d7 downsampled features by the downsampler (the downsampler design is detailed in Sec. 4.2). Then, the downsampled features are mapped to Q, K, V for self-attention. Self-attention is performed within each downsampled feature. After the attention operation, the downsampled tokens are spatially merged as a unity to recover the original number of tokens. Notably, the feature dimension is kept intact during the whole process. Unlike U-Net downsampling, we are not reducing or increasing the number of elements in the feature during the downsampling process. Rather, we send four downsampled tokens into self-attention in a parallel manner. Self-attention with downsampled tokens does help DiT-UNets on the task of latent diffusion. As shown in Tab. 1, the substitution of downsampled self-attention to full-scale self-attention brings slight improvement in the Fr\u00e9chet Inception Distance (FID) metric despite a significant reduction in FLOPs. Complexity analysis. Apart from the performance benefits, we are aware that downsampled selfattention could save as much as 1/3 of the overall computation cost compared to full-scale selfattention. We conduct a brief computation complexity analysis on the self-attention mechanism to explain where the savings come from. Given an input feature of size N \u00d7 N and dimension d, we denote Q, K, V \u2208RN 2\u00d7d as mapped query-key-value tuples. The complexity of self-attention is analyzed as: 4 \fX = AV |{z} O(N 4D) s.t. A = Softmax \u0000QKT \u0001 | {z } O(N 4D) . 
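The displayed equation above was garbled in extraction (the underbraces and exponents collapsed). A cleaner rendering of the intended complexity bookkeeping, writing d for the feature dimension that appears as D above, is roughly:

```latex
% Full-scale self-attention over N^2 tokens of dimension d:
\underbrace{A = \operatorname{softmax}\!\left(QK^{\top}\right)}_{O(N^{4}d)}
\qquad
\underbrace{X = AV}_{O(N^{4}d)}

% Attention over the four 2x-downsampled token sets
% (Q_{\downarrow 2}, K_{\downarrow 2}, V_{\downarrow 2}), each with (N/2)^2 tokens:
4 \times O\!\Big(\big(\tfrac{N}{2}\big)^{4} d\Big) \;=\; \tfrac{1}{4}\, O(N^{4} d)
```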
In the proposed self-attention on downsampled tokens, four sets of downsampled query-key-value tuples 4\u00d7(Q\u21932, K\u21932, V\u21932) \u2208R( N 2 )2\u00d7d performs self-attention respectively. While each self-attention operation costs only 1/16 of full-scale self-attention, the total cost for downsampled self-attention is 1/4 of full-scale self-attention. 3/4 of the computation costs by self-attention is saved via token downsampling. In a nutshell, we show from toy experiments that the redundancy of DiT-UNet is reduced by downsampling the tokens for self-attention. 4 Scaling the Model Up Based on the discovery in our toy experiment, we propose a series of U-shaped DiTs (U-DiT) by applying the downsampled self-attention (proposed in Sec. 3) and scaling U-Net-Style DiT up. Settings. We adopt the training setting of DiT. The same VAE (i.e. sd-vae-ft-ema) for latent diffusion models [25] and the AdamW optimizer is adopted. The training hyperparameters are kept unchanged, including global batch size 256, learning rate 1e \u22124, weight decay 0, and global seed 0. The training is conducted with the training set of ImageNet 2012 [10]. Apart from the self-attention on downsampling as introduced in the toy experiment (Section 3), we further introduce a series of modifications to U-DiTs, including cosine similarity attention [20; 18], RoPE2D [30; 22; 8], depthwise conv FFN [34; 3; 38], and re-parametrization [12; 31]. The contribution of each modification is quantitatively evaluated in Sec. 6. 4.1 U-DiT at Larger Scales ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-S/2 [24] 6.06 68.40 DiT-S/2\u2217 6.07 67.40 11.93 20.44 0.368 0.559 U-DiT-S (Ours) 6.04 31.51 8.97 51.62 0.543 0.633 DiT-L/4 [24] 19.70 45.64 DiT-L/4\u2217 19.70 46.10 9.17 31.05 0.472 0.612 DiT-B/2 [24] 23.01 43.47 DiT-B/2\u2217 23.02 42.84 8.24 33.66 0.491 0.629 U-DiT-B (Ours) 22.22 16.64 6.33 85.15 0.642 0.639 DiT-L/2 [24] 80.71 23.33 DiT-L/2\u2217 80.75 23.27 6.35 59.63 0.611 0.635 DiT-XL/2 [24] 118.64 19.47 DiT-XL/2\u2217 118.68 20.05 6.25 66.74 0.632 0.629 U-DiT-L (Ours) 85.00 10.08 5.21 112.44 0.702 0.631 Table 2: Comparing U-DiTs against DiTs on ImageNet 256\u00d7256 generation. Experiments with a supermark \u2217are replicated according to the official code of DiT. We compare models trained for 400K iterations with the standard training hyperparameters of DiT. The performance of U-DiTs is outstanding: U-DiT-B could beat DiT-XL/2 with only 1/6 of inference FLOPs; U-DiT-L could outcompete DiT-XL/2 by 10 FIDs. Comparison with DiTs and their improvements. In order to validate the effectiveness of the proposed U-DiT models beyond simple toy experiments, we scale them up and compare them with DiTs [24] of larger sizes. For a fair comparison, we use the same sets of training hyperparameters as DiT; all models are trained for 400K iterations. The results on ImageNet 256\u00d7256 are shown in Tab. 2, where we scale U-DiTs to \u223c6e9, \u223c20e9, \u223c80e9 FLOPs respectively and compare them with DiTs of similar computation costs. 5 \fIt could be concluded from Tab. 2 that all U-DiT models could outcompete their isotropic counterparts by considerable margins. Specifically, U-DiT-S and U-DiT-B could outperform DiTs of comparable size by \u223c30 FIDs; U-DiT-L could outperform DiT-XL/2 by \u223c10 FIDs. It is shocking that U-DiT-B could outcompete DiT-XL/2 with only 1/6 of the computation costs. 
To present the advantage of our method better, we also include the performance of U-DiTs in an FID-50K versus FLOPs plot (Fig. 1). Apart from DiTs and U-DiTs, we also include other state-of-the-art methods: SiT [23] that proposes an interpolant framework for DiTs, and SiT-LLaMA [8] that combines state-of-the-art DiT backbone VisionLLaMA and SiT. The advantages of U-DiTs over other baselines are prominent in the plot. The results highlight the extraordinary scalability of the proposed U-DiT models. U-DiTs are also performant in generation scenarios with classifier-free guidance. In Tab. 3, we compare U-DiTs with DiTs at cfg = 1.5. For a fair comparison, we train U-DiTs and DiTs for 400K iterations under identical settings. ImageNet 256\u00d7256 Model Cfg-Scale FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-L/2\u2217 1.5 80.75 7.53 4.78 134.69 0.780 0.532 DiT-XL/2\u2217 1.5 118.68 6.24 4.66 150.10 0.794 0.514 U-DiT-B 1.5 22.22 4.26 4.74 199.18 0.825 0.507 U-DiT-L 1.5 85.00 3.37 4.49 246.03 0.862 0.502 Table 3: Generation performance with classifier-free guidance. We measure the performance of U-DiTs and DiTs at 400K training steps with cfg = 1.5. Experiments with a supermark \u2217are replicated according to the official code of DiT. U-DiTs are also performant on conditional generation. Extended training steps. We evacuate the potentials of U-DiTs by extending training steps to 1 Million. Fig. 2 further demonstrate that the advantage of U-DiTs is consistent at all training steps. As training steps gradually goes up to 1 Million, the performance of U-DiTs is improving (Tab. 4). We visualize the process where the image quality is gradually getting better (Fig. 4). Notably, U-DiT-L at only 600K training steps could outperform DiT-XL/2 at 7M training steps without classifier-free guidance. As additionally shown in Fig. 5, U-DiT models could conditionally generate authentic images at merely 1M iterations. U-DiT-B U-DiT-L 200K 400K 600K 800K 200K 400K 600K 800K Figure 4: Quality improvements of generated samples as training continues. We sample from U-DiT models trained for different numbers of iterations on ImageNet 256\u00d7256. More training does improve generation quality. Best viewed on screen. 4.2 Ablations The design of downsampler. The downsampling operation in the proposed U-DiT transforms a complete feature into multiple spatially downsampled features. Based on previous wisdom, we figured out that previous works either directly perform pixel shuffling, or apply a convolution layer before pixel shuffling. While we hold that it is much too rigid to shuffle pixels directly as downsampling, 6 \fImageNet 256\u00d7256 Model Training Steps FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-XL/2 7M 9.62 U-DiT-B 200K 23.23 6.84 64.42 0.610 0.621 U-DiT-B 400K 16.64 6.33 85.15 0.642 0.639 U-DiT-B 600K 14.51 6.30 94.56 0.652 0.643 U-DiT-B 800K 13.53 6.27 98.99 0.654 0.645 U-DiT-B 1M 12.87 6.33 103.79 0.661 0.653 U-DiT-L 200K 15.26 5.60 86.01 0.685 0.615 U-DiT-L 400K 10.08 5.21 112.44 0.702 0.631 U-DiT-L 600K 8.71 5.17 122.45 0.705 0.645 U-DiT-L 800K 7.96 5.21 131.35 0.705 0.648 U-DiT-L 1M 7.54 5.27 135.49 0.706 0.659 Table 4: The performance of U-DiT-B and U-DiT-L models with respect to training iterations. The unconditional generation performance of both models on ImageNet 256\u00d7256 consistently improves as training goes on, where U-DiT-L at 600K steps strikingly beats DiT-XL/2 at 7M steps. 
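For reference, the classifier-free guidance used in Table 3 combines conditional and unconditional noise predictions in the standard way; a minimal sketch is below, where the model(x, t, y) interface returning predicted noise is an assumed stand-in, not the actual U-DiT API.

```python
def cfg_noise_prediction(model, x_t, t, class_label, null_label, cfg_scale=1.5):
    """Standard classifier-free guidance: eps = eps_uncond + s * (eps_cond - eps_uncond)."""
    eps_cond = model(x_t, t, class_label)
    eps_uncond = model(x_t, t, null_label)
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)
```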
ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 Pixel Shuffle (PS) 0.89 96.15 23.90 13.93 0.272 0.389 Depthwise (DW) Conv. + PS 0.91 89.87 20.99 14.92 0.288 0.419 DW Conv. || Shortcut + PS 0.91 89.43 21.36 15.13 0.291 0.436 Table 5: Ablations on the choice of downsampler. We have tried several downsampler designs, and it turns out that the parallel connection of a shortcut and a depthwise convolution is the best fit. We avoid using ordinary convolution (i.e. Conv.+PS) because channel-mixing is costly: conventional convolution-based downsamplers could double the amount of computation. The U-DiT with a conventional downsampler costs as many as 2.22G FLOPs in total. applying convolution is hardly affordable in terms of computation costs. Specifically, ordinary convolutions are costly as extensive dense connections on the channel dimension are involved: using convolution-based downsamplers could double computation costs. As a compromise, we apply depthwise convolution instead. We also add a shortcut that short-circuits this depthwise convolution, which has proved crucial for better performance. The shortcut adds negligible computation cost to the model, and in fact, it could be removed during the inference stage with re-parameterization tricks. The results are shown in Tab. 5. The contribution of each individual modification. In this part, we start from a plain U-Net-style DiT (DiT-UNet) and evaluate the contribution of individual components. Firstly, we inspect the advantage of downsampled self-attention. Recapping the toy experiment results in Sec. 3, replacing the full-scale self-attention with downsampled self-attention would result in an improvement in FID and 1/3 reduction in FLOPs. In order to evaluate the improvement of downsampling via model performance, we also design a slim version of DiT-UNet (i.e. DIT-UNet (Slim)). The DiT-UNet (Slim) serves as a full-scale self-attention baseline that spends approximately the same amount (\u223c0.9GFLOPs) of computation as our U-DiT. As shown in the upper part of Tab. 6, by comparing U-DiT against DiT-UNet (Slim), it turns out that downsampling tokens in DiT-UNet could bring a performance improvement of \u223c18FIDs. Next, we inspect other modifications that further refine U-DiTs (lower part of Tab. 6). Swin Transformer V2 [20] proposes a stronger variant of self-attention: instead of directly multiplying Q and K matrices, cosine similarities between queries and keys are used. We apply the design to our selfattention, which yields \u223c2.5FIDs of improvement. RoPE [30] is a powerful positional embedding method, which has been widely applied in Large Language Models. Following the latest diffusion transformer works [22; 8], we inject 2-dimensional RoPE (RoPE2D) into queries and keys right before self-attention. The introduction of RoPE2D improves performance by \u223c2.5FIDs. Some recent transformer works strengthen MLP by inserting a depthwise convolution layer between two linear mappings [34; 3; 38]. As the measure is proved effective in these works, we borrow it to our 7 \fFigure 5: Generated samples by U-DiT-L at 1M iterations. It is astonishing that U-DiT could achieve authentic visual quality at merely 1 Million training steps. Best viewed on screen. U-DiT model, improving \u223c5FIDs. As re-parametrization during training [12] could improve model performance, we apply the trick to FFN [31] and bring an additional improvement of \u223c3.5FIDs. 
Above all, based on the components mentioned above, the proposed U-DiTs could outcompete plain DiT-UNets and isotropic DiTs by large margins. ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-UNet (Slim) 0.92 107.00 24.66 11.95 0.230 0.315 DiT-UNet 1.40 93.48 20.41 14.20 0.274 0.415 U-DiT-T (DiT-UNet+Downsampling) 0.91 89.43 21.36 15.13 0.291 0.436 U-DiT-T (+Cos.Sim.) 0.91 86.96 19.98 15.63 0.299 0.450 U-DiT-T (+RoPE2D) 0.91 84.64 19.38 16.19 0.306 0.454 U-DiT-T (+DWconv FFN) 0.95 79.30 17.84 17.48 0.326 0.494 U-DiT-T (+Re-param.) 0.95 75.71 16.27 18.59 0.336 0.512 Table 6: Ablations on U-DiT components. Apart from the toy example in Sec. 3, we further validate the effectiveness of downsampled by comparing the U-DiT with a slimmed version of DiT-UNet at equal FLOPs. Results reveal that downsampling could bring \u223c18FIDs on DiT-UNet. Further modifications on top of the U-DiT architecture could improve 2 to 5 FIDs each. 5 Conclusion In this paper, we lay emphasis on DiTs in U-Net architecture for latent-space generation. Though isotropic-architectured DiTs have proved their strong scalability and outstanding performance, the effectiveness of the U-Net inductive bias is neglected. Thus, we rethink DiTs in the U-Net style. We first conduct an investigation on plain DiT-UNet, which is a straightforward combination of U-Net and DiT blocks, and try to reduce computation redundancy in the U-Net backbone. Inspired by previous wisdom on diffusion, we propose to downsample the visual tokens for self-attention and 8 \fyield extraordinary results: the performance is further improved despite a huge cut on FLOPs. From this interesting discovery, we scale the U-Net architecture up and propose a series of U-shaped DiT models (U-DiTs). We have done various experiments to demonstrate the outstanding performance and scalability of our U-DiTs. Limitations. For lack of computation resources and tight schedule, at this time we could not further extend training iterations and scale the model size up to fully investigate the potential of U-DiTs."
16
+ }
title_10K/test_title_short_2405.02749v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02749v1",
3
+ "title": "Sub-goal Distillation: A Method to Improve Small Language Agents",
4
+ "abstract": "While Large Language Models (LLMs) have demonstrated significant promise as\nagents in interactive tasks, their substantial computational requirements and\nrestricted number of calls constrain their practical utility, especially in\nlong-horizon interactive tasks such as decision-making or in scenarios\ninvolving continuous ongoing tasks. To address these constraints, we propose a\nmethod for transferring the performance of an LLM with billions of parameters\nto a much smaller language model (770M parameters). Our approach involves\nconstructing a hierarchical agent comprising a planning module, which learns\nthrough Knowledge Distillation from an LLM to generate sub-goals, and an\nexecution module, which learns to accomplish these sub-goals using elementary\nactions. In detail, we leverage an LLM to annotate an oracle path with a\nsequence of sub-goals towards completing a goal. Subsequently, we utilize this\nannotated data to fine-tune both the planning and execution modules.\nImportantly, neither module relies on real-time access to an LLM during\ninference, significantly reducing the overall cost associated with LLM\ninteractions to a fixed cost. In ScienceWorld, a challenging and multi-task\ninteractive text environment, our method surpasses standard imitation learning\nbased solely on elementary actions by 16.7% (absolute). Our analysis highlights\nthe efficiency of our approach compared to other LLM-based methods. Our code\nand annotated data for distillation can be found on GitHub.",
5
+ "authors": "Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar, Marc-Alexandre Cote",
6
+ "published": "2024-05-04",
7
+ "updated": "2024-05-04",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
14
+ "gt": "Sub-goal Distillation: A Method to Improve Small Language Agents",
15
+ "main_content": "INTRODUCTION Recently, Large Language Models (LLMs) have found applications in various fields, including multi-task learning, decision making, answering questions, summarizing documents, translating languages, completing sentences, and serving as search assistants. They showcase a remarkable ability to make predictions based on input, enabling their use in generative AI applications to produce content based on input prompts (Devlin et al., 2018; Brown et al., 2020; Rae et al., 2021; Chowdhery et al., 2023; Scao et al., 2022; Patel & Pavlick, 2021; Han et al., 2021; Bommasani et al., 2021). The promising advantage of LLMs is attributed to their training on extensive text datasets, resulting in impressive capabilities. This prior knowledge can be leveraged for action planning to solve tasks in robotics and reinforcement learning (Huang et al., 2022b; Brohan et al., 2023; Liang et al., 2023). Recent works have utilized in-context learning with LLMs to provide actions in autonomous decision-making agents and interactive environments (Mahowald et al., 2023; Yao et al., 2022; Schick et al., 2023; Shen et al., 2023; Nakano et al., 2021; Park et al., 2023; Lin et al., 2023; Brohan et al., 2023). However, the extreme size of LLMs makes them computationally unaffordable for many applications. Moreover, closed-source models like ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023) limit accessibility and reproducibility. Consequently, there is an increasing demand to find approaches that are less computationally intensive while still capitalizing on the knowledge embedded in LLMs. One prevalent technique is the use of Knowledge Distillation (KD) (Bucilu\u02c7 a et al., 2006; Hinton et al., 2015), wherein a smaller model is trained with guidance from a larger model. \u2217Corresponding author: Maryam Hashemzadeh. \u2020equal supervision. 1https://github.com/chandar-lab/SubGoal_Distillation_LLM 1 arXiv:2405.02749v1 [cs.LG] 4 May 2024 \fPublished at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024 Through this approach, we can leverage the knowledge in an LLM to train a more compact model with a reduced number of parameters. Navigate_to(kitchen) open door to kitchen go to kitchen Pick_up(thermometer) pick up thermometer Find(metal pot) open cupboard pick up metal pot Fill(metal pot, water) move metal pot to sink activate sink deactivate sink pick up metal pot Focus_on(substance in metal pot focus on substance in metal pot Freeze(water, metal pot) pour metal pot into metal pot pick up metal pot open freezer move metal pot to freezer Monitor_temperature(metal pot, freeze) examine substance in metal pot Annotated Trajectory Task Description: Your task is to change the state of matter of water. First, focus on the substance. Then, take actions that will cause it to change its state of matter. Figure 1: Example of annotating an expert trajectory with sub-goals for a particular variation of task 1-4 (change-the-state-of-matter-of). Looking only at the original trajectory (i.e., ignoring the red rows), we gather the expert ended up changing the state of water to be frozen. The expert had to navigate to the kitchen, find a thermometer and a metal pot, pour water into the pot, place it in the freezer, and continually monitor its temperature until frozen. Each of those milestones (highlighted in red) can be considered a sub-goal, encompassing a sequence of actions. Sub-goals can be shared across different tasks, facilitating generalization. 
We opted for a format that looks like function calls to encourage reusability (e.g., fill(metal pot, water)). Distilling knowledge from LLMs offers significant advantages, allowing for the training of specialized local models rather than depending on an LLM as a general model. This approach not only enhances privacy, particularly for systems with security-sensitive considerations like co-pilot models, but also provides greater flexibility in tailoring models for specific tasks. Additionally, the use of a smaller model offers the advantage of versatility across diverse applications without size constraints, including device models and mobile apps. Another challenge with LLMs is their susceptibility to hallucinations. This tendency poses a hindrance to their effective execution of long-tail planning, especially in interactive decision-making scenarios. In our research, we leverage the knowledge of LLMs to train an autonomous agent for effective decision-making in complex interactive text environments, utilizing small language models as our policy. Knowledge Distillation facilitates the training of smaller policies, allowing seamless integration of LLM knowledge. To address the challenges at hand, adopting a two-level planning approach proves beneficial for reducing hallucination \u2013 one for high-level reasoning to formulate subgoals and another for low-level action planning to execute each sub-goal. Figure 1 illustrates this concept in the task of freezing water from ScienceWorld (Wang et al., 2022a). The agent\u2019s subtasks involve navigating to the kitchen, finding a thermometer and a metal pot, pouring water into the pot, placing it in the freezer, and continuously monitoring its temperature until frozen. These constitute sub-goals generated by a high-level model, with each sub-goal subsequently executed by a lowlevel model. The generation of sub-goals empowers an autonomous agent to expedite learning for the current task and reuse similar sub-goals in various tasks to have more generalization. The contributions in this work are: \u2022 We employ Knowledge Distillation from an LLM to train a high-level policy capable of generating sub-goals without making assumptions about the specific set of sub-goals. Notably, these sub-goals remain flexible, accommodating various sequences of actions. \u2022 We demonstrate that employing Knowledge Distillation with hierarchical policies surpasses the performance achieved by both standalone imitation learning and its combination with in-context learning. \u2022 We illustrate that this approach is more cost-effective in terms of the number of calls to an LLM compared to other methods utilizing in-context learning. \u2022 We introduce an effective approach instead of using computational requirements of LLM and their restricted number of calls for using in interactive decision making tasks. 2 \fPublished at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024 2 RELATED WORK Using LLMs for Action Planning Recent works have demonstrated the ability of LLMs to perform action planning for interactive decision making process without any additional training (Huang et al., 2022a). ReAct (Yao et al., 2022) proposes a way of prompting an LLM with interleave reasoning step and action taking step. That led the resolution of a variety of language-based reasoning and decision-making tasks. This approach empowers the model to construct high-level plans for effective action. 
Reflexion (Shinn et al., 2023) draws inspiration from reinforcement learning, employing a framework to reinforce language agents through linguistic feedback. At the end of each trial, it uses selfreflection to determine what went wrong with the task and keeps it in a memory. Then it leverages this information for the next trial. Some works use a programmatic LLM prompt structure with available actions and objects in an environment to translate natural language commands into robot policy code via few-shot examples (Liang et al., 2023; Singh et al., 2023). Khot et al. (2022) introduced a decomposed prompting approach wherein a task is broken down into simpler sub-tasks, allowing for recursive handling. Subsequently, these sub-tasks are assigned to sub-task-specific LLMs, with both the decomposer and the sub-task LLMs with their own few-shot prompts. Sun et al. (2023) uses three steps, action mining, plan formulation, and plan execution to decompose a question into a sequence of actions by few-shot prompting of LLMs. In Prasad et al. (2023) tasks are decomposed explicitly by a separate LLM through prompting when an executor is unable to execute a given sub-task. Imitation learning Some works employ imitation learning to train a language model as the agent\u2019s policy, as seen in offline decision transformers (Torabi et al., 2018). The inputs consist of states, actions, and reward-to-go values, which are fed into a transformer. This transformer then predicts actions in an autoregressive manner, utilizing a causal self-attention mask (Chen et al., 2021). Contextual Action Language Model (CALM) (Yao et al., 2020) is another work which uses a fine-tuned language model with oracle data to generate a set of candidate actions which are then passed to a policy network to select the best one. In Nakano et al. (2021), the authors fine-tune GPT-3 to address long-form questions within a web-browsing context. Human feedback is employed as a direct optimization measure for enhancing the quality of answers generated by the model. Knowledge Distillation: Knowledge Distillation (KD) typically falls into two categories: black-box KD and whitebox KD. In black-box KD, only the teacher\u2019s predictions are available for guidance, while in white-box KD, we have access to the teacher\u2019s parameters (Gou et al., 2021). Recently, black-box KD has gained widespread use for finetuning original models using self-instruct techniques, as proposed by Wang et al. (2022b), or for smaller models (Taori et al., 2023; Chiang et al., 2023; Peng et al., 2023) through the utilization of prompt-response pairs generated by LLMs. West et al. (2021) introduces symbolic KD from text rather than logits. This process involves the transfer of knowledge from a large, general model to a more compact commonsense model, facilitated by a commonsense corpus, yielding a commonsense knowledge graph and model. The work by Hsieh et al. (2023) trains a smaller model that outperform LLM using reasoning steps called rationales. They incorporated rationales as informative supervision to train smaller models with less training data. Complex interactive text environments In text-based games, an agent interacts with the environment by reading and writing text while aiming towards the end game or solving a given task. 
Out of the recent frameworks that deals with generating and interfacing text-based games (C\u02c6 ot\u00b4 e et al., 2018; Hausknecht et al., 2019; Shridhar et al., 2021; Murugesan et al., 2021), we use ScienceWorld (Wang et al., 2022a) which is very complicated by having a large set of objects, actions, and tasks. 3 MODEL In this paper, we propose to train a hierarchical policy by combining KD from an LLM and imitation learning from expert trajectories. This section describes both modules in detail and we refer the reader to Figure 2 for a schematic view. We first formulate the problem as a POMDP (Section 3.1). Next, we describe what knowledge we are distilling from an LLM to guide the agent in accomplishing tasks (Section 3.2). Then, we detail how both the high-level and low-level policies of the hierarchical policy are trained (Section 3.3). 3.1 PROBLEM FORMULATION ScienceWorld (Wang et al., 2022a) can be defined as a partially observable Markov decision process (POMDP), where observations provide information solely on environmental changes induced by the current action. ScienceWorld is 3 \fPublished at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024 Action generator Sub-goal generator History observation/score action sub-goal \ufffd\ufffd Action generator Sub-goal generator Your task is to boil water \u2026 ; Time: 1; Score: 0; Completed subtasks are: navigate_to(kitchen), \u2026 ; The current subtask is heat(water, metal pot); Action history: activate sink --> The sink is now activated, \u2026 ; Current environment: This room is called the kitchen. In it, you see: \u2026 ; Current inventory: a thermometer, \u2026 ; Visited rooms: hallway, \u2026 ; What action should you do next? Next action Sub-goal Figure 2: On the left, a schematic view of our approach is shown. There are two modules: the sub-goal generator and action generator. The sub-goal generator provides a sub-goal for the action generator, which predicts the next action given the current sub-goal and history. On the right, the inputs and outputs of both modules are illustrated. The input comprises different parts including task description, completed sub-goal, current sub-goal, a history of recent actions-observations, and more, each highlighted in a distinct color. an interactive text environment meaning all task instructions, observations and actions are expressed in textual form. Importantly, both observations and rewards in this environment are conditioned by the ongoing task. Given a language vocabulary V and an arbitrary maximum number of tokens N, an observation is defined such as o \u2208\u2126\u2282V N, a reward such as r \u2208R and an action as a \u2208A \u2282V N. Finally, a task or goal description is shown by g \u2208G \u2282V N. We formalize the problem as a goal-augmented POMDP M = (S, V, A, \u2126, G, T, R, O, \u03b3) with S the state space, A \u2282V N the action space, \u2126\u2282V N the observation space, G \u2282V N the goal space, T : S \u00d7 A \u00d7 G \u2192S the goal-conditioned transition function, R : S \u00d7 A \u00d7 G \u2192R the goal-conditioned reward function, O : S \u2192V N an (unknown) observation function mapping a state to a textual description and \u03b3 the discount factor. We assume \u03b3 = 1 in our experiments. 3.2 DISTILLING KNOWLEDGE FROM AN LLM The initial step in training our policies is creating a dataset. This dataset should include sub-goals along with their corresponding aligned sequences of actions for each task. 
To generate sub-goals along with their corresponding aligned sequences of actions we do the following steps. We assume access to a collection of expert trajectories. Then we prompt an LLM with two in-context examples. Each example is composed of a task description, a similar task as the one we wish to annotate, and its expert trajectory. The example also contains a set of sub-goals, with the sequences of actions linked to each sub-goal. Given the two in-context examples and a new task description with its expert trajectory, the LLM is then instructed to generate a response. The response is a set of sub-goals with their associated list of actions. The generated list of actions is used to determine each sub-goal corresponds to which segment of the expert trajectory. It is important to note that these responses are collected only for the training tasks for which we assume having access to expert trajectories. Also, it is important to point out that the LLM is not generating any novel trajectories. Figure 4 illustrates the prompt examples for task 1 \u22121 which is boiling a given substance. To ensure more uniform sub-goals that can generalize across tasks, we opted for a format that looks like function calls. Since that format was shown in the in-context examples, the LLM-generated sub-goals mimic this format as well making them easier to parse. Since the expert trajectories for some tasks can be long (+100 actions), the generated sub-sequence of actions corresponding to each sub-goal may not align exactly with the expert trajectory. Sometimes, it might miss certain actions, while in other instances, it might include additional actions, especially when there are repeated actions in the trajectory. To address this, we use a trajectory alignment process that finds the minimal set of edits to go from the generated trajectory to the expert trajectory according to the Levenshtein distance. For each \u201cremove\u201d edit, i.e. the generated trajectory has superfluous actions, we simply remove those from the generated trajectory. On the other hand, for \u201cadd\u201d edit, i.e. the generated trajectory is missing some actions, we prompt the LLM to generate a new sub-goal for those. An example is shown in Figure 3. 4 \fPublished at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024 pick up thermometer open cupboard pick up metal pot move metal pot to sink activate sink deactivate sink pick up metal pot pour metal pot into metal pot open door to kitchen go to kitchen pick up metal pot open freezer move metal pot to freezer Pick_up(thermometer): pick up thermometer Fill(metal pot, water): move metal pot to sink activate sink deactivate sink pick up metal pot open door to bathroom go to bathroom Freeze(water, metal pot): pour metal pot into metal pot open door to kitchen open freezer move metal pot to freezer Missed actions Missed actions Extra actions LLM LLM Remove Generated Trajectory by LLM Expert Trajectory Figure 3: Example of a trajectory generated by the LLM deviating from the provided expert trajectory. In this example, which is for a boiling task, certain actions are omitted in the generated trajectory, indicated in blue in the left box. To address these missing actions, we group them into sequences and prompt the LLM to generate sub-goals for them. If the generated trajectory includes additional actions, such as the green actions in the right box, we simply remove them to align with the expert trajectory. 
In the resulting annotated dataset, each data point follows the same format as used by Lin et al. (2023) but with the added mention of completed sub-goals and the current sub-goal. Precisely, it corresponds to: \u2022 Input: task description, number of steps, current score, completed sub-goal, current sub-goal, a history of 10 recent actions-observations, current items in the room, inventory, and the visited rooms. \u2022 Target: next action, next sub-goal. 3.3 HIERARCHICAL IMITATION LEARNING With the dataset obtained from distilling knowledge from an LLM, we can now focus on training the policies. Low-level policy: The low-level policy is a language model (LM) which is trained through imitation learning using the annotated dataset. The goal is to have a model much smaller than an LLM so it can fit on a single machine and run faster, ideally below a billion of parameters. This policy learns to predict the next action given the current task description, the 10 previous observation-action pairs, the previous completed sub-goals, and the current sub-goal. We refer to this policy as the action generator. High-level policy: The high-level policy is another LM with a reasonable size. It is trained using the annotated dataset to generate the next sub-goal given the previous sub-goals and a short history, i.e. the last 10 actions and observations. So the high-level policy generates sub-goals while the low-level policy generate actions. Moreover, this policy conditions on the same input information as for the action generator. We call this policy the sub-goal generator. Hierarchical policy: During inference, we first leverage the high-level policy to generate a sub-goal. This generated sub-goal is then fed into the action generator, allowing it to produce the next action aligned with the provided sub-goal. This sequential approach serves as a guiding cue for the action generator, particularly when the trajectory to achieve the goal is complex or long. Moreover, it serves to prevent the action generator from generating actions that might deviate the agent from the correct path, thereby improving the precision and relevance of the actions being generated. 5 \fPublished at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024 [Example 1] [Task description] Your task is to boil water. \u2026 [Expert trajectory] Here is the goal path to achieve to the goal:open door to kitchen, go to kitchen, \u2026 provide me with the functional format of high-level sub-tasks to complete this task and their correspondings actions. [sub-goals] 1navigate_to(kitchen) : {'open door to kitchen', 'go to kitchen'} 2pick_up(thermometer):{'pick up thermometer'} 3 find(metal pot):{'open cupboard', 'pick up metal pot'} \u2026 [Example 2] \u2026 [Current task] [Task description] Your task is to boil chocolate. \u2026 [Expert trajectory] Here is the goal path to achieve to the goal: 'open door to hallway', 'go to hallway', 'open door to kitchen', 'go to kitchen', \u2026 provide me with the functional format of high-level sub-tasks to complete this task and their correspondings actions. Does the actions sequence match with the oracle path? If some actions are missed, use them in the prompt. If more actions are added mistakenly, remove them. Finish! 
Yes No Prompt Generated sub-goals LLM 1navigate_to(hallway) : {'open door to hallway', 'go to hallway'} 2navigate_to(kitchen) : {'open door to kitchen', 'go to kitchen'} 3pick_up(thermometer):{'pick up thermometer'} 4find(metal pot):{'open cupboard', 'pick up metal pot'} 5find(chocolate):{'open oven', 'open freezer', 'open drawer in cupboard', 'open glass jar', 'open drawer in counter', 'open fridge', 'focus on chocolate'} 6\u2026 Figure 4: The figure demonstrates KD to generate sub-goals using an LLM. The LLM is presented with a prompt containing two in-context examples. Each example is composed of a task description in green and an expert trajectory detailing the steps to accomplish that task in blue. It also includes the expected set of sub-goals with their corresponding sequences of actions in red. Following this, we provide a new task description and trajectory, and we let the LLM generate the associated sub-goals and segmented actions. 4 EXPERIMENTS 4.1 ENVIRONMENT We chose ScienceWorld (Wang et al., 2022a) as the environment due to its complexity and the diverse range of tasks it encompasses. This environment is an interactive multi-task text-based game where the agent conducts elementary science experiments in a simulated environment. Each experiment is designed as a separate task. For example, \u201dYour task is to boil water. For compounds without a boiling point, combusting the substance is also acceptable. First, focus on the substance. Then, take actions that will cause it to change its state of matter\u201d. To complete a task, the agent must perform multiple actions and receives the result of each action as an observation and a score. The observations and actions are in text format. An observation describes the changes in the environment, and the score is a numerical value ranging from 0% to 100%, indicating the degree of completion of the current task through the current action. Furthermore, ScienceWorld is a benchmark with 30 distinct tasks spanning 10 science domains which are widely different (Appendix A.4). For instance, in the \u201dChanges of State\u201d task, the agent is required to locate and use heating/freezing sources to alter the state of a substance (e.g., ice or chocolate). Conversely, in a task such as \u201dMendelian Genetics,\u201d the agent is tasked with determining whether a specified trait (e.g., white flower color) is dominant or recessive in a plant. These examples illustrate the substantial diversity across the domains, ranging from physical transformations to genetic analyses, underscoring the broad spectrum of challenges within ScienceWorld. On top of that, ScienceWorld has 10 different locations, more than 200 object types, and 25 action templates which makes the search space very larger for the agent. Each type of task has different variations in which the task objects, the agent\u2019s initial location, and random contents of each room are altered. 4.2 EXPERIMENTAL SETUP The environment has separate sets of variations for train and test. In the test variations, the combinations of objects and conditions are not seen in the train set. Following the experimental setup in (Lin et al., 2023), if the number of variations is more than 10, we consider only the first 10 variations. Our base models for both policies is a pre-trained FLAN-T5-LARGE (Chung et al., 2022) with 700M parameters. For the both polices, we used greedy decoding at inference. We also conduct an ablation study over different model sizes (Figure 5a). 
For fine-tuning the policies, we use all the training tasks and their variations (3600 games in total) from ScienceWorld. We vary the number of training epochs as a function of the size of the models (see Appendix A.3). Table 1: Overall average score (%) across all test tasks of the ScienceWorld benchmark for SayCan, ReAct, Reflexion, Swift-only, SwiftSage, and our algorithm (last column). The Solved Task Types row reports the number of task types for which an agent solves all the test variations. The table also shows the average scores for tasks with short, medium, and long expert trajectories; the rows Task 1-1 and Task 3-3 show two tasks on which our approach underperforms the other methods. The \u2217 denotes scores reported from Lin et al. (2023), which all use ChatGPT (GPT-3.5).\nMethods | SayCan\u2217 | ReAct\u2217 | Reflexion\u2217 | Swift-only | SwiftSage\u2217 | Ours\nOverall Average | 25.22 | 19.76 | 23.40 | 46.25 | 62.22 | 65.43\nSolved Task Types | 0/30 | 0/30 | 4/30 | 4/30 | 2/30 | 11/30\nShort\u2020 | 37.24 | 28.95 | 39.19 | 79.68 | 72.81 | 91.61\nMedium | 20.06 | 21.09 | 14.73 | 35.80 | 55.34 | 62.83\nLong | 18.66 | 11.23 | 16.27 | 25.36 | 57.99 | 45.35\nTask 1-1 | 33.06 | 3.52 | 4.22 | 15.0 | 58.0 | 16.22\nTask 3-3 | 99.56 | 76.19 | 72.54 | 59.5 | 66.9 | 5.6\n4.3 BASELINE AGENTS We compare our approach with other works that leverage LLMs. Some rely only on prompting, such as SayCan, ReAct, and Reflexion, while SwiftSage also uses imitation learning. Here is a brief description of each method. SayCan: the LLM initially offers a set of actions along with their respective ranks. Then, a value-based method is employed to re-rank these actions in order to determine the most rewarding action to execute (Brohan et al., 2023). ReAct: the LLM generates actions by incorporating the provided prompt and the history of generated texts. It employs reasoning traces as intermediate thought steps during action generation to refine a plan for the upcoming steps (Yao et al., 2022). Reflexion: the language agent reflects on the task feedback at each trial in the form of text and retains this information within an episodic memory. During the subsequent trial, it leverages the stored memory text to enhance its decision-making process (Shinn et al., 2023). SwiftSage: this method comprises two components: Swift, a fine-tuned LM that predicts actions, and Sage, a module that queries an LLM for planning when the performance of Swift is inadequate (as determined by some handcrafted rules) (Lin et al., 2023). Swift-only: this is the Swift part of the SwiftSage method, which only has the fine-tuned LM to predict the actions. We consider this method a strong baseline and the most comparable to our approach, as it relies on imitation learning without the need to query an LLM during inference. Note that all baselines use ChatGPT (GPT-3.5) as their LLM. 4.4 RESULTS AND ANALYSIS Main Results: Table 1 compares the performance of the baselines with our approach in ScienceWorld. The score for each task type is the average score (in percent) obtained over 10 test variations. Our approach achieves an overall performance of 65.43%, surpassing Swift-only by 16.71% absolute (a 33.9% relative increase) and showing a slight improvement of 3.3% (5.3% relative) over SwiftSage. Interestingly, our method solves all test variations (i.e., gets an average score of 100%) for 11 out of the 30 task types.
In contrast, SwiftSage solves them only for 2 task types, and Swift-only, only for 4 task types. Additionally, we measured the performance of the agents with respect to the length of the tasks (a proxy for task complexity). The length of a task is determined by how many actions was needed by the expert to solve it.2 Following Lin et al. (2023), we group the tasks into three categories: Short when the length is less than 20 actions, Medium when it falls between 20 and 50 (inclusively), and Long if above 50. As shown in Table 1, our approach outperforms 2Expert trajectories for test tasks were not seen during training. 7 \fPublished at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024 other methods on short and medium tasks. On long tasks, we outperform all methods except SwiftSage, which has a substantial advantage here: The longer the task, the higher the chance it triggers one of the rules for Sage to take over. As part of the comparison, there are other approaches that do not use a LLM including DRRN (He et al., 2016), KG-A2C (Ammanabrolu & Hausknecht, 2019), CALM (Yao et al., 2020), BC (Torabi et al., 2018), TDT (Chen et al., 2021). The results from (Wang et al., 2022a) show these approaches perform poorly, below 17%, in ScienceWorld. For this reason, we did not include them here and only focus on approaches comparable with us. A key motivation for our approach is cost-effectiveness in terms of LLM queries. During training, we make one query to ChatGPT per task to identify the sub-goals within an expert trajectory. Sometimes mismatches occur between the expert trajectory and the actions assigned to each sub-goal by ChatGPT. When that is the case, we employ dynamic programming, with a maximum of 10 attempts per task. This contrasts with other baseline methods, where LLM is queried for each action, incurring considerably higher costs. Why is it failing on some task types? The performance of our algorithm in some tasks are low, (see Table 5). In Table 1, the scores of two tasks are presented. One contributing factor is the variations in the test are very different from those in the training. For instance, the objects might be very different or the path to complete the task is very different and longer. The main culprit is the sub-goal generator which is not able to generate good sub-goals. As a concrete example (Table 2), in the test variations for task 3-3, the agent needs to go to kitchen and then fill a jug with water. When looking at the transcript, we see the agent is able to go to kitchen but then when it arrives, the sub-goal generator issues a sub-goal which is not relevant, FocusOn(fountain). The agent attempts to focus on the fountain which is a wrong action and the game terminates with a score of 0. Another example is task 1-1 (Table 2) in which the agent should boil a substance. It should first find the substance but since the substance is in a totally different location than those seen during training, the sub-goal generator is not able to generate a good sub-goal for this step. Consequently the agent will do other actions and exhaust all the allocated time steps. 
Example (task 3-3) Example (task 1-1) With Sub-goal Expert Trajectory With Sub-goal Expert Trajectory NavigateTo(kitchen) NavigateTo(kitchen) go to art studio go to art studio go to art studio go to art studio go to outside go to outside go to outside go to outside go to kitchen go to kitchen go to hallway go to hallway FocusOn(fountain) NavigateTo(bedroom) -focus on fountain move jug to sink -go to bedroom go to workshop activate sink pick up metal potcontaining gallium deactivate sink pick up jug Table 2: Two instances where the performance of our algorithm is low. The first column displays the trajectory generated with sub-goals, while the second column presents the expert trajectory. Sub-goals are highlighted in dark red, accompanied by their corresponding actions, and incorrect actions are marked in red. The impact of scale: We conduct a comparison across various sizes of language models such as FLAN-T5-XL, FLANT5-BASE, and FLAN-T5-SMALL. Additionally, we evaluate T5-3B and T5-LARGE to determine the effectiveness of FLAN-T5 versus T5. The results are illustrated in Figure 5a. In our initial findings, we observed that FLANT5 outperforms T5 significantly. Moreover, our results reveal a positive correlation between the LM size and its performance \u2013 larger models generally yield better results. Intriguingly, we observe that for smaller models (FLANT5-SMALL and FLAN-T5-BASE), not conditioning on sub-goals works slightly better than including them. This might be indicative that the sub-goal generator is not expressive enough to generate meaningful and effective sub-goals which in turn impacts the action generator policy and leads to lower scores. The impact of sub-goals: To study the impact of the sub-goal generator\u2019s size on the overall performance, we try pairing different sizes of sub-goal generator while limiting the action generator to be small. In Figure 5b, the average scores exhibit an upward trajectory. This can be attributed to the larger sub-goal generators producing more accurate and relevant sub-goals, subsequently empowering the action generator to generate more correct actions. See Table 6 for a complete breakdown of the score per task type and per model size. 8 \fPublished at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024 (a) (b) Figure 5: a) Average scores across different model sizes for FLAN-T5 and T5. For T5 model, X-Large refers to T5-3B. The larger models work better and FLAN-T5 performs also better than T5. Dashed lines represent models that are not conditioning on any sub-goals (\u201cno sg\u201d) and equivalent to Swift-only. b) Average scores across different sizes of sub-goal generator while the action generator is kept to be base (blue) or small (green). Having larger sub-goal generators can significantly boost performance of small action generators. Random Semi-random first 10 steps each first 10 steps each 39.1% 37.6% 6.4% 53.1% 43.3% 14.2% Table 3: Average performance for randomly generated sub-goals. Sub-goals are selected randomly (or semi-randomly) at either the first step, every 10 steps, or each step. To further demonstrate the importance of the sub-goal, we generated random sub-goals and then fed them to the action generator. That yield an average score of 6.4%, indicating that the action generator do condition on the sub-goals, subsequently, it cannot solve the tasks effectively. We conducted an additional experiment by altering the arguments of the sub-goals, as they have a functional format. 
If the argument corresponds to a location, we replaced it with another from the environment, and if it is an object, we replaced it with a randomly chosen object available at that step of the game. We named this approach semi-random sub-goals. The result for this experiment is 14.2%, showing an increase in performance compared to the random sub-goals. Table 3 shows the average scores and Table 9 shows the score for each task. Recovery from noisy sub-goals: We also assess the performance when both the action and sub-goal generators have been exposed to noisy sub-goals. More specifically, we consider two settings: applying noise 1) only at the first step, or 2) every 10 steps. In the first setting, the first sub-goal is (semi-)randomly selected, while the subsequent sub-goals are generated using the FLAN-T5-LARGE sub-goal generator. In the second experiment, a sub-goal is (semi-)randomly selected every 10 steps instead of using the sub-goal generator for all steps. Table 3 shows the overall scores for both settings and a breakdown per task types is presented in Table 10. In both scenarios, semi-random selection (53.1% and 43.3%) yields better results, as it closely resembles the subgoals generated by the sub-goal generator. Some tasks achieve a score of 100, indicating successful recovery from noisy sub-goals. While overall scores are lower compared to using the FLAN-T5-LARGE sub-goal generator, it is still higher than using Swift only in the first setting and closely approaching it in the second setting (Appendix A.10). Generalization on heldout task types: We select one or two task types from each science domain (see highlighted ones in Table 4) to train the action and sub-goal models. Then, we assessed their performance on the rest of the task types. We compared our algorithm against the Swift-only baseline. The average total scores are 40.63% with subgoals vs. 36.56% for Swift-only. For unseen tasks, the scores are 27.72% with sub-goals vs. 15.25% for Swift-only. This suggests that using sub-goals helps improve generalization across unseen tasks. The scores for each task are presented in Table 11. 9 \fPublished at 3rd Conference on Lifelong Learning Agents (CoLLAs), 2024 5 DISCUSSION AND LIMITATION In contrast to SwiftSage, which relies on interactive usage of the ChatGPT API to handle planning, our approach makes use of a trained sub-goal generator to guide the action generator. Moreover, our framework empowers the agent to retrieve a nearly optimal trajectory by supplying the appropriate sub-goal. Nevertheless our framework has significantly reduced the frequency of API calls, which are both expensive and not universally accessible. ReAct, Reflexion, and SwiftSage require human annotations to correct sub-goals and predict a reasonable action. However in our approach, we do not need human help to predict sub-goals or provide precise prompts. Generalization: In this work, our focus is on optimizing performance within the environment, and there might be a potential limitations when transitioning to entirely different scenarios. If we test it in a distinct environment, the performance may not be optimal, given the fine-tuning with data specific to the ScienceWorld environment. It\u2019s acknowledged that for generalization across diverse scenarios, an LLM may perform better, given its capacity to handle a broader range of inputs and contexts. 
Goal Modification: When the agent encounters challenges in solving the current sub-goal, it often finds itself cycling through the same sub-goal for several steps. Consequently, the action generator repeats a sequence of actions mirroring recent ones. Sometimes the sub-goal generator will adjust the sub-goal slightly based on the input, and that can be enough to get unstuck. Ideally, we would like to avoid being stuck for several steps and learn to modify the sub-goal in the right way. One strategy involves online learning, where the controller is updated based on the reward from the environment. However, this approach carries the risk of catastrophic forgetting, necessitating additional measures such as loss modification and regularization to mitigate this risk. Another approach could involve incorporating an LLM alongside the controller: if the controller fails to produce effective actions, the LLM can suggest alternative sub-goals. This carries the risk of poor sub-goals and hallucinations; environment rewards might help mitigate it, but this remains challenging in such a sparse-reward environment. 6 CONCLUSION We introduce a straightforward yet highly effective approach for tackling complex text-based environments. Our framework leverages the knowledge of an LLM to extract sub-goals. A hierarchical policy of two LMs is proposed: a high-level policy predicts a sub-goal, and a low-level policy uses the predicted sub-goal to generate elementary actions. Through extensive experiments across 30 task types in ScienceWorld, our approach demonstrates increased performance compared to state-of-the-art baselines, including standard imitation learning and SwiftSage. As future directions for this work, we aim to explore goal modification strategies further for cases where the agent encounters challenges in solving the current sub-goal; this could involve breaking down or transforming a sub-goal into a more achievable form. Another avenue for future research involves extending this approach to a multi-module environment, in which the sub-goal generator could leverage each module as an independent source to generate diverse and context-specific sub-goals. Exploring strategies for goal modification and online learning is another direction we are keen to pursue. ACKNOWLEDGMENTS Special thanks are due to Prasanna Parthasarathi for his invaluable insights, thoughtful brainstorming, and engaging discussions in the project. We also thank Xingdi Yuan for initial discussions around knowledge distillation with LLMs. Sarath Chandar is supported by the Canada CIFAR AI Chairs program, the Canada Research Chair in Lifelong Machine Learning, and the NSERC Discovery Grant. This research was enabled mostly by compute resources provided by Mila (mila.quebec) and partially by Microsoft."
16
+ }
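The sub-goal annotation entry above prunes superfluous LLM-generated actions and re-prompts for missing ones by aligning the generated action list against the expert trajectory. The sketch below illustrates that alignment step only; it is not the authors' code: align_to_expert is a hypothetical name, and difflib's opcode alignment stands in for the Levenshtein-based alignment the paper describes.

```python
# Illustrative sketch of the trajectory-alignment step, assuming trajectories are
# simple lists of action strings. difflib is a stand-in for Levenshtein alignment.
from difflib import SequenceMatcher
from typing import List, Tuple

def align_to_expert(generated: List[str], expert: List[str]) -> Tuple[List[str], List[List[str]]]:
    """Drop extra generated actions and collect runs of missing expert actions.

    Returns the pruned action list plus the missing runs that would be sent
    back to the LLM so it can propose sub-goals for them.
    """
    kept: List[str] = []
    missing_runs: List[List[str]] = []
    matcher = SequenceMatcher(a=generated, b=expert, autojunk=False)
    for op, g0, g1, e0, e1 in matcher.get_opcodes():
        if op == "equal":
            kept.extend(generated[g0:g1])       # actions already aligned with the expert
        elif op == "delete":
            continue                            # superfluous generated actions: drop them
        elif op == "insert":
            missing_runs.append(expert[e0:e1])  # expert actions the LLM skipped
        else:  # "replace": treat as drop plus missing
            missing_runs.append(expert[e0:e1])
    return kept, missing_runs

# Example: the LLM skipped one navigation action and invented one extra action.
expert = ["open door to kitchen", "go to kitchen", "pick up thermometer"]
generated = ["open door to bathroom", "open door to kitchen", "pick up thermometer"]
kept, missing = align_to_expert(generated, expert)
print(kept)     # ['open door to kitchen', 'pick up thermometer']
print(missing)  # [['go to kitchen']]
```

In the paper's pipeline, each run in missing would be sent back to the LLM with a request to propose a sub-goal covering those actions, while the dropped extras are simply discarded.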
title_10K/test_title_short_2405.02791v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02791v1",
3
+ "title": "Efficient Text-driven Motion Generation via Latent Consistency Training",
4
+ "abstract": "Motion diffusion models have recently proven successful for text-driven human\nmotion generation. Despite their excellent generation performance, they are\nchallenging to infer in real time due to the multi-step sampling mechanism that\ninvolves tens or hundreds of repeat function evaluation iterations. To this\nend, we investigate a motion latent consistency Training (MLCT) for motion\ngeneration to alleviate the computation and time consumption during iteration\ninference. It applies diffusion pipelines to low-dimensional motion latent\nspaces to mitigate the computational burden of each function evaluation.\nExplaining the diffusion process with probabilistic flow ordinary differential\nequation (PF-ODE) theory, the MLCT allows extremely few steps infer between the\nprior distribution to the motion latent representation distribution via\nmaintaining consistency of the outputs over the trajectory of PF-ODE.\nEspecially, we introduce a quantization constraint to optimize motion latent\nrepresentations that are bounded, regular, and well-reconstructed compared to\ntraditional variational constraints. Furthermore, we propose a conditional\nPF-ODE trajectory simulation method, which improves the conditional generation\nperformance with minimal additional training costs. Extensive experiments on\ntwo human motion generation benchmarks show that the proposed model achieves\nstate-of-the-art performance with less than 10\\% time cost.",
5
+ "authors": "Mengxian Hu, Minghao Zhu, Xun Zhou, Qingqing Yan, Shu Li, Chengju Liu, Qijun Chen",
6
+ "published": "2024-05-05",
7
+ "updated": "2024-05-05",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Efficient Text-driven Motion Generation via Latent Consistency Training",
16
+ "main_content": "Introduction Synthesizing human motion sequences under specified conditions is a fundamental task in robotics and virtual reality. Research in recent years has explored the text-to-motion diffusion framework [1, 2, 3] to generate realistic and diverse motions, which gradually recovers the motion representation from a prior distribution with multiple iterations. These works show more stable distribution estimation and stronger controllability than traditional single-step methods (e.g., GANs [4] or VAEs [5, 6]), but at the cost of a hundredfold increase in computational burden. Such a high-cost sampling mechanism is expensive in time and memory, limiting the model\u2019s accessibility in real-time applications. To mitigate inference cost, previous text-to-motion diffusion frameworks try to trade off between fidelity and efficiency from two perspectives: i) mapping length-varying and high-dimensional original motion sequences into well-reconstructed and low-dimension motion latent representations[3, 7] to reduce data redundancy and complexity, and ii) utilizing skip-step sampling strategy [3, 8] to minimize expensive and repetitive function evaluation iterations. The first perspective inspired by the excellent performance of the latent diffusion model in text-to-image synthesis, they introduce the variational autoencoder with Kullback-Leibler (KL) divergence constraints as motion representation extractor. However, unlike image data support that contains more than ten million samples, the high cost of motion capture limits the number of samples for the text-based motion generation task. As a example, the largest current human motion dataset contains no more than fifteen thousand samples after employing data augmentation. Simultaneous arXiv:2405.02791v1 [cs.CV] 5 May 2024 \foptimization of reconstruction loss and KL divergence loss, which are adversarial targets, is significantly challenging in the presence of limited training resources. To ensure high reconstruction performance, previous state-of-the-art models usually set the KL divergence weights low enough, which results in low regularity of motion representations. Such low-regularity and continuous motion representations suffer redundancy and low robustness. It can be mitigated by a sufficiently numerous repetitive function evaluation iterations, but seriously harms the generative performance in the context of extremely few sampling steps. The second perspective follows from the recently well-established diffusion solvers, which can be categorized as training-free methods and training-based methods. Previous study confirms that the forward diffusion process corresponds to an inverse diffusion process without a stochastic term and is known as the probabilistic flow ordinary differential equation (PF-ODE) [9]. Training-free methods constructed different discrete solvers for the special form of the PF-ODE, achieving almost a 20-fold performance improvement. These works effectively compress the sampling steps to 50-100 steps, but the fidelity of the ODE solution results is lower when the number of iterations is much smaller due to the complexity of the probability distribution of the motion sequences and the cumulative error of the discrete ODE sampling. It is still a significant gap in computational effort compared to traditional single-step motion generation models. 
Training-based methods usually rely on model distillation or trajectory distillation for implementation, and one promising approach is known as the consistency model. It impose constraints on the model to maintain the consistency of the output on the same PF-ODE trajectory, thus achieving a single-step or multiple-step generative mapping from the prior distribution to the target distribution. Typical PF-ODE trajectory generation methods are consistency distillation, which generates trajectories with pre-trained diffusion models, or consistency training, which simulates trajectories with the unbiased estimation of ground truth. The former relies on well-trained diffusion models as foundation models. Training these models from scratch is computationally expensive and time-consuming. Less costly consistency training frameworks avoid additional pre-trained models, but also suffer poor generation performance and even training collapse due to redundant and irregular latent representations. Moreover, existing consistency training frameworks have not sufficiently explored conditional PF-ODE trajectory. It results in vanilla consistency-training-based models without significant advantages over well-established multi-step diffusion samplers using classifier-free guidance. Upon the above limitations, we propose a Motion Latent Consistency Training (MLCT) framework with generates high-quality motions with no more than 5 sampling steps. Following the common latent space modeling paradigm, our motivation focuses on constructing low-dimensional and regular motion latent representations, as well as exploring the simulation of conditional PF-ODE trajectories with the consistency training model in the absence of pre-trained models. Specifically, the first contribution of this paper is to introduce a pixel-like latent autoencoder with quantization constraints, which aggregates motion information of arbitrary length to multiple latent representation tokens via self-attention calculation. It differs significantly from the widely used variational representations in that the former is bounded and discrete while the latter is unbounded and continuous. We restrict the representation boundaries with the hyperbolic tangent (Tanh) function and forces the continuous representation to map to the nearest predefined clustering center. Compared to the black-box control strategy of fine-tuning the KL divergence weights, our approach trades off the regularity and reconstruction performance of the motion latent representations more controllably via designing finite dimensional discrete latent representation space. In addition, previous practice demonstrates that the boundedness of the representations contributes to sustaining stable inference in classifier-free guidance (CFG) techniques. The second contribution of this paper is to explore a one-stage conditionally guided consistency training framework. The main insight is to consider unbiased estimation based on ground truth motion representations as the simulation of a conditional probability gradient and to propose an online updating mechanism for the unconditional probability gradient. To the best of our knowledge, this is the first application of classifier-free guidance to consistency training. Since it is utilized for generating trajectories, the denoiser does not need to be double computationally expensive in the derivation to get better conditional generation results. We evaluate the proposed framework on two widely-used datasets: KIT and HumanML datasets. 
Generation results at 1, 3, and 5 function evaluations (NFE) are shown in Figure 1, along with the differences in FID metrics relative to existing methods. Extensive experiments indicate the effectiveness of MLCT and its components. The proposed framework achieves state-of-the-art performance in motion generation in only around 5 steps. To sum up, the contributions of this paper are as follows: \u2022 We explore a pixel-like motion latent representation relying on quantization constraints, which is highly regular, well-reconstructed, and bounded. \u2022 We introduce classifier-free guidance into consistency training for the first time. It helps realize more controllable motion generation as well as more stable training convergence. \u2022 Our proposed MLCT achieves state-of-the-art performance on two challenging datasets with extremely few sampling steps. Figure 1: Our model achieves better FID metrics with less inference time and allows for the generation of high-quality human motions based on textual prompts in around 5 NFE (panels show 1, 3, and 5 NFE; the color of humans darkens over time). 2 Related Work Human motion generation. Human motion generation aims to synthesize human motion sequences under specified conditions, such as action categories [10, 11], audio [12, 13], and textual descriptions [14, 2, 3]. In the past few years, numerous works have investigated motion generation with various generative frameworks. For example, VAE-based models [15, 16, 5] represent the motion as a set of Gaussian distributions and constrain its regularity with a KL divergence. Such a constraint allows them to reconstruct the motion information from the standard normal distribution, yet the results are often ambiguous. GAN-based methods [17, 4] achieve better performance by bypassing direct estimation of probabilistic likelihoods via an adversarial training strategy, but the adversarial objective makes their training often unstable and prone to mode collapse. Some multi-step generative methods have recently emerged with great success, such as auto-regressive [18, 19] and diffusion methods [1, 2, 3]. In particular, the latter is gradually dominating the research frontier due to its stable distribution estimation and high-quality sampling results. MotionDiffuse [1] and MDM [2] were the pioneers in applying diffusion frameworks to motion generation. MLD [3] realizes latent-space diffusion, which significantly improves efficiency. M2DM [7] represents motion as discrete features and runs the diffusion process in a finite state space, with state-of-the-art performance. Some recent work [8] has focused on more controllable generation with equally excellent results. These works validate the outstanding capabilities of the motion diffusion framework and receive continuous attention. Efficient diffusion sampling. Efficient diffusion sampling is the primary challenge for diffusion frameworks oriented toward real-time generation tasks. DDIM [20] relaxes the Markov restriction of the original diffusion framework and achieves a 20-fold improvement in computational efficiency. The score-based method [9] from the same period relates the diffusion framework to a stochastic differential equation and notes that it has a special form known as the probability flow ODE. This is a milestone achievement.
It guides the following works either to steer a simplified diffusion process through a specially designed form of ODE [21, 22, 23], or to skip a sufficiently large number of sampling steps via the more sophisticated higher-order ODE approximation solution strategy [24]. In addition to the above work, the diffusion process can be executed in lower dimensional and more regular latent spaces, thus reducing the single-step computational burden [25]. While these works have proven effective in computer vision, they have received only finite reflections in motion diffusion frameworks. Previous state-of-the-art methods such as MLD [3] and GraphMotion [8] have utilized VAE-based representations and DDIM sampling strategies. Precise and robust motion representation and efficient motion diffusion design remain an open problem. Consistency model. Consistency modeling is a novel and flexible diffusion sampling framework that allows the model to make trade-offs between extreme few steps and generation quality. Latent consistency models extend consistency distillation methods to the latent representation space, saving memory spend and further improving inference efficiency. Subsequently, VideoLCM further applies consistency distillation to video generation. Recent approaches have also investigated the application of Lora and control net to consistency modeling with impressive results. These methods rely on a strong teacher model as the distillation target, which trained from scratch requires not only a large dataset support but also a lot of computational resources. To reduce the training cost, ICM further explores and improves consistency training methods to obtain similar performance to consistency distillation without pre-trained models. However, it is 3 \fstill limited to the original pixel representation space of fixed dimensions and is applied to variance-explosion ODE frameworks. Consistency training methods for broader diffusion strategies in the latent representation space lack further exploration. 3 Preliminaries In this section, we briefly introduce diffusion and consistency models. 3.1 Score-based Diffusion Models The diffusion model [26] is a generative model that gradually injects Gaussian noise into the data and then generates samples from the noise through a reverse denoising process. Specifically, it gradually transforms the data distribution pdata(x0) into a well-sampled prior distribution p(xT ) via a Gaussian perturbation kernel p(xt|x0) = N(xt|\u03b1tx0, \u03c32 t I), where \u03b1t and \u03c3t are specify noise schedules. Recent studies have formalized it into a continuous time form, described as a stochastic partial differential equation, dxt = f(t)xtdt + g(t)dwt, (1) where t \u2208[\u03f5, T], \u03f5 and T are the fixed positive constant, wt denotes the standard Brownian motion, f and g are the drift and diffusion coefficients respectively with follow from, f(t) = d log \u03b1t dt , g2(t) = d\u03c32 t dt \u22122d log \u03b1t dt \u03c32 t . (2) Previous work has revealed that the reverse process of Eq. 1 shares the same marginal probabilities with the probabilistic flow ODE: dxt = [f(t)xt \u22121 2g2(t)\u2207xt log p(xt)]dt, (3) where \u2207x log p(xt) is named the score function, which is the only unknown term in the sampling pipeline. An effective approach is training a time-dependent score network S\u03b8(xt, t) to estimate \u2207x log p(xt) based on conditional score matching, parameterized as the prediction of noise or initial value in forward diffusion. Further, Eq. 
3 can be solved in finite steps by any numerical ODE solver such as Euler [9] and Heun solvers [27]. 3.2 Consistency Models Theoretically, the inverse process expressed by Eq. 3 is deterministic, and the consistency model (CM) [23] achieves one-step or few-step generation by pulling in outputs on the same ODE trajectory. It is more formally expressed as, S\u03b8(xt, t) = S\u03b8(xt\u2032, t\u2032) \u2248S\u03b8(x\u03f5, \u03f5) \u2200t, t\u2032 \u2208[\u03f5, T], (4) which is known as the self-consistency property. To maintain the boundary conditions, existing consistency models are commonly parameterized by skip connections, i.e., S\u03b8(xt, t) := cskip(t)x + cout(t) \u02c6 S\u03b8(xt, t) (5) where cskip(t) and cout(t) are differentiable functions satisfied cskip(\u03f5) = 1 and cout(\u03f5) = 0. For stabilize training, the consistency model maintaining target model S\u2212 \u03b8 , trained with the exponential moving average (EMA) of parameter \u03b3, that is \u03b8\u2212\u2190\u03b3\u03b8\u2212+ (1 \u2212\u03b3)\u03b8. The consistency loss can be formulated as, Lcm(\u03b8, \u03b8\u2212) = Ex,t \u0002 d \u0000S\u03b8(xtn+1, tn+1), S\u03b8\u2212(\u02c6 xtn, tn) \u0001\u0003 (6) where d(\u00b7, \u00b7) is a metric function such as mean square or pseudo-huber metric, and \u02c6 xtn is a one-step estimation from xtn+1 with ODE solvers applied in Eq. 3. 4 Motion Latent Consistency Training Framework In this section, we discuss two critical targets. The first is encoding motions with arbitrary lengths into low-dimensional and regularized latent representations of motions to align all motion dimensions. The second is introducing the conditional PF-ODE into less cost consistency training framework for few-steps and high-quality latent representation sampling. To this end, we propose a Motion Latent Consistency Training (MLCT) framework, as shown in Figure 2. It consists of an autoencoder with quantization constraints, which is used to learn various motion representations in low-dimensional and regularized latent spaces (details in Section 4.1), and a denoising network, which is used to capture the corresponding latent state distributions and to implement few-step sampling (details in Section 4.2). 4 \fMotion Latent Representation Motion Feature ... ... ... ... Latent Representation Noise ... Time Text Transformer Block Embedding Embedding ... ... Quantized Conditional PF-ODE Trajectories Skip Connection Clamp Quantization Constraints Conditional Trajectories Simulation Conditional Target Unconditional Target Figure 2: Our Motion Consistency model can achieve high-quality motion generation given a text prompt with around 5 steps. The color of humans darkens over time. E D S S S xt x\u03f5 xT x\u03f5 xt\u2032 x\u03f5 x\u03f5 xt\u2032 xt xT dxt = f(t)xtdt + g(t)dwt dxt = [f(t)xt \u22121 2g2(t)\u2207xt log p(xt)]dt Consistency Property: S(xT , T, c) \u2248S(xt\u2032, t\u2032, c) \u2248S(xt, t, c) \u2248x\u03f5, where \u2200t, t\u2032 \u2208[\u03f5, T] 4.1 Encoding Motion as Quantized Latent Representation We construct an autoencoder G = {E, D} with transformer-based architecture to realize encoding and reconstructing between motion sequences x and latent motion representations z. The core insight is that each dimension of z is sampled from a finite set M of size 2l + 1 as follow, M = {zi; \u22121, \u2212j/l, \u00b7 \u00b7 \u00b7 , 0, \u00b7 \u00b7 \u00b7 , j/l, \u00b7 \u00b7 \u00b7 , 1}l j=0. 
(7) To this end, we denote z \u2208Rn,d as n learnable tokens with d dimension, aggregating the motion sequence features via attention computation. Inspired by recent quantitative work [28], we employ a hyperbolic tangent (tanh) function on the output of the encoder E to constrain the boundaries of the representation, and then quantize the result by a rounding operator R. Furthermore, the gradient of quantized items is simulated by the previous state gradient to backpropagate the gradient normally. The latent representations z are sampled by follow format, z = R \u0010 l \u00b7 tanh(E(x)) \u0011 /l. (8) The standard optimization target is to reconstruct motion information from z with the decoder D, i.e., to optimize the l1 smooth error loss, Lz = Ex h d \u0010 x, D(z) \u0011 + \u03bbjd \u0010 J (x), J (D(z)) \u0011i , (9) where J is a function to transform features such as joint rotations into joint coordinates, and it is also applied in MLD [3] and GraphMotion [8]. \u03bbj is a balancing term. Compared with the traditional VAEs, the optimization target Eq. 9 does not contain a divergence adversarial term. A well-trained autoencoder G output bounded and regular motion latent representation, which in turn improves the solution space of the denoising network, and experimentally we found that this improvement is important for the convergence of consistent training. 5 \f4.2 Few Step Motion Generation via Consistency Training For conditional motion generation, Class-Free Guidance (CFG) is crucial for synthesizing high-fidelity samples in most successful cases of motion diffusion models, such as MLD or GraphMotion. Previous work introduced CFG into the consistency distillation, demonstrating the feasibility of the consistency model on conditional PF-ODE trajectories. However, they rely on powerful pre-trained teacher models, which not only involve additional training costs but performance is limited by distillation errors. Therefore, we are motivated to simulate CFG more efficiently from the original motion latent representation following the consistency training framework to alleviate the computational burden. The diffusion stage of MLCM begins with the variance preserving schedule [9] to perturbed motion latent representations x\u03f5 = z with perturbation kernel N(xt; \u03b1(t)x0, \u03c32(t)I), \u03b1(t) := e\u22121 4 t2(\u03b21\u2212\u03b20)\u22121 2 t\u03b20, \u03c3(t) := p 1 \u2212e2\u03b1(t). (10) The consistency model S\u03b8 has been constructed to predict x\u03f5 from perturbed xt in a given PF-ODE trajectory. To maintain the boundary conditions that S\u03b8(x\u03f5, \u03f5, c) = x\u03f5, we employ the same skip setting for Eq. ?? as in the latent consistency model (LCM), which parameterized as follow: S\u03b8(xt, t, c) := \u03b72 (10t)2 + \u03b72 \u00b7 xt + 10t p (10t)2 + \u03b72 \u00b7 e S\u03b8(xt, t, c), (11) where e S\u03b8 is a transformer-based network and \u03b7 is a hyperparameter, which is usually set to 0.5. Following the selfconsistency property (as detail in Eq. 4), the model S\u03b8 has to maintain the consistency of the output at the given perturbed state xt with the previous state e xt\u2212\u2206t on the same ODE trajectory. The latter can be estimated via DPM++ solver: e xt\u2212\u2206t \u2248\u03c3t\u2212\u2206t \u03c3t \u00b7 xt \u2212\u03b1t \u00b7 (\u03b1t\u2212\u2206t \u00b7 \u03c3t \u03c3t\u2212\u2206t \u00b7 \u03b1t \u22121) \u00b7 x\u03a6 \u03f5 , (12) where x\u03a6 \u03f5 is the estimation of x\u03f5 under the different sampling strategies. 
In particular, x\u03a6 \u03f5 can be parameterized as a linear combination of conditional and unconditional latent presentation prediction following the CFG strategy, i.e., x\u03a6 \u03f5 (xt, t, c) = (1 + \u03c9) \u00b7 F\u03b8(xt, t, c) \u2212\u03c9F\u03b8(xt, t, \u2205), (13) where F\u03b8(\u00b7) is well-trained and x\u03f5-prediction-based motion diffusion model. It is worth noting that x\u03f5 can be utilized to simulate F\u03b8(xt, t, c) as used in the vanilla consistency training pipeline. Furthermore, F\u03b8(xt, t, \u2205) can be replaced by S\u03b8(xt, t, \u2205) with online updating. Thus Eq. 13 can be rewritten as: x\u03a6 \u03f5 (xt, t, c) = (1 + \u03c9) \u00b7 x\u03f5 \u2212\u03c9S\u03b8(xt, t, \u2205). (14) The optimization objective of the consistency model S\u03b8 is that, Lc = Ex,t h 1 \u2206td \u0010 S\u03b8(xt, t, c), S\u03b8\u2212(\u02c6 xt\u2212\u2206t, t \u2212\u2206t, c) \u0011 + \u03bbcd \u0010 S\u03b8(xt, t, \u2205), x\u03f5 \u0011i , (15) where d(x, y) = p (x \u2212y)2 + \u03b32 \u2212\u03b3 is pseudo-huber metric, \u03b3 is a constant, \u03bbc is a balancing term. The target network S\u03b8\u2212is updated after each iteration via EMA. 5 Experiments 5.1 Datasets and Metrics Datasets. We evaluate the proposed framework on two mainstream benchmarks for text-driven motion generation tasks, which are the KIT [29] and the HumanML3D [5]. The former contains 3,911 motions and their corresponding 6,363 natural language descriptions. The latter is currently the largest 3D human motion dataset comprising the HumanAct12 [15] and AMASS [30] datasets, containing 14,616 motions and 44,970 descriptions. Evaluation Metrics. Consistent with previous work, we evaluate the proposed framework in four parts. (a) Motion quality: we utilize the frechet inception distance (FID) to evaluate the distance in feature distribution between the generated data and the real data. (b) Condition matching: we first employ the R-precision to measure the correlation between the text description and the generated motion sequence and record the probability of the first k = 1, 2, 3 matches. Then, we further calculate the distance between motions and texts by multi-modal distance (MM Dist). (c) Motion diversity: we compute differences between features with the diversity metric and then measure generative diversity in the same text input using multimodality (MM) metric. (d) Calculating burden: we first use the number of function evaluations (NFE) to evaluate generated performance with fewer steps sampling. Then, we further statistics the average sampling time (AST) of a single sample. 6 \fTable 1: Comparisons to state-of-the-art methods on the HumanML test set. We repeat all the evaluations 20 times and report the average with a 95% confidence interval. \"\u2191\" denotes that higher is better. \"\u2193\" denotes that lower is better. \"\u2192\" denotes that results are better if the metric is closer to the real motion. \u2020 denotes that classifier-free guidance is utilized, causing a double NFE. 
Method R-Precision \u2191 FID \u2193 MM-Dist\u2193 Diversity\u2192 MModality\u2191 NFE\u2193 Top-1 Top-2 Top-3 Real 0.511\u00b1.003 0.703\u00b1.003 0.797\u00b1.002 0.002\u00b1.000 2.974\u00b1.008 9.503\u00b1.065 TEMOS[6] 0.424\u00b1.002 0.612\u00b1.002 0.722\u00b1.002 3.734\u00b1.028 3.703\u00b1.008 8.973\u00b1.071 0.368\u00b1.018 T2M[5] 0.457\u00b1.002 0.639\u00b1.003 0.740\u00b1.003 1.067\u00b1.002 3.340\u00b1.008 9.188\u00b1.002 2.090\u00b1.083 MDM [2] 0.320\u00b1.005 0.498\u00b1.004 0.611\u00b1.007 0.544\u00b1.044 5.566\u00b1.027 9.559\u00b1.086 2.799\u00b1.072 1000 MD [1] 0.491\u00b1.001 0.681\u00b1.001 0.782\u00b1.001 0.630\u00b1.001 3.113\u00b1.001 9.410\u00b1.049 1.553\u00b1.042 1000 MLD\u2020 [3] 0.481\u00b1.003 0.673\u00b1.003 0.772\u00b1.002 0.473\u00b1.013 3.196\u00b1.010 9.724\u00b1.082 2.413\u00b1.079 100 GraphMotion\u2020[8] 0.504\u00b1.003 0.699\u00b1.002 0.785\u00b1.002 0.116\u00b1.007 3.070\u00b1.008 9.692\u00b1.067 2.766\u00b1.096 300 M2DM [7] 0.497\u00b1.003 0.682\u00b1.002 0.763\u00b1.003 0.352\u00b1.005 3.134\u00b1.010 9.926\u00b1.073 3.587\u00b1.072 100 Our 0.460\u00b1.001 0.655\u00b1.002 0.760\u00b1.006 0.232\u00b1.007 3.238\u00b1.008 9.658\u00b1.065 3.506\u00b1.008 5 Table 2: Comparisons to state-of-the-art methods on the KIT test set. The meaning of the markers is the same as in Tab. 1. Method R-Precision \u2191 FID \u2193 MM-Dist\u2193 Diversity\u2192 MModality\u2191 NFE\u2193 Top-1 Top-2 Top-3 Real 0.424\u00b1.005 0.649\u00b1.006 0.779\u00b1.006 0.031\u00b1.004 2.788\u00b1.012 11.08\u00b1.097 TEMOS[6] 0.353\u00b1.006 0.561\u00b1.007 0.687\u00b1.005 3.717\u00b1.051 3.417\u00b1.019 10.84\u00b1.100 0.532\u00b1.034 T2M[5] 0.370\u00b1.005 0.569\u00b1.007 0.693\u00b1.007 2.770\u00b1.109 3.401\u00b1.008 10.91\u00b1.119 1.482\u00b1.065 MDM [2] 0.164\u00b1.004 0.291\u00b1.004 0.396\u00b1.004 0.497\u00b1.021 9.191\u00b1.022 10.85\u00b1.109 1.907\u00b1.214 1000 MD [1] 0.417\u00b1.004 0.621\u00b1.004 0.739\u00b1.004 1.954\u00b1.062 2.958\u00b1.005 11.10\u00b1.143 0.730\u00b1.013 1000 MLD\u2020 [3] 0.390\u00b1.008 0.609\u00b1.008 0.734\u00b1.007 0.404\u00b1.027 3.204\u00b1.027 10.80\u00b1.117 2.192\u00b1.071 100 GM\u2020,\u2021[8] 0.429\u00b1.007 0.648\u00b1.006 0.769\u00b1.006 0.313\u00b1.013 3.076\u00b1.022 11.12\u00b1.135 3.627\u00b1.113 300 M2DM [7] 0.416\u00b1.004 0.628\u00b1.004 0.743\u00b1.004 0.515\u00b1.029 3.015\u00b1.017 11.417\u00b1.970 3.325\u00b1.370 100 Our 0.433\u00b1.007 0.655\u00b1.006 0.783\u00b1.006 0.408\u00b1.013 2.831\u00b1.018 11.179\u00b1.085 1.23\u00b1.037 5 5.2 Implementation Details Model Configuration. The motion autoencoder {E, D} and the score network S are both the transformer architecture with long skip connections [31], which is also used in MLD [3]. Specifically, both the encoder E and decoder D contain 7 layers of transformer blocks with input dimensions 256, and each block contains 3 learnable tokens. The size of the finite set M is set as 2001, i.e. l = 1000. The score network S contains 15 layers of transformer blocks with input dimensions 512. The frozen CLIP-ViT-L-14 model [32] is used to be the text encoder. It encodes the text to a pooled output w \u2208R1,256 and then projects it as text embedding to sum with the time embedding before the input of each block. Train Configuration. For diffusion time horizon [\u03f5, T] into N \u22121 sub-intervals, we set \u03f5 is 0.002, T is 1, N is 1000. 
We follow the consistency model [23] to determine ti = (\u03f51/\u03c1 + i\u22121 N\u22121(T 1/\u03c1 \u2212\u03f51/\u03c1))\u03c1, where \u03c1 = 2. For balance training, we set \u03bbj as 0.001. All the proposed models are trained with the AdamW optimizer with a learning rate of 10\u22124 on a single RTX 4090 GPU. The size of each mini-batch is 64 and 128 for the autoencoder and denoising network, and the training process has been iterated with 1500 and 2000 epochs for the autoencoder and denoising network. 5.3 Comparisons to State-of-the-art Methods The test results of HumanML and KIT are shown in Tab. 1 and Tab. 2, respectively. Our framework achieves the state-of-the-art generation performance. Compared to existing motion diffusion generation frameworks with more than 50-1000 iterations (e.g., MDM, MotionDiffuse, and MLD), our approach reduces the computational burden by more than tenfold without severely degrading the quality of damage generation. Remarkably, our inference pipeline is very concise, with no tricks such as additional text preprocessing as used in GraphMotion. Sampling in fewer steps also has 7 \fReal MDM MLD T2M-GPT Our Figure 3: Qualitative analysis of our model and previous models. We provide three textual prompts for the motion visualization results. We achieve better motion generation performance to match some text conditions with fewer NFE. not significantly reduced diversity and multi-modality metrics, which remain competitive. Fig. 3 shows the comparison of the visualization results with the previous model. 5.4 Ablation Study Table 3: Ablation study of our framework with more generation metrics under different guidance parameters. The meaning of the markers is the same as in Tab. 1. Dataset w R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 0 0.742\u00b1.006 0.717\u00b1.028 3.051\u00b1.021 2.496\u00b1.065 0.5 0.771\u00b1.006 0.504\u00b1.021 2.885\u00b1.023 1.935\u00b1.044 1 0.775\u00b1.005 0.494 \u00b1.019 2.831\u00b1.021 1.844\u00b1.049 1.5 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 2 0.777\u00b1.006 0.518\u00b1.016 2.799\u00b1.023 1.612\u00b1.041 Effectiveness of each component. We explore the generative performance of the classifier-free guidance technique under different representations, and the results are reported in Fig. 4. When the guidance coefficient w equals to 0, the model degenerates into a vanilla consistency model. We discover that increasing various degrees of classifier-free guidance accelerates consistency training convergence and improves generation quality. The pixel-discrete motion representation via the quantized autoencoder has better convergence ability generation performance compared to the continuous motion representation. In particular, under the same consistency training parameters, we have not observed significant gains in generation quality from variational constraints compared to the vanilla autoencoder. We further discuss more comprehensive generation metrics at different guidance parameters and the results are reported in Tab. 3. As the guidance parameters increase, controllability and generation quality gradually improve, with a corresponding decrease in diversity. 
In contrast to the larger guidance parameters employed in the traditional diffusion framework 8 \f500 1000 1500 2000 Epoch ( = 0.0) 0 2 4 6 HumanML3D FID 500 1000 1500 2000 Epoch ( = 0.5) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 1.0) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 1.5) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 2.0) 0 2 4 6 FID Auto-Encoder Variational Auto-Encoder Quantized Auto-Encoder 500 1000 1500 2000 Epoch ( = 0.0) 0 2 4 6 KIT FID 500 1000 1500 2000 Epoch ( = 0.5) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 1.0) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 1.5) 0 2 4 6 FID 500 1000 1500 2000 Epoch ( = 2.0) 0 2 4 6 FID Figure 4: Ablation study of the quantized autoencoder employed in our framework with the conventional variational autoencoder and the vanilla autoencoder under different guidance parameters. We repeat all evaluations 3 times at each 50 epoch and report the average values. (which can usually be set to 7), we find that there is no contribution to the generation quality starting from w greater than 2 in the consistency training framework. Table 4: Ablation study of different number of token and sizes of representation finite set. The meaning of the markers is the same as in Tab. 1. Dataset Token l R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 2 100 0.770\u00b1.006 0.599\u00b1.025 2.870\u00b1.020 1.656\u00b1.043 2 500 0.774\u00b1.005 0.550\u00b1.019 2.829\u00b1.018 1.769\u00b1.021 2 2000 0.775\u00b1.005 0.428\u00b1.016 2.844\u00b1.019 1.645\u00b1.045 4 1000 0.781\u00b1.003 0.489\u00b1.021 2.823\u00b1.021 1.859\u00b1.044 6 1000 0.781\u00b1.004 0.465\u00b1.021 2.821\u00b1.019 1.839\u00b1.055 2 1000 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 Ablation study on the different model hyperparameters. In Tab. 4, we test the model performance with different hyperparameters. Consistent with the findings of MLD, increasing the number of tokens does not remarkably increase the generation quality. Appropriately increasing the size of the finite set 2l + 1 is beneficial in improving the generation results, and such gain is no longer significant when l is larger than 1000. Table 5: Ablation study of different number of function evaluations. Dataset NFE R-Precision Top-3 \u2191 FID \u2193 MM-Dist \u2193 MModality \u2191 KIT 1 0.777\u00b1.005 0.567\u00b1.002 2.865\u00b1.013 1.424\u00b1.040 3 0.781\u00b1.005 0.409\u00b1.014 2.812\u00b1.019 1.598\u00b1.037 5 0.783\u00b1.006 0.411\u00b1.019 2.809\u00b1.019 1.648\u00b1.040 8 0.783\u00b1.006 0.400\u00b1.015 2.810\u00b1.017 1.667\u00b1.051 10 0.786\u00b1.006 0.395\u00b1.015 2.795\u00b1.019 1.663\u00b1.049 Ablation study on the different sampling steps. Our generation results at different sampling steps are further shown in Tab. 5. We have excellent results with fewer sampling steps, but when the number of sampling steps is increased to more than 15, the increased number of sampling steps does not result in a quality payoff. It is a common problem with consistency training. 9 \f5.5 Time Cost Table 6: Comparison of inference time with previous sota models. Method MDM MLD T2M-GPT GraphMotion Our (NFE 5) Our (NFE 3) AST (s) 7.5604 0.0786 0.2168 0.5417 0.0141 0.0098 The consistency training method we use does not require prior training of the diffusion model, so training is inexpensive and is available on just a single 4090. On the HumanML dataset, we train the encoder in 15 hours and the denoiser in 12 hours. 
Benefiting from the consistency sampling strategy, our inference time is also more than tenfold less than existing models. A more detailed time comparison is reported in Tab. 6. 6 Conclusion In this paper, we propose a motion latent consistency Training framework, called MLCT, for high-quality, few-step sampling. It encodes motion sequences of arbitrary length into representational tokens with quantization constraints and constrains the consistency of outputs on the same ODE trajectory to realize the latent diffusion pipeline. Inspired by classifier-free guidance, we propose a method called consistent trajectory offset for fast convergence of consistent training. We validate our model and each of its components through extensive experiments and achieve the best trade-off between performance and computational burden in a very small number of steps (around 10). Our approach can provide a reference for subsequent latent consistency model training for different tasks. Limitation and Future Work. Our work still has some directions for improvement. First, we aim at less-step motion generation and lack a discussion on fine-grained motion control. Fortunately, our proposed method is a generalized diffusion model training framework with fewer sampling steps. Some recent common textual controllers (such as graphmotion) can be integrated into the current work. Second, we note that consistent training fails to yield higher sampling quality after increasing the number of steps compared to common diffusion frameworks. How to overcome this difficulty is our main subsequent work."
17
+ }
title_10K/test_title_short_2405.02801v2.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02801v2",
3
+ "title": "Mozart's Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models",
4
+ "abstract": "In recent years, AI-Generated Content (AIGC) has witnessed rapid\nadvancements, facilitating the generation of music, images, and other forms of\nartistic expression across various industries. However, researches on general\nmulti-modal music generation model remain scarce. To fill this gap, we propose\na multi-modal music generation framework Mozart's Touch. It could generate\naligned music with the cross-modality inputs, such as images, videos and text.\nMozart's Touch is composed of three main components: Multi-modal Captioning\nModule, Large Language Model (LLM) Understanding & Bridging Module, and Music\nGeneration Module. Unlike traditional approaches, Mozart's Touch requires no\ntraining or fine-tuning pre-trained models, offering efficiency and\ntransparency through clear, interpretable prompts. We also introduce\n\"LLM-Bridge\" method to resolve the heterogeneous representation problems\nbetween descriptive texts of different modalities. We conduct a series of\nobjective and subjective evaluations on the proposed model, and results\nindicate that our model surpasses the performance of current state-of-the-art\nmodels. Our codes and examples is availble at:\nhttps://github.com/WangTooNaive/MozartsTouch",
5
+ "authors": "Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, Shuchang Liu",
6
+ "published": "2024-05-05",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.SD",
9
+ "cats": [
10
+ "cs.SD",
11
+ "cs.AI",
12
+ "eess.AS"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Multi AND Modal AND LLM",
16
+ "gt": "Mozart's Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models",
17
+ "main_content": "INTRODUCTION In recent years, the intersection of artificial intelligence (AI) and creative arts has witnessed remarkable advancements [2], leading to the emergence of novel techniques and systems capable of producing music[1, 3, 24], images[21\u201323], and other forms of artistic expression[19] in a wide range of industries. As the remarkable advancements in Artificial Intelligence for Generative Composition (AIGC), there is a growing belief that it heralds a new era in AI and will have a substantial influence across the globe. arXiv:2405.02801v2 [cs.SD] 7 May 2024 \fMM\u201924, October 28 November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu However, current music generation models, when tasked with image-to-music synthesis, encounter notable limitations. These models often struggle to accurately capture the ambiance and underlying emotions conveyed by the visual input. While they may produce music that aligns with the visual elements, the nuanced details and subtle cues present in the image are frequently lost in translation. This shortfall hampers the ability of existing systems to truly evoke the intended atmosphere and sentiment of the imagery, thereby limiting their effectiveness in multi-modal creative endeavors. It is evident that there exists a gap in the current stateof-the-art models concerning their proficiency in leveraging visual cues to inform the musical composition process. Natural language serves as a powerful intermediary, demonstrating significant potential in bridging across different sensory modalities. Designed to interact directly with human, Large language models (LLMs) are typically comprised of a vast number of parameters and trained on extensive datasets, granting them powerful comprehension and reasoning capabilities.[8] Harnessing these advantages, researchers have employed LLMs to achieve semantic understanding across multiple modalities. Despite the significant strides made in AI-driven creativity, a compelling question arises: How can we harness the formidable capabilities of LLMs to empower multi-modal tasks such as imageto-music synthesis? This inquiry serves as the focal point of our investigation, wherein we seek to elucidate the seamless integration of LLMs into the process of generating music inspired by visual contents. In this paper, we present Mozart\u2019s Touch, a multi-modal music generation framework that harnesses the power of Large Language Models (LLMs) and pre-trained models to generate music based on visual information. An overview of the architecture is depicted in Figure 1. Mozart\u2019s Touch offers multiple advantages for image-to-music generation: By leveraging the deep understanding and generalizable knowledge of Large Language Models (LLMs) to interpret visual elements accurately, it differs from previous multi-modal end-to-end music generation methods (e.g. CoDi [26] and M2UGen [10]). Unlike traditional approaches, it requires no training of music generation models or fine-tuning LLMs, conserving computational resources and ensuring efficiency. Moreover, Mozart\u2019s Touch utilizes clear, interpretable prompts for greater transparency during the whole process, which improves overall framework explainability. Our contributions are summarized as follows: \u2022 We introduce the Mozart\u2019s Touch framework, an innovative integration of Large Language Models (LLMs) for multimodal music generation. 
Departing from traditional end-toend paradigms, this framework harnesses the power of LLMs to synthesize music aligned with visual inputs. \u2022 We offer a new perspective on leveraging LLMs for multimodal generation tasks. Our framework showcases a novel application of LLMs in text-to-music generation , demonstrating the potential of LLMs in understanding and bridging different sensory modalities and empowering creative processes. \u2022 We assess Mozart\u2019s Touch on the imageand video-to-audio dataset MUImage and MUVideo [11] , utilizing both objective and subjective metrics. Comparative evaluation results show that our approach outperforms existing state-of-theart methods. This experiment demonstrates the effectiveness of our framework and its potential as a new baseline benchmark for future works in the domain. 2 RELATED WORK 2.1 Multi-modal Large Language Model (MLLM) Due to the prevalence of researches in Large Language Models(LLM), the combination of LLM and models in other modalities has also been a rising research hot spot, leading to the new field of MLLM. According to this survey [27] , the key applications of MLLM includes Multi-modal Instruction Tuning (M-IT), Multi-modal InContext Learning (M-ICL), Multi-modal Chain of Thought (M-CoT), and LLM-Aided Visual Reasoning (LAVR). For Mozart\u2019s Touch, we employ Modality Bridging technology, utilizing natural language as an intermediary medium and leveraging LLM to bridge the modality gap. VideoChat-Text [15], for example, is an end-to-end chatcentric video understanding system, which uses pre-trained vision models to extract visual information such as actions and enriches the descriptions using a speech recognition model, which are all represented as textual information as a bridge. 2.2 Image Captioning Image captioning, which is the process of generating descriptive text (captions) that accurately and relevantly capture the content of an image, is a typical multi-modal task requiring both abilities of visual understanding and natural language generation. [25] The field of image captioning has seen significant advancements, such as CLIP [20] and BLIP [14] model. CLIP is developed by OpenAI that has revolutionized the way computers understand images and text, which efficiently learns visual concepts from natural language supervision. The main idea of CLIP is to align texts and images in the feature domain without predetermined labels for specific object categories by training on a large corpus of image-text pairs collected from the Internet. BLIP is another multi-modal framework which transfers flexibly to both vision-language understanding and generation tasks. To pre-train a unified model with both understanding and generation capabilities, they propose multi-modal mixture of encoder-decoder (MED) and achieve great performance across multiple tasks, such as image captioning. 2.3 Multi-Modal Music Generation The advent of Transformer and diffusion models has promoted the development of music generation models. Many impressive works emerged in recent years, such as MusicLM [1], MusicGen [3] , Noise2Music [9] and AudioLDM 2 [17] . MusicLM and MusicGen both consist of autoregressive decoder to generate music. MusicLM can generate high-quality music based on descriptive text such as emotions, styles and instruments. Noise2Music and AudioLDM 2 use diffusion models to generate music based on text that transcends fine-grained semantics and can reach deeper emotions. 
However, these works above all take text or audio as input to generate music, ignoring other modality information, such as image \fMozart\u2019s Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models MM\u201924, October 28 November 1, 2024, Melbourne, Australia. and video. Notable exceptions include the CoDi [26] and M2UGen [11], which allow inputs with more modalities. CoDi(Composable Diffusion) can generate output modalities in parallel from any combination of input modalities. It first use individual modality-specific diffusion models for images, videos, audio, and texts respectively to build a shared multimodal space, and then uses Latent Alignment [4] to achieve joint multi-modal generation. M2UGen is an LLMbased multi-modal music understanding and generation framework. It consists of multi-modal feature encoders, multi-model understanding adapters, bridging LLM, and generation modules to process inputs from multiple modalities such as text, images, and videos, and generate corresponding music. 3 MOZART\u2019S TOUCH Mozart\u2019s Touch is a collaborative multi-modal AIGC framework structured into a sequential integration of three core modules: a Multi-modal Captioning Module, a LLM Understanding & Bridging Module based on LLMs and Music Generation Module. The overall architecture is illustrated in Figure 1. 3.1 Multi-modal Captioning Module The Multi-modal Captioning Module is responsible to encode and understand users\u2019 input, providing textual descriptions for multimodality. This module employs state-of-the-art techniques ViT [5] and BLIP [14] model to analyze images and videos and generate descriptive captions. When users input images and videos without prompting, Our framework can also performs well to generate music that aptly complements the theme. However, in consideration of customization, we also permit users to input textual prompts to guide the music generation process. 3.1.1 Image Captioning Process. For image inputs, we leverage the capabilities of Vision Transformer (ViT) and BLIP-base modules, implemented by the clipinterrogator, to analyze and generate descriptions of the images. This process involves interpreting the visual content of an image \ud835\udc3c and converting it into a image caption description \ud835\udc37caption. Given an input image \ud835\udc3c, the framework generates a caption description \ud835\udc37caption : \ud835\udc37caption = \ud835\udc53BLIP(\ud835\udc3c) (1) where \ud835\udc53BLIP denotes the BLIP model to convert images into descriptive texts. The generated image caption description \ud835\udc37caption serves as input for the subsequent process. 3.1.2 Video Process. For video inputs, we employ a two-step process to handle and interpret the content. Initially, Video-BLIP2-Preprocessor tool is used to sample frames from the video \ud835\udc49, generating a set of frames {\ud835\udc39\ud835\udc56}. Each frame \ud835\udc39\ud835\udc56is then processed to generate a textual description \ud835\udc37\ud835\udc56using the BLIP model, similar to the image process. This process can be formulated as: {\ud835\udc37\ud835\udc56} = {\ud835\udc53BLIP(\ud835\udc39\ud835\udc56)} (2) where \ud835\udc53BLIP denotes the BLIP model to convert frames into descriptive texts. 
Subsequently, to synthesize a video caption description \ud835\udc37caption of the entire video, we aggregate the frame descriptions {\ud835\udc37\ud835\udc56} and process them through Large Language Models (LLMs) to interpret and condense the video\u2019s visual and thematic content into a coherent textual representation. This process can be represented as: \ud835\udc37caption = \ud835\udc53LLM({\ud835\udc37\ud835\udc56}|\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc63\ud835\udc56\ud835\udc51\ud835\udc52\ud835\udc5c) (3) where \ud835\udc53LLM denotes the LLM to integrate and interpret the set of frame descriptions into a single video description \ud835\udc37caption . The prompt used in this process is shown in Table 1. Table 1: Prompt template used to integrate the set of frame descriptions into video description. Role Content system You are about to process a sequence of captions, each corresponding to a distinct frame sampled from a video. Your task is to convert these captions into a cohesive, well-structured paragraph. This paragraph should describe the video in a fluid, engaging manner and follows these guidelines: avoiding semantic repetition to the greatest extent, and giving a description in less than 200 characters. This video caption description \ud835\udc37caption then serves as the input for subsequent process, similar to the image captioning process. 3.2 LLM Understanding & Bridging Module LLM Understanding & Bridging Module plays a pivotal role in the transition from visual to auditory art forms. It is tasked with converting the image/video-descriptive caption text, generated by the Multi-modal Captioning Module, into prompts which are useful in musical generation. This conversion leverages the capabilities of Large Language Models (LLMs) to interpret the underlying mood, themes, and elements conveyed in the textual descriptions of images or videos. Why we undertake the step of LLM-Bridge Module? This is because we contend that although multi-modal caption description have already been presented by Multi-modal Captioning Module, the problems of heterogeneous representations among different modalities still remain unsolved. For example, image captioning model (such as BLIP) intend to generate textual representations which lean more towards describing visual attributes (e.g. appearance, shape, etc.) while for music generation models (e.g. MusicGen), input descriptions that describe musical styles, moods and genres can lead to a better generation of music. From this prospective, we propose LLM Understanding & Bridging Module to align the two types of descriptions mentioned above. To enhance the specificity and relevance of the generated music, the module also optimizes the prompts with additional constraints aimed at music generation. This includes specifying the music genre and incorporating several few-shot examples provided by MusicGen. The optimization process ensures that the final musicdescriptive prompt \ud835\udc37music not only reflects the mood and theme indicated by the input visuals but also adheres to the stylistic and genre-specific guidelines necessary for generating contextually \fMM\u201924, October 28 November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu appropriate music pieces. 
Two type of \ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52, for image and video input separately, are shown in Table 2 and 3 The process can be formulated as below. Given an visual descriptive caption \ud835\udc37caption, the module generates a corresponding music-descriptive prompt \ud835\udc37music : \ud835\udc37music = \ud835\udc53LLM(\ud835\udc37caption|\ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52) (4) where \ud835\udc53LLM denotes the LLM to transform the descriptive texts into a coherent musical prompt that encapsulates the intended mood, themes, and potentially, the genre of the music to be generated, with the help of \ud835\udc43\ud835\udc5f\ud835\udc5c\ud835\udc5a\ud835\udc5d\ud835\udc61\ud835\udc4f\ud835\udc5f\ud835\udc56\ud835\udc51\ud835\udc54\ud835\udc52. Table 2: Prompt template for image-to-music generation. Role Content system Convert in less than 200 characters this image caption to a very concise musical description with musical terms, so that it can be used as a prompt to generate music through AI model, strictly in English. If user provides prompt, give priority to information provided by user. You need to speculate the mood of the given image caption and add it to the music description. You also need to specify a music genre in the description such as pop, hip hop, funk, electronic, jazz, rock, metal, soul, R&B etc. user a city with a tower and a castle in the background, a detailed matte painting, art nouveau, epic cinematic painting, kingslanding assistant A grand orchestral arrangement with thunderous percussion, epic brass fanfares, and soaring strings, creating a cinematic atmosphere fit for a heroic battle. user a group of people sitting on a beach next to a body of water, tourist destination, hawaii assistant Pop dance track with catchy melodies, tropical percussion, and upbeat rhythms, perfect for the beach By invoking LLMs through API, the model is able to distinguish semantic nuances with high accuracy while ensuring its lightweight nature. This capability not only fosters streamlined processing but also facilitates seamless deployment of model services on servers with constrained computational resources. 3.3 Music Generation Module The Music Generation Module utilizes the pre-trained model MusicGenmedium [3] to generate music pieces based on the music-descriptive prompts provided by the LLM Understanding & Bridging Module. MusicGen is designed to produce high-quality music compositions while accommodating various musical styles and preferences. By integrating MusicGen into the Mozart\u2019s Touch framework, we ensure that the generated music aligns closely with the intended mood and theme extracted from the input visuals. Table 3: Prompt template for video-to-music generation. Role Content system Convert in less than 200 characters this video caption to a very concise musical description with musical terms, so that it can be used as a prompt to generate music through AI model, strictly in English. You need to speculate the mood of the given video caption and add it to the music description. You also need to specify a music genre in the description such as pop, hip hop, funk, electronic, jazz, rock, metal, soul, R&B etc. user Two men playing cellos in a room with a piano and a grand glass window backdrop. 
assistant Classical chamber music piece featuring cello duet, intricate piano accompaniment, the rich harmonies blend seamlessly in an elegant and refined setting, creating a symphonic masterpiece. user A man with guitar in hand, captivates a large audience on stage at a concert. The crowd watches in awe as the performer delivers a stellar musical performance. assistant Rock concert with dynamic guitar riffs, precise drumming, and powerful vocals, creating a captivating and electrifying atmosphere, uniting the audience in excitement and musical euphoria. Given a music-descriptive prompt \ud835\udc37music, the Music Generation Module generates a music piece \ud835\udc40: \ud835\udc40= \ud835\udc53MusicGen(\ud835\udc37music) (5) where \ud835\udc53MusicGen represents the MusicGen model to transform the music prompt into music composition audio. It encapsulates the complex process of interpreting the prompts and translating them into musical elements such as melody, harmony, rhythm, and texture, ensuring that the generated music pieces accurately reflect the intended mood and themes conveyed by the input visuals. 4 EXPERIMENTS In this section, we assess the image-to-music and video-to-music generation capacities of Mozart\u2019s Touch, with the discussion of two evaluation datasets MUImage and MUVideo, and the evaluation metrics utilized. The result of evaluation shows our current state-ofthe-art performance in the task of multi-modal music generation. 4.1 Evaluation Dataset To assess our framework\u2019s performance of image-to-music generation, we utilize the MUImage dataset proposed by M2UGen [10]. MUImage is assembled by obtaining music samples from the AudioSet [6] with corresponding images, which contains 9,966 musicimage pairs in total. We sampled 2,500 music-image pairs randomly from MUImage as our evaluation dataset. \fMozart\u2019s Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models MM\u201924, October 28 November 1, 2024, Melbourne, Australia. Table 4: Objective comparison of models for image-to-music generation. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 M2UGen 9.166 1.870 0.556 CoDi 6.674 1.821 0.525 Mozart\u2019s Touch 4.625 1.169 0.753 For video-to-music generation task, we utilize the MUVideo dataset, which is also proposed by M2UGen. We adopted a construction method similar to that of the image-to-music generation task, yielding a corpus of 2,500 music-video pairs for evaluating video-to-music generation task. 4.2 Evaluation metrics For both tasks, we utilize the Frechet Audio Distance (FAD)[12], Kullback-Leibler divergence (KL) and ImageBind Rank (IB Rank)[7] as the evaluation metrics. FAD is a reference-free evaluation metric for music enhancement algorithms. A low score of FAD indicates a high quality of generated music. KL scores measure the labels between the original and the generated music. When the KL score is low, the generated audios are expected to share similar distributions with the reference music. For these two metrics, we utilize the official implementation in PyTorch, where FAD score is supported by the VGGish model. IB Rank[7] is introduced by M2UGen, to assess the alignment between the image/video modality and the generated music. 
Firstly, we use the Image-Bind model to obtain embeddings for the images/videos and the generated music, then calculate their cosine similarity scores and give them a score based on their ranking. For IB Rank, High score represents a relatively high ranking among the baselines. 4.3 Baselines and Details For both tasks, we compare Mozart\u2019s Touch with two baselines: CoDi[26] and M2UGen[10]. We use open-source CoDi model and M2UGen checkpoint files to run inference. Our framework runs on one NVIDIA RTX 3090 24GB GPU, and two baselines run on one NVIDIA V100 32GB GPU to load the whole models. 4.4 Performance Comparison Table 4 presents the performance of our framework, Mozart\u2019s Touch, and two baseline models in image-to-music generation. The results highlight significant improvements in both the quality and relevance of the music generated by our framework. Moreover, Mozart\u2019s Touch surpasses prior state-of-the-art models despite its simpler architecture. Table 5 shows the results of video-to-music generation. For this task, we observed that Mozart\u2019s Touch still outperforms other models, indicating that our two-step captioning strategy is also highly effective. 4.5 Subjective Evaluation Although we achieve exceptional performance in the objective evaluation, we also believe that quantitative evaluation method Table 5: Objective comparison of models for video-to-music generation. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 M2UGen 9.047 1.878 0.552 CoDi 5.055 1.195 0.494 Mozart\u2019s Touch 4.339 1.048 0.787 Table 6: Subjective comparison of models for image-to-music generation. The best results are made bold. Model OVL\u2191 REL\u2191 CoDi 2.95 3.24 M2UGen 3.77 3.02 Mozart\u2019s Touch 3.74 3.76 Ground Truth\u2217 3.88 4.08 Table 7: Ablation study on image-to-music generation task. The best results are made bold. Model \ud835\udc39\ud835\udc34\ud835\udc37\ud835\udc63\ud835\udc54\ud835\udc54\u2193 KL\u2193 IM Rank\u2191 Mozart\u2019s Touch 4.625 1.170 0.757 w/o LUBM 3.741 1.121 0.743 has great limitations for music generation tasks. The metrics above can effectively measure the quality and relevance of the generated music, but fall short in the understanding of creativity and human feelings, as supported by previous research [18]. Following previous similar works [13, 18], the generated samples are rated based on i) overall quality (OVL); and ii) relevance to the input image (REL). Both OVL and REL metrics have a Likert scale [16] between one and five, where a larger number indicates better performance. In this case, We conduct the subjective evaluation involving 125 participants, taking image-to-music generation as example. Totally 75 questions are created for the subjective evaluation, which are randomly sampled from our evaluation dataset. Each question contains a video with the input image as the visual part and generated (or ground truth) music as the audio. 20 audios are sampled from ground truth, 20 from M2UGen, 20 from Mozart\u2019s Touch, and 15 from CoDi. Each questionnaire comprises ten randomly selected questions. Upon subsequent validation by our team, all 75 questions are covered by the total 125 questionnaires. The subjective evaluation result is presented in Table 6. 
While our method slightly underperforms in terms of the metrics for overall quality (OVL) when compared to M2UGen, the result shows that there is a notable enhancement in the metric of relevance (REL) to input image, which is consistent with our target to generate corresponding music that aligns the image well. 4.6 Ablation Studies To demonstrate the effectiveness of LLM bridging modality, we conducted a further ablation experiment, comparing the performance \fMM\u201924, October 28 November 1, 2024, Melbourne, Australia. Tianze Xu, Jiajun Li, Xuesong Chen, Xinrui Yao, and Shuchang Liu of the original system with and without (w/o) the LLM Understanding & Bridging Module (LUBM) in the task of iamge-to-music generation. As indicated in the table 7, the framework without LUBM achieves higher scores in the FAD and KL metrics, the two metrics measure the similarity between ground truth and generated audios, rather than the similarity between different modalities. On the other side, the framework with LUBM performs better in IB Rank metric. This metric utilizes the ImageBind model to encode multi-modal information uniformly, thereby evaluating the similarity between input modality information and generated audio, aligning more closely with the objectives of evaluating multi-modal music generation. Therefore, we believe that there is no clear superiority or inferiority between the Mozart\u2019s Touch framework with and without LUBM. This once again emphasizes that quantitative evaluation may not always be the best approach for assessing the multi-modal music generation tasks. 4.7 Case Study In this part, we conduct a case study to analyze how our LLM Understanding & Bridging Module (LUBM) mitigates the problem of heterogeneous representations among different modalities. By showcasing some representative comparative examples in Figure 2, We demonstrate that the absence of the LUBM does indeed have adverse effects on the generation results. The first example illustrates a portrait of Bach. Some keywords in the original image description disturb the generation of corresponding music, as they focus on the attributes of image instead of that of music. The second example illustrates an anime girl from a visual novel game Atri: My Dear Moments. This example shows that insufficiency of music attributions may also mislead the generation of music in a quite different way. 5 CONCLUSION This paper introduces Mozart\u2019s Touch, a lightweight multi-modal music generation framework that seamlessly integrates LLMs with pre-trained models together. Experiments and researches demonstrate the framework\u2019s capability to perform multi-modality understanding and captioning, multi-modal representations bridging, and music generation, resulting in highly aligned music based on the corresponding multi-modal inputs. Our future work will aim to refine our prompting strategy for enhanced alignment with multi-modal inputs, conduct further evaluation experiments on our LLM Understanding & Bridging Module, and integrate recent advancements actively into our framework. Meanwhile, we will maintain its lightweight characteristic of our framework, ensuring its user-friendliness and expanding its accessibility to a broader scenario."
18
+ }
title_10K/test_title_short_2405.02816v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02816v1",
3
+ "title": "Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization",
4
+ "abstract": "This paper introduces Stochastic RAG--a novel approach for end-to-end\noptimization of retrieval-augmented generation (RAG) models that relaxes the\nsimplifying assumptions of marginalization and document independence, made in\nmost prior work. Stochastic RAG casts the retrieval process in RAG as a\nstochastic sampling without replacement process. Through this formulation, we\nemploy straight-through Gumbel-top-k that provides a differentiable\napproximation for sampling without replacement and enables effective end-to-end\noptimization for RAG. We conduct extensive experiments on seven diverse\ndatasets on a wide range of tasks, from open-domain question answering to fact\nverification to slot-filling for relation extraction and to dialogue systems.\nBy applying this optimization method to a recent and effective RAG model, we\nadvance state-of-the-art results on six out of seven datasets.",
5
+ "authors": "Hamed Zamani, Michael Bendersky",
6
+ "published": "2024-05-05",
7
+ "updated": "2024-05-05",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.IR",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
16
+ "gt": "Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization",
17
+ "main_content": "INTRODUCTION Most machine learning systems, including large generative models, are self-contained systems, with both knowledge and reasoning encoded in model parameters. However, these models do not work effectively for tasks that require knowledge grounding [46], especially in case of non-stationary data where new information is actively being produced [47, 52]. As suggested by Zamani et al. [52], this issue can be addressed when machine learning systems Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA \u00a9 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0431-4/24/07. https://doi.org/10.1145/3626772.3657923 are being enhanced with the capability of retrieving stored content. For example, in retrieval-augmented generation (RAG), as a special case of retrieval-enhanced machine learning (REML) [52], systems consume the responses provided by one or more retrieval models for the purpose of (text) generation [21, 22]. RAG models demonstrate substantial promise across various applications, including open-domain question answering [16, 21, 53], fact verification [44], dialogue systems [5, 42, 48], and personalized generation [36, 37]. Many prior studies on RAG use off-the-shelf retrieval models. For instance, Nakano et al. [25] used APIs from a commercial search engine for text generation. Glass et al. [9], on the other hand, used a term matching retrieval model. Neural ranking models trained based on human annotated data have also been used in the literature [12, 21]. There also exist methods that only optimize the retrieval model and keep the language model parameters frozen [40]. A research direction in this area argues that optimizing retrieval models in RAG should depend on the downstream language model that consumes the retrieval results. This is also motivated by the findings presented by Salemi and Zamani [38] on evaluating retrieval quality in RAG systems. There exist solutions based on knowledge distillation [13] or end-to-end optimization based on some simplifying assumptions [35]. One of these assumptions is marginalization via top \ud835\udc58approximation [10, 21]. In more details, they first retrieve the top \ud835\udc58documents using off-the-shelf retrieval models, e.g., BM25 [34], and optimize retrieval models by re-scoring them, i.e., re-ranking, and feeding the documents to the downstream language model one-by-one independently [21]. This is far from reality as RAG models often consume multiple documents. This paper introduces Expected Utility Maximization for RAG\u2013a novel framework for end-to-end RAG optimization by relaxing these simplifying assumptions. This approach takes a utility function, which can be any arbitrary evaluation metric for the downstream generation task, such as exact match, BLEU [26], and ROUGE [23]. A major challenge in end-to-end optimization of RAG systems is that ranking and top \ud835\udc58selection is a non-differentiable process. Hence, this prevents us from using gradient descent-based methods for optimization. 
We address this issue by casting retrieval as a sampling without replacement process from the retrieval score distribution, which is approximated using the straight-through Gumbel-top-k approach. This stochastic approach\u2014called Stochastic RAG\u2014adds a Gumbel noise to the unnormalized retrieval scores and uses softmax to approximate argmax [17, 18]. Stochastic RAG can be applied to any RAG application. We evaluate our models using seven datasets from a wide range of applications, ranging from open-domain question answering to fact verification to slot-filling for relation extraction as well as dialogue systems. We apply our optimization method to FiD-Light [12], which arXiv:2405.02816v1 [cs.CL] 5 May 2024 \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Hamed Zamani and Michael Bendersky is the best performing system on six out of these seven datasets, according to the knowledge-intensive language tasks (KILT) leaderboard as of Feb. 1, 2024.1 Our results demonstrate significant improvements on all these datasets. 2 EXPECTED UTILITY MAXIMIZATION FOR STOCHASTIC RAG Each RAG system consists of two main components: a text generation model \ud835\udc3a\ud835\udf03parameterized by \ud835\udf03and a retrieval model \ud835\udc45\ud835\udf19 parameterized by \ud835\udf19that retrieves documents from a large document collection\ud835\udc36. The text generation model consumes the retrieval results returned by the retrieval model. End-to-end optimization of RAG systems is challenging. This is mainly because retrieving top \ud835\udc58documents and feeding them to the generation model is not a differentiable process [52], thus one cannot simply employ gradientbased optimization algorithms for end-to-end optimization of these models. In this section, we introduce stochastic expected utility maximization for end-to-end optimization of retrieval-augmented models. Let \ud835\udc47= {(\ud835\udc651,\ud835\udc661), (\ud835\udc652,\ud835\udc662), \u00b7 \u00b7 \u00b7 , (\ud835\udc65\ud835\udc5b,\ud835\udc66\ud835\udc5b)} be a training set containing \ud835\udc5bpairs of \ud835\udc65\ud835\udc56(an input text) and \ud835\udc66\ud835\udc56(the ground truth output text). Let\ud835\udc48denote a utility function that takes the output generated by the RAG system \u02c6 \ud835\udc66and the ground truth output \ud835\udc66and generates a scalar value. The utility function can be any arbitrary metric, including but is not limited to, exact match, term overlap F1, BLEU, and ROUGE. We assume (1) the higher the utility value, the better, (2) the utility function is bounded within the [0, 1] range, and (3) \ud835\udc48(\ud835\udc66,\ud835\udc66) = 1. We define RAG Expected Utility as follows: RAG Expected Utility = 1 \ud835\udc5b \u2211\ufe01 (\ud835\udc65,\ud835\udc66)\u2208\ud835\udc47 \u2211\ufe01 \u02c6 \ud835\udc66\u2208Y \ud835\udc48(\ud835\udc66, \u02c6 \ud835\udc66)\ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) (1) where Y the output space, i.e., all possible output texts. In some models, the output space is limited, for instance in fact verification, the output space is often binary: the given candidate fact is often true or false. In other situations, such as free-form text generation, the output space is unlimited. To make sure that expected utility calculation is tractable, we would need to approximate the above equation by sampling from the unlimited space Y. We will explain how such samples can be obtained at the end of this section. 
The probability of generating any given output \u02c6 \ud835\udc66in a RAG system can be modeled as: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66, d|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\ud835\udc5d(d|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = \u2211\ufe01 d\u2208\ud835\udf0b\ud835\udc58(\ud835\udc36) \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) (2) where \ud835\udf0b\ud835\udc58(\ud835\udc36) denotes all permutations of \ud835\udc58documents being selected from the retrieval collection \ud835\udc36. The first step in the above equation is obtained using the law of total probability, the second step is obtained using the chain rule, and the third step is obtained due to the fact that the probability of a result list d being retrieved is independent of the text generation model \ud835\udc3a\ud835\udf03. 1https://eval.ai/web/challenges/challenge-page/689/leaderboard. Note that considering all permutations in \ud835\udf0b\ud835\udc58(\ud835\udc36) is expensive and impractical for large collections, thus we can compute an approximation of this equation. We do such approximation through a stochastic process. We rewrite Equation (2) as follows: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65;\ud835\udc3a\ud835\udf03, \ud835\udc45\ud835\udf19) = Ed\u223c\ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) [\ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)] (3) where |d| = \ud835\udc58. Inspired by the seq2seq models [43], we compute \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\u2014the component in Equation (2)\u2014as follows: \ud835\udc5d( \u02c6 \ud835\udc66|\ud835\udc65, d;\ud835\udc3a\ud835\udf03) = | \u02c6 \ud835\udc66| \u00d6 \ud835\udc56=1 \ud835\udc5d( \u02c6 \ud835\udc66\ud835\udc56| \u02c6 \ud835\udc66<\ud835\udc56,\ud835\udc65, d;\ud835\udc3a\ud835\udf03) = exp \u00a9 \u00ad \u00ab | \u02c6 \ud835\udc66| \u2211\ufe01 \ud835\udc56=1 log\ud835\udc5d( \u02c6 \ud835\udc66\ud835\udc56| \u02c6 \ud835\udc66<\ud835\udc56,\ud835\udc65, d;\ud835\udc3a\ud835\udf03)\u00aa \u00ae \u00ac (4) where \u02c6 \ud835\udc66\ud835\udc56denotes the \ud835\udc56th token in \u02c6 \ud835\udc66and \u02c6 \ud835\udc66<\ud835\udc56denotes all tokens \u02c6 \ud835\udc661, \u02c6 \ud835\udc662, \u00b7 \u00b7 \u00b7 , \u02c6 \ud835\udc66\ud835\udc56\u22121. The next step is to estimate \ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) in Equation (3), which represents the probability of retrieving the result list d in response to input \ud835\udc65using the retrieval model \ud835\udc45\ud835\udf19. Most retrieval models score each query-document pair independently and then sort them with respect to their relevance score in descending order. Therefore, the probability of a document list being produced by \ud835\udc45\ud835\udf19can be modeled as a sampling without replacement process. In other words, assume that the retrieval model \ud835\udc45\ud835\udf19produces a retrieval score \ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51\u2208R for any document \ud835\udc51\u2208\ud835\udc36. 
Sampling without replacement probability of a document list is then computed as: \ud835\udc5d(d|\ud835\udc65;\ud835\udc45\ud835\udf19) = |d| \u00d6 \ud835\udc56=1 \ud835\udc5d(\ud835\udc51\ud835\udc56|\ud835\udc65;\ud835\udc45\ud835\udf19) 1 \u2212\u00cd\ud835\udc56\u22121 \ud835\udc57=1 \ud835\udc5d(\ud835\udc51\ud835\udc57|\ud835\udc65;\ud835\udc45\ud835\udf19) (5) where document-level probabilities \ud835\udc5d(\ud835\udc51\ud835\udc56|\ud835\udc65;\ud835\udc45\ud835\udf19) can be computed using the softmax operation: \ud835\udc5d(\ud835\udc51\ud835\udc56|\ud835\udc65;\ud835\udc45\ud835\udf19) = exp (\ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51\ud835\udc56) \u00cd \ud835\udc51\u2208\ud835\udc36exp (\ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51) (6) This iterative process of document sampling is non-differentiable, and thus cannot be simply used in gradient descent-based optimization approaches. To address both of these problems, Kool et al. [17, 18] recently introduced Ancestral Gumbel-Top-\ud835\udc58sampling. This approach creates a tree over all items in the sampling set and extends the Gumbel-Softmax sampling approach [24] to sampling without replacement. According to [17], independently perturbing each individual document score with Gumbel noise and picking the top \ud835\udc58documents with the largest perturbed values will generate a valid sample from the Plackett-Luce distribution. Gumbel perturbation itself can be done efficiently by simply drawing a sample \ud835\udc48\u223cUniform(0, 1), as Gumbel(0, \ud835\udefd) \u223c\u2212\ud835\udefdlog(\u2212log(\ud835\udc48)) [24]. \u02dc \ud835\udc5d(\ud835\udc51\ud835\udc56|\ud835\udf19,\ud835\udf03) = exp(\ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51\ud835\udc56+ \ud835\udc3a\ud835\udc51\ud835\udc56) \u00cd \ud835\udc51\u2208\ud835\udc36exp(\ud835\udc60\ud835\udf19 \ud835\udc65\ud835\udc51+ \ud835\udc3a\ud835\udc51) (7) where \ud835\udc3a\ud835\udc51denotes the gumbel noise added for scoring document \ud835\udc51. We use straight-through gumbel-top-k, in which the top \ud835\udc58elements are selected from the above distribution using the arg max operation in the forward path, however, the softmax distribution is \fStochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA used in the backward path for computing the gradients. For more information on straight-through gumbel-softmax, refer to [14, 28]. Gumbel-top-k has been used in IR systems too. For instance, Zamani et al. [51] used the gumbel-top-k trick to optimize re-ranking models conditioned on the first stage retrieval models. Selecting Y. In Equation (1), Y denotes the output space, which can be unlimited for free-form text generation tasks, hence computationally intractable. In such cases, we need to estimate RAG Expected Utility by sampling from the output space. A uniformly random sample can give us an unbiased estimation, however, most random samples are completely unrelated to the input, so they can be easily discriminated from the ground truth output. Inspired by work on hard negative sampling for training ranking models [31, 49], at every \ud835\udc41= 10, 000 training steps, we run the RAG model that is being trained on the training inputs that will be used in the next \ud835\udc41steps and use beam search to return 100 most probable outputs. We randomly sample \ud835\udc5a= 10 of these outputs to form Y. 
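A minimal PyTorch sketch of the straight-through Gumbel-top-k selection described above is given below for a single query; the temperature `tau` and the 0/1 mask interface are illustrative assumptions rather than details taken from the paper.

```python
import torch

def straight_through_gumbel_top_k(scores: torch.Tensor, k: int, tau: float = 1.0):
    """Perturb unnormalized retrieval scores with independent Gumbel(0, 1) noise, pick the
    top-k by arg max in the forward pass, and let gradients flow through the softmax of the
    perturbed scores in the backward pass (Equation (7))."""
    u = torch.rand_like(scores)
    gumbel = -torch.log(-torch.log(u + 1e-20) + 1e-20)   # Gumbel(0, 1) = -log(-log(U))
    perturbed = scores + gumbel
    soft = torch.softmax(perturbed / tau, dim=-1)        # used for gradients
    top_idx = torch.topk(perturbed, k).indices           # hard selection in the forward pass
    hard = torch.zeros_like(scores).scatter_(-1, top_idx, 1.0)
    mask = hard + soft - soft.detach()                   # straight-through estimator
    return mask, top_idx

# Toy usage: 5 candidate documents, select k = 2.
scores = torch.tensor([2.0, 0.5, 1.2, -0.3, 0.9], requires_grad=True)
mask, idx = straight_through_gumbel_top_k(scores, k=2)
mask.sum().backward()   # gradients reach the retrieval scores via the softmax relaxation
```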
We then made sure that for every pair (\ud835\udc65,\ud835\udc66) in the training set for the next \ud835\udc41steps,\ud835\udc66is included in Y, otherwise we randomly replace one of the sampled outputs in Y with \ud835\udc66. The reason for doing this is to make sure that our sample contains the ground truth output, ensuring that the model learns to produce higher probability for the ground truth output. Preparing Y for the next \ud835\udc41training steps would also enable us to pre-compute utility values\ud835\udc48(\ud835\udc66, \u02c6 \ud835\udc66) : \u2200\u02c6 \ud835\udc66\u2208Y, ensuring an efficient optimization process for RAG Expected Utility Maximization (see Equation (1)). 3 EXPERIMENTS 3.1 Data We use the Natural Questions (NQ) [19], TriviaQA [15], HotpotQA [50], FEVER [45], T-REx [7], zsRE [20], and Wizard of Wikipedia (WoW) [6] datasets from the KILT [29] benchmark. Due to the unavailability of ground truth labels for test set, our experiments are conducted on the publicly accessible validation sets. As the retrieval corpus, we employ the Wikipedia dump provided with the KILT benchmark2 and adhere to the preprocessing steps outlined by Karpukhin et al. [16], where each document is segmented into passages, each constrained to a maximum length of 100 words. The concatenation of the article title and passage text is used as a document. Note that the KILT benchmark furnishes document-level relevance labels (called Provenance) for its datasets, and these are employed for evaluating retrieval performance. In line with our preprocessing method outlined in this paper, we define all passages within a positive document as positive passages for our evaluation. For evaluating our models, we follow the standard KILT evaluation setup [29] by focusing on KILT-score metrics. KILT-scores combine R-Precision (\ud835\udc45\ud835\udc43) obtained by the retrieval results and the quality of the generated output text that is evaluated using any arbitrary metric \ud835\udc40(such as EM, Accuracy, or F1). For a query set \ud835\udc44, KILT-scores are computed as follows: KILT-M = 1 |\ud835\udc44| \u2211\ufe01 \ud835\udc5e\u2208\ud835\udc44 {\ud835\udc45\ud835\udc43(p, d) == 1} \u2217\ud835\udc40(\ud835\udc66, \u02c6 \ud835\udc66) (8) 2Retrieval corpus: https://dl.fbaipublicfiles.com/ur/wikipedia_split/psgs_w100.tsv.gz where d is the retrieval results produced by the retrieval model, p is the provenance label set provided by KILT, \ud835\udc66is the ground truth output, and \u02c6 \ud835\udc66is the generated text. Note that there is only one provenance label per query in most KILT datasets. FEVER and HotPotQA are the only exceptions. 12% of queries are associated with more than one supporting document in FEVER and all queries in HotPotQA (which focuses on multi-hop question answering) are associated with two documents. KILT-scores only evaluates the generated text if R-Precision is 1. This means that it does not solely focus on the quality of the generated text, but also makes sure that relevant supporting documents are provided. We adopt the metrics recommended by the KILT benchmark, namely Exact Match (KILTEM) for NQ, TriviaQA, and HotpotQA, Accuracy (KILT-AC) for FEVER, and F1-score (KILT-F1) for the WoW dataset. 3.2 Experimental Setup We apply the proposed optimization framework to a state-of-the-art RAG model on the KILT benchmark (i.e., FiD-Light, according to the KILT leaderboard) [29]. Therefore, we follow the experimental setup of Hofst\u00e4tter et al. [12] for FiD-Light. 
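The KILT-score combination in Equation (8) can be illustrated with the short sketch below. The dictionary keys and per-query data layout are hypothetical, and R-Precision is computed in the usual way as the fraction of provenance documents found among the top-R retrieved results.

```python
from typing import Callable, List, Sequence

def kilt_score(queries: Sequence[dict], metric: Callable[[str, str], float]) -> float:
    """KILT-score (Equation 8): the generation metric only counts for a query when the
    R-Precision of the retrieved list against the provenance labels equals 1."""
    total = 0.0
    for q in queries:
        retrieved: List[str] = q["retrieved"]      # ranked document ids from the retriever
        provenance: List[str] = q["provenance"]    # gold supporting document ids
        r = len(provenance)
        r_precision = sum(d in provenance for d in retrieved[:r]) / r
        if r_precision == 1.0:
            total += metric(q["gold_output"], q["generated_output"])
    return total / len(queries)

# Toy usage with exact match as the metric (KILT-EM).
em = lambda y, y_hat: float(y.strip().lower() == y_hat.strip().lower())
queries = [{
    "retrieved": ["d1", "d7"], "provenance": ["d1"],
    "gold_output": "Paris", "generated_output": "paris",
}]
print(kilt_score(queries, em))   # 1.0
```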
That means we used multi-task relevance sampled training set from the authors earlier work in [11] and trained a dense retrieval model, which is pretrained on the MSMARCO passage retrieval data [2]. Given that the datasets in our experiments focuses on relatively short-text generation tasks, and since all passages are less than or equal to 100 tokens, we set the input token limit for both query and passage combined at 384 tokens and for the output at 64 tokens. For training, we use a batch size of 128 with up to 40 retrieved passages, and a learning rate of 10\u22123 with the Adafactor optimizer [39]. We trained our models for 50,000 steps. We cut the learning rate by half for the large language models (i.e., T5-XL). During decoding, we use beam search with a beam size of 4. All our experiments are based on the T5X framework [33] on TPUs using T5v1.1 as the language model backbone [32]. For each dataset, we use the official KILT-score metric as the utility function for optimization (Equation (1)). 3.3 Results To evaluate the effectiveness of the RAG Expected Utility Maximization framework, we compare our model with the best performing entries in the KILT leaderboard (as of February 1, 2024) according to the official KILT-score metrics. These methods use a wide range of techniques to address these issues including dense retrieval methods followed by BART or T5 for generation, generative retrieval models, retrieval and reranking models, pre-trained large language models without augmentation, etc. These methods and their corresponding references are listed in Table 1. For the sake of space, we do not list their underlying methods here. The performance of these methods is obtained from the KILT leaderboard. We use FiD-Light as the main baseline in this paper, as it produces state-of-the-art results on six out of seven datasets and the proposed optimization method is applied to FiD-Light. FiD-Light is a simple extension of the Fusion-in-Decoder architecture that generates the document identifier of relevant documents in addition to the output text and uses then at inference for re-ranking the input result list. According to the results presented in Table 1, employing stochastic expected \fSIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Hamed Zamani and Michael Bendersky Table 1: Comparing our models with top performing entries in the KILT leaderboard according to KILT-scores, as of February 1, 2024. The results are reported on the blind KILT test sets. Model Open Domain QA Fact Slot Filling Dialog NQ HotpotQA TriviaQA FEVER T-REx zsRE WOW KILT-EM KILT-EM KILT-EM KILT-AC KILT-AC KILT-AC KILT-F1 RAG [21] 32.7 3.2 38.1 53.5 23.1 36.8 8.8 DPR + FiD [30] 35.3 11.7 45.6 65.7 64.6 67.2 7.6 KGI [8] 36.4 \u2013 42.9 64.4 69.1 72.3 11.8 Re2G [10] 43.6 \u2013 57.9 78.5 75.8 \u2013 12.9 Hindsight [27] \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 13.4 SEAL + FiD [4] 38.8 18.1 50.6 71.3 60.1 73.2 11.6 Re3val [41] 39.5 24.2 51.3 73.0 \u2013 \u2013 13.5 GripRank [1] 43.6 \u2013 58.1 \u2013 \u2013 79.9 14.7 PLATO [3] \u2013 \u2013 \u2013 \u2013 \u2013 \u2013 13.6 FiD-Light (T5-Base, \ud835\udc58= 64) 45.6 25.6 57.6 80.6 76.0 81.1 11.9 FiD-Light (T5-XL, \ud835\udc58= 8) 51.1 29.2 63.7 84.5 76.3 84.0 13.1 Stochastic RAG with FiD-Light (T5-Base, \ud835\udc58= 64) 46.2 27.3 59.7 81.3 76.9 82.8 12.8 Stochastic RAG with FiD-Light (T5-XL, \ud835\udc58= 8) 53.0 31.1 64.7 84.8 78.3 87.0 14.2 Figure 1: Sensitivity of Stochastic RAG with FiD-Light XL to the number of samples for estimating Equation (3). 
utility maximization leads to improvements in all datasets. Comparing against state-of-the-art baselines from the KILT leaderboard, our approach presents the best performing result in all datasets except for Wizard of Wikipedia, where only one method, named GripRank, performs slightly better than our best performing system. Note that in another dataset (i.e., zsRE), our methods outperform GripRank by a large margin. The last two rows in Table 1 present the results for the same model with different sizes for the downstream language model. T5Base contains 220 million parameters, while T5-XL is a language model with 3 billion parameters. We observe that both model sizes benefit from applying stochastic expected utility maximization. As expected, the larger model exhibits a better performance. That said, the performance difference between the Base and XL size models is not consistent across datasets. For instance, we observe substantial relative improvements on Natural Questions (i.e., 14.5%), while improvements on T-REx are smaller (i.e., 1.8%). To provide a deeper analysis of the Stochastic RAG performance, we vary the number of samples we take for estimating Equation (3). For the sake of visualization, we only present the results for a QA, a fact verification, and a slot-filling dataset in Figure 1. We observe that the model is robust with respect to the different number of samples. That said, sometimes we observe slight improvement as we increase the sample size (e.g., on TriviaQA). 4 CONCLUSIONS AND FUTURE WORK This paper presented a novel optimization framework for end-toend optimization of retrieval-augmented generation models. The framework maximizes stochastic expected utility, where the utility can be any arbitrary evaluation metric appropriate for the downstream generation task. Without loss of generality, we applied this optimization approach to FiD-Light as an effective RAG model and observed substantial improvements on seven diverse datasets from the KILT benchmark. We demonstrate that the proposed approach advances state-of-the-art results on six out of seven datasets on the blind test sets provided by the benchmark. Our results suggest that language models of different sizes (220M parameters and 3B parameters) benefit from such end-to-end optimization. This work solely focuses on relatively short text generation. In the future, we aim at studying the impact of Stochastic RAG on long text generation and exploring various utility functions that can be defined in RAG optimization. Furthermore, the stochastic nature of Stochastic RAG can be used to increase the diversity of generated outputs in RAG systems. This is quite important in scenarios where multiple outputs are generated by RAG systems for collecting human feedback. ACKNOWLEDGMENTS We thank the reviewers for their invaluable feedback. This work was supported in part by the Center for Intelligent Information Retrieval, in part by NSF grant number 2143434, in part by the Office of Naval Research contract number N000142212688, and in part by an award from Google. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. \fStochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA"
18
+ }
title_10K/test_title_short_2405.02844v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02844v1",
3
+ "title": "SMCD: High Realism Motion Style Transfer via Mamba-based Diffusion",
4
+ "abstract": "Motion style transfer is a significant research direction in multimedia\napplications. It enables the rapid switching of different styles of the same\nmotion for virtual digital humans, thus vastly increasing the diversity and\nrealism of movements. It is widely applied in multimedia scenarios such as\nmovies, games, and the Metaverse. However, most of the current work in this\nfield adopts the GAN, which may lead to instability and convergence issues,\nmaking the final generated motion sequence somewhat chaotic and unable to\nreflect a highly realistic and natural style. To address these problems, we\nconsider style motion as a condition and propose the Style Motion Conditioned\nDiffusion (SMCD) framework for the first time, which can more comprehensively\nlearn the style features of motion. Moreover, we apply Mamba model for the\nfirst time in the motion style transfer field, introducing the Motion Style\nMamba (MSM) module to handle longer motion sequences. Thirdly, aiming at the\nSMCD framework, we propose Diffusion-based Content Consistency Loss and Content\nConsistency Loss to assist the overall framework's training. Finally, we\nconduct extensive experiments. The results reveal that our method surpasses\nstate-of-the-art methods in both qualitative and quantitative comparisons,\ncapable of generating more realistic motion sequences.",
5
+ "authors": "Ziyun Qian, Zeyu Xiao, Zhenyi Wu, Dingkang Yang, Mingcheng Li, Shunli Wang, Shuaibing Wang, Dongliang Kou, Lihua Zhang",
6
+ "published": "2024-05-05",
7
+ "updated": "2024-05-05",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Mamba",
14
+ "gt": "SMCD: High Realism Motion Style Transfer via Mamba-based Diffusion",
15
+ "main_content": "INTRODUCTION Motion style transfer is a significant research direction in multimedia applications. The objective is to transpose the style from the style reference onto the content motion while conserving the motion content. As such, the generated motion can possess features from both the content and style motion, thus enabling the swift switching between different styles for a digital humanoid\u2019s identical motion, as depicted in Figure 1. Employing this technology can dramatically enrich and heighten the realism of digital human motion. It is being broadly adapted into various multimedia contexts such as movies, games, the Metaverse and so on. Traditional methods for motion style transfer [1, 12, 25] mainly adopt a generation framework based on GAN [7]. However, GAN training is known to suffer from instability and convergence issues, arXiv:2405.02844v1 [cs.CV] 5 May 2024 \fPreprint, 2024, Conference Paper Ziyun Qian, et al leading to difficulties in generating high-fidelity, natural motion sequences. On the contrary, the diffusion framework process during training tends to be more stable and is typically easier to converge. Therefore, to address the aforementioned problems, we adopt the diffusion model as our generative framework and consider style motion sequences a diffusion condition for the first time. Consequently, we propose the Style Motion Conditioned Diffusion (SMCD) Framework. This framework is capable of learning motion detail features and style variations more comprehensively, generating motions with content and style motion characteristics, thereby achieving more realistic and natural motion style transfer. However, upon the proposition of the SMCD framework, we discover it failed to effectively extract the temporal information of the motion sequences, leading to the generation of disordered motion. To address this problem, we are inspired by the Mamba [8] model and thus propose the Motion Style Mamba (MSM) module. The MSM module effectively captures sequence temporal information utilizing the Selection Mechanism, preserving long-term temporal dependencies within a motion sequence. We are the first researchers to introduce the Mamba [8] model to the field of motion style transfer. Additionally, since we propose a new framework for motion style transfer, suitable loss functions to aid in training are currently lacking. In light of this, we specially design the Diffusion-based Content Consistency Loss and Diffusion-based Style Consistency Loss, tailoring them to the characteristics of our proposed SMCD Framework. These loss functions are utilized to constrain the content and style of the generated motions, and achieve better results. In the experiment section, we carry out extensive comparative tests using other methods. Visual effects and quantifiable indicators show that the motions generated by the proposed SMCD framework possess higher naturality and realism. Furthermore, it maintains the original motion style while generating various motions, such as walking, running, and jumping. In summary, the main contributions of this paper can be summarized as follows: \u2022 We propose a new motion style transfer framework, SMCD, for the first time, considering style motion sequences as conditions for diffusion to generate motions. \u2022 We first utilize the Mamba model [8] in the field of motion style transfer, and propose the MSM module. 
This module is designed to extract the temporal information of motion sequences better, thereby maintaining long-term dependencies in the time sequence of motion sequences. \u2022 Due to the lack of loss functions that fully adapt to our SMCD framework, we propose the Diffusion-based Content Consistency Loss and Diffusion-based Style Consistency Loss to assist in training for the first time, enabling the model to achieve improved results. \u2022 We conduct extensive experiments to evaluate our framework. The results indicate that our proposed SMCD framework surpasses the effects of state-of-the-art methods in terms of visual effects and quantitative indicators. 2 RELATED WORKS Motion Style transfer. Motion style transfer is a significant research area in multimedia applications. Early methods [3, 29] utilize handcrafted feature extraction to design different motion styles. These approaches, however, are inefficient and incapable of quickly generating large-scale stylized motions. Later, some methods [18, 34] attempt to employ machine learning for motion style transfer. However, these methods typically require a paired dataset for training, meaning they need a human avatar to perform the same motion using different styles, such as running in both a happy and a sad state, with nearly similar steps. Such an intricate process limited the creation of large-scale paired motion datasets. In recent years, specific methods [1, 4, 12, 25] borrow techniques from image style transfer, utilizing deep learning structures for digital human motion style transfer. These methods do not require paired training datasets and achieve sound motion style transfer effects. However, most adopt a Generative Adversarial Network (GAN) [7] based generation framework. GAN [7] training is known to suffer from instability and convergence issues, which results in difficulties in generating realistic, high-fidelity motion sequences. To resolve these problems, we propose a diffusion-based motion style transfer framework. Furthermore, we are the first to consider style motion as a condition within diffusion, allowing a more comprehensive learning of content and style features within a motion sequence. This results in a more realistic, more natural motion style transfer. Diffusion Generative Models. Diffusion consists of both a forward process and a reverse process, forming a Markovian architecture that reverses predetermined noise using neural networks and learns the underlying distribution of data. The researchers highly favor the diffusion model for its excellent performance in various research areas, such as image generation [22, 24, 30], video generation [9], reinforcement learning [13], 3D shape generation [45], and more, benefiting from the advances in learning-based technologies [35\u201341]. Compared to GANs [7] and VAEs [15], the diffusion model exhibits promising quality not only in image tasks but also in motion generation. The work [43] is the first text-based motion diffusion model that achieves body part-level control using fine-grained instructions. Tevet et al. [26] introduce a motion diffusion model, operating on raw motion data, and learn the relationship between motion and input conditions. The method [44] presents a retrievalaugmented motion diffusion model, leveraging additional knowledge from retrieved samples for motion synthesis. 
The research [33], in contrast to traditional diffusion models, devised a spatialtemporal transformer-based architecture as the core decoder, diverging from the conventional Unet backbone, to introduce diffusion into human motion prediction. Kim et al. [14] combine improved DDPM [19] and Classifier-free guidance [11] integrating diffusionbased generative models into the motion domain. The method [28] utilizes a Transformer-based diffusion model, couples with the Jukebox, to provide motion generation and editing suitable for dance. The effort [5] employs a 1D U-Net with cross-modal transformers to learn a denoising function, synthesizing long-duration motions based on contextual information such as music and text. Flaborea et al. [6] focus on the multimodal generation capability of diffusion models and the improved mode-coverage capabilities of diffusive techniques, applying them to detect video anomalies. However, \fSMCD: High Realism Motion Style Transfer via Mamba-based Diffusion Preprint, 2024, Conference Paper among the numerous diffusion-based frameworks, no work currently incorporates style motion as a condition and applies it to motion style transfer. 3 METHODOLOGY Pose Representation. We categorize the motion sequence input into the Style Motion Conditioned Diffusion (SMCD) framework into two types based on function. The first type, content motion sequence mc \u2208\ud835\udc454\ud835\udc3d\u00d7\ud835\udc41, has \ud835\udc41poses, each pose mci has 4\ud835\udc3ddimensions, i.e., mc = {\ud835\udc8eci}\ud835\udc41 \ud835\udc56=1. Similarly, the second type, style motion sequence \ud835\udc8fs \u2208R3\ud835\udc3d\u00d7\ud835\udc47, also has \ud835\udc41poses, each pose nsi has 3\ud835\udc3ddimensions, i.e., ns = {nsi}\ud835\udc41 \ud835\udc56=1. The content motion sequence \ud835\udc8ec can be represented using joint rotations with a source style c \u2208S. In contrast, the style motion sequence \ud835\udc8fs can be inferred from the relative motion of joint rotations to infer style, hence represented using joint rotations, with a target style s \u2208S. Here, S denotes the collection of all styles, \ud835\udc3d= 21 signifies the number of joints in the human skeleton. The objective of the SMCD framework is to generate a motion sequence that simultaneously possess the content characteristics of mc and the style features of ns, hence achieving motion style transfer. 3.1 Style Motion Conditioned Diffusion Framework A majority of current motion style transfer methodologies [2, 12, 25] predominantly adopt a generative framework based on GAN [7]. However, during training, GAN is prone to instability and convergence issues, often resulting in disorganized, chaotic motion sequences that struggle to embody a realistic, high-level natural motion style. On the contrary, the diffusion framework process during training tends to be more stable and is typically easier to converge. Therefore, to address the highlighted problems, we adopt a diffusion model as our generative framework. To ensure that the diffusion framework can learn the details of motion characteristics and style variations more comprehensively, we innovatively consider the style motion sequence ns as the condition C \u2208R\ud835\udc51\u00d7\ud835\udc41 for diffusion. Consequently, we propose the Style Motion Conditioned Diffusion (SMCD) Framework, achieving a more realistic and high-fidelity motion style transfer. 
We utilize the definition of diffusion delineated in DDPM [10], considering the forward diffusion process as a Markov noising process. By perpetually infusing Gaussian noise into the motion sequence m0 \u2208R\ud835\udc51\u00d7\ud835\udc41, we disrupt the motion sequence, thus obtaining {mt}T t=0, i.e., the full motion sequence at noising step t, where the m0 \u2208R\ud835\udc51\u00d7\ud835\udc41is drawn from the data distribution. This forward noising process can be defined as follows: \ud835\udc5e(mt | m0) \u223cN \u0010\u221a\u00af \ud835\udefc\ud835\udc61m0, (1 \u2212\u00af \ud835\udefc\ud835\udc61) I \u0011 , (1) where \u00af \ud835\udefc\ud835\udc61\u2208(0, 1) are monotonic decreasing constants, when approximating to 0, we can approximate mT \u223cN (0, \ud835\udc3c). We set timesteps T = 1000. 3.2 Motion Style Mamba Architecture Upon introducing the SMCD framework, the observation shows that the framework exhibited suboptimal performance in extracting temporal information from motion sequences, resulting in somewhat chaotic outcomes. Drawing inspiration from the Mamba model proposed by Gu et al. in reference [8], we propose the Motion Style Mamba (MSM) module to address this issue. This module employs a Selection Mechanism to more effectively capture the temporal dynamics of motion sequences, thereby preserving the long-term dependencies within the sequence and enhancing the efficacy of motion style transfer. To the best of our knowledge, we are the first to introduce the Mamba model for motion style transfer. The Motion Style Mamba (MSM) module primarily embeds independent temporal information into motion sequences. Prior to the input of motion sequences into the MSM module, it is requisite to subject the input motion sequences and temporal steps to the following processing procedures: Seq \ud835\udc47= \ud835\udc43\ud835\udc38\u0000concat \u0000\ud835\udc40\ud835\udc3f\ud835\udc43(\ud835\udc47), Linear \u0000\ud835\udc5b\ud835\udc60\u0001 , Linear \u0000\ud835\udc5a\ud835\udc50\u0001\u0001\u0001 , (2) where the temporal step size denotes as T, undergoes a projection through a multi-layer perceptron (MLP) comprising two linear layers succeeded by an activation layer, thereby mapping it into a continuous vector space. This process results in forming a latent vector that is amenable to manipulation by the Motion Style Mamba (MSM) module. \ud835\udc8f\ud835\udc60\u2208R3\ud835\udc3d\u00d7\ud835\udc47denotes to style motion sequence, \ud835\udc8e\ud835\udc84\u2208R4\ud835\udc3d\u00d7\ud835\udc41denotes to content motion sequence. Once processed through a linear layer, the two components are concatenated to form an augmented motion sequence. Upon undergoing positional encoding, this sequence is transformed into SeqT, which serves as the input for the Motion Style Mamba (MSM) module. Within the MSM module, the Mamba Block [8] undertakes the pivotal role of mapping temporal information via the temporal step size T onto both the content motion sequence and the style motion sequence while modulating the significance of the temporal information. Inside the Mamba Block, SeqT initially passes through a residual structure equips with an InstanceNorm (IN) layer, followed by feature extraction via Causal Conv1D [31]. The Causal Conv1D ensures that the value of each output is contingent solely upon its preceding input values. 
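The forward noising process q(m_t | m_0) from Section 3.1 can be sketched as follows; the linear beta schedule is a common DDPM choice and an assumption here, since the paper only states that alpha-bar_t is monotonically decreasing and that T = 1000.

```python
import torch

def make_alpha_bar(timesteps: int = 1000, beta_start: float = 1e-4, beta_end: float = 0.02):
    """Cumulative products alpha_bar_t for an assumed linear beta schedule."""
    betas = torch.linspace(beta_start, beta_end, timesteps)
    return torch.cumprod(1.0 - betas, dim=0)

def forward_noise(m0: torch.Tensor, t: torch.Tensor, alpha_bar: torch.Tensor):
    """Sample m_t ~ q(m_t | m_0) = N(sqrt(alpha_bar_t) * m_0, (1 - alpha_bar_t) * I)."""
    a = alpha_bar[t].view(-1, *([1] * (m0.dim() - 1)))   # broadcast over pose dimensions
    noise = torch.randn_like(m0)
    return torch.sqrt(a) * m0 + torch.sqrt(1.0 - a) * noise

# Toy usage: a batch of 2 motion sequences with 4J = 84 features and N = 60 frames.
alpha_bar = make_alpha_bar()
m0 = torch.randn(2, 84, 60)
t = torch.randint(0, 1000, (2,))
mt = forward_noise(m0, t, alpha_bar)
```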
Moreover, the Selection Scan constitutes the core component of the Mamba Block, enabling the model to selectively update its internal state based on the current characteristics of the input data. This further refines to focus on temporal information, facilitating the capture of the temporal dependencies within the motion sequence. Utilizing the Selection Scan allows for a high degree of temporal alignment between the content motion and style motion, thereby circumventing the rigidity that may arise from asymmetrical motion sequences in the final output. The following formula can delineate the structure of the Mamba Block: \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e0 \ud835\udc60= LN \u0000Seq\ud835\udc47 \u0001 , (3) \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc56 \ud835\udc60= LN \u0010 IN \u0010 \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc56\u22121 \ud835\udc60 \u0011\u0011 + IN \u0010 \u03a6 \u0010 \ud835\udf07 \u0010 IN \u0010 \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc56\u22121 \ud835\udc60 \u0011\u0011\u0011\u0011 , (4) \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4eres = LN \u0000Seq\ud835\udc47 \u0001 + \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc41 \ud835\udc60, (5) \fPreprint, 2024, Conference Paper Ziyun Qian, et al Linear Linear T MSM Style Motion \u2026 \u2026 Seq MLP \u2026 PE \ud835\udc8f\ud835\udc8f\ud835\udc94\ud835\udc94\ud835\udfcf\ud835\udfcf \ud835\udc8f\ud835\udc8f\ud835\udc94\ud835\udc94\ud835\udfd0\ud835\udfd0 \ud835\udc8f\ud835\udc8f\ud835\udc94\ud835\udc94\ud835\udc8f\ud835\udc8f \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce \ud835\udfcf\ud835\udfcf \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce \ud835\udfd0\ud835\udfd0 \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce \ud835\udc8f\ud835\udc8f ... Content Motion \ud835\udc8e\ud835\udc8e\ud835\udc84\ud835\udc84\ud835\udfcf\ud835\udfcf \u2026 \ud835\udc8e\ud835\udc8e\ud835\udc84\ud835\udc84\ud835\udfd0\ud835\udfd0 \ud835\udc8e\ud835\udc8e\ud835\udc84\ud835\udc84\ud835\udc8f\ud835\udc8f ... ... Predicted Motion MSM Noisy Motion T Style Motion Diffuse 0 \u2192T-1 Style Motion T 1 MSM MSM Style Motion 1 \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce Diffuse 0 \u21921 ... ... ... ... \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce Figure 2: (Left) Overview of the Style Motion Conditioned Diffusion (SMCD) framework. The model inputs a content motion sequence mc with N poses in a noising step \ud835\udc61, as well as \ud835\udc61itself, and a style motion sequence \ud835\udc8fs considered as condition C. The Motion Style Mamba (MSM) module predicts the stylized motion m0 in each sampling step. (Right) Sampling MSM. Given the \ud835\udc8fs as condition C, we sample random noise mT at the dimensions of the desired motion, then iterate from T=1000 to 1. In each step \ud835\udc61, MSM predicts stylized motion m0 and diffuses it back to mT-1. where LN is the linear layer, IN is an Instance Normalization layer. \u03a6 is Selective Scan module, \ud835\udf07denotes to Causal Conv1D layer [31], \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc56 \ud835\udc60denotes the Mamba Block corresponding to the ith iteration of the cyclic process. Especially, \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e0 \ud835\udc60denotes the input presented to the Mamba Block. 
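The sampling procedure summarized in Figure 2 (right), namely drawing Gaussian noise m_T, letting the MSM module predict m_0 at every step, and diffusing back to m_{t-1}, could look roughly like the sketch below. The DDPM posterior used for the diffuse-back step and the dummy `msm` callable are assumptions, not the authors' implementation.

```python
import torch

@torch.no_grad()
def sample_stylized_motion(msm, ns, shape, alpha_bar):
    """Start from noise m_T; at each step predict m_0 with the MSM module and diffuse it
    back to m_{t-1} using the standard DDPM posterior q(m_{t-1} | m_t, m_0)."""
    T = alpha_bar.shape[0]
    alphas = torch.cat([alpha_bar[:1], alpha_bar[1:] / alpha_bar[:-1]])
    betas = 1.0 - alphas
    m_t = torch.randn(shape)
    for t in reversed(range(T)):
        m0_hat = msm(m_t, torch.full((shape[0],), t), ns)     # MSM predicts m_0 directly
        if t == 0:
            return m0_hat
        ab_t, ab_prev = alpha_bar[t], alpha_bar[t - 1]
        mean = (torch.sqrt(ab_prev) * betas[t] / (1 - ab_t)) * m0_hat \
             + (torch.sqrt(alphas[t]) * (1 - ab_prev) / (1 - ab_t)) * m_t
        var = (1 - ab_prev) / (1 - ab_t) * betas[t]
        m_t = mean + torch.sqrt(var) * torch.randn_like(m_t)
    return m_t

# Toy usage with a dummy "MSM" that always predicts zero motion.
alpha_bar = torch.cumprod(1.0 - torch.linspace(1e-4, 0.02, 1000), dim=0)
dummy_msm = lambda m_t, t, ns: torch.zeros_like(m_t)
out = sample_stylized_motion(dummy_msm, ns=None, shape=(1, 84, 60), alpha_bar=alpha_bar)
```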
\ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4eres represents the output from the residual network that incorporates the Mamba Block as a constitutive element. After the Mamba Block structure facilitates the integration, the temporal information and motion sequences are consolidated and fed into a Multi-Head Attention (MHA) mechanism. This is further followed by the passage through a residual network augmented with a Position-wise Feed-Forward Network, which enhances the efficacy of the style transfer process. \ud835\udf0e= IN (LN ( Mamba res )) + \ud835\udc40\ud835\udc3b\ud835\udc34(LN ( Mamba res )) , (6) where \ud835\udf0erefers to the output of the residual network that includes the integration of MHA. The ultimate output of the MSM module \ud835\udc40\ud835\udc40\ud835\udc46\ud835\udc40can be articulated through the following equation: \ud835\udc40\ud835\udc40\ud835\udc46\ud835\udc40= \ud835\udc39\ud835\udc39\ud835\udc41(\ud835\udf0e) + IN(\ud835\udf0e), (7) where \ud835\udc39\ud835\udc39\ud835\udc41denotes the Position-wise Feed-Forward Network. 3.3 Training Objectives Our objective is to synthesize a motion sequence of length N that embodies both the characteristics of content motion and style motion under the given condition c in style motion sequence ns \u2208 \ud835\udc453\ud835\udc3d\u00d7\ud835\udc47. We model distribution \ud835\udc5d( m0 | C) as the reversed diffusion Mamba Block * N Input Linear Linear Wise position FFN Instance Norm Instance Norm MSM Block Linear MHA K Q V Linear Causal Conv1D Selective Scan Linear Instance Norm predicted motion Figure 3: Architecture of Motion Style Mamba (MSM) Module. process of iteratively cleaning mT. To better handle lengthy motion sequences and enhance computational efficiency, we propose the Motion Style Mamba (MSM) module. After noise mt, noising step t, and motion condition C are fed into the MSM module, we can directly predict the original motion sequence b m0, i.e., b m0 = MSM ( mt, t, C) = MSM ( mt, t, ns), without having to predict noise \ud835\udf16\ud835\udc61as the research [10] (see Figure 2 right). \fSMCD: High Realism Motion Style Transfer via Mamba-based Diffusion Preprint, 2024, Conference Paper Furthermore, we introduce the simple loss proposed by Ho et al. [10] to encourage the predicted motion sequence b m0 to be as consistent as possible with the original motion sequence m0: Lsimple = \ud835\udc38m0,\ud835\udc61\u223c[1,\ud835\udc47] h \u2225m0 \u2212MSM (mt, t, ns)\u22252 2 i . (8) Additionally, in light of the unique characteristics of the style motion conditioned diffusion framework proposed in this paper, we specially designe the Diffusion-based Content Consistency Loss (Eq.9) and Diffusion-based Style Consistency Loss (Eq.10). Diffusion-based Content Consistency Loss. When the inputted content motion sequence mc and style motion sequence ns share the same style (c=s), it would undoubtedly be ideal for the resulting generated motion to closely resemble content motion mc, regardless of the content of style motion ns. Due to the lack of loss functions that fully adapt to our SMCD framework, taking the above observation into account, we propose the Diffusion-based Content Consistency Loss under the style motion conditioned diffusion framework for the first time, aiming to constrain the motion content. In each iteration, two motion sequences with the same content are randomly selected from the dataset M to serve as the style motion and content motion, respectively. 
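As a structural illustration of Equations (6) and (7), the sketch below wires a linear projection, multi-head attention, a position-wise feed-forward network, and instance-normalized residuals in the order described. Layer sizes, the head count, and the activation are placeholders rather than the paper's configuration, and the Mamba-block output is assumed to be precomputed.

```python
import torch
import torch.nn as nn

class MSMTailSketch(nn.Module):
    """Illustrative sketch of Eq. (6) (attention branch plus normalized residual) and
    Eq. (7) (position-wise FFN plus instance-normalized residual)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.lin_q = nn.Linear(dim, dim)
        self.lin_res = nn.Linear(dim, dim)
        self.norm1 = nn.InstanceNorm1d(dim)
        self.norm2 = nn.InstanceNorm1d(dim)
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, mamba_res: torch.Tensor) -> torch.Tensor:   # (batch, frames, dim)
        def inorm(norm, x):                                       # InstanceNorm1d expects (B, C, L)
            return norm(x.transpose(1, 2)).transpose(1, 2)
        h = self.lin_q(mamba_res)
        attn_out, _ = self.mha(h, h, h)
        sigma = inorm(self.norm1, self.lin_res(mamba_res)) + attn_out   # Eq. (6)
        return self.ffn(sigma) + inorm(self.norm2, sigma)               # Eq. (7)

# Toy usage: batch of 2 sequences, 60 frames, 64 channels.
print(MSMTailSketch(64)(torch.randn(2, 60, 64)).shape)
```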
Subsequently, the Diffusion-based Content Consistency Loss is computed using the following formula: Ldcc = Emc,nc\u223cM \u2225\ud835\udc40\ud835\udc46\ud835\udc40(mc, t, nc) \u2212mc\u22251 . (9) Two fundamental differences exist between our loss function and the Content Consistency Loss proposed by Aberman et al. [2] : (1) Our loss function is diffusion-based, and the timestep t can control the forward noising process based on motion. (2) The style motion in our loss function acts as a condition for diffusion, aligning more closely with the overall framework of this paper. Diffusion-based Style Consistency Loss. Following the same line of thinking as the Diffusion-based Content Consistency Loss, we also propose the Diffusion-based Style Consistency Loss for the first time. In each iteration, we randomly select two motion sequences with the same style from the dataset M as the style motion and content motion, respectively. The motion generated should be closer to the style motion ns. We calculate the Diffusionbased Style Consistency Loss using the following formula: Ldsc = Enc,ns\u223cM \u2225\ud835\udc40\ud835\udc46\ud835\udc40(nc, t, ns) \u2212ns\u22251 . (10) Geometric losses. Geometric losses are also frequently adopted in motion generation [20, 23, 27, 28] to enhance the physical realism of the motion, prompting the model to generate more naturally coherent motions. We employ three expected geometric losses, which control (1) positions, (2) foot contact, and (3) velocities. Lpos = 1 \ud835\udc41 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \r \r \r\ud835\udc39\ud835\udc3e \u0010 mi 0 \u0011 \u2212\ud835\udc39\ud835\udc3e \u0010 b mi 0 \u0011\r \r \r 2 2 , (11) Lfoot = 1 \ud835\udc41\u22121 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=1 \r \r \r \u0010 \ud835\udc39\ud835\udc3e \u0010 mi+1 0 \u0011 \u2212\ud835\udc39\ud835\udc3e \u0010 b mi 0 \u0011\u0011 \u00b7 \ud835\udc53\ud835\udc56 \r \r \r 2 2 , (12) Lvel = 1 \ud835\udc41\u22121 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=1 \r \r \r \u0010 mi+1 0 \u2212mi 0 \u0011 \u2212 \u0010 b mi+1 0 \u2212b mi 0 \u0011\r \r \r 2 2\u2032 (13) where \ud835\udc39\ud835\udc3e(\u00b7) is the forward kinematic function that converts joint angles into joint positions, and the \ud835\udc56superscript denotes the motion frame index. \ud835\udc53\ud835\udc56\u2208{0, 1}\ud835\udc3dis the binary foot contact mask for each frame \ud835\udc56, indicating whether the foot is in contact with the ground. It is set according to the binary ground truth data and mitigates foot sliding by offsetting the velocity when contact occurs. Our total training loss function is a combination of the above six losses: Ltotal = Lsimple + Ldcc + Ldsc + Lpos + Lvel + Lfoot . (14) 4 EXPERIMENT In this section, we conduct extensive experiments comparing the method presented in this paper with state-of-the-art methods in terms of visual effects and quantitative metrics. Subsequently, we also test the effectiveness of the SMCD framework in performing motion style transfer to unseen style to assess the model\u2019s generalizability in practical applications. Ultimately, we conduct extensive ablation experiments to validate the effectiveness of each component within the SMCD framework. 4.1 Implementation Details We train and test based on the Xia dataset [34]. This dataset\u2019s Motion clips include 8 motion styles and 5 motion contents. We reduce the original 120fps motion data to 60fps and obtain approximately 1500 motion sequences in total. 
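The geometric losses in Equations (11)-(13) and their combination with the simple loss can be sketched as below. The forward-kinematics function is passed in as a placeholder, the foot-contact loss follows the common formulation that penalizes predicted foot velocity on contact frames, and the two diffusion-based consistency losses are omitted because they require additional same-content and same-style pairs sampled from the dataset.

```python
import torch

def geometric_losses(m0, m0_hat, fk, foot_contact):
    """Sketch of Equations (11)-(13). `fk` maps joint angles to joint positions and
    `foot_contact` is the binary per-frame contact mask f_i. Tensors are (frames, features)."""
    pos_real, pos_pred = fk(m0), fk(m0_hat)
    l_pos = ((pos_real - pos_pred) ** 2).sum(dim=-1).mean()
    l_vel = (((m0[1:] - m0[:-1]) - (m0_hat[1:] - m0_hat[:-1])) ** 2).sum(dim=-1).mean()
    foot_vel = (pos_pred[1:] - pos_pred[:-1]) * foot_contact[:-1]   # only counted on contact
    l_foot = (foot_vel ** 2).sum(dim=-1).mean()
    return l_pos, l_vel, l_foot

def total_loss(m0, m0_hat, fk, foot_contact):
    """Simple loss plus geometric losses, a reduced form of Equation (14)."""
    l_simple = ((m0 - m0_hat) ** 2).mean()
    l_pos, l_vel, l_foot = geometric_losses(m0, m0_hat, fk, foot_contact)
    return l_simple + l_pos + l_vel + l_foot

# Toy usage with an identity "forward kinematics" placeholder.
frames, feats = 60, 84
m0, m0_hat = torch.randn(frames, feats), torch.randn(frames, feats)
contact = torch.randint(0, 2, (frames, feats)).float()
print(total_loss(m0, m0_hat, lambda m: m, contact))
```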
Our framework is implemented in PyTorch and trains on an NVIDIA A800, with a batch size of 512, using the AdamW optimizer [17]. The training process takes about 10 hours each time. 4.2 Visual Effect Comparison We qualitatively compare the visual effects in motion style transfer from three aspects: style expressiveness, content preservation, and motion realism. This comparison involves our proposed SMCD framework, the method proposed by Aberman et al. [1] and StyleERD [25]. Due to the scarcity of open-source papers in the field of motion style transfer, our comparison is limited to the two methods mentioned above. The content motion and style motion adopted in the experiments originate from the dataset proposed by Xia et al. [34] Under ideal circumstances, the model should be capable of transferring the style of the style motion to the content motion while preserving the content of the content motion. Hence, the generated motion sequence should embody content and style motion characteristics. As seen in Figure 4, we conduct three sets of motion style transfers. The results show that the motions generated by our SMCD framework can more realistically reflect the style while retaining the original content, demonstrating higher style expressiveness and content preservation. On the other hand, the frameworks [1] and [25] struggle to transfer the motion style effectively. Regarding motion realism, motions generated by our SMCD framework are more realistic. In contrast, the other two methods exhibit flaws at the ankles, shoulders, and other areas, as highlighted in red boxes in Figure 4. \fPreprint, 2024, Conference Paper Ziyun Qian, et al Input style Input content Aberman et al. Style-ERD Ours Old walk into neutral style Proud walk into sexy style Strutting run into old style \u4e0d\u7528\u586b \u4e0d\u7528\u586b \u4e0d\u7528\u586b Figure 4: A comparative visual representation of the SMCD framework with the methods proposed by Aberman et al. [1] and Style-ERD [25]. The image depicts the flaws in the generated motions, denoted by red boxes. 4.3 Quantitative Evaluation Inspired by MoDi [21], we adopt the following metrics to evaluate our framework quantitatively: \u2022 FID (Fr\u00e9chet Inception Distance): This metric measures the difference between the distribution of motions generated in the latent space and real motions to evaluate the quality of generated motions. The lower the FID score, the smaller the distribution difference between the generated and real motions, indicating a higher quality of the motion generated. \u2022 KID (Kernel Inception Distance): Similar to FID, it utilizes convolution to extract motion features when calculating the distance between feature statistical data. Compared with FID, the KID score is more sensitive to the local structure and details of generated motions. A lower KID score indicates a higher quality of the generated motion. \u2022 Diversity: Evaluate the degree of diversity of the generated movements. The higher the value, the more diverse the movements generated, indicating better generation outcomes. We conduct quantitative comparison experiments on the Xia dataset [34], as demonstrated by the results in Table 1. The quantitative comparison results on the BFA dataset [2] can be seen in the supplementary material. Due to the limited availability of publicly accessible datasets in motion style transfer, we only compare these two mainstream datasets. 
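For reference, the Fréchet distance underlying the FID metric described above can be computed as below once real and generated motions have been mapped to feature vectors by a pretrained motion encoder; the encoder is assumed to exist and is not included here.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2}) between two feature sets
    (rows are samples); lower values indicate closer distributions."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s_r = np.cov(feats_real, rowvar=False)
    s_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(s_r @ s_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real    # drop numerical imaginary residue
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(s_r + s_g - 2.0 * covmean))

# Toy usage with random 64-dimensional features.
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(200, 64)), rng.normal(1.0, 1.0, size=(200, 64))))
```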
Table 1 reveals that our proposed SMCD framework surpasses the baseline [2, 25] on most metrics, achieving optimal results. This success stems from our SMCD framework and MSM module, which excel in learning content and style motion features and fusing them effectively. At the same time, these elements \fSMCD: High Realism Motion Style Transfer via Mamba-based Diffusion Preprint, 2024, Conference Paper Table 1: A quantitative comparison with State-of-the-art methods on the Xia dataset [34]. The best scores are emphasized in bold. Method FID\u2193 KID\u2193 Diversity\u2191 Aberman et al. [2] 19.405 0.953 2.639 Style-ERD [25] 17.682 0.869 2.595 Ours 16.676 0.768 2.602 maintain the long-term dependencies in temporal sequence within the motion sequence, leading to the generation of more realistic motion sequences. 4.4 Generalizability Our model is capable of extracting styles from any given motion clip. However, in practical applications within the multimedia field, motion style transfer models will likely encounter style categories outside the training dataset. At times like this, whether the model can transfer styles from unseen styles determines its generalization and usability. To compare the generalizability of our proposed SMCD framework with other methods, we train the model on the Xia dataset [34], which does not include angry label motions. Then, we conduct tests on a dataset that included angry style motions. The results, as shown in Figure 5, illustrate that when faced with an unseen motion style angry, our SMCD framework can still learn its characteristics. Our framework achieve better motion style transfer effects than [1] and [25]. The other two methods that participate in the comparison exhibited flaws when transferring unseen styles, as indicated by the red boxes in Figure 5. The results of the generalizability comparison indicate that our framework is more generalizable and practical. Its ability to perform more effectively in various multimedia fields, such as movies, games, and the Metaverse, distinguishes it from other methods. 4.5 Ablation Studies In order to verify the necessity of each component in our model, we conduct extensive ablation experiments, removing the MSM module, the loss functions Lsimple , Ldcc, Ldsc respectively to train the model, and then utilize the same evaluation metrics as quantitative evaluation for validation. As shown in Table 2, the removal of any one component significantly degrades all evaluation metrics of the SMCD framework, with the most noticeable drop in performance for motion style transfer when the MSM module is removed. In addition, we also present the motion effect diagram generated by the model after removal, as illustrated in Figure 6. It can be observed that the motion has many flaws, and it does not effectively reflect the style of the motion. The results of the ablation experiment also affirm the effectiveness of each component in our SMCD framework; they all play integral roles and are indispensable. To further compare the motion style transfer performance of our proposed MSM module with other modules, we substitute the MSM module for four modules: STGCN [42], Transformer Encoder [32], iTransformer [16], and Mamba [8], and retrain the framework for comparative experiments. We leverage the same evaluation metrics Table 2: Ablation experiments on various components of the SMCD framework. The best scores are highlighted in bold. 
Setting FID\u2193 KID\u2193 Diversity\u2191 Ours w/o Lsimple 17.546 0.831 2.158 Ours w/o Ldcc 22.410 1.168 2.473 Ours w/o Ldsc 20.294 1.030 1.931 Ours w/o MSM 23.330 1.458 1.433 Ours 16.676 0.768 2.602 Table 3: Comparison results between the MSM module and other modules. The best scores are highlighted in bold. Module FID\u2193 KID\u2193 Diversity\u2191 STGCN [42] 21.119 1.021 2.269 Transformer [32] 18.977 0.952 2.080 iTransformer [16] 19.177 0.862 2.392 Mamba [8] 20.962 0.925 2.579 MSM(Ours) 16.676 0.768 2.602 as mentioned above to assess the performance. As shown in Table 3, our MSM module outperform all other modules on all quantitative evaluation metrics, fully demonstrating its superiority in achieving a better motion style transfer effect. We hypothesize that this success is due to the MSM module\u2019s superior ability to capture the temporal information and stylization characteristics of motion sequences, thereby effectively transferring styles while maintaining the long-term dependencies within the sequence. Due to space limitations, more ablation experiment results will be demonstrated in the supplementary materials. 4.6 User study In addition to the qualitative and quantitative comparisons, we conduct a user study to perceptually evaluate the realism, style expressiveness, and content preservation of our style transfer results. As detailed below, we recruite 50 volunteers to respond to a questionnaire consisting of three types of questions. In this part, we assess the realism of the generated motions. Two motions depicting the same type of content and style (such as a depressed walk) are presented to the volunteers. The motions originated from three different sources: (1) our original Xia dataset [34], (2) results generated by method [2], (3) results generated by StyleERD [25], and (4) results generated by our framework. Note that (2), (3), and (4) are all generated using similar inputs. Participants are asked, \"Which motion above looks more like actual walking?\" and must choose one of the four motion sources. Table 4 presents the realism ratios for each method in generating motions. It is easy to find out that 85.2% of our results are judged as realistic, closely resembling the proportion in the real Xia dataset [34]. Notably, this ratio is significantly higher than method [2] with 15.1% and Style-ERD [25] with 28.7%. Content Preservation and Style Transfer. This part compares our style transfer results with those generated by Aberman et al. [2] and Style-ERD [25] regarding content preservation and style \fPreprint, 2024, Conference Paper Ziyun Qian, et al Input unseen style Input content Aberman et al. Style-ERD Ours Neutral run Angry style Neutral run into angry style Neutral run into angry style Neutral run into angry style Childlike style Angry walk Angry walk into childlike style Angry walk into childlike style Angry walk into childlike style Figure 5: Illustration of Unseen Styles. Training on datasets [34] without the angry style, then testing conventionally to evaluate their generalizability when dealing with an unseen style. Red boxes highlight flaws in the generated motions. 
Input style Input content Ours \ud835\udc98\ud835\udc98/\ud835\udc90\ud835\udc90\u2112\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85 Ours \ud835\udc98\ud835\udc98/\ud835\udc90\ud835\udc90\u2112\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85 Ours (Full) Angry walk Neutral style Angry walk into neutral style Angry walk into neutral style Angry walk into neutral style Neutral style Sexy walk Sexy walk into neutral style Sexy walk into neutral style Sexy walk into neutral style Figure 6: The motion generated by the model trained post-removal of Ldcc and Ldsc. Red boxes highlight flaws in the generated motions. Table 4: The user study for realism ratios. Xia dataset [34] Aberman et al. [2] Style-ERD [25] Ours 88.9% 15.1% 28.7% 85.2% transfer. Volunteers are presented with a content input, a style input, and the results of motion style transfer from three models. They are initially asked to choose which model\u2019s motion content is closer to the input content, followed by selecting which model\u2019s motion style is closer to the input style. The results of the user study are shown in Table 5. The findings indicate that our method achieve the best content preservation and style transfer outcomes. 64.8% and 72.3% of the volunteers perceive that our method\u2019s motion content/style is closer to the input content/style. In contrast, the proportions for the other two methods [1] [25] were significantly lower than ours Table 5: The user study for content preservation and style transfer. Evaluation Metrics Aberman et al. [2] Style-ERD [25] Ours Content Preservation 20.7% 14.5% 64.8% Style Transfer 10.9% 16.8% 72.3% 5 CONCLUSION Motion style transfer is an essential research direction in multimedia applications. However, most work in the field currently adopts Generative Adversarial Networks (GANs), which lead to instability and convergence issues and result in the loss of rich motion details and style variations. To address these issues, we consider style motion as a condition for the first time and propose the SMCD framework, which can learn the style features of motion more comprehensively. Moreover, we are the first to apply Mamba model in motion style transfer, proposing the MSM module to handle longer motion sequences. Thirdly, specifically for the SMCD framework, we designe the Diffusion-based Content Consistency Loss and Content Consistency Loss to assist the overall framework training. Finally, we conduct extensive experiments, and the results demonstrate that our method outperforms the state-of-the-art methods in both qualitative and quantitative comparisons and can generate higher-quality motion sequences. 6 DISCUSSION The SMCD framework proposed by our team achieves remarkable results, paving a new path for motion style transfer by synergistically integrating Mamba and Motion Conditioned Diffusion, allowing the generated style motion to possess higher geometrical \fSMCD: High Realism Motion Style Transfer via Mamba-based Diffusion Preprint, 2024, Conference Paper authenticity and temporal consistency. However, motion style transfer remains a niche research direction in the increasingly popular multimedia application scenario. Consequently, we propose the following two suggestions for its further development: \u2022 Advocate for further research on motion style transfer based on the diffusion framework. Our studies have shown this approach to be feasible and effective at mitigating issues inherent to GANs. 
\u2022 The current definition of \"style\" is not entirely established. However, this definition is critical for the development of the motion style transfer field and invites potential interdisciplinary collaboration on this problem. The two suggestions above may usher in novel breakthroughs and opportunities for advancing motion style transfer. We look forward to witnessing the future progression in this field."
16
+ }
title_10K/test_title_short_2405.02905v1.json ADDED
@@ -0,0 +1,17 @@
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.02905v1",
3
+ "title": "Mixture of partially linear experts",
4
+ "abstract": "In the mixture of experts model, a common assumption is the linearity between\na response variable and covariates. While this assumption has theoretical and\ncomputational benefits, it may lead to suboptimal estimates by overlooking\npotential nonlinear relationships among the variables. To address this\nlimitation, we propose a partially linear structure that incorporates\nunspecified functions to capture nonlinear relationships. We establish the\nidentifiability of the proposed model under mild conditions and introduce a\npractical estimation algorithm. We present the performance of our approach\nthrough numerical studies, including simulations and real data analysis.",
5
+ "authors": "Yeongsan Hwang, Byungtae Seo, Sangkon Oh",
6
+ "published": "2024-05-05",
7
+ "updated": "2024-05-05",
8
+ "primary_cat": "stat.ME",
9
+ "cats": [
10
+ "stat.ME",
11
+ "stat.ML"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Mixture AND of AND Experts",
15
+ "gt": "Mixture of partially linear experts",
16
+ "main_content": "Introduction Quandt (1972) introduced a finite mixture of regressions (FMR) for uncovering hidden latent structures in data. It assumes the existence of unobserved subgroups, each characterized by distinct regression coefficients. Since the introduction of FMR, extensive research has been conducted to enhance its performance, with contributions from Neykov et al. (2007), Bai et al. (2012), Bashir and Carter (2012), Hunter and Young (2012), Yao et al. (2014), Song et al. (2014), Zeller et al. (2016), Zeller et al. (2019), Ma et al. (2021), Zarei et al. (2023), and Oh and Seo (2024). However, because FMRs assume that the assignment of each data point to clusters is independent of the covariates (Hennig, 2000), FMR can be undermined with regard to the performance of regression clustering when the assumption of assignment independence is violated. Alternatively, Jacobs et al. (1991) introduced the mixture of linear experts (MoE), allowing for the assignment of each data point to depend on the covariates. Nguyen and McLachlan (2016) suggested the Laplace distribution for the error distributions, while Chamroukhi (2016) and Chamroukhi (2017) used t distributions and skew-t distributions for errors, respectively. Murphy and Murphy (2020) further extended MoE with a parsimonious structure to improve estimation efficiency. Mirfarah et al. (2021) introduced the use of scale mixture of normal distributions for errors within MoE. Recently, Oh and Seo (2023) proposed a specific MoE variant, 1 arXiv:2405.02905v1 [stat.ME] 5 May 2024 \fassuming that covariates follow finite Gaussian location-scale mixture distributions and that the response follows finite Gaussian scale mixture distributions. In spite of extra flexibility for errors in these models, they assumed linear structures in each mixture component, which makes too simple to capture the hidden latent structures. In homogeneous population, Engle et al. (1986) introduced a partial linear model, comprising a response variable Y is represented as a linear combination of specific p-dimensional covariates X and an unspecified non-parametric function that includes an additional covariate U, as follows. y = x\u22a4\u03b2 + g(u) + \u03f5, (1) where U \u2282R, \u03f5 is an error term with a mean zero and finite variance, and the function g(\u00b7) is an unknown non-parametric function. This model has the advantages of interpretability, stemming from its linearity, with the flexibility to capture diverse functional relationships through an unspecified function g(\u00b7). The differentiation between X and U is determined either theoretically based on established knowledge in the application field or through methods like scatter plots or statistical hypothesis testing. Wu and Liu (2017) and Skhosana et al. (2023) suggested the FMR to accommodate a partially linear structure within a heterogeneous population. In this paper, we consider a novel approach that incorporates partially linear structures into MoE, utilizing unspecified functions based on kernel methods. This allows proposed model to effectively capture various relationships between the response and covariates, while latent variable is dependent on some covariates. This flexibility can significantly impact the estimation of regression coefficients and enhance clustering performance by mitigating misspecification problems arising from assumptions about the relationships between variables. 
In addition, we address the issue of identifiability in the proposed model to ensure the reliability of the outcomes derived from proposed approach. The remainder of this paper is organized as follows. Section 2 reviews MoE and introduces the proposed models, addressing the identifiability. Section 3 outlines the estimation procedure, while Section 4 deals with practical issues related to the proposed models. We present the results of simulation studies in Section 5 and apply the models to real datasets in Section 6. Finally, we provide a discussion in Section 7. 2 Semiparametric mixture of partially linear experts 2.1 Mixture of linear experts Let Z be a latent variable indicating the membership of the observations. MoE is a useful tool when exploring the relationship between the response variable and covariates in the presence of unobserved information about C heterogeneous subpopulations by latent variable Z. Jacobs et al. (1991) presented the conditional probability distribution of the response variable given the covariates as p(y|x) = C X c=1 p(Z = c | x)p(y | x, Z = c) = C X c=1 \u03c0c(x)\u03d5(y; \u03b20c + x\u22a4\u03b2c, \u03c32 c), (2) where \u03c0c(\u00b7), c = 1, . . . , C, represents a mixing probability that depends on the given covariates, with 0 < \u03c0c(x) < 1 and PC c=1 \u03c0c(x) = 1. Additionally, (\u03b20c, \u03b2\u22a4 c ) represents a (p+1)-dimensional vector for c = 1, . . . , C, and \u03d5(\u00b7; \u00b5, \u03c32) denotes the probability density function of the normal distribution with mean \u00b5 and variance \u03c32. 2 \fRegression clustering, the process of identifying the latent variable Z, holds significant importance in understanding the prediction mechanism employed by MoE. The predicted value of the response variable for new covariate X = x is determined as E(Y | X = x) = C X c=1 \u03c0c(x) \u00b7 (\u03b20c + x\u22a4\u03b2c), where \u03c0c(x) is often called as the gating network, while (\u03b20c +x\u22a4\u03b2c) is referred to as the expert network. That is, the prediction structure can be understood as an ensemble model as shown in Figure 1 because the predicted values are obtained by combining the outcomes of the expert networks using the gating network. Consequently, selecting an appropriate latent variable Z is a crucial aspect of the MoE model. Figure 1: Predicting mechanism of MoE MoE is applied in various fields as a machine learning model. For example, Li et al. (2019) used MoE to explain differences in lane-changing behavior based on driver characteristics. Shen et al. (2019) extended MoE to adapt to the characteristics of data for creating a translation model capable of various translation styles. Additionally, Riquelme et al. (2021) proposed Vision MoE, which maintains superior performance compared to existing models in image classification while significantly reducing estimation time. 2.2 Proposed model In this section, we introduce a semiparametric mixture of partially linear experts (MoPLE) model. The MoPLE is constructed by considering each expert network of the MoE model as a partial linear model (1), which can be defined as p(y | x, u) = C X c=1 \u03c0c(x; \u03b10c, \u03b1c)\u03d5(y; x\u22a4\u03b2c + gc(u), \u03c32 c). (3) Here, \u03c0c(x; \u03b10c, \u03b1c) is defined as \u03c0c(x; \u03b10c, \u03b1c) = exp(\u03b10c+x\u22a4\u03b1c) PC j=1 exp(\u03b10j+x\u22a4\u03b1j), where (\u03b10c, \u03b1\u22a4 c ) represents a (p + 1)-dimensional vector (c = 1, 2, . . . , C), especially with (\u03b10C, \u03b1\u22a4 C) being a zero vector. 
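A minimal NumPy sketch of evaluating the MoPLE density (3), with a softmax gating network whose C-th logit is fixed at zero as stated above; the function names and the toy parameter values are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def gating(X, alpha0, alpha):
    """Softmax gating network pi_c(x) in (3); the C-th logit is fixed at 0."""
    logits = X @ alpha.T + alpha0                              # (n, C-1)
    logits = np.column_stack([logits, np.zeros(len(X))])       # append zero logit for class C
    logits -= logits.max(axis=1, keepdims=True)                # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)                    # (n, C)

def mople_density(y, X, u, alpha0, alpha, beta, g_funcs, sigma2):
    """p(y | x, u) = sum_c pi_c(x) * N(y; x'beta_c + g_c(u), sigma_c^2)."""
    pi = gating(X, alpha0, alpha)                              # (n, C)
    means = np.column_stack([X @ b + g(u) for b, g in zip(beta, g_funcs)])
    dens = norm.pdf(y[:, None], loc=means, scale=np.sqrt(sigma2)[None, :])
    return (pi * dens).sum(axis=1)

# toy check with C = 2 components and p = 1 covariate
rng = np.random.default_rng(0)
X, u, y = rng.uniform(size=(100, 1)), rng.uniform(size=100), rng.normal(size=100)
p = mople_density(y, X, u,
                  alpha0=np.array([-0.5]), alpha=np.array([[2.0]]),
                  beta=[np.array([-3.0]), np.array([3.0])],
                  g_funcs=[lambda t: -3 * t, lambda t: 3 * t],
                  sigma2=np.array([0.5, 0.25]))
print(p.shape)  # (100,)
```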
3 \fWhen C = 1, since \u03c0C(x; \u03b10C, \u03b1C) is equal to 1, (3) simply represents a partial linear model (1). If C > 1 and gc(\u00b7) = 0, (3) is equivalent to the MoE (2). Identifiability is a fundamental concern when dealing with finite mixture models. Hennig (2000) established that finite mixture of regressions is identifiable when the domain of X includes an open set in Rp. Additionally, Huang and Yao (2012) demonstrated that (2), with unspecified \u03c0c(x) for c = 1, 2, . . . , C, is identifiable up to a permutation of relabeling. Furthermore, Wu and Liu (2017) extended these findings by establishing the identifiability of the mixture of partially linear regressions, assuming that \u03b1 = (\u03b1\u22a4 1 , \u03b1\u22a4 2 , . . . , \u03b1\u22a4 C)\u22a4is a zero vector in (3). Building upon these results, the following theorem establishes the identifiability of model (3). Theorem 1. Suppose that the functions gc(\u00b7), c = 1, 2, . . . , C, are continuous, and the parameter vectors (\u03b2c, \u03c32 c) are distinct in Rp+1 for c = 1, 2, . . . , C. Additionally, assume that the covariate X does not contain a constant, and none of its components can be a deterministic function of U. If the support of X contains an open set in Rp, then (3) is identifiable up to a permutation of its components for almost all (x\u22a4, u)\u22a4\u2208Rp+1. Proof. In (3), suppose that there exist \u02dc \u03b10k, \u02dc \u03b1k, \u02dc \u03b2k and \u02dc gk(\u00b7), k = 1, 2, . . . , K, satisfying C X c=1 \u03c0c(x; \u03b10c, \u03b1c)\u03d5(y; x\u22a4\u03b2c + gc(u), \u03c32 c) = K X k=1 \u03c0k(x; \u02dc \u03b10k, \u02dc \u03b1k)\u03d5(y; x\u22a4\u02dc \u03b2k + \u02dc gk(u), \u02dc \u03c32 k), (4) where ( \u02dc \u03b2k, \u02dc \u03c32 k), k = 1, 2, . . . , K, are distinct. Consider the set {x \u2208Rp : x\u22a4\u03b2c1 + gc1(u) = x\u22a4\u03b2c2 + gc2(u)} for any \u03b2c1 and \u03b2c2 (c1, c2 \u22081, 2, . . . , C ), where \u03b2c1 \u0338= \u03b2c2 and \u03c32 c1 = \u03c32 c2, for a given U = u. This set represents a (p \u22121)-dimensional hyperplane in Rp. For any pair of \u03b2c1 and \u03b2c2 with \u03b2c1 \u0338= \u03b2c2 and \u03c32 c1 = \u03c32 c2, the union of a finite number of such hyperplanes, where (x\u22a4\u03b2c1, \u03c32 c1) = (x\u22a4\u03b2c2, \u03c32 c2), has a zero Lebesgue measure in Rp. This fact remains true for the finite number of sets {x \u2208Rp : x\u22a4\u02dc \u03b2k1 + \u02dc gk1(u) = x\u22a4\u02dc \u03b2k2 + \u02dc gk2(u)} for any \u02dc \u03b2k1 and \u02dc \u03b2k2 (k1, k2 \u2208{1, 2, . . . , K} ), where \u02dc \u03b2k1 \u0338= \u02dc \u03b2k2 and \u02dc \u03c32 k1 = \u02dc \u03c32 k2 for given U = u. From Lemma 1 of Huang and Yao (2012), it can be established that (4) is identifiable when conditioned on w = (x\u22a4, u)\u22a4, under the condition that both sets of (x\u22a4\u03b2c, gc(u)) for c = 1, 2, . . . , C and (x\u22a4\u02dc \u03b2k, \u02dc gk(u)) for k = 1, 2, . . . , K are distinct. That is, if w is given, we obtain C = K, and there exists a permutation \u03c4w = {\u03c4w(1), \u03c4w(2), . . . , \u03c4w(C)} among the finite number of possible permutations of {1, 2, . . . , C} such that \u03c0c(x; \u03b10c, \u03b1c) = \u03c0\u03c4w(c)(x; \u02dc \u03b10\u03c4w(c), \u02dc \u03b1\u03c4w(c)), x\u22a4\u03b2c + gc(u) = x\u22a4\u02dc \u03b2\u03c4w(c) + \u02dc g\u03c4w(c)(u), \u03c32 c = \u02dc \u03c32 \u03c4w(c) where c = 1, 2, . . . , C. Now, let us consider any permutation \u03c4 = {\u03c4(1), \u03c4(2), . . . 
, \u03c4(C)} that satisfies x\u22a4\u03b2c + gc(u) = x\u22a4\u02dc \u03b2\u03c4(c) + \u02dc g\u03c4(c)(u), \u03c32 c = \u02dc \u03c32 \u03c4(c), c = 1, 2, . . . , C, (5) for some w, and verify that \u03c4w has to be unique \u03c4. Suppose that \u03b2c \u0338= \u02dc \u03b2\u03c4(c) and gc(u) \u0338= \u02dc g\u03c4(c)(u). This contradicts to the assumption that X cannot be a deterministic function of U. When \u03b2c \u0338= \u02dc \u03b2\u03c4(c) and gc(u) = \u02dc g\u03c4(c)(u), the set {x \u2208Rp : x\u22a4\u03b2c = x\u22a4\u02dc \u03b2\u03c4(c)} has zero Lebesgue measure since it is a (p\u22121) dimensional hyperplane in Rp. Because \u03b2c = \u02dc \u03b2\u03c4(c) indicates gc(u) = \u02dc g\u03c4(c)(u), we obtain that \u03b2c = \u02dc \u03b2\u03c4(c), gc(u) = \u02dc g\u03c4(c)(u) 4 \ffor c = 1, 2, . . . , C. Since the parameter sets (\u03b2c, \u03c32 c) and (\u02dc \u03b2k, \u02dc \u03c32 k) for c, k \u2208{1, 2, . . . , C} are distinct, the permutation \u03c4, satisfying (5) on a subset of the support of w with nonzero Lebesgue measure, is unique. Because \u03c0c(\u00b7) and \u03c0\u03c4(c)(\u00b7) are continuous and one to one function, it follows that \u03b10c + x\u22a4\u03b1c = \u02dc \u03b10\u03c4(c) + x\u22a4\u02dc \u03b1\u03c4(c) for c = 1, 2, . . . , C. Moreover, as X cannot be a constant, \u03b10c = \u02dc \u03b10\u03c4(c) must be hold. Consequently, this indicates \u03b1c = \u02dc \u03b1\u03c4(c) , except for the set {x \u2208Rp : \u03b10c+x\u22a4\u03b1c = \u02dc \u03b10\u03c4(c) + x\u22a4\u02dc \u03b1\u03c4(c)}, which has a zero Lebesgue measure in Rp, for c = 1, 2, . . . , C. Therefore, we can conclude that (3) is identifiable up to a permutation of its components. 3 Estimation When considering the observed data {(yi, xi, ui)}n i=1, the log-likelihood function is defined as \u2113(\u0398, g) = n X i=1 log \" C X c=1 \u03c0c(x)\u03d5{yi; x\u22a4 i \u03b2c + gc(ui), \u03c32 c} # , (6) where \u0398 is the set of all parameters and g = (g1(\u00b7), . . . , gC(\u00b7))\u22a4. To find \u02c6 \u0398 and \u02c6 g that maximize equation (6), we propose the Expectation Conditional Maximization (ECM) algorithm (Meng and Rubin, 1993) using the profile likelihood method. The latent indicator variable Zic (c = 1, . . . , C), which indicates to which latent cluster the observed values belong, and the complete log-likelihood function are respectively defined as Zic = ( 1, if the i-th observation belongs to the c-th latent cluster 0, otherwise and \u2113c(\u0398, g) = n X i=1 C X c=1 Zic log \" \u03c0c(x)\u03d5{yi|x\u22a4 i \u03b2c + gc(ui), \u03c32 c} # . In the E-step for the (t + 1)th iteration of the ECM algorithm, t = 0, 1, . . ., we obtain Q(\u0398(t), g(t)) = E[\u2113c(\u0398, g)|\u0398(t), g(t)] using the posterior probability z(t+1) ic given \u0398(t) and g(t), which is represented as z(t+1) ic = E(Zic|xi, yi, \u0398(t), g(t)) = \u03c0(t) c (x)\u03d5{yi; xT i \u03b2(t) c + g(t) c (ui), \u03c32 c (t)} PC j=1 \u03c0(t) j (x)\u03d5{yi; x\u22a4 i \u03b2(t) j + g(t) j (ui), \u03c32 j (t)} . While keeping \u0398(t) (c = 1, 2, . . . , C) fixed, CM-step 1 involves updating g(t) to g(t+1) that maximizes the following local likelihood: \u2113h(g) = n X i=1 C X c=1 z(t+1) ic \" log \u03d5{yi; xT i \u03b2(t) c + gc(uj), \u03c32 j (t)} # Kh(ui \u2212uj), where j \u2208{1, 2, . . . , n}, and Kh(ui\u2212uj) represents the kernel weighting function with bandwidth h. 
Consequently, g(t+1) c (uj) can be calculated as g(t+1) c (uj) = Pn i=1 z(t+1) ic (yi \u2212x\u22a4 i \u03b2(t) c )Kh(ui \u2212uj) Pn i=1 z(t+1) ic Kh(ui \u2212uj) . 5 \fIn CM-step 2, after fixing g(t+1) c (uj), we can determine \u0398(t+1) as follows. \u03b1(t+1) c = \u03b1(t) c \u2212 \" \u22022Q(\u0398(t), g(t+1)) \u2202\u03b1c\u2202\u03b1\u22a4 c #\u22121\" \u2202Q(\u0398(t), g(t+1)) \u2202\u03b1c # , \u03b2(t+1) c = ( \u02dc X \u22a4Z(t+1) c \u02dc X)\u22121 \u02dc X \u22a4Z(t+1) c \u02dc y, \u03c32 c (t+1) = Pn i=1 z(t+1) ic (yi \u2212xi\u03b2(t+1) c \u2212g(t+1) c (ui))2 Pn i=1 z(t+1) ic . Here, \u02dc X = (I \u2212S)X, \u02dc y = (I \u2212S)y, Z(t+1) c is a diagonal matrix with diagonal elements z(t+1) ic , I is a n \u00d7 n identity matrix, and S is a n \u00d7 n matrix with elements defined as Sij = z(t+1) ic Kh(ui \u2212uj) Pn i=1 z(t+1) ic Kh(ui \u2212uj) . 4 Practical issues In practice, it is recommend to explore multiple initial values when employing the ECM algorithm, as the mixture likelihood inherently exhibits multimodality. To acquire appropriate initial values, we utilize the mixture of linear experts approach as proposed by Jacobs et al. (1991) for parameters such as \u03b10c, \u03b1c, \u03b2c, gc(u), and \u03c32 c, where c = 1, 2, . . . , C. Specifically, we set gc(u) as \u03b20c in (2) when employing the mixture of linear experts, where c = 1, 2, . . . , C. Multiple initial values are then selected by repeating the process of generating initial values and choosing the ones with the highest likelihood. In this study, we repeat this process 10 times to ensure the acquisition of suitable initial values. Furthermore, it is crucial to employ suitable methods for determining the optimal number of mixture components. In this paper, we utilized the Bayesian information criterion (BIC; Schwarz 1978) obtained as \u22122\u2113+ log(n) \u00d7 df , where \u2113is the log-likelihood function and df is degree of freedoms, to select the number of components. However, directly applying the BIC to the proposed model is challenging due to the complexity of calculating degrees of freedom, particularly in the presence of non-parametric functions. Therefore, we adopt a modified approach for determining degrees of freedom, inspired by Wu and Liu (2017), as follows. df = C \u00d7 \u03c4Kh\u22121|\u2126| \u001a K(0) \u22121 2 Z K2(t)dt \u001b + (2C \u22121)(p + 1), where \u2126represents the support of the non-parametric component covariates and \u03c4K = K(0) \u22120.5 R K2(t)dt R {K(t) \u22120.5K(t)}2dt. Given that the degrees of freedom depends on the bandwidth, we chose the bandwidth associated with the lowest BIC among the candidates. 6 \f5 Simulaton studies In this section, we present simulation results demonstrating the performance of the proposed method compared to other estimation methods under various cases. Specifically, we consider the following methods for each simulated sample: 1. MoE: Mixture of linear experts 2. FMPLR: Finite mixture of partially linear regressions. 3. MoPLE: Mixture of partially linear experts. FMPLR was introduced by Wu and Liu (2017), where it is assumed that all \u03b1 = (\u03b11, \u03b12, . . . , \u03b1C) to be zero vectors. We utilize the MoEClust in R package (Murphy and Murphy, 2022) for MoE, while we implement our R program for FMPLR and MoPLE. We conduct three simulation scenarios, each comprising two mixture components as detailed in Table 1. 
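Before turning to the simulation settings, the ECM updates described in Section 3 can be made concrete with a compact NumPy sketch of the E-step, the kernel-weighted update of g_c, and the variance update; the closed-form updates for alpha_c and beta_c in CM-step 2 follow the displayed expressions and are abbreviated here. Function names are illustrative, and the bandwidth is assumed large enough that every kernel window is nonempty.

```python
import numpy as np
from scipy.stats import norm

def epanechnikov(t):
    """Epanechnikov kernel, as used in the simulation studies."""
    return 0.75 * (1.0 - t ** 2) * (np.abs(t) <= 1.0)

def e_step(y, X, pi, beta, g_vals, sigma2):
    """Posterior responsibilities z_ic given the current parameters."""
    means = np.column_stack([X @ beta[c] + g_vals[:, c] for c in range(len(beta))])
    dens = pi * norm.pdf(y[:, None], loc=means, scale=np.sqrt(sigma2)[None, :])
    return dens / dens.sum(axis=1, keepdims=True)

def update_g(y, X, u, z, beta, h):
    """CM-step 1: kernel-weighted update of g_c, evaluated at the observed u_j."""
    K = epanechnikov((u[:, None] - u[None, :]) / h)   # K_h(u_i - u_j), up to the 1/h factor, which cancels
    g_new = np.empty_like(z)
    for c in range(z.shape[1]):
        resid = y - X @ beta[c]                       # partial residuals for component c
        w = z[:, c][:, None] * K                      # z_ic * K_h(u_i - u_j)
        g_new[:, c] = (w * resid[:, None]).sum(axis=0) / w.sum(axis=0)
    return g_new

def update_sigma2(y, X, z, beta, g_vals):
    """Part of CM-step 2: component-wise variance update."""
    res2 = np.column_stack([(y - X @ beta[c] - g_vals[:, c]) ** 2 for c in range(z.shape[1])])
    return (z * res2).sum(axis=0) / z.sum(axis=0)
```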
In each of these experiments, we assume that the covariates X and U are independent random variables following a standard uniform distribution. In the first experiment, we assume a linear relationship between Y and (X, U) within each mixture component, with the probability of observations belonging to latent clusters dependent on X. In the second experiment, we introduce partially linear relationships between Y and (X, U) while keeping the probability of observations belonging to latent clusters independent of X. In the third experiment, we also consider partially linear relationships, but it features the probability of observations belonging to latent clusters as dependent on X. Hence, we can expect that MoE, FMPLR and MoPLE represent efficient methods for Case I, Case II, and Case III, respectively. Table 1: True parameters for each simulation scenarios Scenarios Gating Network Component 1 Component 2 \u03b101 \u03b111 \u03b21 g1(u) \u03c32 1 \u03b22 g2(u) \u03c32 2 Case I -0.5 2 -3 -3u 0.5 3 3u 0.25 Case II 0 0 -3 2u2 0.5 3 2 cos(\u03c0u)2 0.25 Case III -0.5 2 -3 2u2 0.5 3 2 cos(\u03c0u)2 0.25 The performance of each method is evaluated by calculating the bias as 1 r Pr j=1(\u02c6 \u03b2c(j) \u2212\u03b2c) and mean square error (MSE) as 1 r Pr j=1(\u02c6 \u03b2c(j) \u2212\u03b2c)2, where \u03b2c and \u02c6 \u03b2c(j) are the true regression coefficient in cth expert network and the estimate of the \u03b2c from the jth sample for c = 1, 2 and j = 1, 2, . . . , r, respectively, for every regression parameter across a total of r = 400 replicated samples, with sample sizes of n =250, 500 and 1000. To assess the quality of the estimated nonparametric function \u02c6 g = (\u02c6 g1(\u00b7), \u02c6 g2(\u00b7)) for g = (g1(\u00b7), g2(\u00b7)), we utilize the mean absolute error (MAE) defined as MAE = D\u22121 D X d=1 |\u02c6 gc(ud) \u2212gc(ud)|, where c = 1, 2, . . . , C. We chose {ud, d = 1, . . . , D} as grid points evenly distributed within the range of the covariate u, with D set to 100. We employ the Epanechnikov kernel function and determine regression clusters for observations using the maximum a posteriori. To assess the clustering performance, the Adjusted Rand Index (ARI, Hubert and Arabie, 1985) and Adjusted Mutual Information (AMI, Vinh et al., 2009) are computed. 
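The evaluation criteria above are straightforward to compute; a short sketch using scikit-learn for ARI and AMI, where the estimates fed in are hypothetical placeholders for fitted values:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

def bias_and_mse(beta_hats, beta_true):
    """Monte-Carlo bias and MSE of a scalar coefficient over r replications."""
    beta_hats = np.asarray(beta_hats, dtype=float)
    return np.mean(beta_hats - beta_true), np.mean((beta_hats - beta_true) ** 2)

def mae_curve(g_hat, g_true, u_grid):
    """MAE of the estimated nonparametric curve on D evenly spaced grid points."""
    return np.mean(np.abs(g_hat(u_grid) - g_true(u_grid)))

rng = np.random.default_rng(1)
print(bias_and_mse(rng.normal(-3.0, 0.2, size=400), beta_true=-3.0))
print(mae_curve(lambda u: -3.1 * u, lambda u: -3.0 * u, np.linspace(0, 1, 100)))
print(adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]),          # invariant to label permutation
      adjusted_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0]))
```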
Note that smaller values of bias, 7 \fTable 2: Performance of each method for regression coefficients in Case I (Boldfaced numbers indicate the best in each criterion) Method n \u03b21 \u03b22 g1(\u00b7) g2(\u00b7) ARI AMI MSE (bias) MSE (bias) MAE MAE MoE 250 0.045 (0.016) 0.037 (0.005) 0.087 0.077 0.961 0.923 500 0.024 (0.010) 0.017 (-0.005) 0.059 0.052 0.962 0.922 1000 0.011 (-0.003) 0.009 (-0.005) 0.042 0.036 0.963 0.923 FMPLR 250 0.049 (-0.033) 0.040 (-0.023) 0.159 0.130 0.952 0.908 500 0.026 (-0.004) 0.019 (-0.034) 0.113 0.093 0.954 0.908 1000 0.014 (-0.051) 0.010 (-0.032) 0.084 0.070 0.955 0.910 MoPLE 250 0.047 (0.014) 0.040 (0.006) 0.154 0.127 0.960 0.920 500 0.024 (0.011) 0.018 (-0.006) 0.110 0.089 0.961 0.921 1000 0.011 (-0.001) 0.051 (-0.016) 0.082 0.081 0.961 0.920 Table 3: Performance of each method for regression coefficients in Case II (Boldfaced numbers indicate the best in each criterion) Method n \u03b21 \u03b22 g1(\u00b7) g2(\u00b7) ARI AMI MSE (bias) MSE (bias) MAE MAE MoE 250 0.077 (-0.062) 0.120 (-0.036) 0.362 1.056 0.652 0.562 500 0.041 (-0.079) 0.056 (-0.033) 0.361 1.063 0.657 0.562 1000 0.019 (-0.053) 0.030 (-0.033) 0.356 1.063 0.664 0.565 FMPLR 250 0.069 (0.014) 0.035 (0.015) 0.169 0.231 0.737 0.639 500 0.079 (0.026) 0.043 (0.005) 0.126 0.204 0.741 0.643 1000 0.033 (0.040) 0.009 (0.001) 0.095 0.160 0.748 0.649 MoPLE 250 0.071 (0.014) 0.035 (0.012) 0.171 0.214 0.734 0.640 500 0.035 (0.006) 0.018 (0.010) 0.125 0.171 0.744 0.646 1000 0.022 (0.029) 0.031 (-0.013) 0.101 0.131 0.750 0.651 Table 4: Performance of each method for regression coefficients in Case III (Boldfaced numbers indicate the best in each criterion) Method n \u03b21 \u03b22 g1(\u00b7) g2(\u00b7) ARI AMI MSE (bias) MSE (bias) MAE MAE MoE 250 0.062 (-0.074) 0.230 (-0.169) 0.361 1.100 0.641 0.529 500 0.034 (-0.079) 0.127 (-0.175) 0.357 1.074 0.645 0.529 1000 0.020 (-0.076) 0.087 (-0.207) 0.348 1.0578 0.652 0.533 FMPLR 250 0.078 (-0.118) 0.215 (-0.170) 0.182 0.270 0.661 0.556 500 0.044 (-0.132) 0.075 (-0.130) 0.146 0.203 0.671 0.562 1000 0.036 (-0.123) 0.075 (-0.145) 0.125 0.193 0.675 0.566 MoPLE 250 0.066 (0.038) 0.064 (-0.020) 0.172 0.245 0.743 0.638 500 0.038 (0.038) 0.086 (-0.039) 0.123 0.200 0.748 0.641 1000 0.017 (0.038) 0.076 (-0.045) 0.094 0.161 0.771 0.667 8 \fMSE and MAE indicate better performance, while larger values of ARI and AMI signify better performance. In Case I, MoE exhibits the best performance across all criteria, while MoPLE ranks second in terms of clustering performance. In Case II, MoPLE performs the best in terms of ARI and AMI, while FMPLR and MoPLE are competitive with regard to the estimating parameters. In Case III, MoPLE demonstrates the best with regard to almost all criteria compared to the other methods. Overall, MoPLE demonstrates competitive performance, ranking either as the best or the second best method across all cases. 6 Real data analysis 6.1 Prestige dataset For the first real data analysis, we consider the Prestige dataset, which is available in the car package in R. It comprises 102 observations with the variable such as Prestige, indicating occupational prestige from a mid-1960s social survey, Education, representing the average years of education for workers in 1971, Income, denoting the standardized average income of workers in 1971, and Occupational types, specifying occupational categories like professional, whitecollar, and blue-collar occupations. 
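A sketch of assembling these variables in Python, assuming the Rdatasets mirror queried by statsmodels still exposes carData::Prestige; otherwise the same columns can be exported from R's car package.

```python
import statsmodels.api as sm

# Prestige data: occupational prestige, education, income and occupational type.
prestige = sm.datasets.get_rdataset("Prestige", "carData").data
df = prestige.dropna(subset=["type"])                # a few occupations carry no type label

y = df["prestige"].to_numpy()                        # response: occupational prestige
X = df[["education"]].to_numpy()                     # linear part: years of education
u = ((df["income"] - df["income"].mean()) / df["income"].std()).to_numpy()  # standardized income
labels = df["type"].to_numpy()                       # prof / wc / bc, used only for evaluation
print(df.shape, sorted(df["type"].unique()))
```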
In this study, we model the response variable Y as Prestige, where X represents Education, and U represents Income. Additionally, we assume that the latent variable is associated with Occupational types. Table 5 displays the BIC values obtained by each method for the Prestige dataset. MoPLE correctly selects the expected number of components, while MoE and FMPLR yield fewer clusters than expected. The clustering performance of each method is summarized in Table 6. MoE performs the best in terms of ARI, whereas MoPLE excels in terms of AMI. As a result, MoPLE is considered the best method since it not only produces the expected number of clusters but also delivers competitive clustering performance. MoE is the second-best method, despite not selecting the expected number of clusters. This suggests that occupational types are dependent on education, and there are nonlinear relationships between prestige and income, at least within one component. Table 5: BIC values for each method in prestige dataset (Boldfaced numbers indicate the smallest value in each criterion) Number of clusters MoE FMPLR MoPLE 1 724.22 947.23 947.23 2 718.24 864.11 852.19 3 735.49 951.21 823.31 4 736.40 1042.96 1126.14 5 763.94 1633.26 1186.94 Based on the findings from MoPLE, the clusters denoted as 1, 2, and 3 correspond to professional, white-collar, and blue-collar occupations, respectively. The estimated coefficients for the Education in Class 1, 2, and 3 are 2.331, 5.446, and 2.547, respectively. This suggests that the impact of the education on the prestige is most pronounced in white-collar. Figure 2 illustrates the estimated gc(u) for each cluster, where c = 1, 2, 3. We note a nonlinear association between prestige and income within cluster 1, whereas clusters 2 and 3 exhibit a positive relationship between prestige and income, indicating an increasing trend. 9 \fTable 6: Clustering performance for each method in prestige dataset (Boldfaced numbers indicate the largest value in each criterion) Index MoE FMPLR MoPLE ARI 0.5096 0.0597 0.4779 AMI 0.4012 0.0725 0.4506 (a) Cluster 1 (b) Cluster 2 (c) Cluster 3 Figure 2: Estimated gc(\u00b7), c = 1, 2, 3, through MoPLE for the Prestige dataset 6.2 Gross domestic product dataset In the second real data analysis, we examine gross domestic product (GDP) dataset sourced from the STARS database of World Bank. This dataset comprises information from 82 countries over the period 1960 to 1987 and includes some variables such as log(GDP), indicating logarithm of real gross domestic product in million dollars, log(Labor), representing logarithm of the economically active population aged 15 to 65, log(Capital), implying logarithm of the estimated initial capital stock in each country, and log(Education), denoting logarithm of the average years of education. Previously, researchers such as Duffy and Papageorgiou (2000) utilized this dataset to investigate the Cobb-Douglas specification, while Wu and Liu (2017) examined how education and two other variables influence GDP using FMPLR with a fixed two-component mixture. In this paper, we investigate countries in 1975 with Y = log(GDP), X = (log(Labor), log(Capital)) and U = log(Education), comparing clustering performance. To evaluate the clustering performance, we introduce a latent variable that indicates whether the country was classified as advanced or developing in 1975 based on International Monetary Fund (IMF). Table 7 and Table 8 present the BIC values and clustering performance, respectively. 
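The component counts reported in Tables 5 and 7 come from minimizing the BIC described in Section 4; a minimal sketch of that selection step, where the log-likelihoods and degrees of freedom are hypothetical placeholders for fitted values:

```python
import numpy as np

def bic(loglik, df, n):
    """BIC = -2*loglik + log(n)*df, minimized over the candidate number of components."""
    return -2.0 * loglik + np.log(n) * df

# hypothetical (loglik, df) pairs, one per candidate number of clusters
candidate_fits = {1: (-470.0, 5.0), 2: (-400.0, 12.0), 3: (-385.0, 19.0)}
n = 102
scores = {C: bic(ll, df, n) for C, (ll, df) in candidate_fits.items()}
best_C = min(scores, key=scores.get)
print(scores, "-> selected C =", best_C)
```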
In Table 7, MoPLE yield the expected number of clusters, while MoE and FMPLR selects more clusters than expected. In Table 8, MoPLE achieves the best results in terms of both ARI and AMI, followed by MoE. These findings suggest that MoPLE is the most suitable method when 10 \fTable 7: BIC values for each method in GDP dataset (Boldfaced numbers indicate the smallest value in each criterion) Number of clusters MoE FMPLR MoPLE 1 74.46 337.10 337.10 2 88.05 176.92 134.64 3 60.95 169.90 178.15 4 114.48 265.12 232.49 5 110.04 419.94 405.89 Table 8: Clustering performance for each method in GDP dataset(Boldfaced numbers indicate the largest value in each criterion) Index MoE FMPLR MoPLE ARI 0.3449 -0.1238 0.7165 AMI 0.3280 0.1042 0.6152 attempting to identify clusters among countries based on their classification as advanced or developing. According to the results derived from MoPLE, the clusters labeled as 1 and 2 represent advanced and developing countries, respectively. In addition, cluster 1 reveals estimated coefficients for log(Labor) and log(Capital) as (0.14, 0.86), while cluster 2 displays coefficients as (0.17, 0.82). These results suggest that the impact of labor and capital on GDP does not significantly differ between advanced and developing countries. Figure 3 depicts the estimated gc(u) for each cluster, with c = 1, 2. Specifically, in cluster 1, the values of log(GDP) appear to be higher compared to those in cluster 2, while their shapes look similar. 7 Discussion In this paper, we propose MoPLE, which applies a partial linear structrure to the expert network of MoE, replacing the linear structure. In numerical studies, MoPLE demonstrates the ability to estimate both parametric and non-parametric components effectively, not only under linear relationships between the response variable and covariates but also under non-linear relationships. Furthermore, it gives comparative performance in terms of the regression clustering. These results imply that MoPLE is a valuable model regardless of whether the data exhibits linear or non-linear relationships, excelling not only in parameter estimation but also in clustering. While this study assumed univariate covariates for the non-parametric component, it is possible to extend this approach to higher dimensions. Nevertheless, we must acknowledge the curse of dimensionality as a limitation of non-parametric methods. One potential alternative approach is to structure each expert as a partially linear additive model. Furthermore, although we postulate a specified variable following nonlinear relationships based on the previous work, it is still necessary to construct statistical hypothesis tests for nonlinear relationships, even though it may be challenging due to the presence of a hidden latent structure. 11 \f(a) Cluster 1 (b) Cluster 2 Figure 3: Estimated gc(\u00b7), c = 1, 2, through MoPLE for the GDP dataset"
17
+ }
title_10K/test_title_short_2405.03003v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03003v1",
3
+ "title": "Parameter-Efficient Fine-Tuning with Discrete Fourier Transform",
4
+ "abstract": "Low-rank adaptation~(LoRA) has recently gained much interest in fine-tuning\nfoundation models. It effectively reduces the number of trainable parameters by\nincorporating low-rank matrices $A$ and $B$ to represent the weight change,\ni.e., $\\Delta W=BA$. Despite LoRA's progress, it faces storage challenges when\nhandling extensive customization adaptations or larger base models. In this\nwork, we aim to further compress trainable parameters by enjoying the powerful\nexpressiveness of the Fourier transform. Specifically, we introduce FourierFT,\nwhich treats $\\Delta W$ as a matrix in the spatial domain and learns only a\nsmall fraction of its spectral coefficients. With the trained spectral\ncoefficients, we implement the inverse discrete Fourier transform to recover\n$\\Delta W$. Empirically, our FourierFT method shows comparable or better\nperformance with fewer parameters than LoRA on various tasks, including natural\nlanguage understanding, natural language generation, instruction tuning, and\nimage classification. For example, when performing instruction tuning on the\nLLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable\nparameters, compared to LoRA's 33.5M. Our code is released at\n\\url{https://github.com/Chaos96/fourierft}.",
5
+ "authors": "Ziqi Gao, Qichao Wang, Aochuan Chen, Zijing Liu, Bingzhe Wu, Liang Chen, Jia Li",
6
+ "published": "2024-05-05",
7
+ "updated": "2024-05-05",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI",
12
+ "cs.CL"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
16
+ "gt": "Parameter-Efficient Fine-Tuning with Discrete Fourier Transform",
17
+ "main_content": "Introduction Large foundation models (LFMs) have demonstrated exceptional performance on tasks of multiple domains, including natural language processing (NLP) (Liu et al., 2019; He et al., 2020; Radford et al., 2019; Brown et al., 2020; Li et al., 2022) and computer vision (CV) (Liu et al., 2023a;b; Singh et al., 2022; Rombach et al., 2022). Owing to their *Equal contribution 1Hong Kong University of Science and Technology (Guangzhou) 2Hong Kong University of Science and Technology 3Sun Yat-sen University 4International Digital Economy Academy 5AI Lab, Tencent. Correspondence to: Jia Li <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). Figure 1. Summary of the performance (y-axis) of fine-tuning methods with different numbers (x-axis) of trainable parameters on NLP (left) and CV (right) tasks. The left side shows the instruction tuning task, where the LLaMA2-7B model is fine-tuned with Alpaca and evaluated by GPT-4. The right side shows the image classification task, where the Vision Transformer (ViT) is finetuned and tested on the DTD dataset. Black circles (\u25cf) represent the Full Fine-tuning (FF) method. Orange circles (\u25cf) represent LoRA method with r = {32, 64, 128} (left) and r = {8, 16, 32} (right). Blue circles (\u25cf) represent our proposed method with n = {1000, 2000} (left) and n = {3000, 10000} (right). impressive capabilities, fine-tuning LFMs for a wide range of downstream tasks has become prevalent (Wang et al., 2022; Taori et al., 2023; Qiu et al., 2020). Under the full fine-tuning paradigm, the new model adapted to each customized task typically contains as many parameters as the original model (Qiu et al., 2020; Raffel et al., 2020; Chen et al., 2024; Gao et al., 2024). As models grow larger and customization needs expand, the demand for storing finetuned checkpoints rises, resulting in both costly storage and memory consumption. As a popular way to address this issue, LoRA (Hu et al., 2021) represents the weight change with two low-rank matrices A and B, i.e., W0+\u2206W = W0+BA. Despite LoRA\u2019s superb performance, its large size of trainable parameters still brings high IT infrastructure consumption, which affects both ends of public communities and individual users. For the former, an intuitive example is that a LoRA adapter (finetuned weights) for a specific style of the stable diffusion model (Rombach et al., 2022) requires about 40MB of memory. This necessitates the LFM communities (e.g., Civi1 arXiv:2405.03003v1 [cs.LG] 5 May 2024 \fParameter-Efficient Fine-Tuning with Discrete Fourier Transform tai (Civitai, 2024)) to bear high storage and bandwidth costs to cater to a large user base. For the latter, fewer parameters mean direct RAM savings when loading fine-tuned weights in mobile APPs, enabling sufficient customization for individual users (Zhou et al., 2022). To this end, we naturally ask the question: How can we aggressively compress trainable parameters even further for fine-tuning LFMs? Previous works have demonstrated the powerful expressiveness of Fourier basis in data compression, where extremely sparse spectral information can be used to recover highfidelity data (e.g., 1D signal vectors (Zwartjes & Gisolf, 2007; Duarte & Baraniuk, 2013; Rudelson & Vershynin, 2008) and 2D image matrices (Vlaardingerbroek & Boer, 2013; Song et al., 2021; Shi et al., 2014)). 
More importantly, when dealing with more general (non-image) matrices that lack strong spatial semantics and are not frequency-sparse, Fourier transform can still handle recovery effectively (Chen & Chi, 2013; Yang & Xie, 2016). Motivated by this, we investigate the potential for updating the weight change \u2206W with its sparse spectral coefficients for fine-tuning LFMs. In this paper, we aim to aggressively reduce the number of trainable parameters for fine-tuning LFMs. To this end, we propose FourierFT (Fourier Transform for Fine-Tuning), which treats the weight change \u2206W as a matrix in the spatial domain, and learns its sparse spectral coefficients. Specifically, we first randomly select n spectral entries that are shared across all layers. For each layer, FourierFT learns n spectral coefficients located at these n selected entries and then directly applies inverse discrete Fourier transform to compute the updated \u2206W. Therefore, fine-tuning a pretrained model with Lt layers only requires storing 2n entry parameters and nLt coefficient parameters for FourierFT. Empirically, we compare our method with state-of-the-art LoRA variants and other parameter-efficient fine-tuning methods on various tasks including (1) natural language understanding (on the GLUE benchmark), (2) natural language generation (on the E2E benchmark), (3) instruction tuning (with LLaMA-family models), and (4) image classification (with vision transformers). FourierFT can always achieve comparable or even better performance than LoRA, with about 6.0%, 9.4%, 0.2% and 9.2% of LoRA\u2019s trainable parameters for these 4 tasks, respectively. For example in Figure 1, on the instruction tuning task, our FourierFT method outperforms LoRA with only 64K trainable parameters. Moreover, it achieves a comparable score to Full Fine-tuning with only 128K parameters. 2. Related Works Parameter-Efficient Fine-Tuning. With the rapid expansion of large foundation models (LFM), it has become challenging and important to efficiently adapt them for specific tasks. To this end, numerous methods for parameter-efficient fine-tuning (PEFT) are proposed, demonstrating impressive capabilities in both efficiency and accuracy. Existing PEFT methods are broadly partitioned into two categories: nonweight-based and weight-based methods. Non-weight-based methods do not optimize pre-trained LFMs at the weight level. Instead, they achieve fine-tunings by introducing additional modules or optimizing prompts and prefixes. Adapter tuning (He et al., 2021; Rebuffi et al., 2017; Pfeiffer et al., 2020; Houlsby et al., 2019; R\u00a8 uckl\u00b4 e et al., 2020; Lin et al., 2020) aims to introduce light-weighted neural modules, called adapters, between pre-trained layers of the base model. These methods keep the pre-trained weights frozen and efficiently fine-tune the adapters for customized tasks. Prompt tuning (Brown et al., 2020; Lester et al., 2021; Gao et al., 2020; Diao et al., 2022) and prefix tuning (Li & Liang, 2021) insert additional prompts or prefix tokens to the layers of the base model. Weight-based methods, represented by LoRA (Hu et al., 2021), introduce and then update weight changes that can be merged with the original weights to avoid inference latency. LoRA\u2019s innovation lies in the multiplication of low-rank matrices to approximate weight changes. Building upon this, AdaLoRA (Zhang et al., 2023) extends the LoRA method by distributing the parameter budget across weight matrices with importance scores. 
Additionally, Q-LoRA (Dettmers et al., 2023) proposes to back-propagate gradients upon LoRA through a quantized pre-trained model with 4-bit NormalFloat. Here, we focus on weight-based methods and achieve huge parameter reduction with the powerful expressiveness of Fourier basis, rather than following the low-rank structure. Sparse Fourier Transform in Deep Learning. Sparse Fourier transform (SFT) has flourished in various fields of deep learning (DL). The SFT technique mainly involves using sparse spectral coefficients of significant (Xu et al., 2020; Ehrlich & Davis, 2019; Gueguen et al., 2018; Tang et al., 2022) or even random (Lin et al., 2014; Rawat et al., 2019; Herrmann, 2010) spectral entries, for representation learning. One important application of this technique is matrix recovery. Patel et al. (2011) designs a gradient-based compressed sensing method to recover images with their sparse Fourier information. Shechtman et al. (2014) proposes an efficient phase retrieval method that improves data recovery using sparse Fourier coefficients. Importantly, previous works (Chen & Chi, 2013; Yang & Xie, 2016; Gao et al., 2022) show that even when the original data is not frequency-sparse, SFT can effectively recover the data with extremely few parameters. Although previous works lack studies on the recovery for the weight matrices of DL models with SFT, the aforementioned methods provide potential support for this work. 2 \fParameter-Efficient Fine-Tuning with Discrete Fourier Transform Pre-trained Weights \ud835\udc4a\u2208\u211d!!\u00d7!\" \ud835\udc35= 0 \ud835\udc34= \ud835\udca9(0, \ud835\udf0e!) \u210e \ud835\udc65 \ud835\udc51# \ud835\udc51$ \ud835\udc5f Pre-trained Weights \ud835\udc4a\u2208\u211d!!\u00d7!\" \u210e \ud835\udc65 \ud835\udc51# \ud835\udc51$ Random entries (shared across layers) \u211d!\u00d7# \ud835\udc5b Coefficients : Frozen : Trainable LoRA FourierFT IDFT Dense Spectral Matrix F Figure 2. Overview of LoRA (left) and our FourierFT (right) method. In LoRA, only low-rank (r) matrices A and B are trained. The weight change is represented by their multiplication, i.e., \u2206W = BA. For each pre-trained weight W, the theoretical number of trainable parameters in LoRA is r \u00d7 (d1 + d2). In FourierFT, we first randomly generate the spectral entry matrix R2\u00d7n, which is shared across all layers to reduce parameter storage requirements. The complete spectral matrix is formed by a trainable coefficient vector Rn located at selected entries and 0s at the remaining entries. We obtain the weight change \u2206W by directly performing inverse discrete Fourier transform (IDFT) on the updated spectral matrix. For all L adapted layers, FourierFT needs to store n \u00d7 (2 + L) parameters. 3. Method We present FourierFT (depicted in Figure 2), a parameterefficient fine-tuning method based on discrete Fourier transform. FourierFT follows the principle of only learning the change in the pre-trained weight, as proposed by LoRA (Hu et al., 2021). However, unlike LoRA, FourierFT does not adopt the low-rank structure but learns a set of spectral coefficients of Fourier basis. Specifically, we randomly initialize the spectral entry matrix, which is frozen and shared across all layers. We make the spectral coefficients located at selected entries trainable, which jointly form the spectral matrix. Lastly, we apply the inverse discrete Fourier transform to the spectral matrix, yielding its spatial-domain counterpart as the updated weight change. 3.1. 
Forward Pass We follow the paradigm of only learning weight changes, as adopted by LoRA-based methods (Hu et al., 2021; Dettmers et al., 2023; Zhang et al., 2023). This can avoid inference latency by merging the pre-trained weight and its change. Formally, we define each pre-trained weight matrix as W0 \u2208 Rd1\u00d7d2, and the weight change for fine-tuning as \u2206W \u2208 Rd1\u00d7d2. LoRA aims to parameterize \u2206W in the form of low-rank decomposition in the forward pass: h = W0x + \u2206Wx = W0x + BAx, (1) where B \u2208Rd1\u00d7r and A \u2208Rr\u00d7d2 with the rank r \u226a min(d1,d2) are trainable matrices. The advantage of FourierFT is that the orthogonal and expressive Fourier basis enables recovery of informative weight changes. This promisingly suggests achieving comparable performance to LoRA with significantly fewer parameters. We first randomly initialize the entry matrix E \u2208R2\u00d7n containing discrete 2D spectral entries. Then we randomly initialize the coefficients c \u2208Rn with a normal Gaussian distribution. The proposed forward pass is: F = TODENSE(E,c) (2) Sp,q = d1\u22121 \u2211 j=0 d2\u22121 \u2211 k=0 Fj,kei2\u03c0( p d1 j+ q d2 k) (3) h = W0x + \u2206Wx = W0x + \u03b1R(S)x. (4) Specifically, TODENSE in Eq. 2 represents to construct the spectral matrix F \u2208Rd1\u00d7d2, i.e., Fj,k = cl (resp. 0), if j = E0,l & k = E1,l (resp. else). Eq. 3 computes the spatio matrix S via the inverse discrete Fourier transform, where i represents the imaginary unit. Finally, in Eq. 4, we take the real part of the complex matrix S (denoted as R(S)) and scale it by \u03b1. Kindly note that all layers involve training various c vectors, while sharing the matrix E and value \u03b1. The pseudocode for FourierFT is shown as Algorithm 1, adhering to the PyTorch style. Initialization for the Entry Matrix E. Previous works lack studies on the importance of the spectral entries in the weight change. Thus, we fill this gap by introducing adjustable frequency bias, causing the entries to be more likely sampled in this area. In addition to randomly sampling entries in the full d1 \u00d7 d2-sized spectral matrix (i.e., no bias), we also implement entry sampling with a bias towards a favored central frequency, e.g., low, middle, or 3 \fParameter-Efficient Fine-Tuning with Discrete Fourier Transform Algorithm 1 PyTorch-style pseudocode for FourierFT. class FourierFT(nn.Module): def __init__( self, n: int = 100, # number of trainable parameters alpha: float = 300.0, # scaling d1: int = 4096, # input dimension d2: int = 4096, # output dimension base_layer: nn.Module # pre-trained layer ) # definitions self.d1 = d1 self.d2 = d2 self.n = n self.alpha = alpha self.base_layer = base_layer # entry initialization (no frequency bias) self.E = torch.randperm(d1 * d2)[:n] self.E = torch.stack([self.E // self.d1, self.E % self.d2], dim=0) # spectral coefficient initialization self.c = nn.Parameter(torch.randn(n), \\\\ requires_grad=True) def forward(self, x: torch.Tensor): # get dense spectral matrix (Eq.2) F = torch.zeros(self.d1, self.d2) F[self.E[0, :], self.E[1, :]] = self.c # compute Delta_W (Eq.3) Delta_W = torch.fft.ifft2(F).real * self.alpha # merge (Eq.4) h = self.base_layer(x) h += torch.einsum(\u2019ijk,kl->ijl\u2019, x, Delta_W) return h high frequencies. 
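Before turning to the biased sampling below, the unbiased forward pass of Eqs. (2)-(4) and Algorithm 1 can be traced with a few standalone tensor operations; this is a sketch with illustrative shapes and scaling, not the released implementation.

```python
import torch

d1, d2, n, alpha = 768, 768, 1000, 300.0
W0 = torch.randn(d1, d2)                        # frozen pre-trained weight
x = torch.randn(4, 16, d1)                      # (batch, seq, d1)

# shared random entries E (2 x n) and trainable coefficients c (n,)
idx = torch.randperm(d1 * d2)[:n]
E = torch.stack([idx // d2, idx % d2], dim=0)
c = torch.randn(n, requires_grad=True)

F = torch.zeros(d1, d2)                         # Eq. (2): to-dense spectral matrix
F[E[0], E[1]] = c
delta_W = torch.fft.ifft2(F).real * alpha       # Eqs. (3)-(4): IDFT, real part, scaling
h = x @ (W0 + delta_W)                          # merged forward pass
print(h.shape)                                  # torch.Size([4, 16, 768])
```

Because delta_W is materialized explicitly, it can be added to W0 once after training, which is what allows inference without extra latency.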
Formally, we apply the Gaussian bandpass filter (Gonzales & Wintz, 1987) to model the sampling probability for the entry (u,v),0 \u2264u \u2264d1\u22121,0 \u2264v \u2264d2\u22121: p(u,v) = exp\u239b \u239d\u2212(D2 \u2212f 2 c DW ) 2\u239e \u23a0, (5) where D represents the distance from the point (u,v) to the origin (center of the matrix), fc is the favored central frequency, and W represents the bandwidth. In Figure 3, we visualize the sampling probability map of a 768 \u00d7 768-sized spectral matrix with different fc and W = 200. fc = 0 fc = 100 fc = 200 fc = 350 fc = 480 0 0.5 1 Figure 3. Visualization of entry sampling probability at different favored central frequencies fc. Kindly note that unless specially stated, FourierFT is set by default to the entry initialization with no frequency bias. 3.2. Parameter Summary We summarize the number of trainable parameters for LoRA and FourierFT in Table 1. LoRA relies on a pair of trainable matrices A and B for each layer. Let the number of layers for fine-tuning be Lt. The total number of parameters in Table 1. Theoretical number of trainable parameters and storage requirements for fine-tuning. For both LoRA and FourierFT methods, only the query and value layers are tuned within the transformer architectures. The configurations that are exactly chosen in the \u2018Experiments\u2019 Section are highlighted . Base Models LoRA FourierFT r # Trainable Parameters Required Bytes n # Trainable Parameters Required Bytes RoBERTa Base 4 147K 574KB 200 4.8K 18.8KB 8 295K 1.13MB 200 24K 94KB RoBERTa Large 4 393K 1.5MB 200 9.6K 36.5KB 8 786K 3MB 1000 48K 183KB GPT-2 Medium 4 350K 1.34MB 500 24K 94KB 8 786K 3MB 1000 48K 188KB GPT-2 Large 4 737K 2.81MB 500 36K 141KB 8 1.47M 5.74MB 1000 72K 282KB LLaMA-2 7B 16 8.39M 32.8MB 1000 64K 250KB 64 33.5M 131.1MB 2000 128K 500KB LLaMA-2 13B 16 13.1M 51.2MB 1000 80K 312KB 64 52.4M 204.8MB 2000 160K 625KB ViT Base 8 295K 1.13MB 3000 72K 281KB 16 590K 2.25MB 10000 239K 934KB ViT Large 8 786K 2.93MB 3000 144K 563KB 16 1.57M 6MB 10000 480K 1.83MB LoRA is determined by the rank r and the dimension of weights d = d1 = d2: \u2223\u0398\u2223LoRA = 2 \u00d7 d \u00d7 Lt \u00d7 r. For Fourier, the total number takes the form: \u2223\u0398\u2223F ourierF T = n \u00d7 Lt. As an intuitive example, the RoBERTa Base model contains 12 transformer blocks with d = 768, resulting in Lt = 24 layers when we only fine-tune the query and value ones. Therefore, we have \u2223\u0398\u2223LoRA = 294,912 for r = 8, and \u2223\u0398\u2223F ourierF T = 24,000 for n = 1000. In Table 1, we highlight the configurations where LoRA and our method achieve matched performance in subsequent experiments. We note that the advantage of parameter efficiency in FourierFT becomes more pronounced as the model\u2019s scale (depth and width) increases (e.g., RoBERTa Base \u2192RoBERTa Large). This could be because \u2223\u0398\u2223LoRA has an explicit linear relationship with width d, unlike \u2223\u0398\u2223F ourierF T . 4. Experiments In this section, we evaluate FourierFT in the domains of natural language processing (NLP) and computer vision (CV). For NLP, we implement FourierFT for fine-tuning (1) RoBERTa (Base & Large) on natural language understanding (GLUE, (Wang et al., 2018)), (2) GPT-2 (Medium & Large) on natural language generation (E2E, (Novikova et al., 2017)) and (3) LLaMA-family models (7B & 13B) on instruction tuning. For CV, we apply FourierFT to fine-tune the (4) vision transformers (Base & Large) on image classification. 
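Returning to the entry initialization: the band-pass sampling of Eq. (5) can be sketched as follows, with a small epsilon added to avoid division by zero at the matrix center (an implementation detail assumed here, not stated in the paper).

```python
import numpy as np

def entry_probs(d1, d2, fc, W, eps=1e-6):
    """Sampling probabilities from Eq. (5): p(u,v) = exp(-((D^2 - fc^2) / (D*W))^2)."""
    u = np.arange(d1) - d1 / 2.0
    v = np.arange(d2) - d2 / 2.0
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)          # distance to the matrix center
    p = np.exp(-(((D ** 2 - fc ** 2) / (D * W + eps)) ** 2))
    return p / p.sum()

def sample_entries(d1, d2, n, fc, W, seed=2024):
    """Draw n spectral entries with probability proportional to p(u,v)."""
    rng = np.random.default_rng(seed)
    p = entry_probs(d1, d2, fc, W).ravel()
    flat = rng.choice(d1 * d2, size=n, replace=False, p=p)
    return np.stack([flat // d2, flat % d2])                # 2 x n entry matrix E

E = sample_entries(768, 768, n=1000, fc=100.0, W=200.0)
print(E.shape)  # (2, 1000)
```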
Finally, we conduct ablation studies to analyze the effect of frequency bias, the parameter scalability, and the 4 \fParameter-Efficient Fine-Tuning with Discrete Fourier Transform Table 2. Performance of various fine-tuning methods with RoBERTa Base (RoBbase) and RoBERTa Large (RoBlarge) models on 6 datasets of the GLUE benchmark. We report the Matthew\u2019s correlation coefficient (MCC) for CoLA, Pearson correlation coefficient (PCC) for STS-B and accuracy (Acc.) for all the remaining tasks. We report the median result of 5 runs, each using different random seeds. The best results for each dataset are shown in bold. Higher is better for all metrics in 6 datasets. Model & Method # Trainable Parameters SST-2 (Acc.) MRPC (Acc.) CoLA (MCC) QNLI (Acc.) RTE (Acc.) STS-B (PCC) Avg. RoBbase(FF) 125M 94.8 90.2 63.6 92.8 78.7 91.2 85.2 RoBbase(BitFit) 0.1M 93.7 92.7 62 91.8 81.5 90.8 85.4 RoBbase(AdptD) 0.3M 94.2\u00b10.1 88.5\u00b11.1 60.8\u00b10.4 93.1\u00b10.1 71.5\u00b12.7 89.7\u00b10.3 83.0 RoBbase(AdptD) 0.9M 94.7\u00b10.3 88.4\u00b10.1 62.6\u00b10.9 93.0\u00b10.2 75.9\u00b12.2 90.3\u00b10.1 84.2 RoBbase(LoRA) 0.3M 95.1\u00b10.2 89.7\u00b10.7 63.4\u00b11.2 93.3\u00b10.3 78.4\u00b10.8 91.5\u00b10.2 85.2 RoBbase(AdaLoRA) 0.3M 94.5\u00b10.2 88.7\u00b10.5 62.0\u00b10.6 93.1\u00b10.2 81.0\u00b10.6 90.5\u00b10.2 85.0 RoBbase(DyLoRA) 0.3M 94.3\u00b10.5 89.5\u00b10.5 61.1\u00b10.3 92.2\u00b10.5 78.7\u00b10.7 91.1\u00b10.6 84.5 RoBbase(FourierFT) 0.024M 94.2\u00b10.3 90.0\u00b10.8 63.8\u00b11.6 92.2\u00b10.1 79.1\u00b10.5 90.8\u00b10.2 85.0 RoBlarge(FF) 356M 96.4 90.9 68 94.7 86.6 92.4 88.2 RoBlarge(AdptP) 3M 96.1\u00b10.3 90.2\u00b10.7 68.3\u00b11.0 94.8\u00b10.2 83.8\u00b12.9 92.1\u00b10.7 87.6 RoBlarge(AdptP) 0.8M 96.6\u00b10.2 89.7\u00b11.2 67.8\u00b12.5 94.8\u00b10.3 80.1\u00b12.9 91.9\u00b10.4 86.8 RoBlarge(AdptH) 6M 96.2\u00b10.3 88.7\u00b12.9 66.5\u00b14.4 94.7\u00b10.2 83.4\u00b11.1 91.0\u00b11.7 86.8 RoBlarge(AdptH) 0.8M 96.3\u00b10.5 87.7\u00b11.7 66.3\u00b12.0 94.7\u00b10.2 72.9\u00b12.9 91.5\u00b10.5 84.9 RoBlarge(LoRA) 0.8M 96.2\u00b10.5 90.2\u00b11.0 68.2\u00b11.9 94.8\u00b10.3 85.2\u00b11.1 92.3\u00b10.5 87.8 RoBlarge(FourierFT) 0.048M 96.0\u00b10.2 90.9\u00b10.3 67.1\u00b11.4 94.4\u00b10.4 87.4\u00b11.6 91.9\u00b10.4 88.0 expressiveness of the Fourier basis. Baselines. We compare our FourierFT method with popular parameter-efficient fine-tuning (PEFT) methods. To ensure a comprehensive and fair comparison, we prioritize replicating the setups used in previous works and reusing their reported results. Involved baselines are: \u25cfFull Fine-tuning (FF) During fine-tuning, the base model is initialized with pre-trained weights and biases, and all parameters will undergo gradient updates. \u25cfBitfit (Zaken et al., 2021) Only the bias vectors are finetuned while all other parameters are frozen. \u25cfAdapter tuning This research line was first investigated by Houlsby et al. (2019), which proposes the AdapterH method. AdapterH inserts two-layer adapters between the self-attention and the FNN modules, followed by a subsequent residual connection. We compare it with three additional variants of it. AdapterL (Lin et al., 2020) is more parameter-efficient, with adapter layers applied only after the MLP modules and subsequent to a LayerNorm. AdapterP (Pfeiffer et al., 2020) implements the adapter layers after the feed-forward layer. This design was chosen through a grid search including all settings related to the adapter\u2019s position, number, ect. 
AdapterD (R\u00a8 uckl\u00b4 e et al., 2020) further enhances the parameter efficiency by dropping adapter layers that are not activated. \u25cfLoRA (Hu et al., 2021) LoRA is the state-of-the-art method for PEFT. It parameterizes incremental weight updates using trainable low-rank matrices. \u25cfDyLoRA (Valipour et al., 2022) This method trains dynamic search-free LoRA models for the best rank choice. \u25cfAdaLoRA (Zhang et al., 2023) This method proposes the SVD-based fine-tuning and prunes redundant singular values with the importance-aware rank allocation. 4.1. Natural Language Understanding Models and Datasets. We evaluate our method on the GLUE benchmark (General Language Understanding Evaluation (Wang et al., 2018)), which consists of a wide range of natural language understanding (NLU) tasks, including single-sentence classification tasks, similarity and paraphrase tasks and natural language inference tasks. We finetune the pre-trained RoBERTa Base and Large foundation models (Liu et al., 2019) for evaluation. Implementation Details. For both models, FourierFT is allowed to have 1000 out of 7682 (RoBERTa Base) and 10242 (RoBERTa Large) trainable spectral coefficients in each layer, i.e., n = 1000. We randomly sample the spectral entries with no frequency bias, which is shared1 across all 24 (Base) and 48 (Large) layers. For all 6 datasets in GLUE, we tune the hyperparameters of the learning rates and the scaling values. We follow the experimental setup applied in Hu et al. (2021), which involves fine-tuning only the query and value weights in each transformer block and 1We use the value 2024 as the seed for all layers. 5 \fParameter-Efficient Fine-Tuning with Discrete Fourier Transform Table 3. Results from GPT-2 Medium and Large models on the E2E benchmark. We present the result from the final epoch. For all metrics, higher values indicate better performance. * indicates that the results are taken from prior works. Best results are shown in bold. Model Method # Trainable Parameters BLEU NIST METEOR ROUGE-L CIDEr GPT-2 Medium FT* 354.92M 68.2 8.62 46.2 71.0 2.47 AdptL* 0.37M 66.3 8.41 45.0 69.8 2.40 AdptL* 11.09M 68.9 8.71 46.1 71.3 2.47 AdptH* 11.09M 67.3\u00b1.6 8.5\u00b1.07 46.0\u00b1.2 70.7\u00b1.2 2.44\u00b1.01 LoRA 0.35M 68.9\u00b1.3 8.76\u00b1.06 46.6\u00b1.1 71.5\u00b1.1 2.53\u00b1.03 FourierFT 0.048M 69.1\u00b1.1 8.82 \u00b1.05 47.0 \u00b1.3 71.8 \u00b1.1 2.51\u00b1.02 GPT-2 Large FT* 774.03M 68.5 8.78 46.0 69.9 2.45 AdptL* 0.88M 69.1\u00b1.1 8.68\u00b1.03 46.3\u00b1.0 71.4\u00b1.2 2.49\u00b1.0 AdptL* 23.00M 68.9\u00b1.3 8.70\u00b1.04 46.1\u00b1.1 71.3\u00b1.2 2.45\u00b1.02 LoRA 0.77M 70.1\u00b1.3 8.83\u00b1.02 46.8\u00b1.2 72.0\u00b1.3 2.47\u00b1.02 FourierFT 0.072M 70.2\u00b1.2 8.90\u00b1.02 47.0\u00b1.2 71.8\u00b1.1 2.50 \u00b1.02 Table 4. The average scores on MT-Bench and Vicuna assessed by GPT-4. \u2020 indicates updating the layers other than lm head. Higher score is better. 
Model Method # Trainable Parameters MT-Bench Vicuna LLaMA1-7B LoRA\u2020 159.9M 5.05\u00b1.3 6.85\u00b1.4 LoRA 33.5M 4.99\u00b1.3 6.81\u00b1.3 FourierFT 0.064M 5.09\u00b1.6 6.85\u00b1.8 LLaMA1-13B LoRA\u2020 250.3M 5.28\u00b1.6 7.02\u00b1.3 LoRA 52.4M 5.21\u00b1.4 6.97\u00b1.4 FourierFT 0.08M 5.23\u00b1.3 7.14\u00b1.5 LLaMA2-7B LoRA\u2020 159.9M 5.19\u00b1.1 7.38\u00b1.3 LoRA 33.5M 5.20\u00b1.3 7.35\u00b1.6 FourierFT 0.064M 5.18\u00b1.3 7.49\u00b1.4 LLaMA2-13B LoRA\u2020 250.3M 5.78\u00b1.2 7.89\u00b1.5 LoRA 52.4M 5.80\u00b1.2 7.89\u00b1.6 FourierFT 0.08M 5.82\u00b1.3 7.92\u00b1.5 fully fine-tuning the classification head. We provide the hyperparameters in Table 9 in Appendix. Results. Results are summarized in Table 2. Following Hu et al. (2021), Zhang et al. (2023) and Valipour et al. (2022), we specify the number of trainable parameters for the finetuned layers excluding the classification head. We report the median of 5 random seed results, where the best epoch is selected for each run. In general, FourierFT achieves better or on-par performance compared with baseline methods with significantly fewer trainable parameters. Notably, FourierFT outperforms all baselines including fully fine-tuning the RoBERTa Base on CoLA and the RoBERTa Large on RTE. As mentioned in Section 3.2, the parameter count of LoRA is dependent on both the width and depth of models, resulting in a larger count growth (LoRA: 0.8M/0.3M \u22482.7; ours: 0.048M/0.024M = 2) compared to FourierFT. Nevertheless, FourierFT still performs comparably to LoRA, demonstrating the potential scalability of our method when facing even larger models. 4.2. Natural Language Generation Models and Datasets. We evaluate the performance of FourierFT on the E2E natural language generation (NLG) task (Novikova et al., 2017). We fine-tune the GPT-2 (Radford et al., 2019) Medium (354M) and Large (774M) models, which are both decoder-only and have 24 and 36 transformer blocks, respectively. The E2E benchmark contains roughly 42,000 training, 4,600 validation and 4,600 test samples from the restaurant domain. Implementation Details. We report prior results for baselines other than LoRA. For both LoRA and our method, we fine-tune the GPT-2 Medium and Large models with a linear learning rate scheduler for 5 epochs, where we tune the batch size and learning rate. We report the average results over 3 runs, where the last epoch is selected for each run. We provide the hyperparameters in Table 10 in Appendix. Results. We show the results in Table 3. We note that FourierFT can achieve the best performance on most metrics. More importantly, FourierFT only requires 13.7% and 9.4% of the parameter counts of LoRA, for the GPT-2 Medium and Large models respectively. 4.3. Instruction Tuning Models and Datasets. Instruction tuning, as described in (Ouyang et al., 2022; Wei et al., 2021; Mishra et al., 2021), refers to the process of fine-tuning a language model on a collection of paired prompts and responses. We apply LoRA and FourierFT to fine-tune the LLaMA (Touvron et al., 2023a) and LLaMA2 (Touvron et al., 2023b) families. Specifically, we consider the LLaMA-7B, LLaMA-13B, LLaMA2-7B and LLaMA2-13B as base models, which are fine-tuned on the Alpaca dataset (Taori et al., 2023). Alpaca contains 51K instruction-following demonstrations generated from text-davinci-003 (GPT-3.5) (Wang et al., 2022). 
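The trainable-parameter figures quoted in Tables 1 and 4 (for example, 33.5M for LoRA with r = 64 versus 0.064M for FourierFT with n = 1000 on LLaMA2-7B) follow directly from the counting formulas in Section 3.2; a quick check, assuming only the query and value projections are tuned:

```python
def lora_params(d, n_layers, r):
    return 2 * d * n_layers * r          # |Theta|_LoRA = 2 * d * Lt * r

def fourierft_params(n, n_layers):
    return n * n_layers                  # |Theta|_FourierFT = n * Lt

# RoBERTa Base: 12 blocks, d = 768, query + value tuned -> Lt = 24
print(lora_params(768, 24, r=8), fourierft_params(1000, 24))      # 294912 24000
# LLaMA2-7B: 32 blocks, d = 4096, query + value tuned -> Lt = 64
print(lora_params(4096, 64, r=64), fourierft_params(1000, 64))    # 33554432 64000
```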
For evaluation, we use the fine-tuned models to generate responses for the pre-defined questions, which are from the MT-Bench (Zheng et al., 2023) and Vicuna Eval (Chiang et al., 2023). GPT-4 takes these answers as input and evaluates them with scores within 10. Implementation Details. For LoRA, we use r = 64 and apply two configurations: (1) updating all linear layers except the language modelling head (lm head); (2) updating only the WQ and WV matrices. For FourierFT, we only adopt the latter configuration with n = 1000. To ensure the 6 \fParameter-Efficient Fine-Tuning with Discrete Fourier Transform Table 5. Fine-tuning results with ViT Base and Large models on different image classification datasets. We report the accuracy (%) after 10 epochs. Avg. represents the average accuracy of each method on all datasets. The best performance is shown in bold. Model Method # Trainable Parameters OxfordPets StanfordCars CIFAR10 DTD EuroSAT FGVC RESISC45 CIFAR100 Avg. ViT-Base LP 90.28\u00b10.43 25.76\u00b10.28 96.41\u00b10.02 69.77\u00b10.67 88.72\u00b10.13 17.44\u00b10.43 74.22\u00b10.10 84.28\u00b10.11 68.36 FF 85.8M 93.14\u00b10.40 79.78\u00b11.15 98.92\u00b10.05 77.68\u00b11.21 99.05\u00b10.09 54.84\u00b11.23 96.13\u00b10.13 92.38\u00b10.13 86.49 LoRA 581K 93.19\u00b10.36 45.38\u00b10.41 98.78\u00b10.05 74.95\u00b10.40 98.44\u00b10.15 25.16\u00b10.16 92.70\u00b10.18 92.02\u00b10.12 77.58 FourierFT 72K 93.21\u00b10.26 46.11\u00b10.24 98.58\u00b10.07 75.09\u00b10.37 98.29\u00b10.04 27.51\u00b10.64 91.97\u00b10.31 91.20\u00b10.14 77.75 FourierFT 239K 93.05\u00b10.34 56.36\u00b10.66 98.69\u00b10.08 77.30\u00b10.61 98.78\u00b10.11 32.44\u00b10.99 94.26\u00b10.20 91.45\u00b10.18 80.29 ViT-Large LP 91.11\u00b10.30 37.91\u00b10.27 97.78\u00b10.04 73.33\u00b10.26 92.64\u00b10.08 24.62\u00b10.24 82.02\u00b10.11 84.28\u00b10.11 72.96 FF 303.3M 94.43\u00b10.56 88.90\u00b10.26 99.15\u00b10.05 81.79\u00b11.01 99.04\u00b10.08 68.25\u00b11.63 96.43\u00b10.07 93.58\u00b10.19 90.20 LoRA 1.57M 94.82\u00b10.09 73.25\u00b10.36 99.13\u00b10.03 81.79\u00b10.45 98.63\u00b10.07 42.32\u00b10.98 94.71\u00b10.25 94.87\u00b10.10 84.94 FourierFT 144K 94.46\u00b10.28 69.56\u00b10.30 99.10\u00b10.04 80.83\u00b10.43 98.65\u00b10.09 39.92\u00b10.68 93.86\u00b10.14 93.31\u00b10.09 83.71 FourierFT 480K 94.84\u00b10.05 79.14\u00b10.67 99.08\u00b10.01 81.88\u00b10.50 98.66\u00b10.03 51.28\u00b10.68 95.20\u00b10.07 93.37\u00b10.11 86.68 feasibility of training on a single GPU, we deploy the quantization method in Dettmers et al. (2023) for fine-tuning. We train with both methods for only one epoch, and report the average scores of all answers. We provide the hyperparameter setup in Table 11 in the Appendix. Results. The results are shown in Table 4. We find that the expressive power of the 13B model is much stronger than that of the 7B model, regardless of which fine-tuning method is used. Moreover, FourierFT closely matches or slightly exceeds LoRA\u2019s performance with less than 0.2% of its parameters. We provide practical examples containing questions, answers and reviews in the Appendix D. 4.4. Image Classification Models and Datasets. We evaluate our method on the image classification task. We adopt the Base and Large versions of the popular CV foundation model, Vision Transformer (ViT) (Dosovitskiy et al., 2020). The ViTs are pretrained on the ImageNet-21K dataset (Ridnik et al., 2021). 
The datasets for fine-tuning include OxfordPets (37), CIFAR10 (10), DTD (47), EuroSAT (10) and RESISC45 (45) with small label spaces, as well as StanfordCars (196), FGVC (100) and CIFAR100 (100) with large label spaces (numbers in parentheses indicate class counts for each dataset). Detailed information is provided in Table 8 in the Appendix. Implementation Details. We include three baselines for evaluation: Full Fine-tuning (FF), Linear Probing (LP, fine-tuning the classification head only), and LoRA. For both LoRA and our method, only the query and value matrices of ViT are updated. We use r = 16 for LoRA and n = {3000,10000} for FourierFT. We tune the learning rates and weight decay for all methods, and set the maximum training epoch to 10. We provide the hyperparameters in Table 12 in the Appendix. Results. Table 5 summarizes the results for 8 image classification datasets with the ViT Base and Large models. Both LoRA and FourierFT methods significantly outperform the Linear Probing, demonstrating their effectiveness in the CV domain. Our method obtains matched performance using 12.4% and 9.2% of LoRA\u2019s parameter count, with ViT Base and Large models, respectively. Notably, when we increase the parameter count of FourierFT to 41.1% (ViT Base) and 30.6% (ViT Large) of LoRA\u2019s, it can outperform LoRA by 3.5% and 2.0% respectively. Moreover, our method can even (slightly) outperform the Full Fine-tuning method on OxfordPets and DTD with the ViT Large model. 4.5. Study Effect of Frequency Bias. We examine how the performance is affected by the frequency bias, i.e., the central frequency fc in Eq. 5. We directly apply the optimal hyperparameters searched in Table 2 and fine-tune the RoBERTa Base on the MRPC, STS-B, CoLA and RTE datasets. From Figure 5, we note that the fine-tuning performance of FourierFT without any frequency bias can surpass most cases that are restricted by the central frequency bias. This indicates the universality of our method. Surprisingly, we find that it is always possible to obtain results better than \u201cNo bias\u201d by traversing the fc values. Since this traversal is not efficient, we do not conduct further exploration in this paper. However, we believe that making fc trainable will be a promising new direction for improving FourierFT. Parameter Scalability. We explore the relationship between the number of trainable parameters and the performance of LoRA and our method. We use the set of ranks r = {1,2,4,6,8,15} for LoRA and n = {50,100,200,1000,6144,12288} for FourierFT on 6 tasks of the GLUE benchmark. For both LoRA and ours, the learning rate and scaling hyperparameters are tuned. For fairness, we ensure that the number of trials for hyperparameter search is 30 for both methods. [Figure 4. Performance on the GLUE benchmark with RoBERTa Base vs. number of trainable parameters (each layer) of LoRA and ours. For all 6 datasets, we apply the setting of r = {1, 2, 4, 6, 8, 15} for LoRA and n = {50, 100, 200, 1000, 6144, 12288}.] [Figure 5. Results on 4 datasets in GLUE with different fc values.]
As shown in Figure 4, our method outperforms LoRA on all 6 datasets. In detail, our method is significantly better than LoRA with the same parameter count, i.e., {r = 4, n = 6144} & {r = 8, n = 12288}. Moreover, we observe that a larger number of parameters does not always bring performance gains for LoRA. On the contrary, the increase of n can consistently improve the accuracy of FourierFT. On most tasks, FourierFT with n = 50 can achieve comparable or even better (MRPC, CoLA, RTE) performance than LoRA with r = 1. In this case, the parameter count in LoRA is about 31 \u00d7 that of ours. Basis Expressiveness. The inverse discrete Fourier transform (IDFT) in Eq. 3 is equivalent to the matrix multiplication (Lu et al., 2021): S = B_f F B_f^\u22ba, where B_f is the transformation matrix of the IDFT that contains the Fourier basis. To evaluate its expressivity, we replace the Fourier basis with a random and an orthogonal basis, respectively. Specifically, for F \u2208 R^{d1\u00d7d2}, we initialize random bases B_r^1 \u2208 R^{d1\u00d7d1} and B_r^2 \u2208 R^{d2\u00d7d2} with the normal Gaussian distribution. Then Eq. 3 becomes S = B_r^1 F B_r^2. A similar way is used for the orthogonal basis. We compare FourierFT with the random basis (R-B) and orthogonal basis (O-B) on the GLUE benchmark. Table 6 shows the results. We note that the Fourier basis used in our method outperforms the random and orthogonal basis. In addition, the expressive power of the orthogonal basis is much stronger than that of the random basis. The stronger expressive power of the Fourier basis compared to the general orthogonal basis may be attributed to its effective capture of the spectral information of \u2206W. Table 6. Results with three types of basis. Model RTE CoLA Ours R-B O-B Ours R-B O-B Base 79.1 72.7(\u21938.1%) 75.6(\u21934.4%) 63.8 58.7(\u21938.0%) 60.0(\u21936.0%) Large 87.4 81.8(\u21936.4%) 83.6(\u21934.3%) 67.1 64.8(\u21933.4%) 66.1(\u21931.5%) 5. Conclusion In this paper, we aim to achieve an extremely low storage memory for a single fine-tuning of large foundation models. This will enable the customization of multiple fine-tunings for different domains, tasks, or user preferences. To achieve this, we propose a simple yet powerful fine-tuning method that treats weight changes as spatial-domain matrices and only learns the sparse coefficients in the spectral domain. Compared to the LoRA-style baselines, our approach reduces the number of trainable parameters by about 8 \u223c500\u00d7 on a wide range of tasks in the NLP and CV domains. 6. Impact Statements This paper presents a work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. Acknowledgements This work was supported by NSFC Grant No.62206067, HKUST\u2013HKUST(GZ) 20 for 20 Cross-campus Collaborative Research Scheme C019 and Guangzhou-HKUST(GZ) Joint Funding Scheme 2023A03J0673."
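As a companion to the spectral-domain description above (the learned matrix F, the IDFT identity S = B_f F B_f^T, and the scaling applied before the update is added to the frozen weight), here is a minimal NumPy sketch of how a weight update can be recovered from n trainable spectral coefficients; the frequency-selection scheme, initialization, and scaling value are simplified assumptions rather than the authors' exact implementation.

import numpy as np

rng = np.random.default_rng(0)
d1, d2, n = 768, 768, 200        # weight shape and number of trainable spectral coefficients
alpha = 1.0                      # scaling hyperparameter applied to the recovered update

# Fixed, non-trainable frequency locations shared across training and inference.
flat_idx = rng.choice(d1 * d2, size=n, replace=False)
rows, cols = np.unravel_index(flat_idx, (d1, d2))

# The only trainable parameters: n real-valued spectral coefficients.
coeffs = 0.01 * rng.standard_normal(n)

# Scatter the coefficients into a dense spectral matrix F (zeros elsewhere)
# and recover the spatial-domain update with a 2D inverse DFT.
F = np.zeros((d1, d2))
F[rows, cols] = coeffs
delta_W = alpha * np.fft.ifft2(F).real

W0 = rng.standard_normal((d1, d2)) / np.sqrt(d1)   # stand-in for a frozen pretrained weight
W = W0 + delta_W                                   # merged weight used at inference
print(delta_W.shape, float(np.abs(delta_W).mean()))

Swapping np.fft.ifft2 for a product with two Gaussian (or orthogonalized) matrices, S = B1 @ F @ B2, gives the random-basis and orthogonal-basis variants compared in Table 6.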
18
+ }
title_10K/test_title_short_2405.03008v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03008v1",
3
+ "title": "DVMSR: Distillated Vision Mamba for Efficient Super-Resolution",
4
+ "abstract": "Efficient Image Super-Resolution (SR) aims to accelerate SR network inference\nby minimizing computational complexity and network parameters while preserving\nperformance. Existing state-of-the-art Efficient Image Super-Resolution methods\nare based on convolutional neural networks. Few attempts have been made with\nMamba to harness its long-range modeling capability and efficient computational\ncomplexity, which have shown impressive performance on high-level vision tasks.\nIn this paper, we propose DVMSR, a novel lightweight Image SR network that\nincorporates Vision Mamba and a distillation strategy. The network of DVMSR\nconsists of three modules: feature extraction convolution, multiple stacked\nResidual State Space Blocks (RSSBs), and a reconstruction module. Specifically,\nthe deep feature extraction module is composed of several residual state space\nblocks (RSSB), each of which has several Vision Mamba Modules (ViMM) together\nwith a residual connection. To achieve efficiency improvements while maintaining\ncomparable performance, we apply a distillation strategy to the Vision Mamba\nnetwork. Specifically, we leverage the rich representation knowledge of the\nteacher network as additional supervision for the output of the lightweight\nstudent network. Extensive experiments have demonstrated that our proposed DVMSR\ncan outperform state-of-the-art efficient SR methods in terms of model parameters\nwhile maintaining the performance of both PSNR and SSIM. The source code is\navailable at https://github.com/nathan66666/DVMSR.git",
5
+ "authors": "Xiaoyan Lei, Wenlong ZHang, Weifeng Cao",
6
+ "published": "2024-05-05",
7
+ "updated": "2024-05-05",
8
+ "primary_cat": "eess.IV",
9
+ "cats": [
10
+ "eess.IV",
11
+ "cs.CV",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Mamba",
16
+ "gt": "DVMSR: Distillated Vision Mamba for Efficient Super-Resolution",
17
+ "main_content": "Introduction Single image super-resolution (SR) is a key challenge in computer vision and image processing, aiming to reconstruct a high-resolution image from a low-resolution input. Effective super-resolution aims to improve the efficiency of the SR model while maintaining reconstruction perfor*Corresponding author Figure 1. PSNR results v.s the total number of parameters of different methods for image SR on Set5. mance. Since the introduction of deep learning into superresolution tasks [18], many CNN-based methods have been proposed [16, 20, 21, 46, 47, 51, 63] to improve the performance. A series of approaches [20, 37, 39, 46, 47, 50, 53, 67, 114] have been proposed for building efficient models for image SR. The majority of these efficient models focus on five factors: runtime, parameters, FLOPS, activations, and depths. To further promote the development of efficient SR, ICCV holds the first competition in the AIM 2019 challenge [122]. The information multi-distillation network(IMDN) [39] proposes cascaded information multidistillation blocks to improve the feature extraction module, which won first place in this competition. After that, The winning solution of the AIM 2020 challenge [124], residual feature distillation network(RFDN) [67], further improves the IMDN by residual learning in the main block. In the efficient SR track of NTIRE 2022 [45] challenge, the winning solution, residual local feature network(RLFN) [50], removes the hierarchical distillation connection of residual feature distillation block(RFDB) [67] to reduce the inference time. In the efficient SR track of NTIRE 2022 [114] challenge, the winning solution utilizes a multi-stage arXiv:2405.03008v1 [eess.IV] 5 May 2024 \flightweight training strategy that combines distillation and pruning to reduce both time consumption and model size. The Transformer model, initially successful in natural language processing [100], has attracted interest from the computer vision community. Its effectiveness in highlevel visual tasks (e.g., image classification [22, 72, 103]) has demonstrated the potential in super-resolution [12, 64]. Recently, Mamba [24] has demonstrated superior performance over Transformers across various sizes on largescale real data and exhibits linear scalability with sequence length. Despite pioneering works adopting Mamba for vision tasks [24, 85, 112], it is still in its initial stages of exploring its potential (e.g., long-range modeling capability and efficiency) in low-level vision. Different from the CNN-based and transformer-based methods, our goal is to explore the long-range modeling capability and efficiency of mamba-based methods for efficient SR. In this paper, we employ vision mamba as the basic architecture to enhance the model\u2019s long-range modeling capability and efficiency. Our DVMSR consists of several stacked Residual State Space Blocks (RSSB), each containing several Vision Mamba Modules (ViMM). The ViMM includes a unidirectional SSM, a residual connection, and SiLU activation function. These elements work together to accelerate model convergence and enhance model accuracy and efficiency. As shown in Figure 2, our method can achieve a larger perception range compared with other methods. Furthermore, we utilize a distillation strategy to enhance the model\u2019s efficiency. We introduce a Mamba network with a larger number of parameters as the teacher network to extract knowledge for the learning of the student network. 
Extensive experiments and ablation studies have shown the effectiveness of our proposed method. Our contributions can be summarized as follows: 1. By leveraging the long-range modeling capability of Vision Mamba, we propose a lightweight model with unidirectional state space models (SSM) for efficient superresolution. 2. We propose a special feature distillation strategy to enhance the efficiency ability of vision mamba for efficient super-resolution. 3. Extensive experiments have shown that our proposed method outperforms existing state-of-the-art (SOTA) methods in terms of parameters while maintaining comparable PSNR and SSIM performance. 2. Related Work 2.1. Lightweight Super Resolution SRCNN [18] marks the inaugural application of deep learning algorithms in the Single Image Super-Resolution (SISR) [11, 12]. A series of works have been explored to apply the SR method in real scenarios, such as GAN-based SR [56, 128? , 129], degradation model [107, 126, 130], multi-task learning [132] and systematic evaluation [131]. In real-world SR model deployments, the computing power of the deployed devices is often limited, such as edge devices, etc. In this case, the efficiency of the SR network becomes an important aspect. Efficient Image SuperResolution aims to reduce the computational effort and parameters of the SR network while achieving faster inference times and maintaining high performance. FSRCNN [20] reduces unnecessary computational costs by utilizing the deconvolution layer as the upsampling layer. VDSR [47] is introduced to further improve super-resolution (SR) performance. DRCN [46] achieves parameter reduction through deep recursive convolutional networks. LapSRN [53] employs a Laplacian pyramid super-resolution block for HR image reconstruction. DRRN [91] employs recursive and residual network architectures, surpassing DRCN in both performance and parameter reduction. MemNet [92] introduces a memory block to explicitly model long-term dependencies in CNN-based SR models. IDN [37] explicitly divides the preceding extracted features into two parts. IMDN [39] introduces a lightweight Information MultiDistillation Network by constructing cascaded Information Multi-Distillation Blocks. RFDN [67] proposes the residual feature distillation network. RLFN [50] improves its speed by eliminating hierarchical distillation connections. DIPNet [114] introduces the Reparameterization Residual Feature Block, which explores the potential of complex structures during optimization while maintaining computational efficiency. Besides, they achieve first place in the NTIRE 2023 Efficient Super-Resolution Challenge [60]. 2.2. State space models in Vision Recent researches have led to a surge of interest in the state space model (SSM), which has its origins in the classic Kalman filter model [44]. The linear scalability of State Space Models (SSMs) in handling long-range dependencies, exemplified by the Mamba architecture [24], contrasts with Transformers. While Mamba outperforms Transformers in natural language tasks, recent research endeavors extend its applicability to vision tasks. Specifically, Mamba models are designed to capture long-range temporal dependencies in video data, enhancing video classification performance [41, 42, 80, 102]. Additionally, other works explore Mamba\u2019s applicability in vision tasks, including image classification [71, 139], biomedical image segmentation [73], remote sensing image classification [9], and Multimodal Learning [85]. 
The research conducted by [26] emphasizes Mamba\u2019s utility as a straightforward and efficient baseline for image restoration in low-level vision tasks. Our work extends this by proposing a novel network architecture that combines Mamba with distillation, achieving a tradeoff between super-resolution quality and computational ef\fficiency. 2.3. Feature Distillation Knowledge distillation stands out as a straightforward yet powerful technique for enhancing the performance of smaller models, a necessity driven by the limited computing power of deployed devices. This method involves training a smaller network (student) under the guidance of a larger network (teacher), enabling effective knowledge transfer. Unlike other compression methods, knowledge distillation can reduce network size regardless of structural differences between the teacher and student networks. The seminal work by [31] introduced the knowledge distillation (KD) method, utilizing the softmax output of the teacher network. Notably, this method can be applied across various network architectures due to matching output dimensions. Over time, intermediate layer distillation methods have emerged, leveraging insights from the teacher network\u2019s convolutional or penultimate layers, preserving crucial feature-map localities [1, 31, 48, 115]. Moreover, there exists a wealth of research integrating distillation techniques into super-resolution tasks [38, 40, 68, 108, 138]. In this paper, we focus on adopting the output feature map of a pre-trained model as the distillation target. Through extensive experimentation, we demonstrate the effectiveness of our approach in enhancing model performance. 3. Methodology 3.1. Motivation Efficient Super Resolution (SR) is designed to transform low-quality images into high-quality counterparts, leveraging a small parameter set and minimal computational power. ESR predominantly relies on CNNs for local feature extraction, but their limited long-range modeling hinders performance. Transformers, while proficient in global context, introduce computational complexities. Mamba excels in high-level vision tasks, supported by prior research [9, 71, 73, 85, 112, 139]. Motivated by Mamba\u2019s long-range modeling capabilities, we investigate its performance in super-resolution (SR) tasks, comparing it to CNN-based ESR methods [39, 67, 114] and transformerbased method [64]. To elucidate Mamba\u2019s operational mechanisms, we employe a specialized diagnostic tool called LAM [13], designed specifically for SR tasks. Utilizing LAM enabled us to pinpoint the input pixels that contribute most significantly to the selected region. As depicted in Figure 2, the red-marked points denote informative pixels crucial for the reconstruction process. Notably, DVMSR exhibited a notably higher DI (Diffusion index) indication compared to other models, indicating its superior ability to leverage a broader range of pixel information and affirming its exceptional long-range modeling capability. The proposed DVMSR yields improved image details during the reconstruction process, thereby substantiating its efficacy for super-resolution tasks. 3.2. Preliminaries State space models (SSMs), such as the Mamba deep learning model, hold potential for long sequence modeling. Inspired by continuous systems, SSMs map a 1-D function or sequence x(t) \u2208R 7\u2212 \u2192y(t) \u2208R via a hidden state h(t) \u2208RN. The formulation is as follows: h\u2032(t) = Ah(t) + Bx(t), y(t) = Ch(t). 
(1) where N is the state size, A \u2208RN\u00d7N, B \u2208RN\u00d71, C \u2208 R1\u00d7N. Mamba is the discrete versions of the continuous system, and it achieves this by utilizing \u2206to convert continuous parameters A and B into their discrete counterparts, \u00af A and \u00af B. The commonly used method for transformation is zeroorder hold (ZOH), which is defined as follows: \u00af A = exp(\u2206A), \u00af B = (\u2206A)\u22121(exp(\u2206A) \u2212I) \u00b7 \u2206B. (2) After the discretization of \u00af A, \u00af B, the discretized version of Eq. 5 using a step size \u2206can be rewritten as: ht = \u00af Aht\u22121 + \u00af Bxt, yt = Cht. (3) 3.3. Overall network architecture The overall network architecture of our proposed DVMSR is depicted in Figure 3. Our DVMSR mainly consists of three main modules: feature extraction convolution, multiple stacked Residual State Space Blocks (RSSBs), and a reconstruction module. Specifically, for a given lowresolution (LR) input ILR \u2208RH\u00d7W \u00d7Cin , we exploit one convolution layer to extract the first feature F0 \u2208 RH\u00d7W \u00d7C, where Cin and C denote the channel number of the input and the intermediate feature. Then, a series of Residual State Space Block (RSSB) and one 3 \u00d7 3 convolution layer HConv(\u00b7) are utilized to perform the deep feature extraction. After that, we add a global residual connection to fuse shallow features F0 and deep features FD \u2208RH\u00d7W \u00d7C, and then reconstruct the high-resolution result via a reconstruction module. As depicted in Figure 3, each RSSB contains two Vision Mamba Module (ViMM) and a 3 \u00d7 3 convolution layer with a residual connection. For the reconstruction module, the pixel-shuffle method is adopted to up-sample the fused feature. \fFigure 2. The LAM results are provided for various networks including both CNN-based and transformer-based methods. LAM attribution indicates the significance of each pixel in the input LR image during the reconstruction process of the patch highlighted by a box. The Diffusion Index (DI) denotes the extent of pixel involvement. A higher DI indicates a broader range of utilized pixels. Figure 3. The overall network architecture of our DVMSR. Figure 4. The structure of Vision Mamba Module(ViMM). 3.3.1 Mamba network The design of mamba network is shown in Figure 4, which is Vision Mamba Module (ViMM) using unidirectional sequence modeling. The input token sequence X \u2208 RH\u00d7W \u00d7C is first normalized by the normalization layer. Next, we linearly project the normalized sequence, expanded the features channel to \u03bbC. We proceed by processing the projection layer through 1-D convolution, resulting in the computation of X1 via the SSM. The X1 gated by the projection layer and a residual connection to get the output token sequence Xout \u2208RH\u00d7W \u00d7C, as follows: X1 = SSM(Conv1d(Linear(LN(X)))), X2 = SiLU(Linear(LN(X))), Xout = Linear(X1 \u2299X2) + X. (4) Where LN is the layer normalization and \u2299denotes the Hadamard product. 3.3.2 Distillation strategy Our method introduces a deep feature distillation strategy (Fig. 5). During the distillation stage, the teacher network accumulates rich representation knowledge, maintaining a fixed state. By minimizing the L1 loss, we ensure alignment between student network features and those of the teacher. This formal process facilitates effective knowledge transfer from the teacher to the student network: \fFigure 5. The deep feature distillation pipeline of our method. 
Lout = \u03bbdisLdis + \u03bb1L1, Ldis = \u2225T (ILR) \u2212S(ILR)\u22251 , L1 = \u2225IHR \u2212S(ILR)\u22251 , (5) where \u03bbdis and \u03bb1 represents the coefficient of the Ldis loss function and the coefficient of the L1 loss function, respectively. They are set 1. T represents the function of our teacher network and S denotes the function of our proposed network. ILR and IHR are the input LR images and the corresponding ground-truth HR images, respectively. More information of Ldis can be seen from Fig.6. 4. Experiments 4.1. Datasets and metrics In this paper, DF2K (DIV2K + Flickr2K) [98] with 3450 images are used for training the proposed model from scratch. During testing, we select five standard benchmark datasets: Set5 [7], Set14 [117], BSD100 [75], Urban100 [36] and Manga109 [76]. The low-resolution images are generated from the ground truth images by the \u201cbicubic\u201d downsampling in MATLAB. PSNR/SSIM measured by discarding a 4-pixel boundary around the images, and calculated on the Y channel is reported for the quantitative metrics. 4.2. Implementation details During training, we set the input patch size to 256 \u00d7 256 and use random rotation and horizontal flipping for data augmentation. The batch size is set to 128 and the total number of iterations is 500k. The initial learning rate is set to 2 \u00d7 10\u22124. We adopt a multi-step learning rate strategy, where the learning rate will be halved when the iteration reaches 250000, 400000, 450000, and 475000, respectively. Adam optimizer with \u03b21 = 0.9 and \u03b22 = 0.99 is used to train the model. Distillation training. In the teacher learning phase, we utilize the DF2K dataset with 2K resolution to train the teacher network, which comprises 8 RSSB and 2 ViMM blocks with 192 channels. During the distillation training phase, we use DF2K datasets for the student network, which contains 4 RSSB and 2 ViMM blocks with 60 channels. 4.3. Comparison with State-of-the-art SR models We compare DVMSR with several advanced efficient superresolution model [2, 18, 20, 37, 39, 46, 47, 50, 53, 67, 91, 92, 114, 120]. The quantitative performance comparison on several benchmark datasets [7, 36, 75, 76, 117] is indicated in Table 1. Our experimental results showcase our ability to achieve smaller parameter counts while surpassing several previous methods on five benchmark datasets. Specifically, we attained higher SSIM scores on Set5, Set14, and BSD100. It\u2019s important to note that SSIM scores serve as a crucial metric, indicating how effectively our model preserves the structure and content of the images, ultimately resulting in reconstructions that closely resemble the original images. Additionally, we observed that PSNR values remain comparable across these five datasets. This comprehensive evaluation underscores the effectiveness of our approach in enhancing image quality while maintaining efficiency, making it a promising solution for various image enhancement tasks. It\u2019s worth emphasizing that in our current study, we directly utilize the final model architecture employed in the NTIRE competition. Remarkably, we manage to maintain excellent performance without unnecessarily inflating the parameter count. This strategic decision underscores our commitment to efficiency and effectiveness in model design, ensuring that our approach remains practical and scalable for real-world applications. Model complexity comparisons between SwinIR and DVMSR. 
Our investigation focuses on Mamba\u2019s performance in super-resolution (SR) tasks. In Fig. 2, we show the excellent long-range modeling capabilities of our \fTable 1. Average PSNR/SSIM for scale factor 4 on datasets Set5, Set14, BSD100, Urban100, and Manga109. The best and second best results are highlighted in red and blue respectively. Method Params Set5 Set14 BSD100 Urban100 Manga109 PSNR/SSIM PSNR/SSIM PSNR/SSIM PSNR/SSIM PSNR/SSIM Bicubic 28.42/0.8104 26.00/0.7027 25.96/0.6675 23.14/0.6577 24.89/0.7866 SRCNN [18] 8K 30.48/0.8626 27.50/0.7513 26.90/0.7101 24.52/0.7221 27.58/0.8555 FSRCNN [20] 13K 30.72/0.8660 27.61/0.7550 26.98/0.7150 24.62/0.7280 27.90/0.8610 VDSR [47] 666K 31.35/0.8838 28.01/0.7674 27.29/0.7251 25.18/0.7524 28.83/0.8870 DRCN [46] 1774K 31.53/0.8854 28.02/0.7670 27.23/0.7233 25.14/0.7510 28.93/0.8854 LapSRN [53] 502K 31.54/0.8852 28.09/0.7700 27.32/0.7275 25.21/0.7562 29.09/0.8900 DRRN [91] 298K 31.68/0.8888 28.21/0.7720 27.38/0.7284 25.44/0.7638 29.45/0.8946 MemNet [92] 678K 31.74/0.8893 28.26/0.7723 27.40/0.7281 25.50/0.7630 29.42/0.8942 IDN [37] 553K 31.82/0.8903 28.25/0.7730 27.41/0.7297 25.41/0.7632 29.41/0.8942 SRMDNF [120] 1552K 31.96/0.8925 28.35/0.7787 27.49/0.7337 25.68/0.7731 30.09/0.9024 CARN [2] 1592K 32.13/0.8937 28.60/0.7806 27.58/0.7349 26.07/0.7837 30.47/0.9084 IMDN [39] 715K 32.21/0.8948 28.58/0.7811 27.56/0.7353 26.04/0.7838 30.45/0.9075 RFDN [67] 550K 32.24/0.8952 28.61/0.7819 27.57/0.7360 26.11/0.7858 30.58/0.9089 RLFN [50] 543K 32.24/0.8952 28.62/0.7813 27.60/0.7364 26.17/0.7877 -/DIPNet [114] 543K 32.20/0.8950 28.58/0.7811 27.59/0.7364 26.16/0.7879 30.53/0.9087 DVMSR (Ours) 424K 32.19/0.8955 28.61/0.7823 27.58/0.7379 26.03/0.7838 30.48/0.9084 DVMSR using LAM. Additionally, we compare DVMSR with SwinIR, a transformer-based model, in terms of model complexity. SwinIR outperforms DVMSR by 0.23 dB in PSNR, but at the cost of approximately twice the number of parameters, significantly higher FLOPS, and about 20 times longer inference time. These findings suggest that Mambabased models hold promise for efficient SR. Table 2. Model complexity comparisons between SwinIR and DVMSR. Times represent the average inference time measured on the DIV2K dataset with an Nvidia RTX 3090 in seconds (s). FLOPS and memory is measured when the input is 256 \u00d7 256. PSNR is the result of testing on DIV2K. Method PSNR Time (s) Params[M] FLOPS[G] Activations Memory[M] SwinIR 29.20 dB 0.865 0.9296 70.7828 26.7387 1454.458 DVMSR 28.97 dB 0.048 0.4244 20.1680 26.7387 1094.245 4.4. Ablation Study 4.4.1 Model Parameter Analysis Here, we train DVMSR on DIV2K for classical image SR (\u00d74) and test it on Set5 and Set14. Impact of ViMM number. We show the effects of ViMM number in each RSSB on model performance in Table 3. In experiments 1 3, it is observed that the PSNR/SSIM is negatively correlated with the number of ViMMs. However, when we set the ViMM number to 1, as presented in experiment 4, the PSNR in Set5 and Set14 decreased by 0.09 dB compared to when the ViMM number is set to 2. Therefore, there may be a balance point for the ViMM number, where it should not be too large to avoid over-complexity of the model, nor too small to limit the model\u2019s ability to represent the data. Experimental results indicate that setting the ViMM number to 2 is appropriate. Table 3. Impact of ViMM number in each RSSB on the Set5 and Set14 datasets with scale factor of \u00d74. The number of RSSB is fixed at 4 and keep other parameter settings consistent. 
The best results are highlighted. Exp. Params[M] ViMM number Set5 Set14 PSNR/SSIM PSNR/SSIM 1 7.222 6,6,6,6 31.99/0.8926 28.44/0.7785 2 5.214 2,2,9,2 32.17/0.8959 28.63/0.7834 3 3.651 2,2,2,2 32.30/0.8972 28.68/0.7847 4 2.758 1,1,1,1 32.21/0.8954 28.59/0.7821 Impact of RSSB number. In Table 4, In Experiments 13, as the RSSB number increases, the parameter count increases, with the channel number set to 180. Along with the increase in RSSB number, the PSNR in Set5 shows a significant improvement. Compared to Experiment 1, Experiment 2 shows an increase of 0.26 dB, and relative to Experiment 2, Experiment 3 shows an increase of 0.13 dB. When we set the RSSB number to 10, the improvement is moderated, with Experiment 4 showing an increase of 0.01 dB relative to Experiment 3. Impact of channel number. We maintained the ViMM number and RSSB number while examining the influence of channel numbers on model performance, as detailed in Table 5. Notably, our analysis revealed a diminishing improvement in model performance when the channel number \fTable 4. Impact of RSSB number on the Set5 and Set14 datasets with scale factor of \u00d74. The number of ViMM is fixed at 2 and keeps other parameter settings consistent. The best results are highlighted. Exp. Params[M] RSSB number Set5 Set14 PSNR/SSIM PSNR/SSIM 1 2.175 2 32.04/0.8938 28.51/0.7799 2 3.651 4 32.30/0.8972 28.68/0.7847 3 5.128 6 32.43/0.8987 28.75/0.7866 4 8.080 10 32.44/0.8990 28.77/0.7874 was set to 210. Thus, we conclude that setting the channel number to 192 is more suitable for optimal model performance. Table 5. Impact of channel number on the Set5 and Set14 datasets with scale factor of \u00d74. keep other parameter settings consistent. The best results are highlighted. Exp. Params[M] channel number Set5 Set14 PSNR/SSIM PSNR/SSIM 1 2.664 150 32.32/0.8971 28.65/0.7838 2 3.651 180 32.30/0.8972 28.68/0.7847 3 4.089 192 32.37/0.8977 28.71/0.7851 4 4.809 210 32.39/0.8976 28.71/0.7850 Table 6. Comparison of unidirectional SSM or bidirectional SSM. Times represent the average inference time measured on the DIV2K dataset with an Nvidia RTX 3090 in seconds (s). FLOPS and memory are measured when the input is 256 \u00d7 256. PSNR is the result of testing on DIV2K. Method PSNR Time (s) Params[M] FLOPS[G] Activations Memory[M] unidirectional SSM 28.87 dB 0.048 0.4244 20.1680 26.7387 1094.245 bidirectional SSM 28.88 dB 0.087 0.4849 23.9429 26.7387 1451.680 4.4.2 Distillation Learning Distillation loss. To investigate the effectiveness of distillation loss, we tried multiple distillation strategies. Mid-level feature distillation and end-level feature distillation are presented in Figure 6. As shown in Table 7, using the end-level feature distillation method tends to increase the PSNR and SSIM on Set5 and Set14 datasets. This suggests that the features towards the end of the model might be closer to the target output of the SR task. When attempting to alter the weights and types of distillation loss in the mid-level feature distillation method, there were no changes observed in PSNR and SSIM values on Set5 and Set14 datasets. This indicates that it is difficult for the student model to benefit from the features of the middle layer of the teacher model, as even after modifying the weights and types of distillation loss, there were no significant changes in the results. When we increase the weight of distillation loss in the end-level Figure 6. 
Left: The structure of mid-level feature distillation; Right: The structure of end-level feature distillation feature distillation method, there is a slight decrease in the PSNR and SSIM on Set5 and Set14 datasets. This could be because excessively high weights on distillation loss might introduce too many constraints, thereby affecting the model\u2019s performance. Table 7. Impact of the distillation loss. \u201c\u2718\u201d signifies that distillation is not used, and \u201c\u2714\u201d signifies that distillation is used. \u201cmid\u201d and \u201cend\u201d represent mid-level feature distillation and endlevel feature distillation, respectively. Ldis : L1 represents the weight ratio of the distillation loss and L1 loss. distillation distillation distillation Ldis : L1 Set5 Set14 strategy position loss PSNR/SSIM PSNR/SSIM \u2718 32.04/0.8940 28.50/0.7801 \u2714 mid L1 1:1 32.11/0.8949 28.56/0.7811 \u2714 mid L1 5:1 32.11/0.8949 28.56/0.7811 \u2714 mid L2 1:1 32.11/0.8949 28.56/0.7811 \u2714 end L1 1:1 32.12/0.8951 28.57/0.7813 \u2714 end L1 5:1 32.11/0.8950 28.57/0.7813 Teacher model. When the teacher model has more parameters and richer representation capability, the knowledge it transfers to the student model will be more abundant, leading to a more significant performance improvement of the student model on the task. To verify this conclusion, we attempted two teacher models with different parameters. They exhibited a PSNR difference of 0.27dB on the Set5 dataset. However, as shown in Table 8, the performance of the student model remained unchanged. This could indicate that the student model\u2019s capacity or architecture may not be sufficiently expressive to fully utilize the additional knowledge provided by the larger teacher model. Therefore, finding the balance point between the performance of the teacher model and the student model is a worthwhile exploration. 4.4.3 Unidirectional v.s. Bidirectional SSM To investigate the effectiveness of bidirectional SSM in ESR, we evaluate its performance in ESR based on several aspects: PSNR, Time, Params, FLOPS, Activations, and Memory. The architecture of unidirectional SSM and \fTable 8. Design of the teacher model. PSNR is the result of testing on Set5. Params is the parameter of teacher model, and the parameter of student model is fixed. Method Params[M] Teacher model Student model PSNR/SSIM PSNR/SSIM DVMSR 32.04/0.8940 DVMSR 4.089 32.38/0.8977 32.12/0.8950 DVMSR 7.432 32.65/0.9011 32.12/0.8950 Figure 7. Unidirectional SSM or bidirectional SSM in ViMM. bidirectional SSM are presented in Figure 7. As shown in Table 6, compared to Unidirectional SSM, the improvement of bidirectional SSM in PSNR is limited (increased by 0.01dB), while the inference time has increased by 0.039s. This cost is significant. Therefore, Unidirectional SSM is more suitable for the ESR task. 4.4.4 NTIRE 2024 Challenge on Efficient SR We actively participate in the NTIRE 2024 Efficient SuperResolution Challenge [86]. The model structure and training strategy are slightly different from the above. This competition aims to procure solutions that excel in overall performance metrics, encompassing inference runtime, FLOPS, and parameter optimization on the NVIDIA GeForce RTX 3090 GPU. This challenge also requires the maintenance or enhancement of threshold PSNR results, underscoring the importance of efficiency without compromising on image quality benchmarks. 
During the teacher learning phase, we train the teacher network using the DIV2K dataset with a resolution of 2K. Our teacher architecture consists of 6 RSSB (Residual Scaling and Shifting Block) and 2 ViMM (Vision Mamba Modules), each configured with 180 channels. In the subsequent distillation training phase, we amalgamated data from both the DIV2K and LSDIR datasets to train the student network. This student model comprises 2 RSSB and 2 ViMM blocks, tailored with 60 channels to maintain computational efficiency while preserving performance standards. Notably, the teacher network remains unchanged. We employ DIV2K [98] and LSDIR [59] to construct the training dataset. The High-Resolution (HR) images are cropped to 256 \u00d7 256 patches for the training procedure. During network optimization, we employ the L1 loss function in conjunction with the Adam optimizer, a widely adopted optimization algorithm in deep learning tasks. Our optimization regimen commenced with an initial learning rate of 2 \u00d7 10\u22124, evolving through a multi-step learning rate strategy. Specifically, the learning rate halved at key iterations: 250000, 400000, 450000, and 475000, respectively, throughout the 500k total iterations. This adaptive learning rate scheme enhances model convergence and stability over the training period, crucial for achieving superior performance. Through extensive experiments, we refine our model\u2019s architecture and training process, aiming for excellence in both efficiency and performance, as evidenced by our results in Table 9. Our approach employs a novel architecture that differs from both CNN and transformer, providing a reference for the development of mamba in Efficient SuperReslution. Table 9. NTIRE 2024 ESR Challenge results. Model Val PSNR Test PSNR Val Time Test Time FLOPS Params (dB) (dB) (ms) (ms) (G) (M) RLFN baseline 26.96 27.07 14.348 9.194 19.67 0.317 DVMSR 26.93 27.04 40.75 34.634 20.17 0.424 5. Conclusion In this paper, we propose DVMSR, a novel lightweight Image SR network that incorporates Vision Mamba and a distillation strategy. It consists of three main modules: feature extraction convolution, multiple stacked Residual State Space Blocks (RSSBs), and a reconstruction module. In particular, we use a stack of residual state space blocks (RSSB) for deep feature extraction, and each RSSB is composed of Vision Mamba Moudles, a convolution layer and a residual connection. Specifically, we leverage the larger teacher model as additional supervision, which effectively enhances the performance of the student model. DVMSR demonstrates the potential for efficient and long-range dependency modeling in SR tasks, but our work merely offers a preliminary insight. We still need further to explore the potential of Mamba in ESR tasks."
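For readers who want to connect Eqs. (1)-(4) above to code, the following PyTorch sketch implements a toy Vision Mamba Module built around a naive diagonal state-space scan; it follows the equations and the block diagram only loosely (the real model relies on Mamba's hardware-aware selective scan and its official kernels), and every size below is an illustrative assumption.

import torch
import torch.nn as nn

class ToyDiagonalSSM(nn.Module):
    # Naive per-channel diagonal SSM: h_t = A_bar * h_{t-1} + B_bar * x_t, y_t = C * h_t (Eq. 3),
    # with A_bar, B_bar obtained from the zero-order-hold discretization in Eq. 2.
    def __init__(self, dim: int, state_size: int = 16):
        super().__init__()
        self.A_log = nn.Parameter(torch.rand(dim, state_size))   # parameterizes A < 0 for stability
        self.B = nn.Parameter(0.01 * torch.randn(dim, state_size))
        self.C = nn.Parameter(0.01 * torch.randn(dim, state_size))
        self.log_dt = nn.Parameter(torch.zeros(dim))              # per-channel timescale Delta

    def forward(self, x):                        # x: (batch, length, dim)
        dt = self.log_dt.exp()[:, None]          # (dim, 1)
        A = -self.A_log.exp()                    # (dim, state)
        A_bar = torch.exp(dt * A)                # exp(Delta * A)
        B_bar = (A_bar - 1.0) / A * self.B       # (Delta A)^-1 (exp(Delta A) - I) Delta B, diagonal case
        h = x.new_zeros(x.shape[0], x.shape[2], A.shape[1])
        outputs = []
        for t in range(x.shape[1]):              # sequential scan (the real kernel parallelizes this)
            h = A_bar * h + B_bar * x[:, t, :, None]
            outputs.append((h * self.C).sum(-1))
        return torch.stack(outputs, dim=1)

class ViMM(nn.Module):
    # Eq. (4): X1 = SSM(Conv1d(Linear(LN(X)))), X2 = SiLU(Linear(LN(X))), Xout = Linear(X1 * X2) + X.
    def __init__(self, dim: int = 60, expand: int = 2):
        super().__init__()
        inner = dim * expand
        self.norm = nn.LayerNorm(dim)
        self.proj_x = nn.Linear(dim, inner)
        self.proj_z = nn.Linear(dim, inner)
        self.conv = nn.Conv1d(inner, inner, kernel_size=3, padding=1, groups=inner)
        self.act = nn.SiLU()
        self.ssm = ToyDiagonalSSM(inner)
        self.proj_out = nn.Linear(inner, dim)

    def forward(self, x):                        # x: (batch, H*W tokens, dim)
        y = self.norm(x)
        x1 = self.proj_x(y)
        x1 = self.conv(x1.transpose(1, 2)).transpose(1, 2)
        x1 = self.ssm(self.act(x1))              # activation after the conv follows the block diagram
        x2 = self.act(self.proj_z(y))
        return self.proj_out(x1 * x2) + x        # gating plus the residual connection

tokens = torch.randn(1, 16 * 16, 60)             # e.g. a 16x16 feature map with 60 channels
print(ViMM()(tokens).shape)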
18
+ }
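The distillation objective used by the paper above (Eq. 5, with both loss weights set to 1) reduces to two L1 terms on the student output; a minimal sketch, assuming teacher and student are callables mapping a low-resolution batch to a super-resolved batch:

import torch
import torch.nn.functional as F

def distillation_loss(teacher, student, lr_img, hr_img, lam_dis=1.0, lam_1=1.0):
    # L_out = lam_dis * ||T(I_LR) - S(I_LR)||_1 + lam_1 * ||I_HR - S(I_LR)||_1  (Eq. 5)
    with torch.no_grad():                         # the teacher stays frozen during distillation
        teacher_out = teacher(lr_img)
    student_out = student(lr_img)
    l_dis = F.l1_loss(student_out, teacher_out)   # end-level output matching against the teacher
    l_rec = F.l1_loss(student_out, hr_img)        # reconstruction against the ground-truth HR image
    return lam_dis * l_dis + lam_1 * l_rec

This corresponds to the end-level variant that the ablation above found most effective; the mid-level variant would instead compare intermediate feature maps of the two networks.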
title_10K/test_title_short_2405.03025v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03025v1",
3
+ "title": "Matten: Video Generation with Mamba-Attention",
4
+ "abstract": "In this paper, we introduce Matten, a cutting-edge latent diffusion model\nwith Mamba-Attention architecture for video generation. With minimal\ncomputational cost, Matten employs spatial-temporal attention for local video\ncontent modeling and bidirectional Mamba for global video content modeling. Our\ncomprehensive experimental evaluation demonstrates that Matten has competitive\nperformance with the current Transformer-based and GAN-based models in\nbenchmark performance, achieving superior FVD scores and efficiency.\nAdditionally, we observe a direct positive correlation between the complexity\nof our designed model and the improvement in video quality, indicating the\nexcellent scalability of Matten.",
5
+ "authors": "Yu Gao, Jiancheng Huang, Xiaopeng Sun, Zequn Jie, Yujie Zhong, Lin Ma",
6
+ "published": "2024-05-05",
7
+ "updated": "2024-05-05",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Mamba",
14
+ "gt": "Matten: Video Generation with Mamba-Attention",
15
+ "main_content": "Introduction Recent advancements in diffusion models have demonstrated impressive capabilities in video generation [1\u20135]. It has been observed that breakthroughs in architectural design are crucial for the efficient application of these models [6\u20138]. Contemporary studies largely concentrate on CNN-based U-Net architectures [1, 4] and Transformer-based frameworks [3, 2], both of which employ attention mechanisms to process spatio-temporal dynamics in video content. Spatial attention, which involves computing self-attention among image tokens within a single frame, is extensively utilized in both U-Net-based and Transformer-based video generation diffusion models as shown in Fig. 1 (a). Prevailing techniques typically apply local attention within the temporal layers as illustrated in Fig. 1 (b), where attention calculations are confined to identical positions across different frames. This approach fails to address the critical aspect of capturing interrelations across varying spatial positions in successive frames. A more effective method for temporal-spatial analysis would involve mapping interactions across disparate spatial and temporal locations, as depicted in Fig. 1 (c). Nonetheless, this global-attention method is computationally intensive due to the quadratic complexity involved in computing attention, thus requiring substantial computational resources. There has been a rise in fascination with state space models (SSMs) across a variety of fields, largely due to their ability to deal with long sequences of data [9\u201311]. In the field of Natural Language Processing (NLP), innovations such as the Mamba model [10] have significantly improved both the efficiency of data inference processes and the overall performance of models by introducing dynamic parameters into the SSM structure and by building algorithms tailored for better hardware compatibility. The utility of the Mamba framework has been successfully extended beyond its initial applications, demonstrating its effectiveness in areas such as vision [12, 13] and multimodal applications [14]. Given the complexity of processing video data, we propose to use the Mamba architecture to explore spatio-temporal interactions in video content, as shown in Fig. 1 (d). However, unlike the self-attention layer, it\u2019s important to note that Mamba scans, which do not inherently compute dependencies between tokens, struggle to effectively detect localised data patterns, a limitation pointed out by [15]. \u2020 Corresponding to Zequn Jie <[email protected]>. Preprint. Under review. arXiv:2405.03025v1 [cs.CV] 5 May 2024 \f(a) Spatial-Attention (b) Local Temporal-Attention (c) Global-Attention (d) Global-Mamba Figure 1: Different ways of spatio-temporal modeling using Mamba and Attention. H, W, and F denote the height, weight, and frames, respectively. The red token is an example query, and the blue tokens mean those tokens having information interaction with the query. The shade of blue represents the intensity of the information interaction, with darker colors representing more direct interactions. Mamba scan interactions are distance-related between tokens with a linear complexity, while attention interactions are equal among these tokens with a quadratic complexity. For simplicity, we only show the unidirectional Mamba scan. Regarding the advantages of Mamba and Attention, we introduce a latent diffusion model for video generation with a Mamba-Attention architecture, namely Matten. 
Specifically, we investigated the impact of various combinations of Mamba and Attention mechanisms on video generation. Our findings demonstrate that the most effective approach is to utilize the Mamba module to capture global temporal relationships (Fig. 1 (d)) while employing the Attention module for capturing spatial and local temporal relationships (Fig. 1 (a) and Fig. 1 (b)). We conducted experimental evaluations to examine the performance and effects of Matten in both unconditional and conditional video generation tasks. Across all test benchmarks, Matten consistently exhibits the comparable FVD score [16] and efficiency with SOTAs. Furthermore, our results indicate that Matten is scalable, evidenced by the direct positive relationship between the model\u2019s complexity and the quality of generated samples. In summary, our contributions are as follows: \u2022 We propose Matten, a novel video latent diffusion model integrated with the mamba block and attention operations, which enables efficient and superior video generation. \u2022 We design four model variants to explore the optimal combination of Mamba and attention in video generation. Based on these variants, we find that the most favorable approach is adopting attention mechanisms to capture local spatio-temporal details and utilizing the Mamba module to capture global information. \u2022 Comprehensive evaluations show that our Matten achieves comparable performance to other models with lower computational and parameter requirements and exhibits strong scalability. 2 Related Work 2.1 Video Generation The task of video generation primarily focuses on produce realistic video clips characterized by high-quality visuals and fluid movements. Previous video generation work can be grouped into 3 types. Initially, a number of researchers focused on adapting powerful GAN-based image generation techniques for video creation [17\u201321]. Nonetheless, GAN-based methods may lead to problems such as mode collapse, reducing diversity and realism. In addition, certain models suggest the learning of data distributions via autoregressive models [22\u2013 25]. These methods typically yield high-quality videos and demonstrate more reliable convergence, but they are hindered by their substantial computational demands. Finally, the latest strides in video generation are centered on the development of systems that utilize diffusion models [26, 27, 4, 28\u2013 33, 2], which have shown considerable promise. These methods primarily use CNN-based U-Net or Transformer as the model architecture. Distinct from these works, our method concentrates on investigating the underexplored area of the combination of mamba and attention within video diffusion. 2 \f2.2 Mamba Mamba, a new State-Space Model, has recently gained prominence in deep learning for its universal approximation capabilities and efficient modeling of long sequences, with applications in diverse fields such as medical imaging, image restoration, graphs, NLP, and image generation [34\u201340]. Drawing from control systems and leveraging HiPPO initialization [41], these models, like LSSL [11], address long-range dependencies but are limited by computational demands. To overcome this, S4 [42] and other structured state-space models introduce various configurations [43, 44, 9] and mechanisms [10] that have been integrated into larger representation models [45\u201347] for tasks in language and speech. 
Mamba, and its iterations like VisionMamba [12, 13], S4ND [48], and MambaND [49], exhibit a range of computational strategies, from bidirectional SSMs to local convolution and multi-dimensionality considerations. For 3D imaging, T-Mamba [50] tackles the challenges in orthodontic diagnosis due to the powerful ability of Mmaba to handle long-range dependencies. For video understanding, VideoMamba [51] and Video Mamba Suite [52] adapt Mamba to the video domain and address the challenges of local redundancy and global dependencies prevalent in video data. In the domain of diffusion applications using mamba, Zigzag Mamba [53] advances the scalability and efficiency of generating visual content. It tackles the crucial problem of spatial continuity with an innovative scanning approach, incorporates text-conditioning features, and shows enhanced performance across high-resolution image and video datasets. [54] closely relates to our work, employing the mamba block in the temporal layer of video diffusion. Diverging from previous research focused mainly on local temporal modeling, our method, Matten, is uniquely designed to encompass global temporal dimensions. 3 Methodology Our discussion starts with a brief overview of the latent space diffusion model and state space model in Sec. 3.1. This is followed by an in-depth description of the Matten model variants in Sec. 3.2. We then explore conditional ways related to timestep or class in Sec. 3.3. Lastly, a theoretical analysis comparing Mamba with Attention mechanisms is presented in Sec. 3.4. 3.1 Background Latent Space Diffusion Models. [55]. For an input data sample x \u2208pdata(x), Latent Diffusion Models (LDMs) initially utilize the pre-trained VAE or VQ-VAE encoder E to transform the data sample into a latent representation z = E(x). This transformation is followed by a learning phase where the data distribution is modeled through diffusion and denoising steps. During the diffusion phase, noise is incrementally added to the latent encoding, producing a series of increasingly perturbed latent states zt, where the intensity of additive noise is denoted by the timesteps t \u2208T. A specialized model such as U-Net \u03f5\u03b8 is utilized as the noise estimate network to estimate the noise perturbations affecting the latent representation zt during the denoising phase, aiming to minimize the latent diffusion objective. Lsimple = Ez\u223cp(z), \u03f5\u223cN (0,I), t h \u2225\u03f5 \u2212\u03f5\u03b8(zt, t)\u22252 2 i . (1) Furthermore, the diffusion models \u03f5\u03b8 are enhanced with a learned reverse process covariance \u03a3\u03b8, optimized using Lvlb as outlined by [6]. In our research, \u03f5\u03b8 is designed using a Mamba-based framework. Both Lsimple and Lvlb are employed to refine the model\u2019s effectiveness and efficiency. State Space Backbone. State space models (SSMs) have been rigorously validated both theoretically and through empirical evidence to adeptly manage long-range dependencies, demonstrating linear scaling with the length of data sequences. Conventionally, a linear state space model is represented as the following type: h\u2032(t) = A(t)h(t) + B(t)x(t), y(t) = C(t)h(t) + D(t)x(t), (2) which describes the transformation of a 1-D input sequence x(t) \u2208R into a 1-D output sequence y(t) \u2208R, mediated by an N-D latent state sequence h(t) \u2208RN. 
State space models are particularly crafted to integrate multiple layers of these basic equations within a neural sequence modeling architecture, allowing the parameters A, B, C, and D of each layer to be optimized via deep learning on loss function. N represents the state size, A \u2208RN\u00d7N, B \u2208RN\u00d71, C \u2208R1\u00d7N, and D \u2208R. The process of discretization, essential for applying state space models as detailed in Eq. 2 to realworld deep learning tasks, converts continuous system parameters like A and B into their discrete 3 \fConv \ud835\udf0e SSM \ud835\udf0e Conv \ud835\udf0e SSM \ud835\udf0e Conv \ud835\udf0e SSM flip flip Linear Projection \ud835\udf0e Activation Multiplication Summarization flip Sequence flip (a) Mamba (b) Bidirectional Mamba Figure 2: The original 1D sequence Mamba block and 2D bidirectional Mamba block. The normalization and the residual are omitted for simplification. equivalents A and B. This critical step typically utilizes the zero-order hold (ZOH) method, a technique well-established in academic research for its efficacy. The ZOH method uses the timescale parameter \u2206to bridge the gap between continuous and discrete parameters, thereby facilitating the application of theoretical models within computational settings. A = exp(\u2206A), B = (\u2206A)\u22121(exp(A) \u2212I) \u00b7 \u2206B. (3) With these discretized parameters, the model outlined in Eq. 2 is then adapted to a discrete framework using a timestep \u2206: hk = Ahk\u22121 + Bxk, yk = Chk + Dxk. (4) This approach allows for the seamless integration of state space models into digital platforms. The traditional Mamba block, initially crafted for 1D sequence processing as shown in Fig. 2, is not ideally suited for visual tasks that demand spatial cognizance. To address this limitation, Vision Mamba [13] has developed a bidirectional Mamba block specifically tailored for vision-related applications. This innovative block is engineered to handle flattened visual sequences by employing both forward and backward SSMs concurrently, significantly improving its ability to process with spatial awareness. Mamba employs a work-efficient parallel scan that effectively reduces the sequential dependencies typically associated with recurrent computations. This optimization, coupled with the strategic utilization of GPU operations, eliminates the necessity to explicitly manage the expanded state matrix. In our study, we explore the integration of the Mamba architecture within a video generation framework, leveraging its efficiency and scalability. 3.2 The Model Variants of Matten Consider the representation of a video clip\u2019s latent space, represented by VL \u2208RF \u00d7H\u00d7W \u00d7C, where F indicates the number of frames, H the height of the frame, W the width of the frame, and C the channels per frame within the video\u2019s latent configuration. We transform VL into a sequence of tokens by segmenting and reshaping it, represented as \u02c6 z \u2208R(nf \u00d7nh\u00d7nw)\u00d7d. Here, nf \u00d7 nh \u00d7 nw denotes the total number of tokens, with each token having dimension d. Adopting a strategy similar to Latte, we assign nf = F, nh = H/2, and nw = W/2 to structure the data effectively. Furthermore, a spatio-temporal positional embedding, denoted as p, is incorporated into the token sequence \u02c6 z. The input for the Matten model thus becomes z = \u02c6 z + p, facilitating complex model interactions. As illustrated in Fig. 
3, we introduce four distinct variants of the Matten model to enhance its versatility and effectiveness in video processing. Global-Sequence Mamba Block. As illustrated in Fig. 3 (a), this variant refers to the execution of 3D Mamba scans in the full sequence of this spatiotemporal input. Following VideoMamba [51], we adopt Spatial-First Scan for our Global-Sequence Mamba block. This straightforward operation has been proven to be highly effective. It involves arranging spatial tokens based on their location and stacking them sequentially frame by frame. We reshape z into zfull \u2208R1\u00d7nf \u2217nh\u2217nw\u00d7d as the input of the Global-Sequence Mamba block to capture spatial-first information. The Bidirectional-Mamba layer is used. 4 \fFull-Sequence Mamba Block Embedding Full-Sequence Mamba Block \u2026 \u2026 Spatial Mamba Block Embedding Temporal Mamba Block \u2026 \u2026 Spatial Mamba Block Embedding \u2026 \u2026 Full-Sequence Scans Spatial-Attention Temporal-Attention Embedding \u2026 \u2026 Full-Sequence Scans TemporalAttention Variant 1 Variant 2 Variant 3 Variant 4 Figure 3: We introduce four model variants designed to harness spatio-temporal dynamics in videos effectively. For clarity, the embeddings shown in the diagram represent the patch and reshaped outcomes of the latent video. Spatial and Temporal Mamba Blocks Interleaved. This particular variant leverages the Mamba module as a substitute for the traditional attention module within Transformer-based diffusion models for video generation, as noted in studies such as [2, 56, 57]. Illustrated in Fig. 3 (b), the backbone of this variant, known as Matten, is equipped with two types of Bidirectional-Mamba blocks: spatial Bidirectional-Mamba blocks and temporal Bidirectional-Mamba blocks. The spatial blocks are designed to solely capture spatial details among tokens that share identical temporal indices, whereas the temporal blocks are tasked with capturing information across different times within the same spatial coordinate. For effective spatial information processing, z is restructured into zs \u2208Rnf \u00d7s\u00d7d, which then serves as the input for the spatial Mamba block. Then, we reshape zs into zt \u2208Rs\u00d7nf \u00d7d for the temporal Mamba block to process temporal information. Global-Sequence Mamba Block with Spatial-Temporal Attention Interleaved. Although Mamba demonstrates efficient performance in long-distance modeling, its advantages in shorter sequences modeling are not as pronounced [10], compared to the attention operation in Transformer. Consequently, we have developed a hybrid block that leverages the strengths of both the attention mechanism and Mamba as illustrated in Fig. 3 (c), which integrates Mamba and Attention computations for both short and long-range modeling. Each block is composed of Spatial Attention computation, Temporal Attention computation, and a Global-Sequence Mamba scan in series. This design enables our model to effectively capture both the global and local information present in the latent space of videos. Global-Sequence Mamba Block with Temporal Attention Interleaved. The scanning in the Global-Sequence Mamba block is continuous in the spatial domain but discontinuous in the temporal domain [51]. Thus, this variant has removed the Spatial Attention component, while retaining the Temporal Attention block. Consequently, by concentrating on a Spatial-First scan augmented with Temporal Attention shown in Fig. 
3 (d), we strive to enhance our model\u2019s efficiency and precision in processing the dynamic facets of video data, thereby assuring robust performance in a diverse range of video processing tasks. 3.3 Conditional Way of Timestep or Class Drawing from the frameworks presented by Latte and DiS, we perform experiments on two distinct methodologies for embedding timestep or class information c into our model. The first method, inspired by DiS, involves treating c as tokens, a strategy we designate as conditional tokens. The second method adopts a technique akin to adaptive normalization (AdaN) [58, 7], specifically tailored for integration within the Mamba block. This involves using MLP layer to compute parameters 5 \fMethod Pretrained FaceForensics SkyTimelapse UCF101 Taichi-HD FLOPs (G) MoCoGAN % 124.7 206.6 2886.9 VideoGPT % 185.9 222.7 2880.6 DIGAN % 62.5 83.11 1630.2 156.7 StyleGAN-V % 47.41 79.52 1431.0 PVDM % 355.92 75.48 1141.9 540.2 MoStGAN-V % 39.70 65.30 1380.3 MoCoGAN-HD \" 111.8 164.1 1729.6 128.1 LVDM \" 95.20 372.0 99.0 Latte \" 34.00 59.82 477.97 159.60 5572 Matten (ours) % 45.01 53.56 210.61 158.56 4008 Table 1: FVD metrics for various video generation models across multiple datasets are presented. FVD scores for comparative baseline models, as reported in sources such as Latte, StyleGAN-V, or respective original publications, are included for reference. In this context, \"Pretrained\" refers to models that utilize a pretraining approach based on image generation techniques. \u03b3c and \u03b2c from c, formulating the operation AdaN(f, c) = \u03b3cNorm(f) + \u03b2c, where f denotes the feature maps in the Mamba block. Further, this adaptive normalization is implemented prior to residual connections of the Mamba block, implementing by the transformation RCs(f, c) = \u03b1cf + MambaScans(AdaN(f, c)), with MambaScans representing the Bidirectional-Mamba scans within the block. We refer to this advanced technique as Mamba adaptive normalization (MAdaN), which seamlessly incorporates class or timestep information to enhance model responsiveness and contextual relevance. 3.4 Analysis of Mamba and Attention In summary, the hyperparameters of our proposed block encompass hidden size D, expanded state dimension E, and SSM dimension N. All the settings of Matten are detailed in Table 2, covering different numbers of parameters and computation cost to thoroughly evaluate scalability performance. Specifically, the Gflop metric is analyzed during the generation of 16\u00d7256\u00d7256 unconditional videos, employing a patch size of p = 2. Consistent with [10], we standardize the SSM dimension N across all models at 16. Both the SSM block within Matten and the self-attention mechanism in Transformer architectures are integral for effective context modeling. We provide a detailed theoretical analysis of computational efficiency as well. For a given sequence X \u2208R1\u00d7J\u00d7D with the standard setting E = 2, the computational complexities of self-attention (SA), Feed-Forward Net (FFN) and SSM operations are calculated as follows: O(SA) = 2J2D, (5) O(FFN) = 4JD2, (6) O(SSM) = 3J(2D)N + J(2D)N 2. (7) 3J(2D)N involves the calculation with B, C, and D, while J(2D)N 2 denotes the calculation with A. It demonstrates that self-attention\u2019s computational demand scales quadratically with the sequence length J, whereas SSM operations scale linearly. 
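To make the discretization of Eq. 3, the recurrence of Eq. 4, and the cost comparison of Eqs. 5-7 concrete, the following is a minimal NumPy sketch. It illustrates the standard zero-order-hold / sequential-scan formulation rather than the Matten implementation; the state size, sequence length, and random parameters are placeholders.
```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, delta):
    """Zero-order-hold step (Eq. 3): A_bar = exp(delta*A),
    B_bar = (delta*A)^{-1} (exp(delta*A) - I) * delta*B."""
    dA = delta * A
    A_bar = expm(dA)
    B_bar = np.linalg.solve(dA, A_bar - np.eye(A.shape[0])) @ (delta * B)
    return A_bar, B_bar

def ssm_scan(A_bar, B_bar, C, D, x):
    """Sequential form of Eq. 4: h_k = A_bar h_{k-1} + B_bar x_k, y_k = C h_k + D x_k.
    Mamba's work-efficient parallel scan computes the same outputs."""
    h = np.zeros((A_bar.shape[0], 1))
    ys = []
    for xk in x:                           # x: 1D input sequence of scalars
        h = A_bar @ h + B_bar * xk
        ys.append((C @ h).item() + D * xk)
    return np.array(ys)

# toy example: state size N = 4, sequence length J = 8 (placeholders)
rng = np.random.default_rng(0)
N, J = 4, 8
A = -np.diag(rng.uniform(0.5, 1.5, N))     # stable continuous-time state matrix
B = rng.normal(size=(N, 1))
C = rng.normal(size=(1, N))
D, delta = 0.1, 0.05
A_bar, B_bar = zoh_discretize(A, B, delta)
y = ssm_scan(A_bar, B_bar, C, D, rng.normal(size=J))

# Per Eqs. 5 and 7 (hidden size D_h, expansion E = 2): self-attention costs
# 2*J**2*D_h (quadratic in J), the SSM costs 3*J*(2*D_h)*N + J*(2*D_h)*N**2 (linear in J).
```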
Notably, with N typically fixed at 16, this linear scalability renders the Mamba architecture particularly apt for handling extensive sequences typical in scenarios like global relationship modeling in video data. When comparing the terms 2J2D and J(2D)N 2, it is clear that the Mamba block is more computationally efficient than self-attention, particularly when the sequence length J significantly exceeds N 2. For shorter sequences that focus on spatial and localized temporal relationships, the attention mechanism offers a more computationally efficient alternative when the computational overhead is manageable, as corroborated by empirical results. 4 Experiments This part first describes the experimental settings, including details about the datasets we used, evaluation metrics, compared methods, configurations of the Matten model, and specific implementation 6 \fVariant 1 Variant 2 Variant 3 Variant 4 Params (M) 814 814 853 846 FLOPs (G) 1590 1660 4008 3660 Table 2: The parameter count and FLOPs (Floating-Point Operations) associated with various model variants of Matten. Latte Matten PVDM Real Figure 4: Sample videos from the different methods and real data on SkyTimelapse. aspects. Following this, ablation studies are conducted to identify optimal practices and assess the impact of model size. The section concludes with a comparative analysis of our results on 4 common datasets against advanced video generation methods. 4.1 Experimental Detail Datasets Overview. We engage in extensive experiments across four renowned and common datasets: FaceForensics [59], SkyTimelapse [60], UCF101 [61], and Taichi-HD [62]. Following protocols established in Latte, we utilize predefined training and testing divisions. From these datasets, we extract video clips consisting of 16 frames, applying a sampling interval of 3, and resize each frame to a uniform resolution of 256x256 for our experiments. Evaluation Metrics. For robust quantitative analysis, we adopt the Fr\u00e9chet Video Distance (FVD) [16], recognized for its correlation with human perceptual evaluation. In compliance with the methodologies of StyleGAN-V, we determine FVD scores by examining 2,048 video clips, each containing 16 frames. Baseline Comparisons. Our study includes comparisons with advanced methods to assess the performance of our approach quantitatively, including MoCoGAN [63], VideoGPT [25], MoCoGANHD [64], DIGAN [65], StyleGAN-V [66], PVDM [1], MoStGAN-V [67], LVDM [68], and Latte [2]. Unless explicitly stated otherwise, all presented values are obtained from the latest relevant studies: Latte, StyleGAN-V, PVDM, or the original paper. Matten Model Configurations. Our Matten model is structured using a series of L Mamba blocks, with each block having a hidden dimension of D. Inspired by the Vision Transformer (ViT) approach, we delineate four distinct configurations varying in parameter count, detailed in Table 3. Implementation Specifics. All ablation experiments adopt the AdamW optimizer, set at a fixed learning rate of 1 \u00d7 10\u22124. The sole augmentation technique applied is horizontal flipping. Consistent with prevailing strategies in generative modeling [7, 8], we employ the exponential moving average (EMA) of the model weights with a decay rate of 0.99 at the first 50k steps and the other 100k steps during the training process. The results reported are derived directly using the EMA-enhanced models. 
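The exponential moving average of model weights mentioned in the implementation specifics can be sketched as below; this is a generic illustration with a placeholder parameter dictionary, not the authors' training code.
```python
def update_ema(ema_params, model_params, decay=0.99):
    """One EMA step: ema <- decay * ema + (1 - decay) * current weights."""
    for name, value in model_params.items():
        ema_params[name] = decay * ema_params[name] + (1.0 - decay) * value
    return ema_params

# usage sketch: initialize ema_params as a copy of the weights, call once per
# training step after the optimizer update, and run evaluation with ema_params.
```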
Additionally, the architecture benefits from the integration of a pre-trained variational autoencoder, sourced from Stable Diffusion v1-4. 7 \fLatte Matten PVDM Real Figure 5: Sample videos generated using various methods on the UCF101 dataset, highlighting the visually appealing nature of the results. Latte Matten Real PVDM Figure 6: Sample videos generated using various methods on the FaceForensics dataset, highlighting the visually appealing nature of the results. 4.2 Ablation study In this part, we detail our experimental investigations using the SkyTimelapse dataset to assess the impact of various design modifications, model variations, and model sizes on performance, as previously introduced in Secs. 3.3 and 3.2. Timestep-Class Information Injection Illustrated in Fig. 8b, the M-AdaN approach markedly outperforms conditional tokens. We surmise this difference stems from the method of integration of timestep or class information. Conditional tokens are introduced directly into the model\u2019s input, potentially creating a spatial disconnect within the Mamba scans. In contrast, M-AdaN embeds both timestep and class data more cohesively, ensuring uniform dissemination across all video tokens, and enhancing the overall synchronization within the model. Exploring Model Variants Our analysis of Matten\u2019s model variants, as detailed in Sec. 3.2, aims to maintain consistency in parameter counts to ensure equitable comparisons. Each variant is developed from the ground up. As depicted in Fig. 8a, Variant 3 demonstrates superior performance with increasing iterations, indicating its robustness. Conversely, Variants 1 and 2, which focus primarily 8 \fLatte Matten Real LVDM Figure 7: Sample videos generated using various methods on the Taichi-HD dataset, highlighting the visually appealing nature of the results. (a) Model variants (b) Timestep-class conditional way Figure 8: Exploration of Design Choices Through Ablation Studies. We have conducted various ablation studies to identify optimal strategies for Mamba-based video diffusion models, focusing on improving FVD metrics on the SkyTimelapse dataset. For enhanced clarity, please magnify the displayed results. on local or global information, respectively, lag in performance, underscoring the necessity for a balanced approach in model design. Assessment of Model Size We experiment with four distinct sizes of the Matten model\u2014XL, L, B, and S as listed in Tab. 3 on the SkyTimelapse dataset. The progression of their Fr\u00e9chet Video Distances (FVDs) with training iterations is captured in Fig. 9. There is a clear trend showing that larger models tend to deliver improved performance, echoing findings from other studies in image and video generation [7], which highlight the benefits of scaling up model dimensions. 4.3 Comparison Experiment According to the findings from the ablation studies presented in Sec. 4.2, we have pinpointed the settings about how to design our Matten, notably highlighting the efficacy of model variant 3 equipped with M-AdaN. Leveraging these established best practices, we proceed to conduct comparisons against contemporary state-of-the-art techniques. Qualitative Assessment of Results Figures 4 through 7 display the outcomes of video synthesis using various methods across datasets such as UCF101, Taichi-HD, FaceForensics, and SkyTimelapse. Across these different contexts, our method consistently delivers realistic video generations at a high resolution of 256x256 pixels. 
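As a side note on the timestep/class conditioning compared in the ablation above (M-AdaN versus conditional tokens), a minimal sketch of the adaptive-normalization path might look as follows. The shapes, the small MLP, and the `mamba_scans` callable are placeholders for illustration, not the released model code.
```python
import numpy as np

def layer_norm(f, eps=1e-6):
    mu = f.mean(axis=-1, keepdims=True)
    var = f.var(axis=-1, keepdims=True)
    return (f - mu) / np.sqrt(var + eps)

def madan_block(f, c, W1, W2, mamba_scans):
    """M-AdaN conditioning: an MLP maps the timestep/class embedding c to
    (gamma, beta, alpha); AdaN(f, c) = gamma * Norm(f) + beta is applied before
    the Mamba scans, and alpha scales the residual (identity) branch."""
    h = np.maximum(W1 @ c, 0.0)               # placeholder ReLU MLP on the condition
    gamma, beta, alpha = np.split(W2 @ h, 3)  # three d-dimensional modulation vectors
    adan = gamma * layer_norm(f) + beta       # AdaN(f, c)
    return alpha * f + mamba_scans(adan)      # RC(f, c) = alpha*f + MambaScans(AdaN(f, c))
```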
Notable achievements include accurately capturing facial motions and effectively handling dynamic movements of athletes. Our model particularly excels in generating 9 \fFigure 9: The impact of varying model sizes on performance is notable. Generally, enlarging the model dimensions tends to markedly enhance its effectiveness. Model Layer numbers L Hidden size D SSM dimension N Param Matten-S 12 384 16 35M Matten-B 12 768 16 164M Matten-L 24 1024 16 579M Matten-XL 28 1152 16 853M Table 3: Specifics of our model configurations adhere to the setups outlined for various model sizes following the ViT and DiT frameworks. high-quality videos on the UCF101 dataset, an area where many other models frequently falter. This capability underscores our method\u2019s robustness in tackling complex video synthesis challenges. Quantitative results. Tab. 1 presents the quantitative results of each comparative method. Overall, our method surpasses prior works and matches the performance of methods with image-pretrained weights, demonstrating our method\u2019s superiority in video generation. Furthermore, our model attains roughly a 25% reduction in flops compared to Latte, the latest Transformer-based model. Given the abundance of released pre-trained U-Net-based (Stable Diffusion, SDXL) or Transformer-based (DiT, PixArt) image generation models, these U-Net-based or Transformer-based video generation models can leverage these pre-trained models for training. However, there are no released, pre-trained Mamba-based image generation models yet, so our model has to be trained from scratch. We believe that once Mamba-based image generation models become available, they will be of great help in training our Matten. 5 Conclusion This paper proposes a simple diffusion method for video generation, Matten, with the MambaAttention structure as the backbone for generating videos. To explore the quality of Mamba for generating videos, we explore different configurations of the model, including four model variants, time step and category information injection, and model size. Extensive experiments demonstrate that Matten excels in four standard video generation benchmarks and displays impressive scalability."
16
+ }
title_10K/test_title_short_2405.03085v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03085v1",
3
+ "title": "Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation",
4
+ "abstract": "Large Language Models (LLMs) have made significant strides in information\nacquisition. However, their overreliance on potentially flawed parametric\nknowledge leads to hallucinations and inaccuracies, particularly when handling\nlong-tail, domain-specific queries. Retrieval Augmented Generation (RAG)\naddresses this limitation by incorporating external, non-parametric knowledge.\nNevertheless, the retrieved long-context documents often contain noisy,\nirrelevant information alongside vital knowledge, negatively diluting LLMs'\nattention. Inspired by the supportive role of essential concepts in\nindividuals' reading comprehension, we propose a novel concept-based RAG\nframework with the Abstract Meaning Representation (AMR)-based concept\ndistillation algorithm. The proposed algorithm compresses the cluttered raw\nretrieved documents into a compact set of crucial concepts distilled from the\ninformative nodes of AMR by referring to reliable linguistic features. The\nconcepts explicitly constrain LLMs to focus solely on vital information in the\ninference process. We conduct extensive experiments on open-domain\nquestion-answering datasets to empirically evaluate the proposed method's\neffectiveness. The results indicate that the concept-based RAG framework\noutperforms other baseline methods, particularly as the number of supporting\ndocuments increases, while also exhibiting robustness across various backbone\nLLMs. This emphasizes the distilled concepts are informative for augmenting the\nRAG process by filtering out interference information. To the best of our\nknowledge, this is the first work introducing AMR to enhance the RAG,\npresenting a potential solution to augment inference performance with\nsemantic-based context compression.",
5
+ "authors": "Kaize Shi, Xueyao Sun, Qing Li, Guandong Xu",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
14
+ "gt": "Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation",
15
+ "main_content": "Introduction Large Language Models (LLMs) have emerged as indispensable tools for daily information acquisition, owing to their extensive knowledge base and ability to fulfil diverse user instructions [6, 47, 1]. By leveraging large-scale pre-training on massive datasets, LLMs memorize vast amounts of knowledge within their parameters as internal memory, known as parametric knowledge [33]. However, the presence of outdated or incorrect knowledge within internal memory can lead to hallucinations, hindering the performance of LLMs\u2019 inferencing process [46]. This limitation is particularly pronounced when handling long-tail knowledge for domain-specific or highly specialized queries, as the inherent difficulty in memorizing rare entities persists even in the most robust models. Consequently, the overreliance on potentially flawed parametric knowledge can significantly interfere with the reliability of LLMs\u2019 outputs, especially in scenarios with fine-grained knowledge requirements [58, 36]. Retrieval Augmented Generation (RAG) employs additional retrievers to augment LLMs with external, non-parametric knowledge, effectively expanding their internal knowledge boundaries [27, 14]. This Preprint. Under review. arXiv:2405.03085v1 [cs.CL] 6 May 2024 \fallows LLMs to access up-to-date, query-focused information that may not be adequately memorized within their parametric memory to alleviate the aforementioned limitations [24]. In contrast to finetuning by updating the model parameters, RAG preserves pre-trained knowledge while dynamically incorporating relevant external context. This paradigm offers greater flexibility and scalability, as the retrievers can be easily plug-and-play without modifying the underlying language model\u2019s parameters, thus circumventing complex computational hurdles [17, 16]. However, RAG is easily confused when dealing with long contextual retrieved support documents, which often consist of multiple shreds of evidence for providing vital knowledgeable context but are also accompanied by noisy and irrelevant information [56]. The distracting contexts can dilute the LLMs\u2019 attention and adversely affect their performance with misrepresentation [30, 25]. Compressing lengthy contexts to distil vital knowledge is crucial for enhancing LLMs and ensuring factually consistent responses in the RAG process. Figure 1: The examples of concept-based RAG1. Numerous studies have demonstrated that individuals tend to directly search for key concepts when reading long documents as the brain will complete the remaining details based on prior knowledge, expectations, background, and motivations [15, 22]. This selective attention to critical information allows ignoring redundant details and rearranging the text informatively [51]. As illustrated in Fig. 1, given only the key concepts of the question-related supporting documents that still enable us to grasp the crucial semantics. LLMs parameterize massive common knowledge, enabling them to exhibit a similar ability in context understanding even when the word or character-level information is disrupted [43, 7]. This provides the possibility of whether LLMs can comprehend scenarios solely based on discrete informative concepts. Linguistic features, such as semantic and syntactic, have significantly improved the interpretability, controllability, and diversity of Natural Language Generation (NLG) [28]. 
Language models can implicitly discover these features during pre-training to ensure the logic of the generated text [21]. It has been demonstrated that explicitly leveraging linguistic features for downstream tasks is beneficial, as it refactors the source documents into concise representations that reduce entropy by focusing on the critical information, thereby aiding in a comprehensive understanding of the described scenarios [41, 48, 44, 28, 23, 55]. This advantage enables the stable linguistic features to reliably assist context understanding. Inspired by the aforementioned insights, we propose enhancing RAG\u2019s performance with the crucial concepts distilled from the raw retrieved supporting documents. To effectively capture the informative concepts, we introduce Abstract Meaning Representation (AMR), a semantic formalism that encodes the meaning of serialized texts by a rooted, directed, labelled, acyclic graph [3]. Compared to other linguistic representations, AMR prioritizes semantic consistency among concepts carried by nodes when representing sentences, offering the advantage of automatically rectifying surfacelevel variations or understanding abbreviated terms, ensuring the structured concepts represent the underlying meaning to transcend the limitations of linguistic noise [59]. Specifically, we propose the concept-based RAG framework with the AMR-based concept distillation algorithm, which formats the concepts for augmenting LLMs by compressing the lengthy context to concentrate on crucial information exclusively. We empirically experiment on two open-domain Q&A datasets, PopQA [32] and EntityQuestions [40]. The results show that the performance of our method improves significantly as the number of supporting documents increases, outperforming baselines with various compression methods and backbone LLMs. The contributions of this paper can be summarized as follows: \u2022 This paper proposes the concept-based RAG framework that explicitly integrates AMR, a semantic representation, to enable LLMs to focus on essential rather than messy knowledge 1The corresponding complete sentences: [1] The Outfit is a 1973 crime film directed by John Flynn. [2] It stars Robert Duvall, Karen Black, Joe Don Baker and Robert Ryan. [3] Flynn\u2019s screenplay is an adaptation of the novel of the same name by Richard Stark. [4] Two hitmen drive to Eddie Macklin\u2019s house to assassinate him as he builds a brick wall in his backyard. 2 \fwhen processing long-context retrieved supporting documents. To the best of our knowledge, this is the first research introducing AMR to enhance RAG for more reliable inference. \u2022 We propose an AMR-based concept distillation algorithm, which compresses long-context raw supporting documents into concepts by formatting the informative nodes. The distilled concepts are more knowledge-centralized than the raw supporting documents, reducing the interference of irrelevant information during the inference process of LLMs. \u2022 We conduct extensive experiments on open-domain Q&A datasets. The results indicate that our framework effectively enhances inference performance as the number of supporting documents increases, outperforming baselines with various context compression methods and backbone LLMs. This demonstrates its applicability in long-context RAG scenarios. 
2 Related Works 2.1 Long-context Understanding The increasing complexity of downstream tasks and the demand for models capable of capturing intricate dependencies have driven significant attention to the long-context understanding of LLMs [37, 19, 53]. One prominent research avenue involves modifying the basic architecture of LLMs. For instance, Dai et al.[11] introduced a segment-level recurrence mechanism with their Transformer-XL model, enabling it to retain longer contextual information than the standard Transformer structure. Similarly, Beltagy et al.[4] extended the self-attention mechanism in their Longformer model to handle longer sequences by introducing a sparse attention pattern, thereby facilitating the efficient processing of documents with thousands of tokens. However, a significant drawback of modifying model architecture is the necessity for complex re-training processes. In contrast, research on prompt compression aims to understand long-token prompts by compressing them into low-dimensional soft prompts [50, 9, 34]. While offering a more efficient alternative to architecture modification, this approach constrains the transferability of learned prompts across various LLMs. Recent research has advanced to a more intuitive level, aiming to comprehensively understand the context by directly expanding the context window or explicit compression. Chen et al.[8] introduced position interpolation to extend the context window of pre-trained LLMs, scaling LLaMA\u2019s context window to 32k tokens with few fine-tuning steps. Ding et al.[12] proposed LongRoPE to extend LLMs\u2019 context window to 2048k tokens while maintaining the performance of the original short context window through a positional and interpolation progressive extension strategy. However, the long context window raises another challenge of diluting core information with redundant data [53]. To address this, Li et al.[29] filtered out irrelevant context with low self-information for compressing the long prompts. Chuang et al.[10] proposed the Nano-Capsulator to compress original prompts into capsule prompts, decreasing inference latency across diverse LLMs. Compression methods can benefit the RAG by allowing LLMs to focus on essential knowledge in supporting documents [54]. 2.2 Linguistics-augmented NLG Incorporating linguistic principles into LLMs has shown promise in improving the coherence and semantic fidelity of generated text [55]. Augmentation techniques like syntactic trees [35] and lexical patterns [28] assist in linguistic feature injection, enabling language models to generate more faithful text. Ahmed et al. [2] proposed automatic semantic augmentation of prompts to enhance LLMs with tagged facts, resulting in improved code summarization performance. Zhou et al. [60] introduced InstructCTG, a framework for controlling LLMs\u2019 generation based on syntax constraints, facilitating flexibility and adaptation to new conditions without complex model modification. LLMs can be explicitly guided by leveraging linguistic insights to mitigate biases inherent in parameterized-only approaches, hereby enhancing performance in tasks demanding strict factual consistency. Abstract Meaning Representation (AMR) has proven its efficacy in enhancing downstream generation tasks by providing a structured semantic representation that encapsulates static concepts [18]. Frisoni et al. [13] integrated AMR with pre-trained language models to enhance biomedical summarization by capturing inter-entity relations. Ribeiro et al. 
[38] employed AMR to improve factuality evaluation in abstractive summarization by identifying content verifiability errors and subsentence-level factual inconsistencies. Shi et al. [42] proposed AMR-TST, which generates fluent and reliable texts with the target style by optimizing core concept nodes. Jangra et al. [20] preserved style-agnostic content 3 \fwhile generating transferred text by utilizing AMR as an intermediate representation. These studies illustrate AMR\u2019s advantages in capturing essential concepts containing informative linguistic features. 3 Method 3.1 Concept-based RAG Framework This section introduces the proposed concept-based RAG framework for inference utilising the concepts distilled from the raw supporting documents. The overview of the framework is in Fig. 2. Figure 2: The overview of the concept-based RAG framework, which consists of three main components: (a) information retrieval, (b) concept distillation, and (c) concept-based inference. Given an input question Q, the (a) information retrieval component aims to utilize a retriever to return the top-K knowledgeable supporting documents D = {D1, ..., DK} relevant to Q from sources such as Wikipedia or other information repositories. At this stage, the retriever\u2019s performance significantly influences the resulting answer set A = {A1, ..., AM} [33, 14]. However, the retriever\u2019s performance is beyond this paper\u2019s scope. We hypothesize that all retrieved supporting documents D contain the correct answer corresponding to Q, expressed as a proposition: \u2200Dk \u2208D, \u2203Am \u2208A, Am \u2286Dk. The (b) concept distillation component is devised to format the concept C from the retrieved supporting document D by the proposed AMR-based concept distillation algorithm. This algorithm converts the supporting documents from continuous sequences to discrete concepts formatted from the AMR graph, denoted as G. Further details of this algorithm will be elucidated in the subsequent section. After obtaining the distilled concept C, the (c) concept-based inference component proceeds to integrate it with various backbone LLMs to derive answers A using a faithful-intensive prompt template as follows: [Refer to the following facts to answer the question. Facts: C. Question: Q]. The intensity of prompts has been demonstrated to influence LLMs\u2019 adherence to knowledge from internal memory and retrieved documents [52]. Since our hypothesis is that the retrieved documents contain correct answers, we encourage the LLMs to leverage the knowledge encapsulated in C when responding to queries. This strategy helps minimize potential conflicts caused by their memorized parametric knowledge. To achieve this objective, we designate the concept as a \"fact\" within the instructional prompt, explicitly delineating a delimited sandbox for LLMs to presuppose the absolute correctness of the knowledge conveyed by C. This non-parametric knowledge can seamlessly integrate into LLMs in a plug-and-play manner. The overarching framework can be represented as Eq. 1. P(A|Q) = P(A|C, Q)P(C|D, Q)P(D|Q). (1) 3.2 AMR-based Concept Distillation Abstract Meaning Representation (AMR) serves as a logical formal semantic structure proficient in encapsulating common-sense knowledge necessary for representing events, time, participants, and other elements within serialized texts [39]. 
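As a rough illustration of the factorization in Eq. 1 (retrieve, distill, infer), the pipeline might be wired together as below. The retriever, AMR parser, distillation routine, and LLM call are hypothetical placeholders; only the prompt template mirrors the one quoted above.
```python
def build_prompt(concepts, question):
    """Faithful-intensive template from Sec. 3.1, with distilled concepts as 'facts'."""
    facts = "; ".join(concepts)
    return (f"Refer to the following facts to answer the question. "
            f"Facts: {facts}. Question: {question}")

def concept_based_rag(question, retriever, amr_parser, distill, llm, top_k=5):
    """P(A|Q) = P(A|C,Q) P(C|D,Q) P(D|Q): retrieve D, distill C, then infer A."""
    documents = retriever(question, top_k)        # top-K supporting documents D
    concepts = []
    for doc in documents:
        graph = amr_parser(doc)                   # AMR graph G of the document
        concepts.extend(distill(graph))           # concept set C (Algorithm 1)
    return llm(build_prompt(concepts, question))  # answer A
```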
Given a supporting document Dk \u2208D, the AMR parser is utilized to parse Dk into the corresponding AMR graph G =< N, E >, where C represents the nodes for concepts and E denotes the edges for the correlation relationships. In this context, we utilize a 4 \fmBart-based [31] parser2 trained on the AMR 3.0 corpus3 to address potential multilingual concerns. The detailed illustration of the AMR graph parsing is depicted in Table A1. Algorithm 1: Concept Distillation Input :AMR Graph (G) Output :concept (C) 1 Function Concept_Distillation(G): 2 concept \u2190[], role \u2190[]; 3 for Gsntn in SplitSnt (G) do 4 for N in DFS(Gsntn) do 5 if IsRole(N) then 6 if IsName(N) then 7 AppendRole(HandleName(N)) 8 if IsWiki(N) then 9 AppendRole(HandleWiki(N)) 10 if IsDate(N) then 11 AppendRole(HandleDate(N)) 12 else 13 if role is not None then 14 AppendConcept(HandleRole(role)); 15 role \u2190[]; 16 AppendConcept(N); 17 if (N is Last) and (role is not None) then repeat :Algorithm.Line 5-11 18 AppendConcept(HandleRole(role)); 19 concept \u2190ConceptFormat (concept); 20 concept \u2190ConceptBacktrace (concept); 21 return C \u2190concept We propose the concept distillation algorithm to format the concepts represented in G, as described in Algorithm 1. The supporting document Dk encompasses multiple sentences (sntn), and the AMR parser can structurally parse Dk into a pre-defined multi-sentence structure. The SplitSnt(\u00b7) function is designed to partition G and organize the resulting sentence-based sub-graphs according to the sequential order. Notably, we simplify G by disregarding the agent and patient of the concepts, i.e., the edges denoting relations between the connected concepts (Frame args, ARGX). Consequently, G is streamlined into a unidirectional connecting structure. Leveraging this structure, we perform a Depth First Search, DFS(\u00b7) on the N of G to traverse the concepts while maintaining the relative positional correlation of adjacent nodes. This approach emphasizes the connection as it exists in the preceding sequential representation, and the process is elaborated in Fig. A1. Previous research has investigated the influence of context order on LLMs [30]. We delve into the various traversal methods for testing their potential impact in Section D. The AMR defines a set of roles to meticulously delineate the semantic fabric of sentences. This paper underscores the meticulous handling of three roles, namely :name, :wiki, and date-entity, employing IsRole(\u00b7) to identify the predefined roles comprehensively. The :name role signifies a property node within the AMR graph, signifying entities such as individuals, organizations, or geographic locations. In instances where the concept expressed by :name spans multiple words, the parsing process of AMR decomposes each word within the :name into predicate roles (:op), thereby dispersing the holistic concept across multiple nodes. During the DFS(\u00b7) traversal process, fragmented nodes can potentially confuse LLMs due to incomplete meaning expressions. To maintain the integrity of concepts carried by :name, we introduce HandleName(\u00b7), organizing predicates in a stack structure. The :wiki role provides reliable external concept references sourced from Wikipedia. For standardizing concepts\u2019 diverse expressions referring to the same named entities, we utilize the HandleWiki (\u00b7) function, which aligns the concepts with the corresponding definitions in Wikipedia. 
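A possible Python rendering of the traversal in Algorithm 1 is sketched below. The graph interface (`split_sentences`, `dfs_nodes`, `node.role`, `node.value`) is assumed for illustration and is not the authors' released code, and the per-role handling is collapsed into a single buffer for brevity.
```python
def distill_concepts(graph, split_sentences, dfs_nodes):
    """Simplified reading of Algorithm 1: traverse AMR nodes in DFS order,
    merge multi-part :name / :wiki / date-entity roles into single concepts,
    and append the remaining instance nodes as they appear."""
    concepts = []
    for sub_graph in split_sentences(graph):        # SplitSnt: per-sentence sub-graphs
        pending_role = []                           # buffers parts of a special role
        for node in dfs_nodes(sub_graph):           # DFS keeps adjacent-node order
            if node.role in (":name", ":wiki", ":date-entity"):
                pending_role.append(node.value)     # HandleName / HandleWiki / HandleDate
            else:
                if pending_role:                    # flush the buffered role as one concept
                    concepts.append(" ".join(pending_role))
                    pending_role = []
                concepts.append(node.value)         # AppendConcept for ordinary nodes
        if pending_role:                            # a role closing the sentence
            concepts.append(" ".join(pending_role))
    # ConceptFormat / ConceptBacktrace (canonical-node and IDF filtering, surface-form
    # backtracking against the source document) would post-process `concepts` here.
    return concepts
```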
If the concept in :name differs from :wiki, we designate the concept expressed by this node as :wiki to avoid semantic disambiguation. In addition, there is a date-entity role that depicts temporal concepts. In our algorithm, we specifically manage the roles :year, :month, and :day by HandleDate (\u00b7). This function consolidates roles under the same date-entity, forming concepts like \"19 04 2024\" with numerical months translated into textual representations, \"19 April 2024\", for clear expression. AMR incorporates special numerical annotations for certain parsing nodes, such as work-01, where the number appended to the word indicates different meanings of the same word in distinct contexts as defined in OntoNotes [49]. In the RAG scenario, we provide 2https://github.com/BramVanroy/multilingual-text-to-amr 3https://catalog.ldc.upenn.edu/LDC2020T02 5 \fLLMs with supporting documents comprising a set of concepts. This suggests that concepts are understood in relation to relevant contexts rather than in isolation. Therefore, the proposed conceptbased RAG framework depends on the contextual learning capability of LLMs to distinguish between polysemous concepts, instead of relying on intricate semantic references. The nodes belonging to the aforementioned roles are integrated into the preliminary concept set with the HandleRole(\u00b7), while the AppendConcept(\u00b7) directly integrate the remaining nodes based on the corresponding instances. The structure of AMR comprises a collection of canonical nodes (city-district, market-sector, etc.) designed to enforce knowledge and prevent hallucination regarding entity types. However, in the concept-based RAG scenario, the inference process isn\u2019t directly based on AMR but distilled concepts. The auxiliary semantics embedded within these nodes, which are absent in the source supporting documents, may dilute the essence of the core concept. To address this concern, we employ ConceptFormat(\u00b7) to filter out these nodes to reduce the potential interference. Additionally, frequently occurring concepts are filtered out based on their Inverse Document Frequency (IDF). Furthermore, the selection of representations in AMR is based on the principle of abstraction and generalization rather than the exact lexical items. This representation may mislead the nodes into ignoring variations such as tense, which are informative for concept-based RAG without reference annotations. To mitigate this, we develop the ConceptBacktrace(\u00b7) function to maintain consistency with concepts expressed in the source supporting documents. This function facilitates the backtracking of formatted concepts by incorporating representations from the supporting documents, ensuring they closely adhere to the original semantics without deviation. Subsequently, the backtraced concepts serve as the finalized concepts C, providing conceptual support for LLMs in RAG inference. 4 Experiments 4.1 Datasets We conducted extensive experiments to verify the efficacy of the concept-based RAG framework on open-domain Q&A datasets: PopQA [32] and EntityQuestions [40]. Each dataset includes a label (\"hasanswer\") for every supporting document, indicating whether it contains the answer to the associated question. To ensure a focused evaluation, we screened out the \"<Q-A-D>\" pairs where hasanswer=True. This selection criterion accommodates scenarios where all retrieved documents contribute positively to answering questions, thus mitigating interference from extraneous factors. 
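The IDF-based screening of frequently occurring concepts mentioned for ConceptFormat(·) could, for instance, be realized as below; the corpus granularity (one concept set per supporting document) and the threshold are assumptions of this sketch rather than values taken from the paper.
```python
import math
from collections import Counter

def idf_filter(concepts, document_concept_sets, min_idf=1.0):
    """Drop concepts whose inverse document frequency over the K supporting
    documents falls below a threshold, i.e. concepts occurring almost everywhere.
    Both the granularity and min_idf are illustrative assumptions."""
    n_docs = len(document_concept_sets)
    doc_freq = Counter()
    for concept_set in document_concept_sets:
        doc_freq.update(set(concept_set))
    keep = []
    for concept in concepts:
        idf = math.log(n_docs / (1.0 + doc_freq[concept]))
        if idf >= min_idf:
            keep.append(concept)
    return keep
```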
The experiments involved verifying the LLMs\u2019 inference performance with different K, which denotes the amount of supporting documents to Q. For the PopQA dataset, we filtered out questions with subject entities having monthly Wikipedia pageviews (spop) \u2265500. This step excludes frequently accessed entities, preserving the dataset focused on long-tail knowledge. This approach serves the dual purpose of preventing data contamination and encouraging LLMs to rely more on retrieved documents than memorized knowledge, mitigating potential knowledge conflicts in the RAG process. The statistical results of the number of the selected pairs with different K settings are in Table 1. Table 1: Statistical results of the number of screened-out <Q-A-D> pairs from the datasets. K= 1 2 3 4 5 6 7 8 9 10 PopQA [32] 738 1307 422 262 161 151 108 79 66 70 EntityQuestions [40] 1671 1127 670 454 335 264 196 166 163 103 4.2 Baselines The baseline evaluations encompass two aspects: (1) exploration of diverse backbone LLMs, and (2) experimentation with different context compression methods. Specifically, we consider various mainstream LLMs as backbones, including GPT-Neo-1.3B, GPT-Neo-2.7B [5], OPT-1.3b, OPT2.7b [57], bloom-560m, bloom-1b1, bloom-1b7, bloom-3b [26], LLaMA-2-7b-chat-hf, LLaMA-213b-chat-hf [47]. The backbone LLMs coupled with the original supporting documents serve as the Vanilla methods. Regarding the alternative aspect, we explore the three context compression methods: context keywords extraction, context summarization, and Selective Context (SelCon) [29]. These methods aim to validate the efficacy of context compression while preserving essential information for inference, emphasizing discrete key features, fluent representation, and non-redundant information. 6 \fInspired by Chuang et al. [10], we employ a novel open-access LLM, LLaMA-2-13b-chat-hf [47], for context keyword extraction and summarization. This process involves extracting key phrases or terms from the context and generating a concise summary of the provided content, constrained by prompts of \"[Generate a short summary of the following content.]\" and \"[Extract a few keywords from the following content.]\". The detailed prompts are available in Appendix B. The SelCon enhances the efficiency of LLMs\u2019 inference by identifying and eliminating redundant content from the source context for compression. The reduction ratio of the SelCon compared here is set to 0.5. These baseline settings effectively demonstrate the comprehensive advantages of the proposed algorithm in capturing informative concepts when compared to various alternative compression techniques, whether generative-based or semantic-based methods. 4.3 Evaluation Metrics We employ two metrics to evaluate the concept-based RAG: accuracy (Acc.) and integration (Intg.). Accuracy (Acc.) is determined by assessing whether any answer A matches any of the gold answers corresponding to the question Q. The integration metric (Intg.) is designed to comprehensively evaluate the performance across various K of the retrieved supporting documents D. Specifically, the Intg. signifies the area beneath the accuracy curve of each model plotted against the X-axis (K). The calculation of Intg. is as Eq. 2, where K \u2208[xs, xe], and xs and xe represent the minimum and maximum number of supporting documents respectively. A higher value of Intg. indicates superior overall performance. 
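The Intg. metric, formalized in Eq. 2 in the next paragraph, reduces to a trapezoidal sum over consecutive integer values of K; a minimal sketch with hypothetical accuracy values:
```python
def integration_metric(acc, k_start, k_end):
    """Intg. = area under Acc(K) over [k_start, k_end], i.e. the trapezoidal
    rule of Eq. 2 with unit spacing between consecutive K."""
    total = 0.0
    for k in range(k_start + 1, k_end + 1):
        total += 0.5 * (acc[k] + acc[k - 1])
    return total

# hypothetical accuracies (in percent) for K = 1..10, for illustration only
acc = dict(zip(range(1, 11), [80, 78, 75, 74, 72, 70, 69, 68, 66, 65]))
print(integration_metric(acc, 1, 10))   # normal interval, K in [1, 10]
print(integration_metric(acc, 6, 10))   # longer interval, K in [6, 10]
```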
Given that the proposed framework aims to enhance long-context RAG, we segment the evaluation of Intg. into two distinct intervals: normal interval (In = [1, 10], K \u2208In) and longer interval (Il = [6, 10], K \u2208Il). This division is intended to emphasize the effectiveness of the concept-based RAG framework, particularly in scenarios involving longer contexts. Intg. = Z xe xs Acc(x) dx \u22481 2 xe\u2212xs+1 X i=1 (xi \u2212xi\u22121) [Acc(xi) + Acc(xi\u22121)] (2) 5 Results and Analysis The evaluation results for the PopQA and EntityQuestion datasets are depicted in Fig. 3 and Fig. 4, respectively, providing graphical trends of Acc. as K increases intuitively. Furthermore, Table 2 and Table 3 present quantitative results of Intg. for the datasets. These tables include the calculation of \u2206, quantifying the improvement achieved by our proposed method over the Vanilla methods. Specifically, \u2206is computed as follows: \u2206= Intg.ours \u2212Intg.vanilla. The detailed quantitative evaluation results of Acc. are provided in Table A3 and Table A4. Section E and section F examine compression ratio and inference latency comparison to demonstrate the advantages of concept-compressed contexts. Figure 3: The evaluation results of the Acc. \u2191trends and Intg. \u2191on the PopQA dataset. The vertical axis represents Acc., and the horizontal axis represents the number of supporting documents, K. The polyline reflects the changing trend of Acc. with different K, and the under area is Intg. A key intuitive finding reflected by Fig. 3 and Fig. 4 is the superior performance of our method in long-context scenarios, particularly evident when K is high. As K increases, especially within 7 \fFigure 4: The evaluation results of the Acc. \u2191trends and Intg. \u2191on the EntityQuestion dataset. The definitions of the axis and symbols are the same with the Fig. 3. Table 2: The quantitative results of Intg. \u2191for the PopQA dataset, where the full name order of the LLMs is: GPT-Neo-1.3B, GPT-Neo-2.7B, OPT-1.3b, OPT-2.7b, bloom-560m, bloom-1b1, bloom-1b7, bloom-3b, LLaMA-2-chat-7b, LLaMA-2-chat-13b. The best results are in bold, and the second best results are in underlined. The increased and decreased \u2206are marked differently. D K G-1.3 G-2.7 O-1.3 O-2.7 b-560 b-1b1 b-1b7 b-3 L-7 L-13 Vanilla In 620.68 631.39 656.68 687.15 619.86 692.68 707.25 671.88 682.30 672.03 Il 291.08 275.32 300.85 322.23 294.94 325.37 326.29 305.91 337.19 312.62 Keywords In 468.94 484.98 554.67 571.38 502.70 610.69 621.85 600.65 628.78 617.06 Il 257.12 244.24 297.70 305.64 275.39 327.70 338.01 318.37 326.41 315.93 Summary In 517.57 513.37 619.78 575.32 573.95 608.41 637.55 591.12 564.51 553.24 Il 263.14 260.64 316.80 290.50 304.55 313.36 336.20 297.44 291.50 291.39 SelCon In 444.29 524.54 615.78 607.12 423.22 634.81 606.15 625.66 715.90 703.29 Il 237.49 262.78 313.39 323.69 230.20 318.64 306.72 314.07 344.10 332.51 Ours In 625.31 652.71 668.86 688.47 608.31 686.29 698.91 681.22 738.82 716.55 Il 322.37 321.73 329.65 344.31 314.34 347.71 355.52 344.08 357.56 339.38 \u2206 In +4.63 +21.32 +12.18 +1.32 -11.55 -6.93 -8.34 +9.34 +56.52 +44.52 Il +31.29 +46.41 +28.8 +22.08 +19.40 +22.34 +29.23 +38.17 +20.37 +26.76 Table 3: The quantitative results of Intg. \u2191for the EntityQuestions dataset. The LLMs\u2019 order and symbol definitions are the same as Table 2. 
D K G-1.3 G-2.7 O-1.3 O-2.7 b-560 b-1b1 b-1b7 b-3 L-7 L-13 Vanilla In 531.54 605.06 602.52 634.28 488.95 594.88 608.85 619.30 607.22 632.24 Il 247.50 284.47 277.47 299.03 222.99 266.91 284.00 289.26 289.95 287.48 Keywords In 280.76 360.00 403.37 439.73 295.02 428.54 465.15 462.65 584.67 574.61 Il 134.96 167.13 196.04 215.41 143.68 207.59 227.84 223.38 287.84 284.53 Summary In 366.73 406.72 501.51 446.50 388.36 415.61 501.90 435.49 425.70 438.31 Il 179.97 205.02 255.51 210.93 187.75 197.43 257.16 211.83 210.34 222.92 SelCon In 298.49 405.22 471.36 468.18 215.52 460.37 451.41 539.49 623.91 641.01 Il 144.69 195.05 231.76 223.55 108.45 214.94 217.40 261.79 295.33 304.57 Ours In 551.50 618.18 609.88 652.48 483.02 600.72 624.53 621.36 664.18 703.67 Il 267.12 298.74 285.06 303.49 243.55 286.20 295.45 300.29 303.39 320.87 \u2206 In +19.96 +13.12 +7.36 +18.2 -5.93 +5.84 +15.58 +2.06 +56.96 +71.43 Il +19.62 +14.27 +7.59 +4.45 +20.56 +19.29 +11.45 +11.03 +13.44 +33.39 8 \fthe longer context setting (Il), the Acc. of our method consistently outperforms that of various backbone LLMs coupled with other context compression methods. This trend suggests that the concepts distilled by our method are supportive of reducing interference and enabling the LLMs to concentrate on key knowledge. Moreover, the positive values of \u2206in Table 2 and Table 3 for the Il interval further underscore the improvement achieved by our framework over baseline methods when handling longer contexts. This observation emphasizes the effectiveness of the AMR-based concept distillation algorithm in capturing essential semantic information from supporting documents, thereby enabling LLMs to generate more accurate answers even when confronted with messy contexts. When setting the bloom-560m model as the backbone LLMs, an interesting finding is that \u2206exhibits negative trends in the In interval of both datasets, while the SelCon does not perform ideally either. We hypothesize that this is due to the limitation of small-scale models to associate semantic scenarios through discrete concepts, which results in the model\u2019s inability to understand the core information expressed in the compressed supporting documents. Conversely, when coupling advanced LLMs, such as LLaMA-2, the contexts compressed by the proposed method and SelCon exhibit the most significant and second most significant enhancements to the LLMs, respectively. This observation likely arises from these large-scale models\u2019 superior contextual understanding capabilities, which corroborates our hypothesis. Regarding the improvements of \u2206on Il interval of two datasets, our method\u2019s enhancement on the PopQA dataset is more pronounced. This is because PopQA was released recently, and its knowledge is less likely to be memorized by earlier models such as GPT-Neo and OPT. Moreover, the screening of long-tail knowledge further accentuates the unique scenario provided by PopQA, making it an ideal testbed for evaluating context compression methods. The proposed AMR-based concept distillation method demonstrates clear advantages over generative compression methods of keyword extraction and summarization. While these methods utilise the LLMs to generate compressed representations and show competitive results in certain cases, they may inadvertently introduce noise or lose essential details during the compression process. 
Moreover, the generative nature of these methods makes them inherently difficult to control, even when provided with instructions as constraints. Consequently, the generated keywords and summaries may exhibit randomness, potentially deviating from the core concepts conveyed in the original supporting documents. In contrast, our framework leverages the inherent structured semantic representation of AMR to capture the core concepts explicitly. This semantic-level abstraction enables the framework to faithfully format the concepts to provide more reliable and informative support for the RAG process. Compared to the linguistics context compression baseline, SelCon, which identifies and prunes redundant content based on self-information computed at the lexical level, the proposed method based on the semantic level achieves superior results. SelCon\u2019s effectiveness depends on determining the right granularity for redundancy removal, making it sensitive to lexical unit choice. In contrast, our method takes a macro view by focusing on the semantic consistency carried by the AMR structure, making it insensitive to the delicate lexical bias. This characteristic enables it to be a reliable plug-andplay component in various RAG systems dealing with supporting documents containing irrelevant information and potential lexical errors. The robustness of the proposed framework is demonstrated by its consistent performance improvements across various LLMs. The experimental results on both datasets showcase the generalizability of our method, irrespective of the underlying LLM architecture. This finding suggests that the concept-based RAG framework can be effectively coupled with diverse LLMs, making it a versatile solution for enhancing inference performance in long-context scenarios. 6 Conclusion and Future Research This paper introduces a novel concept-based RAG framework that utilizes AMR to distil essential concepts from long-context supporting documents, enabling LLMs to focus on the most supportive knowledge for accurate question-answering efficiently. The proposed AMR-based concept distillation algorithm systematically traverses the AMR graph to format key concept nodes with informative semantic features, transforming redundant supporting documents into a concise concept set. The proposed framework significantly enhances RAG performance compared with baselines comprising various backbone LLMs and context compression methods. To the best of our knowledge, this is the first work to augment RAG with AMR, offering a novel direction for integrating reliable structured semantic representations with RAG to handle tasks requiring high fidelity to the knowledge. 9 \fIt has been demonstrated that the LLMs with fewer parameters within the proposed framework can also exhibit comparable or superior performance to larger models in certain cases. Consequently, it is plausible to speculate on the feasibility of employing small-scale LLMs solely equipped with the general natural language understanding capabilities, coupled with comprehensive and informative concept sets, to implement the lightweight Q&A systems. This approach would alleviate the constraints imposed by the computational complexity of large-scale LLMs during their practical application and deployment. Exploring this possibility will be one of the focus of our future research. 10"
16
+ }
title_10K/test_title_short_2405.03108v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03108v1",
3
+ "title": "Impact of Postshock Turbulence on the Radio Spectrum of Radio Relic Shocks in Merging Clusters",
4
+ "abstract": "This study investigates the impact of magnetic turbulence on cosmic ray (CR)\nelectrons through Fermi-II acceleration behind merger-driven shocks in the\nintracluster medium and examines how the ensuing synchrotron radio emission is\ninfluenced by the decay of magnetic energy through dissipation in the postshock\nregion. We adopt simplified models for the momentum diffusion coefficient,\nspecifically considering transit-time-damping resonance with fast-mode waves\nand gyroresonance with Alfv\\'en waves. Utilizing analytic solutions derived\nfrom diffusive shock acceleration theory, at the shock location, we introduce a\nCR spectrum that is either shock-injected or shock-reaccelerated. We then track\nits temporal evolution along the Lagrangian fluid element in the time domain.\nThe resulting CR spectra are mapped onto a spherical shell configuration to\nestimate the surface brightness profile of the model radio relics. Turbulent\nacceleration proves to be a significant factor in delaying the aging of\npostshock CR electrons, while decaying magnetic fields have marginal impacts\ndue to the dominance of inverse Compton cooling over synchrotron cooling.\nHowever, the decay of magnetic fields substantially reduces synchrotron\nradiation. Consequently, the spatial distribution of the postshock magnetic\nfields affects the volume-integrated radio spectrum and its spectral index. We\ndemonstrate that the Mach numbers estimated from the integrated spectral index\ntend to be higher than the actual shock Mach numbers, highlighting the\nnecessity for accurate modeling of postshock magnetic turbulence in\ninterpreting observations of radio relics.",
5
+ "authors": "Hyesung Kang",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "astro-ph.HE",
9
+ "cats": [
10
+ "astro-ph.HE"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Impact of Postshock Turbulence on the Radio Spectrum of Radio Relic Shocks in Merging Clusters",
15
+ "main_content": "Introduction Giant radio relics found in the outskirts of galaxy clusters, such as the Sausage and Toothbrush relics, are thought to result from shocks that occur following the passage of the dark matter (DM) core during major mergers (e.g., van Weeren et al. 2010, 2016; Ha et al. 2018). They are weak quasi-perpendicular shocks with low Mach numbers (Ms \u22723) formed in the weakly magnetized intracluster medium (ICM) (e.g., Kang et al. 2012; Kang 2016; Kang et al. 2017). Diffuse radio emissions originate from cosmic ray (CR) electrons with the Lorentz factor \u03b3 \u223c103 \u2212104, gyrating in microgauss-level magnetic fields. These electrons are believed to be accelerated via diffusive shock acceleration (DSA) (see Brunetti & Jones 2014; van Weeren et al. 2019, for reviews). Alternative scenarios such as adiabatic compression by shocks (Ensslin & Gopal-Krishna 2001; Ensslin & Br\u00fcggen 2002), reacceleration of fossil CR electrons by shocks (Kang et al. 2012; Pinzke et al. 2013), and reacceleration by postshock turbulence (Fujita et al. 2015; Kang 2017) have been considered as well. The DSA theory predicts that the energy spectrum of CR particles, accelerated through the Fermi first-order (Fermi-I) process, follows a power-law distribution, fsh \u221dp\u2212q, where q = 4M 2 s /(M 2 s \u22121) (Bell 1978; Drury 1983). Consequently, this leads to a synchrotron radio spectrum, j\u03bd \u221d\u03bd\u2212\u03b1sh with the so-called \u201cinjection spectral index\u201d, \u03b1sh = (q \u22123)/2, immediately behind the shock. As a result, the Mach numbers of radio relic shocks can be estimated using the relation (e.g., Kang 2015): Mrad,sh = \u00123 + 2\u03b1sh 2\u03b1sh \u22121 \u00131/2 . (1) Alternatively, one can determine the Mach numbers by observing the steepening of the volume-integrated spectrum, J\u03bd \u221d\u03bd\u2212\u03b1int, toward the so-called \u201cintegrated spectral index\", \u03b1int = \u03b1sh +0.5, at high frequencies. This steepening is attributed to synchrotron and inverse-Compton (IC) losses in the postshock region with a constant magnetic field strength, leading to the following relation (e.g., Kang et al. 2017) : Mrad,int = \u0012\u03b1int + 1 \u03b1int \u22121 \u00131/2 . (2) \u00a9 Published under Creative Commons license CC BY-SA 4.0 1 arXiv:2405.03108v1 [astro-ph.HE] 6 May 2024 \fImpact of Postshock Turbulence on Radio Relics However, the transition of the power-law index from \u03b1sh to \u03b1int takes place gradually over the broad frequency range of \u223c0.1 \u221210 GHz, depending on the shock age and postshock magnetic field strength. Furthermore, the volume-integrated emission spectrum could deviate from the simple DSA powerlaw in the case of the evolving shock dynamics and nonuniform magnetic field strength in the postshock regions, as suggested by Kang (2015). Thus, the estimation of Mrad,int of observed radio relics tend to be higher than Mrad,sh (e.g., Hoang et al. 2018). On the other hand, Mach numbers inferred from X-ray observations, MX, are sometimes found to be smaller than Mrad, i.e., MX \u2272Mrad (e.g., Akamatsu & Kawahara 2013; van Weeren et al. 2019). This discrepancy is recognized as an unsolved challenge in understanding the origin of radio relics. Wittor et al. (2021) compiled values of Mrad and MX for observed radio relics available in the literature, confirming the Mach number discrepancy (refer to their Figure 7). 
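Equations 1 and 2 invert the DSA relations to recover shock Mach numbers from the injection and integrated radio spectral indices; a small sketch, with example indices chosen purely for illustration:
```python
import math

def mach_from_injection_index(alpha_sh):
    """Eq. 1: M_rad,sh = [(3 + 2*alpha_sh) / (2*alpha_sh - 1)]**0.5,
    following from q = 4 M_s^2 / (M_s^2 - 1) and alpha_sh = (q - 3) / 2."""
    return math.sqrt((3.0 + 2.0 * alpha_sh) / (2.0 * alpha_sh - 1.0))

def mach_from_integrated_index(alpha_int):
    """Eq. 2: M_rad,int = [(alpha_int + 1) / (alpha_int - 1)]**0.5."""
    return math.sqrt((alpha_int + 1.0) / (alpha_int - 1.0))

# illustrative indices: alpha_sh = 0.7 and the ideal steepened value alpha_int = alpha_sh + 0.5
print(mach_from_injection_index(0.7))    # ~3.32
print(mach_from_integrated_index(1.2))   # ~3.32, identical in the idealized DSA limit
```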
By employing cosmological structure formation simulations, the authors confirmed the prevailing notion that radio flux is dominated by contributions from high Mach number shocks among the ensemble associated with the particular relic, whereas Xray emission predominantly originates from low Mach number shocks (see also Hong et al. 2015; Roh et al. 2019; Botteon et al. 2020; Dom\u00ednguez-Fern\u00e1ndez et al. 2021). Additionally, several potential solutions have been suggested to address this puzzle. These include the reacceleration of preexisting fossil CR electrons with a flat spectrum (e.g., Pinzke et al. 2013; Kang 2016; Kang et al. 2017) and acceleration by multiple shocks with different Mach numbers formed in the turbulent ICM (e.g., Inchingolo et al. 2022). As clusters form through numerous merging episodes of smaller subclusters, the gas flows within the ICM inherently become turbulent (Miniati 2015; Poter et al. 2015; Vazza et al. 2017). During active mergers, the ICM turbulence becomes transonic, and the largest turbulent eddies (L \u223c100\u2212500 kpc) undergo decay into smaller ones. This process cascades into magnetohydrodynamic (MHD) turbulence and further down to kinetic turbulence through plasma instabilities, as comprehensively reviewed by Brunetti & Jones (2014). Additionally, vorticity generated behind curved ICM shocks is known to produce MHD turbulence and amplify magnetic fields in the postshock region (Ryu et al. 2008). On the other hand, numerical simulations of non-driven, decaying MHD turbulence indicate that turbulent energy dissipates within one eddy turnover time, tdec \u223c\u03bbd/vturb, where \u03bbd represents the largest driving scale, and vturb is the mean turbulent velocity (e.g. MacLow et el. 1998; MacLow 1999; Cho & Lazarizn 2003). Consequently, behind typical merger shocks, the estimated turbulent decay timescale is approximately tdec \u223cL/u2 \u223c(100 kpc)/(103 km s\u22121) \u223c0.1 Gyr, where L is the largest eddy size of the induced turbulence and u2 is the characteristic postshock flow speed1. Moreover, the interaction of preexisting turbulence with shock waves can induce corrugation of the shock front, thereby 1Throughout the paper, the subscript \u20182\u2019 is used for the postshock quantities. enhancing postshock turbulence on plasma kinetic scales through processes such as shock compression and turbulent dynamo mechanisms (Guo & Giacalone 2015; Trotta et al. 2023). Hybrid kinetic simulations of similar setups also indicate that postshock magnetic fluctuations exhibit a Kolmogorov spectrum and undergo substantial decay downstream due to dissipation (Nakanotani et al. 2022). Although these studies examined the plasma processes and wave-particle interactions on kinetic scales in a low beta (\u03b2 = PB/Pg \u223c1) plasma relevant for interplanetary shocks, we expect the same processes to operate similarly in the postshock region of ICM shocks formed in high beta (\u03b2 \u223c100) plasma as well. The amplification of postshock magnetic fields and the subsequent decay of MHD turbulence affects the radio spectrum of relic shocks. First, CR electrons can be further energized via Fermi second-order (Fermi-II) acceleration primarily through the interaction with the compressible fast mode waves via the transit-time-damping (TTD) resonance (Brunetti & Lazarian 2007; Brunetti & Jones 2014), and Alfv\u00e9n waves via gyroresonance (Brunetti et al. 2004; Fujita et al. 2015). 
Additionally, the synchrotron emission scales with the magnetic field strength as j\u03bd \u221dB(q\u22121)/2, typically with q \u223c4.0 \u22125.0, so the decay of magnetic fields B significantly reduces synchrotron radiation emission. In this study, we explore the impact of turbulent acceleration (TA) on the evolution of the CR electron spectrum in the postshock flow, considering the decay of the magnetic fluctuations. The numerical procedure is outlined as follows: 1. We incorporate Fermi-II acceleration of CR electrons, employing simplified models for the momentum diffusion coefficient, Dpp(p). This accounts for TTD resonance with fast mode waves and gyroresonance with Alfv\u00e9n waves. 2. We track the time evolution of the CR electron population, f(p, t), by following the Lagrangian fluid element through advection in the postshock region. This is accomplished by solving the Fokker-Planck equation in the time domain. In a one-dimensional (1D) planar shock configuration, the time integration can be transformed into the spatial profile of f(p, x) through the relation x = u2t, where u2 is a constant postshock speed and t is the advection time since the shock passage. 3. The synchrotron emissivity, j\u03bd(t), is calculated, utilizing the information for f(p, t) and B(t). 4. The surface brightness profile, I\u03bd(d), is estimated as a function of the distance d from the relic edge projected onto the sky plane. This is obtained by adopting a coconut-shell-shaped spherical surface, as illustrated in Figure 1. In the next section, we provide detailed explanations of the numerical methods and working models employed to simulate physical processes. In Section 3, we apply our approach to various examples. Specifically, we focus on scenarios involving the injection and the reacceleration of CR electrons by weak shocks with Mach numbers 2.3 \u2272M \u22723. Additionally, \fImpact of Postshock Turbulence on Radio Relics Figure 1. Schematic diagrams elucidate our model assumptions. (a) To model the surface of a radio relic, we employ a spherical, coconutshell-shaped structure with an axial ratio of a/b \u22731 and a thickness of lcool = u2tcool. Here, u2 and tcool represent the advection speed and cooling timescale in the post-shock flow, respectively. Radio relics become prominent after the passage of the dark matter core during a major merger. (b) The surface brightness, I\u03bd(d), is estimated by integrating the volume emissivity, j\u03bd(x), with x = u2t, along a line of sight, where d is the distance from the relic edge projected onto the sky plane. I\u03bd(d) depends on the CRe density, the magnetic field strength, B(x), and the momentum diffusion coefficient, Dpp(x), which decay with a timescale of tdec. Here, B2 and Dpp,2 are the immediate postshock values. The inset panel illustrates how the spatial profile of I\u03bd(d) depends on the decay timescale, tdec, of magnetic turbulence. Here, turbulent acceleration is ignored (Dpp = 0), but synchrotron and inverse-Compton losses are included. The shell radius is Rs = 1 Mpc, and the extension angles are \u03c81 = \u03c82 = 15\u25e6. we estimate the resulting radio emission spectra in an idealized setup. A brief summary of our findings will be presented in Section 4. 2. Physical Models and Numerical Method Here, we consider merger-driven shocks that become radioluminous subsequent to the DM core passage in a major binary merger, as depicted in Figure 1(a) (Ha et al. 2018). 
Although the shock surface evolves as a spherical shell expanding radially, we treat its dynamics as a planar shock with a constant postshock speed. This simplification is justified because the thickness of the postshock volume is on the order of lcool \u2248u2tcool \u223c0.1 Mpc, which is much smaller than the shock radius, Rs \u223c1 \u22121.5 Mpc. Furthermore, the cooing timescale, tcool \u223c0.1 Gyr, is shorter than the typical dynamical timescales of clusters, tdyn \u223c1 Gyr. In such a scenario, the time integration can be transformed into the spatial profile using the relation x = u2t. 2.1. Postshock Magnetic Turbulence As outlined in the introduction, downstream of the shock front, CR electrons further acquire energy through TTD resonance with compressive fast-mode waves and gyroresonant scattering off Alfv\u00e9n waves. These waves might be present in small-scale, kinetic magnetic fluctuations that are cascaded down from MHD-scale turbulence (Brunetti & Jones 2014) or excited by plasma microinstabilities in the shock transition zone (Guo & Giacalone 2015; Trotta et al. 2023). However, the microphysics governing the excitation and evolution of MHD/plasma waves and Fermi-II acceleration of CR electrons in the high beta ICM plasmas are quite complex and relatively underexplored (e.g. Lazarian et al. 2012). This makes it hard to formulate accurate models for the momentum diffusion coefficient, Dpp. The TA timescale due to the interaction with fast modes can be related with Dpp as Dpp,f p2 \u2248 4 \u03c4pp(p), (3) where, in general, \u03c4pp(p) depends on the nature and amplitude of magnetic fluctuations, \u03b4B(x, t), in the flow. As in many previous studies (e.g. Kang et al. 2017), we take a practical approach in which a constant value, \u03c4pp = 0.1 Gyr is assumed since the detail properties of the postshock turbulence are not well constrained. For instance, using cosmological structure formation simulations, Miniati (2015) found that in the ICM typically tpp \u223c0.1\u22121 Gyr due to enhanced turbulence during the active phase of major mergers. Based on the work of Fujita et al. (2015), we adopt Dpp due to gyro-resonance with Alfv\u00e9n waves as follows: Dpp,A p2 \u223c1 9( v2 A Dxx ) \u223c1 3(vA c )( vA lmfp ) \u223c1 3(vA c )( vA lmfp,c )\u03b7\u22121 m (p/p0)qK\u22122 (4) where vA = B/\u221a4\u03c0\u03c1 is the Alfv\u00e9n speed, Dxx \u223cclmfp/3 is \fImpact of Postshock Turbulence on Radio Relics Figure 2. Cooling timescales and TA timescales, all in units of 109 years: \u03c4Coul (blue) for Coulomb losses, \u03c4Syn+IC (red) for synchrotron and inverse Compton losses, \u03c4cool (black) for the total losses, \u03c4Dpf (magenta) due to fast mode waves, and \u03c4DpA (cyan) due to Alfv\u00e9n mode waves. Representative cases are considered with the following parameters: gas density n = 10\u22124 cm\u22123, magnetic field strength B = 2 \u00b5G, redshift zr = 0.2, and reduction factor, \u03b7m = 5 \u00d7 10\u22124. the spatial diffusion coefficient, \u03b7m \u223c5 \u00d7 10\u22124 is a reduction factor for waves on small kinetic scales, and p0 = 10\u22123mec is the reference momentum. The slope qK = 5/3 is adopted since Alfv\u00e9n modes of decaying MHD turbulence are expected to have a Kolmogorov spectrum (e.g., Cho & Lazarizn 2003). As a result, Dpp,A/p2 \u221dp\u22121/3B2, so TA becomes increasingly inefficient at higher momentum. In addition, Dpp,A decreases as magnetic fluctuations decay in the postshock flow. 
The Coulomb mean free path for thermal electrons can be estimated as lmfp,c \u223c174 kpc(ln \u039b 40 )\u22121( T 108K )2( n 10\u22124cm\u22123 )\u22121, (5) where ln \u039b \u223c40 is the Coulomb logarithm (Brunetti & Lazarian 2007). Figure 2 shows the cooling timescales for Coulomb collisions, \u03c4Coul, and synchrotron plus IC losses, \u03c4sync+IC, for a representative set of parameters for the ICM, i.e., n = 10\u22124cm\u22123, B = 2 \u00b5G, and the redshift, zr = 0.2. For radio emitting CR electrons with the Lorentz factor, \u03b3 \u223c103 \u2212104, typical cooling timescales range \u03c4cool \u223c0.1 \u22121 Gyr. The figure also compares the TA timescales due to fast modes, \u03c4Dpf, and for Alfv\u00e9n modes, \u03c4DpA. For the set of representative parameters considered here, TA with Dpp,A is more efficient compared to radiative losses for \u03b3 \u22723\u00d7103, whereas TA with Dpp,f is more efficient for \u03b3 \u2272104. The full consideration of the evolution of postshock turbulence, including the vorticity generation behind a rippled shock front, additional injection of turbulence driven by continuous subclump mergers, decompression of the postshock flows, and kinetic wave-particle interactions, is beyond the scope of this Figure 3. The momentum distribution function, g(p) = p4f(p), is depicted in a Ms = 3.0 shock, based on the test-particle DSA model. The blue line represents the injected population, finj. The magenta dotted-dashed line illustrates a power-law spectrum of the pre-existing fossil electron population, fpre, with a slope s = 4.7 and a cutoff momentum pcut/mec = 103. The magenta dotted line displays the spectrum of the reaccelerated population, fRA. Here, the amplitude of fpre is the same as that of finj(p) at pmin = Qe \u00b7 pth, where Qe = 3.5. In our calculations, finj and fRA are deposited at the shock front (t = 0) in the M3In* and M3RA* models, respectively. study. In anticipation of the dissipation of MHD turbulence energy, we employ an exponential function to model the decay of magnetic energy and the reduction of momentum diffusion: B(t) = B2 \u00b7 exp(\u2212t/tdec) Dpp(p, t) = Dpp,2(p) \u00b7 exp(\u2212t/tdec), (6) where tdec = 0.1 or 0.2 Gyr is considered (see Table 1). Although the functional forms for the two quantities could differ with separate values of tdec, we opt for this simple model to reduce the number of free parameters in our modeling. In addition, we note that non-driven MHD turbulence is known to decay as a power law in time, i.e., EB \u221d(1 + CBt/tdec)\u2212\u03b7 with \u03b7 \u223c1 and CB \u223c1 (MacLow et el. 1998; Cho & Lazarizn 2003). Within one eddy turnover time (tdec), the magnetic energy density decreases by a factor of \u223c2.72 in the exponential decay model given in equation (6), and by a factor of \u223c2 in the power-law decline model. We can justify our choice since our study primarily focuses on a qualitative examination of how turbulence decay influences postshock synchrotron emission. Considering that tdec is a not-so-well constrained, free parameter in our model, the quantitative interpretation of our results should be taken with caution. 2.2. DSA CR Spectrum at the Shock Position We follow the time evolution of the CR distribution function, f(p, t), in the Lagrangian fluid element that advects downstream with the constant postshock speed. So the spatial advection distance of the fluid element from the shock front is \fImpact of Postshock Turbulence on Radio Relics Table 1. 
Model Parameters and Estimated Spectral Indices Model Name Dpp tdec(Myr) finj \u221dp\u2212q (\u03b10.61 0.15)a \u03b13.0 0.61 \u03b116 3.0 (M 0.61 0.15 )b M 3.0 0.61 M 16 3.0 M3InDp0 Dpp = 0 \u221e 1.15 1.25 1.25 3.80 3.01 2.97 M3InDp0(200) Dpp = 0 200 1.02 1.10 1.18 11.1 4.51 3.52 M3InDpf(200) Dpp,f 200 1.09 1.30 1.39 4.78 2.75 2.49 M3InDpA(200) Dpp,A 200 1.18 1.18 1.22 4.45 3.51 3.20 M3InDp0(100) Dpp = 0 100 0.938 1.03 1.12 8.68 4.22 M3InDpf(100) Dpp,f 100 0.985 1.10 1.21 4.49 3.21 M3InDpA(100) Dpp,A 100 0.981 1.06 1.14 5.80 3.86 Model Name Dpp tdec(Myr) fpre \u221dp\u2212s \u03b10.61 0.15 \u03b13.0 0.61 \u03b116 3.0 M 0.61 0.15 M 3.0 0.61 M 16 3.0 M3RADp0(4.3) Dpp = 0 100 s = 4.3 0.938 1.03 1.12 8.68 4.22 M3RADp0(4.7) Dpp = 0 100 s = 4.7 0.938 1.03 1.12 8.68 4.22 M3RADpf(4.3) Dpp,f 100 s = 4.3 0.985 1.10 1.21 4.49 3.21 M3RADpf(4.7) Dpp,f 100 s = 4.7 0.985 1.10 1.21 4.49 3.21 M3RADpA(4.3) Dpp,A 100 s = 4.3 0.981 1.06 1.14 5.80 3.86 M3RADpA(4.7) Dpp,A 100 s = 4.7 0.981 1.06 1.14 5.80 3.86 The model name consists of characters that represent the sonic Mach number, injection (In) or reacceleration (RA) cases, and the momentum diffusion models (Dp0, Dpf, and DpA). For M3In*(tdec) models, the number in the parenthesis is the decay time scale in units of Myr, while for M3RA*(s) models, it is the power-law slope of the preexisting CR population. The same set of models, M2.3*, for Ms = 2.3 shocks are also considered. a The spectral index, \u03b1\u03bd2 \u03bd1, is estimated from the volume-integrated spectrum, J\u03bd, between two frequencies, \u03bd1 and \u03bd2, where \u03bd = 0.15, 0.61, 3.0, and 16 GHz. b The integrated Mach number, M\u03bd2 \u03bd1 , is estimated based on Equation (2) using \u03b1\u03bd2 \u03bd1. Note that for \u03b1\u03bd2 \u03bd1 < 1, M\u03bd2 \u03bd1 cannot be calculated. given as x = u2t. At the shock position (t = 0), the shockinjected spectrum, finj(p), or the shock-reaccelerated spectrum, fRA(p), are assigned as the initial spectrum (see Figure 3). The spectrum of injected CR electrons is assumed to follow the DSA power-law for p \u2265pmin: finj(p) \u2248[ n2 \u03c01.5 p\u22123 th exp(\u2212Q2 e)] \u00b7 \u0012 p pmin \u0013\u2212q , (7) where n2 and T2 are the postshock gas density and temperature, respectively (Kang 2020). In addition, pth = \u221a2mekBT2, pmin = Qe pth with the injection parameter Qe = 3.5. Usual physical constants are used: me for the electron mass, c for the speed of light, and kB for the Boltzmann constant. For the preshock population of CR electrons, we adopt a power-law spectrum with the slope s for p \u2265pmin: fpre(p) = fo \u00b7 \u0012 p pmin \u0013\u2212s exp \u0012 \u2212p2 p2 cut \u0013 , (8) where fo is the normalization factor and pcut \u2248103mec is a cutoff momentum due to cooling. The preexisting CR electrons may consist of fossil electrons injected by relativistic jets from radio galaxies or residual electrons accelerated in previous shock passages. If these fossil electrons are accelerated by relativistic shocks contained in relativistic jets, the power-law slope could be s \u22484.3 (Kirk et al. 2000). On the other hand, if they are accelerated by ICM shock with Ms \u22482.3\u22123 in the cluster outskirts, s \u22484.5 \u22124.9 (Hong et al. 2014). The reaccelerated population at the shock can be calculated by the following integration: fRA(p) = q \u00b7 p\u2212q Z p pmin p\u2032q\u22121fpre(p\u2032)dp\u2032 (9) (Drury 1983; Kang & Ryu 2011). 
Except in the case of q = s, fRA(p) \u221dp\u2212r with r = min(q, s), meaning fRA(p) adopts the harder spectrum between p\u2212q and p\u2212s. 2.3. Model Parameters We choose shocks with Mach numbers Ms = 2.3 and Ms = 3.0 as the reference models. This selection is based on the observation that the Mach number of radio relic shocks detected in the cluster outskirts typically falls in the range of 2 \u2272Mrad \u22725 (Wittor et al. 2021). Furthermore, numerous particle-in-cell (PIC) simulations have shown that only supercritical shocks with Ms \u22732.3 can effectively accelerate CR electrons in weakly magnetized ICM characterized by \u03b2 \u223c50 \u2212100 (e.g., Kang et al. 2019; Ha et al. 2021; Boula et al. 2024). The columns 1-4 of Table 1 list the model names for shocks with Ms = 3.0, along with the various model parameters being considered. In M3In* models, the shock-injected population given in Equation (7) is deposited at the shock location, while the reaccelerated population given in Equation (9) is used in M3RA* models. Additionally, we will present the same set of models with Ms = 2.3, denoted as M2.3*, in Section 3. M3InDp0 corresponds to the conventional DSA model without TA (Dpp = 0) in the postshock region with a constant magnetic field (B2). The effects of decaying B(t) is explored with the two values of the decay time, tdec = 100 Myr and 200 Myr. For M3In* models, the number in the parenthesis represents tdec in units of Myr. Additionally, we investigate the dependence on the momentum diffusion models, namely Dpp,f and Dpp,A. Note that for the models with nonzero Dpp, the constant B field case is not included, as it is incompatible \fImpact of Postshock Turbulence on Radio Relics Figure 4. Evolution of momentum distribution function, g(p) = p4f(p), at the avection time, t = 0.02, 0.04, ...0.2 Gyr behind the Ms = 3 shock models, illustrating the postshock aging with the color coded lines. See Table 1 for the model names and parameters. (a-c): The M3In* models are presented. The dotted line in each model represents the volume-integrated spectrum, G(p) = p4 \u00b7 u2 R tf 0 f(p, t)dt. (d-f): The M3RA*(4.3) models with s = 4.3 and pcut = 103mec are displayed, including the green dotted-dashed line for fpre(p) and the green dotted line for G(p). Additionally, for comparison, fpre(p) and G(p) for the M3RA*(4.7) models with s = 4.7 are shown in the magenta lines. All functions are given in arbitrary units, but the relative amplitudes among different models are valid. For all models, the decay timescale for postshock magnetic turbulence is set as tdec = 0.1 Gyr. with the decaying model for magnetic fluctuations. For M3RA* models, we explore two values of the powerlaw slope, s = 4.3 and 4.7, considering the DSA slope q = 4.5 for Ms = 3 shocks. Note that the number in the parenthesis of the model names for M3RA* represents the value of s. 2.4. Evolution of CR Spectrum in the Postshock Flow To follow the time evolution of f(p, t) along the Lagrangian fluid element, we solve the following Fokker-Planck equation: d f(p, t) dt = (1 3\u2207\u00b7 u)p\u2202f \u2202p + 1 p2 \u2202 \u2202p \u0014 p2bl \u2202f \u2202p \u0015 + 1 p2 \u2202 \u2202p \u0014 p2Dpp \u2202f \u2202p \u0015 + S(p). (10) Here, the cooling rate, bl, includes energy losses from Coulomb, synchrotron, and IC interactions. Standard formulas for these processes can be found in various previous papers, such as Brunetti & Jones (2014). 
Specifically, the Coulomb interaction depends on the density of thermal electrons, n, synchrotron losses depend on the magnetic field strength, B, and the inverse Compton scattering off the cosmic background radiation depends on the redshift, zr (see Figure 2). The divergence term becomes \u2207\u00b7 u = 0 in the postshock flow in 1D plane-parallel geometry, and the source term S(p) accounts for finj(p) and fRA(p) deposited at the shock position. 3. Results 3.1. Postshock Cooling and TA of CR Electrons Figure 4 illustrates the evolution of the distribution function, g(p) = p4f(p), for M3In*(100) and M3RA*(4.3) models. Additionally, it presents the volume-integrated spectrum, G(p) = p4F(p) = p4 \u00b7 u2 R tf 0 f(p, t)dt, where tf = 0.2 Gyr denotes the final advection time. The M3InDp0(100) model, which solely incorporates radiative cooling without TA, serves as a reference for comparison with other models. In Panel (a), it is evident that Coulomb loss is important only for low-energy electrons with \u03b3 < 10, whereas synchrotron + IC losses are significant for \u03b3 > 103. This panel demonstrates that the volume-integrated CR spectrum F(p) steepens from p\u2212q to p\u2212(q+1) above the \u201cbreak momentum\u201d as expected: pbr mec \u2248104 \u0012 t 0.1Gyr \u0013\u22121 \u0012 Be 5\u00b5G \u0013\u22122 , (11) where the effective magnetic field strength, B2 e = B2 2 + B2 rad, takes account for radiative losses due to both synchrotron and IC processes, and Brad = 3.24\u00b5G(1+zr)2 corresponds to the cosmic background radiation at redshift zr. Figures 4(b-c) illustrate how TA with Dpp,f or Dpp,A delays or reduces the postshock cooling, enhancing f(p). Consequently, the resulting spectrum, including both f(p, t) and \fImpact of Postshock Turbulence on Radio Relics Figure 5. (a-c): Volume-integrated spectrum, G(p), for different models with Ms = 3. See Table 1 for the model names and parameters. In each column (from top to bottom), the lines and the model names have the same color. The M3InDp0 model (no TA and constant B) is displayed in the black dotted-dashed line in each panel for comparison. (d-f): Volume-integrated radio spectrum, \u03bdJ\u03bd, for the same models shown in the top panels. (g-i): Spectral index, \u03b1\u03bd = \u2212d ln J\u03bd/d ln \u03bd, for the same models shown in the top panels. All functions except \u03b1\u03bd are given in arbitrary units, but the relative amplitudes among different models are valid. For all the models, the total advection time is set as tf = 0.2 Gyr. F(p), deviates significantly from the simple DSA predictions that take into account only postshock cooling. As shown in Figure 2, TA with Dpp,A is dominant for \u03b3 < 102, while TA with Dpp,f becomes more effective for higher \u03b3 for the parameters considered here. Regarding the parameter dependence, obviously, TA with Dpp,f becomes less efficient for a greater value of \u03c4Dpf. On the other hand, TA with Dpp,A becomes more efficient with a stronger B and a smaller \u03b7m. Figures 4(d-f) present similar results for the M3RA*(4.3) models, wherein the reaccelerated spectrum fRA(p) with s = 4.3 is introduced at t = 0. For illustrative purposes, the normalization factor, fo, is set to be the same as that of finj(p) in Equation (7). Consequently, the resulting fRA (depicted by the blue lines at t = 0 in the lower panels) is larger than finj (represented by the blue lines at t = 0 in the upper panels), as shown in the figure. 
For the M3RA*(4.7) models with s = 4.7, only fpre(p) and G(p) are displayed in the magenta lines for comparison. In the case of the reacceleration models, both the postshock spectrum, f(p, t), and the volume-integrated spectrum, F(p), may not be represented by simple power-law forms, even without TA. 3.2. Volume Integrated Radio Emission Figures 5(a-c) compare G(p) for all Ms = 3 models listed in Table 1. For the three models without TA but with different values of tdec, M3InDp0, M3InDp0(200), and M3InDp0(100), G(p) is almost the same since the total cooling is dominated by the IC cooling, and the effects of decaying B(t) are relatively minor. For comparison, G(p) for M3InDp0 (no TA and a constant B2) is displayed in the black dotted-dashed line in each panel. Panels (b) and (c) show the effects of TA with Dpp,f and Dpp,A, respectively. Thus, compared with the conventional DSA model, TA due to postshock turbulence may enhance the CR electron population. In addition, the reaccelerated spectrum, fRA (green and magenta dotted lines) could be higher than finj, depending on the amplitude of the fossil electron population. For the same thirteen models depicted in Figures 5(a-c), the volume-integrated synchrotron spectrum, \u03bdJ\u03bd, is shown in Figures 5(d-f), while its spectral index, \u03b1\u03bd = \u2212d ln J\u03bd/d ln \u03bd \fImpact of Postshock Turbulence on Radio Relics Figure 6. The same as Figure 5 except that Ms = 2.3 models are shown. is displayed in Figures 5(g-i). Again, in each panel, the black dotted-dashed line represents the results for M3InDp0, included for comparison. In Panels (d) and (g), the three models without TA, M3InDp0, M3InDp0(200) and M3InDp0(100), are depicted in the black, red, and blue lines, respectively. They demonstrate that the effects of decaying B(t) are quite prominent due the strong dependence of the synchrotron emissivity on the magnetic field strength. For example, j\u03bd \u221dB(q\u22121)/2 for the power-law spectrum of f(p) \u221dp\u2212q. In the conventional DSA model with a constant B (M3InDp0), the transition from \u03b1sh to \u03b1int occur rather gradually around the break frequency, \u03bdbr \u22480.25 GHz \u0012 tage 0.1Gyr \u0013\u22122 \u0012 Be 5\u00b5G \u0013\u22124 \u0012 B2 2\u00b5G \u0013 . (12) So one should use radio observations at sufficiently high frequencies, \u03bd \u226b\u03bdbr, to estimate the Mach number given in Equation (2) using the integrated spectral index (Kang 2015). However, as depicted in the red and blue solid lines in Panel (g), this transition takes place much more gradually in the case of decaying magnetic fields with smaller tdec. Thus, an accurate model for the postshock B(x) is required to estimate the Mach number of radio relic shocks using Equation (2), considering the observational radio frequency range of \u223c0.1 \u221230 GHz. Figures 5(h-i) illustrate that TA with a large momentum diffusion coefficient, especially Dpp,A, could lead to a significant deviation from the simple DSA prediction with a constant magnetic field strength. We also note that, in Panels (g)-(i), the blue, green, and magenta lines (all with tdec = 100 Myr) overlap with each other, except for very low frequencies (\u03bd < 10 MHz), whereas they differ significantly from the black (tdec = \u221e) and red (tdec = 200 Myr) lines. This implies that the magnetic field distribution plays a significant role in governing the integrated spectral index \u03b1\u03bd of the volume-integrated radio spectrum. 
In Table 1 for the M3* models, the columns 5-7 list the integrated spectral index, \u03b1\u03bd2 \u03bd1, between two frequencies, \u03bd1 and \u03bd2, where \u03bd = 0.15, 0.61, 3.0, and 16 GHz are chosen as representative values. Moreover, the columns 8-10 list the integrated Mach number, M \u03bd2 \u03bd1 , estimated based on Equation (2) using \u03b1\u03bd2 \u03bd1. For M3InDp0, the results are consistent with conventional DSA predictions except for the low frequency case: i.e., \u03b1\u03bd2 \u03bd1 = 1.25 and M \u03bd2 \u03bd1 = 3 for \u03bd \u226b\u03bdbr. In the case of \u03b10.61 0.15, the frequencies are not sufficiently high, resulting in the overestimation of Mach number, M 0.61 0.15 = 3.8 for M3InDp0. In fact, for most other models, \u03b10.61 0.15 < 1, so M 0.61 0.15 cannot be estimated. Both TA and reacceleration significantly influence the integrated spectrum J\u03bd and tend to generate smaller \u03b1\u03bd2 \u03bd1, resulting in higher M \u03bd2 \u03bd1 except for M3InDpf(200) (see also Figures 5(g-i)). \fImpact of Postshock Turbulence on Radio Relics Figure 7. (a-c): Surface brightness profile at 0.15 GHz, I0.15(d), for the same M3 models presented in Figure 4. See Table 1 for the model names and parameters. In each column (from top to bottom), the lines and the model names have the same color. In the M3InDp0 model(black dotted\u2013dashed lines), the postshock magnetic field remain constant as B2 and Dpp = 0 (no TA). See Figure 1 for the adopted shape of the relic surface and the definition of the intensity, I\u03bd(d). The extension angles are \u03c81 = \u03c82 = 15\u25e6. The displayed functions are given in arbitrary units, but the relative amplitudes among different models are valid. (d-f): Surface brightness profile at 0.61 GHz, I0.61(d), for the same models as in (a-c). (g-i): Spectral index between 0.15 and 0.61 GHz, \u03b10.61 0.15, for the same models shown in the upper panels. The M2.3* models also exhibit similar results, as can be seen in Figure 6. 3.3. Surface Brightness Profile of Model Radio Relics Using the geometrical configuration of the shock surface depicted in Figure 1, we estimate the surface brightness, I\u03bd(d), as a function of the projected distance, d. In brief, a radio relic has a coconut-shell-shaped, elongated surface with an axial ratio a/b \u223c1\u22121.5 and a thickness corresponding to the cooling length of electrons, lcool. Here, the radius of the spherical shell is set as Rs = 1 Mpc. Then the surface brightness or intensity is calculated by I\u03bd(d) = Z hmax hmin j\u03bd(x)dh, (13) where hmin and hmax are determined by the extension angles, \u03c81 and \u03c82. As illustrated in Figure 1, the path length h along the observer\u2019s line of sight reaches its maximum at dpeak = Rs(1 \u2212cos \u03c81). So for the assumed model parameters, Rs = 1 Mpc and \u03c81 = \u03c82 = 15\u25e6, the surface brightness peaks at dpeak \u224834 kpc. Figure 7 presents the spatial profiles of I0.15(d) at 0.15 GHz and I0.61(d) at 0.61 GHz for the same thirteen models shown in Figure 5. The spectral index \u03b10.61 0.15(d) is calculated from the projected I\u03bd(d) between the two frequencies. Several points are noted: 1. The postshock magnetic field plays a key role in determining the profile of I\u03bd(d) and \u03b1\u03bd(d), as it governs the synchrotron emissivity j\u03bd and Dpp,A. Consequently, the results depend sensitively on the decay of B(t) in the postshock region. 2. 
The models with postshock TA (middle and right columns) exhibit a slower decrease in I\u03bd(d) compared to the models without TA (left column). This occurs because TA delays the postshock cooling of electrons, resulting in a broader effective width of radio relics. In particular, the models with Dpp,f generate greater widths than those with Dpp,A. \fImpact of Postshock Turbulence on Radio Relics Figure 8. The same as Figure 7 except that Ms = 2.3 models are shown. 3. In the models with Dpp,A, the enhancement by TA is less significant due to the effects of decaying magnetic fields, distinguishing it from models with Dpp,f. 4. Panels (g-i) demonstrate that the postshock profile of \u03b1\u03bd is independent of the injection spectrum (i.e., finj or fRA). The profile is mainly influenced by the decay profile of B(x) and by TA due to Dpp(p, x). 5. The spectral index is the smallest at the relic edge (d = 0), while the intensity profile peaks at dpeak in our model setup for the relic shock surface. Therefore, in observations of radio relics, the region d < dpeak corresponds to the postshock region rather than the preshock region. The M2.3* models presented in Figure 8 also exhibit the similar behaviors. 4. Summary Giant radio relics are thought to be generated by weak bow shocks that form after the DM core passage during major mergers of galaxy clusters. In such a scenario, CR electrons are accelerated mainly via the Fermi-I mechanism, resulting in the simple predictions for the DSA power-law spectrum, f(p) \u221dp\u2212q, and the ensuing synchrotron radiation spectrum, j\u03bd \u221d\u03bd\u2212\u03b1sh. Although most observational aspects of radio relics are consistent with such DSA predictions, the so-called Mach number discrepancy among the estimated Mach numbers based on various methods, i.e., Mrad,int \u2273Mrad,sh \u2273MX, remains yet to be resolved. The ICM is turbulent by nature. The cascade of magnetic turbulence from large MHD scales to small kinetic scales and the excitation and amplification of magnetic fluctuations via plasma microinstabilities behind the shock front could influence the CR energy spectrum through Fermi-II acceleration. Moreover, magnetic turbulence is expected to decay approximately in one eddy turnover time, L/u2 \u223c0.1 Gyr, and decaying magnetic fields could significantly affect turbulent acceleration (TA) and the synchrotron emissivity in the postshock region. In this study, we adopt simplified models for the momentum diffusion coefficient, Dpp,f due to fast-mode waves and Dpp,A due to Alfv\u00e9n-mode waves, to explore the effects of TA. The CR spectrum finj(p) for the shock-injected population or fRA(p) for the shock-reaccelerated population is deposited at the shock front at t = 0. Then the time evolution of f(p, t) is calculated along the Lagrangian fluid element in the time-domain. The results are mapped onto the spherical shell, whose geometrical configuration is depicted in Figure 1, to estimate the surface brightness profile, I\u03bd(d), as a function of the projected distance d. \fImpact of Postshock Turbulence on Radio Relics The main results can be summarized as follows: 1. TA due to Dpp,f and Dpp,A could delay the postshock aging of CR electrons, leading to a significant deviation from the simple power-law spectrum (Figure 4) and a broader spatial width of the surface brightness of radio relics (Figure 6). 2. 
The postshock aging of the CR electron spectrum is insensitive to the decay of magnetic fields since IC cooling dominates over synchrotron cooling (typically Brad > B in the postshock region) (Figures 5(a-c) and 6(a-c)). 3. The integrated spectral index, \u03b1\u03bd, of the volumeintegrated radio spectrum sensitively depends on the postshock magnetic field distribution, whereas it is insensitive to the CR spectrum deposited at the shock front. For instance, the transition from the power-law index \u03b1sh to \u03b1int occurs more gradually than predicted by the simple DSA model with a constant postshock magnetic field (Figures 5(g-i) and 6(g-i)). Therefore, observational frequencies should be sufficiently high (i.e., \u03bd \u226b\u03bdbr) for estimating the Mach number using the integrated spectral index . 4. On the other hand, the synchrotron emissivity scales as j\u03bd \u221dB(q\u22121)/2 and the momentum diffusion coefficient due to Alfv\u00e9n modes, Dpp,A \u221dB2. This means that the decay of B fields significantly impacts both the surface brightness, I\u03bd(d), and the spectral index, \u03b1\u03bd2 \u03bd1(d) (Figures 7 and 8). 5. The columns 8-10 of Table 1 indicate that, in most models except the MInDp0 model (no TA and constant B), the integrated Mach number, M \u03bd2 \u03bd1 , estimated using the integrated spectral index, \u03b1\u03bd2 \u03bd1, between two frequencies \u03bd1 and n2, tends to be higher than the actual shock Mach number. This highlights the critical importance of incorporating accurate models for turbulent acceleration arising from postshock turbulence and the impact of decaying magnetic fields when interpreting observations of radio relics. In particular, the shock Mach number estimated using the integrated spectral index may tend to be larger than the actual Mach number. Therefore, a thorough consideration of these factors is essential for a more precise interpretation of radio relic observations. Acknowledgments The author thanks the anonymous referee for constructive feedback. This work was supported by a 2-Year Research Grant of Pusan National University."
+ }
title_10K/test_title_short_2405.03121v1.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "url": "http://arxiv.org/abs/2405.03121v1",
+ "title": "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding",
+ "abstract": "The paper introduces AniTalker, an innovative framework designed to generate\nlifelike talking faces from a single portrait. Unlike existing models that\nprimarily focus on verbal cues such as lip synchronization and fail to capture\nthe complex dynamics of facial expressions and nonverbal cues, AniTalker\nemploys a universal motion representation. This innovative representation\neffectively captures a wide range of facial dynamics, including subtle\nexpressions and head movements. AniTalker enhances motion depiction through two\nself-supervised learning strategies: the first involves reconstructing target\nvideo frames from source frames within the same identity to learn subtle motion\nrepresentations, and the second develops an identity encoder using metric\nlearning while actively minimizing mutual information between the identity and\nmotion encoders. This approach ensures that the motion representation is\ndynamic and devoid of identity-specific details, significantly reducing the\nneed for labeled data. Additionally, the integration of a diffusion model with\na variance adapter allows for the generation of diverse and controllable facial\nanimations. This method not only demonstrates AniTalker's capability to create\ndetailed and realistic facial movements but also underscores its potential in\ncrafting dynamic avatars for real-world applications. Synthetic results can be\nviewed at https://github.com/X-LANCE/AniTalker.",
+ "authors": "Tao Liu, Feilong Chen, Shuai Fan, Chenpeng Du, Qi Chen, Xie Chen, Kai Yu",
+ "published": "2024-05-06",
+ "updated": "2024-05-06",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding",
+ "main_content": "INTRODUCTION Integrating speech signals with single portraits [13, 18, 33, 45, 47, 59\u2013 61] to generate talking avatars has greatly enhanced both the entertainment and education sectors, providing innovative avenues for interactive digital experiences. While current methodologies [36, 47, 57, 61, 62] have made notable strides in achieving synchronicity between speech signals and lip movements, thus enhancing verbal communication, they often neglect the critical aspect of nonverbal communication. Nonverbal communication encompasses the transmission of information without the use of words, including but not limited to specific head movements, facial expressions, and blinking. Research [35] indicates that these nonverbal cues are pivotal in communicating. The primary challenge lies in the inadequacy of existing models to encapsulate the complex dynamics associated with facial motion representation. Existing approaches predominantly employ explicit structural representations such as blendshapes [3, 13, 34], landmark coefficients [18, 48, 60], or 3D Morphable Models (3DMM) [7, 14, 27] to animate faces. Designed initially for single-image processing, these methods offer a constrained approximation of facial dynamics, failing to capture the full breadth of human expressiveness. Recent advancements [11, 25] have introduced trainable facial motion encoders as alternatives to conventional explicit features, showing \u2217The Corresponding author. significant progress in capturing detailed facial movements. However, their deployment is often tailored for specific speakers [11] or limited to the mouth region [25], highlighting a gap in fine-grained motion representation that captures all varieties of facial dynamics. A universal and fine-grained motion representation that is applicable across different characters remains absent. Such a representation should fulfill three key criteria: capturing minute details, such as minor mouth movements, eye blinks, or slight facial muscle twitching; ensuring universality, making it applicable to any speaker while removing identity-specific information to maintain a clear separation between appearance and motion; and incorporating a wide range of nonverbal cues, such as expressions, head movements, and posture. In this paper, we introduce AniTalker. Our approach hinges on a universal motion encoder designed to grasp the intricacies of facial dynamics. By adopting the self-supervised learning paradigm, we mitigate the reliance on labeled data, enabling our motion encoder to learn robust motion representations. This learning process operates on dual levels: one entails understanding motion dynamics through the transformation of a source image into a target image, capturing a spectrum of facial movements, from subtle changes to significant alterations. Concurrently, the use of identity labels within the dataset facilitates the joint optimization of an identity recognition network in a self-supervised manner, further aiming to disentangle identity from motion information through mutual information minimization. This ensures that the motion representation retains minimal identity information, upholding its universal applicability. To authenticate the versatility of our motion space, we integrate a diffusion model and a variance adapter to enable varied generation and manipulation of facial animations. 
Thanks to our sophisticated representation and the diffusion motion generator, AniTalker is capable of producing diverse and controllable talking faces. In summary, our contributions are threefold: (1) We have developed universal facial motion encoders using a self-supervised approach that effectively captures facial dynamics across various individuals. These encoders feature an identity decoupling mechanism to minimize identity information in the motion data and prevent identity leakage. (2) Our framework includes a motion generation system that combines a diffusion-based motion generator with a variance adapter. This system allows for the production of diverse and controllable facial animations, showcasing the flexibility of our motion space. (3) Extensive evaluations affirm our framework\u2019s contribution to enhancing the realism and dynamism of digital human representations, while simultaneously preserving identity. 2 RELATED WORKS Speech-driven Talking Face Generation refers to creating talking faces driven by speech, We categorize the models based on whether they are single-stage or two-stage. Single-stage models [36, 58, 61] generate images directly from speech, performing end-toend rendering. Due to the size constraints of rendering networks, this method struggles with processing longer videos, generally managing hundreds of milliseconds. The two-stage type [3, 11, 13, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 18, 25, 33, 60] decouples motion information from facial appearance and consists of a speech-to-motion generator followed by a motion-to-video rendering stage. As the first stage solely generates motion information and does not involve the texture information of the frames, it requires less model size and can handle long sequences, up to several seconds or even minutes. This two-stage method is known to reduce jitter [3, 11, 25], enhance speech-tomotion synchronization [11, 13, 33, 60], reduce the need for aligned audio-visual training data [3, 25], and enable the creation of longer videos [18]. Our framework also employs a two-stage structure but with a redesigned motion representation and generation process. Motion Representation serves as an essential bridge between the driving features and the final rendered output in creating talking faces. Current methods predominantly utilize explicit structural representations, such as blendshapes [3, 13, 32], 3D Morphable Models (3DMMs) [27], or landmarks [48, 60]. These formats offer high interpretability and facilitate the separation of facial actions from textures, making them favored as intermediary representations in facial generation tasks. However, due to the wide range of variability in real-world facial movements, they often fail to capture the subtle nuances of facial expressions fully, thus limiting the diversity and expressiveness of methods dependent on these representations. Our research is dedicated to expanding the spectrum of motion representation by developing a learned implicit representation that is not constrained by the limitations of explicit parametric models. Self-supervised motion transfer approaches [31, 41, 44, 48, 49, 51, 54] aim to reconstruct the target image from a source image by learning robust motion representations from a large amount of unlabeled data. This significantly reduces the need for labeled data. A key challenge in these methods is separating motion from identity information. 
They primarily warp the source image using predicted dense optical flow fields. This approach attempts to disentangle motion from identity by predicting distortions and transformations of the source image. However, information leakage occurs in practice, causing the target image to contain not just motion but also identity information. Building on this observation, we explicitly introduce identity modeling and employ the Mutual Information Neural Estimation (MINE) [1, 4] method to achieve a motion representation independent of identity. Diffusion Models [19] have demonstrated outstanding performance across various generative tasks [12, 17, 21, 39]. Recent research has utilized diffusion models as a rendering module [2, 11, 25, 29, 40, 43, 45]. Although diffusion models often produce higher-quality images, they require extensive model parameters and substantial training data to converge. To enhance the generation process, several approaches [18, 27, 28, 32, 55] employ diffusion models for generating motion representations. Diffusion models excel at addressing the one-to-many mapping challenge, which is crucial for speech-driven generation tasks. Given that the same audio clip can lead to different actions (e.g., lip movements and head poses) across different individuals or even within the same person, diffusion models provide a robust solution for managing this variability. Additionally, the training and inference phases of diffusion models, which systematically introduce and then remove noise, allow for the incorporation of noise during generation to foster diversity. We also use diffusion in conjunction with our motion representation to further explore diversity in talking face generation. 3 ANITALKER FRAMEWORK 3.1 Model Overview AniTalker contains two critical components: (1) Training a motion representation that can capture universal face dynamics, and (2) Based on the well-trained motion encoder from the previous step, the generation or manipulation of the motion representation using the user-controlled driving signal to produce the synthesised talking face video. 3.2 Universal Motion Representation Our approach utilizes a self-supervised image animation framework, employing two RGB images from a video clip: a source image \ud835\udc3c\ud835\udc60and a target image \ud835\udc3c\ud835\udc61(\ud835\udc3c\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d73), to serve distinct functions: \ud835\udc3c\ud835\udc60provides identity information, whereas \ud835\udc3c\ud835\udc61delivers motion details. The primary aim is to reconstruct \ud835\udc3c\ud835\udc61. Due to the random selection of frames, occasionally adjacent frames are chosen, enabling the network to learn representations of subtle movements. As depicted in Figure 2 (a), both the source and target images originate from the same video clip. Through this self-supervised learning method, the target image\u2019s encoder is intended to exclusively capture motion information. By learning from frame-to-frame transfer, we can acquire a more universal representation of facial motion. This representation includes verbal actions such as lip movements, as well as nonverbal actions, including expressions, posture, and movement. To explicitly decouple motion and identity in the aforementioned processes, we strengthen the self-supervised learning approach by incorporating Metric Learning (ML) and Mutual Information Disentanglement (MID). Specifically: Metric Learning. 
Drawing inspiration from face recognition [8, 46] and speaker identification [9], metric learning facilitates the generation of robust identity information. This technique employs a strategy involving pairs of positive and negative samples, aiming to minimize the distance between similar samples and maximize it between dissimilar ones, thereby enhancing the network\u2019s ability to discriminate between different identities. This process can also proceed in a self-supervised fashion, with each iteration randomly selecting distinct identities from the dataset. Specifically, the approach establishes an anchor (\ud835\udc4e) and selects a positive sample (\ud835\udc5d) and a negative sample (\ud835\udc5b)\u2014corresponding to faces of different identities\u2014with the goal of reducing the distance (\ud835\udc51) between the anchor and the positive sample while increasing the distance between the anchor and the negative samples. This optimization, depicted in Figure 2 (b), involves randomly selecting a different identity from a list of candidates not belonging to the current person as the negative sample. The optimization goal for this process is as follows: L\ud835\udc40\ud835\udc3f= max (0, \ud835\udc51(\ud835\udc4e, \ud835\udc5d) \u2212\ud835\udc51(\ud835\udc4e,\ud835\udc5b) + margin) Here, the margin is a positive threshold introduced to further separate the positive and negative samples, thus improving the model\u2019s ability to distinguish between different identities. Mutual Information Disentanglement. Although metric learning effectively constrains the identity encoder, focusing solely on this encoder does not adequately minimize the identity information \f, 2024, Tao Liu, et al. Motion Encoder t t HAL Identity Encoder Motion Encoder s s HAL Identity Encoder Pull Push Target Image Source Image AvgPool \ud835\udc5a! \u2026 \u2026 \u2026 Weighted Sum Target Image Wrap Layer Feature Maps (d) HAL Image Renderer o t s Positive Speech Encoder Image Renderer \u2026 \u2026 Speech Variance Adapter Diffusion Motion Generator Motion Encoder ( Conformer \u00d7 N ) ( Conformer \u00d7 N ) Other Images Motion Latent Motion Latent Identity Latent Noisy Latent \ud835\udc74!~\ud835\udc41(0,1) Motion Encoder Image Encoder \ud835\udc5a\" \ud835\udc5a# \ud835\udc5a (a) Details of Training Universal Motion Representation Flow Fields (c) MID (b) ML MLP MLP \u2026 Candidates (e) Motion Generator \u2026 \u2026 Positional Embedding Audio-driven Video-driven Frozen Layers Image Encoder \u2026 Denoising Iteration Anchor Negative (\ud835\udc74) \u2026 Random Pick Figure 2: The AniTalker framework comprises two main components: learning a universal motion representation and then generating and manipulating this representation through a sequence model. Specifically, the first part aims to learn a robust motion representation by employing metric learning (ML), mutual information disentanglement (MID), and Hierarchical Aggregation Layer (HAL). Subsequently, this motion representation can be used for further generation and manipulation. within the motion encoder. To tackle this issue, we utilize Mutual Information (MI), a statistical measure that evaluates the dependency between the outputs of the identity and motion encoders. Given the challenge of directly computing MI between two variables, we adopt a parametric method to approximate MI estimation among random variables. Specifically, we use CLUB [4], which estimates an upper bound for MI. 
Assuming the output of the identity encoder is the identity latent \ud835\udc67\ud835\udc56\ud835\udc51and the motion encoder\u2019s output is the motion latent \ud835\udc67\ud835\udc5a, our goal is to optimize the mutual information \ud835\udc3c(E(\ud835\udc67\ud835\udc56\ud835\udc51); E(\ud835\udc67\ud835\udc5a)), where E denotes the learnable Multi-Layer Perceptron (MLP) within CLUB. This optimization ensures that the motion encoder primarily captures motion, thereby preventing identity information from contaminating the motion space. This strategy is depicted in Figure 2 (c). In summary, by leveraging Metric Learning and Mutual Information Disentanglement, we enhance the model\u2019s capacity to accurately differentiate between identity and motion while reducing reliance on labeled data. Hierarchical Aggregation Layer (HAL). To enhance the motion encoder\u2019s capability to understand motion variance across different scales, we introduce the Hierarchical Aggregation Layer (HAL). This layer aims to integrate information from various stages of the image encoder, each providing different receptive fields [24]. HAL processes inputs from all intermediate layers of the image encoder and passes them through an Average Pooling (AvgPool) layer to capture scale-specific information. A Weighted Sum [53] layer follows, assigning learnable weights to effectively merge information from these diverse layers. This soft fusion approach enables the motion encoder to capture and depict movements across a broad range of scales. Such a strategy allows our representations to adapt to faces of different sizes without the need for prior face alignment or normalization. Specifically, the features following the AvgPool layer are denoted as [\ud835\udc5a1,\ud835\udc5a2, . . . ,\ud835\udc5a\ud835\udc5b], representing the set of averaged features, with [\ud835\udc641,\ud835\udc642, . . . ,\ud835\udc64\ud835\udc5b] as the corresponding set of weights, where \ud835\udc5bsymbolizes the number of intermediate layers in the image encoder. These weights undergo normalization through the softmax function to guarantee a cumulative weight of 1. The equation for the weighted sum of tensors, indicating the layer\u2019s output, is formulated as m = \u00cd\ud835\udc5b \ud835\udc56=1 \ud835\udc64\ud835\udc56\u00b7 \ud835\udc5a\ud835\udc56. The softmax normalization process is mathematically articulated as \ud835\udc64\ud835\udc56= \ud835\udc52\ud835\udc4a\ud835\udc56 \u00cd\ud835\udc5b \ud835\udc57=1 \ud835\udc52\ud835\udc4a\ud835\udc57, ensuring the proportional distribution of weights across the various layers. Subsequently, m is fed into the motion encoder for further encoding. Learning Objective. The main goal of learning is to reconstruct the target image by inputting two images: the source and the target within the current identity index. Several loss functions are utilized during the training process, including reconstruction loss \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50\ud835\udc5c\ud835\udc5b, perceptual loss \ud835\udc3f\ud835\udc5d\ud835\udc52\ud835\udc5f\ud835\udc50\ud835\udc52\ud835\udc5d, adversarial loss \ud835\udc3f\ud835\udc4e\ud835\udc51\ud835\udc63, mutual information loss \ud835\udc3f\ud835\udc40\ud835\udc3c, and identity metric learning loss \ud835\udc3f\ud835\udc40\ud835\udc3f. 
The total loss is formulated as follows: \ud835\udc3f\ud835\udc5a\ud835\udc5c\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b= \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50\ud835\udc5c\ud835\udc5b+ \ud835\udf061\ud835\udc3f\ud835\udc5d\ud835\udc52\ud835\udc5f\ud835\udc50\ud835\udc52\ud835\udc5d+ \ud835\udf062\ud835\udc3f\ud835\udc4e\ud835\udc51\ud835\udc63+ \ud835\udf063\ud835\udc3f\ud835\udc40\ud835\udc3c+ \ud835\udf064\ud835\udc3f\ud835\udc40\ud835\udc3f \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, 3.3 Motion Generation Once the motion encoder and image renderer are trained, at the second stage, we can freeze these models. The motion encoder is used to generate images, then video-driven or speech-driven methods are employed to produce motion, and finally, the image renderer carries out the final frame-by-frame rendering. 3.3.1 Video-Driven Pipeline. Video driving, also referred to face reenactment, leverages a driven speaker\u2019s video sequence I\ud835\udc51= [\ud835\udc3c\ud835\udc51 1 , \ud835\udc3c\ud835\udc51 2 , . . . , \ud835\udc3c\ud835\udc51 \ud835\udc47] to animate a source image \ud835\udc3c\ud835\udc60, resulting in a video that accurately replicates the driven poses and facial expressions. In this process, the video sequence I\ud835\udc51is input into the motion encoder, previously trained in the first phase, to extract the motion latent. This latent, along with \ud835\udc3c\ud835\udc60, is then directly fed, frame by frame, into the image renderer for rendering. No additional training is required. The detailed inference process, where the orange lines represent the data flow during video-driven inference, is depicted in Figure 2 (e). 3.3.2 Speech-Driven Pipeline. Unlike video-driven methods that use images, the speech-driven approach generates videos consistent with the speech signal or other control signals to animate a source image \ud835\udc3c\ud835\udc60. Specifically, we utilize a combination of diffusion and variance adapters: the former learns a better distribution of motion data, while the latter mainly introduces attribute manipulation. Diffusion Models. For generating motion latent sequences, we utilize a multi-layer Conformer [16]. During training, we incorporate the training process of diffusion, which includes both adding noise and denoising steps. The noising process gradually converts clean Motion Latent M into Gaussian noise M\ud835\udc47, where\ud835\udc47represents the number of total denoising steps in the diffusion process. Conversely, the denoising process systematically eliminates noise from the Gaussian noise, resulting in clean Motion Latents. This iterative process better captures the distribution of motion, enhancing the diversity of the generated results. During the training phase, we adhere to the methodology described in [19] for the DDPM\u2019s training stage, applying the specified simplified loss objective, as illustrated in Equation 1, where \ud835\udc61represents a specific time step and C represents the control signal, which refers to either speech or speech perturbed by a Variance Adapter (to be discussed in the following section). For inference, considering the numerous iteration steps required by diffusion, we select the Denoising Diffusion Implicit Model (DDIM) [42]\u2014an alternate non-Markovian noising process\u2014as the solver to quicken the sampling process. 
\ud835\udc3fdiff = E\ud835\udc61,M,\ud835\udf16 \u0002 \u2225\ud835\udf16\u2212\u02c6 \ud835\udf16\ud835\udc61(M\ud835\udc61,\ud835\udc61, C)\u22252\u0003 (1) Variance Adapter. The Variance Adapter [38] is a residual branch connected to audio features, allowing optional control over the speech signal. Originally proposed to mitigate the one-to-many problem in Text-to-Speech (TTS) tasks, its architecture includes a predictor and an encoder that use speech signals to predict attribute representations. A residual connection is then applied between the encoder output and the speech signals. During the Training Stage, the encoder processes speech features in collaboration with the predictor to minimize the L2 loss against a ground truth control signal. This includes incorporating an attribute extractor for targeting specific attributes, such as employing a pose extractor (yaw, pitch, roll) to control head posture during the audio generation process. In Predictor \u2295 L2 Loss Encoder Speech Feature Attribute Extractor (a) Training Stage (b) Inference Stage Predictor \u2295 Speech Feature Attribute Extractor or Encoder Audio-driven only w. Attribute Control ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) ( LSTM \u00d7 N ) \u2026 \u2026 GT images Any images Attribute Latent \u00d7 N Figure 3: Variance Adapter Block. Each block models a single attribute and can be iterated multiple times, where \ud835\udc41represents the number of attributes. the Inference Stage, the trained encoder and predictor can flexibly synthesize speech with controlled attributes or operate based on speech-driven inputs. The detailed structure is depicted in Figure 3. Our approach extends previous works [11, 18] by incorporating LSTM [15] for improved temporal modeling and introducing additional cues such as head position and head scale, which we refer to as camera parameters. The architecture is detailed in Figure 3. Learning Objective. The total loss comprises diffusion loss and variance adapter loss, where \ud835\udc3erepresents the number of attributes: \ud835\udc3fgen = \ud835\udc3fdiff + \ud835\udf06 \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 \ud835\udc3fvar\ud835\udc58 4 EXPERIMENTS 4.1 Experimental Settings We utilizes three datasets: VoxCeleb [30], HDTF [59], and VFHQ [52]. Due to different processing approaches across these datasets, we re-downloaded the original videos and processed them in a unified way. Specifically, our processing pipeline included filtering out blurred faces and faces at extreme angles. It is noted that we did not align faces but instead used a fixed detection box for each video clip, allowing for natural head movement. This effort resulted in a dataset containing 4,242 unique speaker IDs, encompassing 17,108 video clips with a cumulative duration of 55 hours. Details of this filtering process are provided in the supplementary material. Each video in these datasets carries a unique facial ID tag, which we used as labels for training our identity encoder. We also reserved some videos from HDTF for testing, following the test split in [58]. Scenario Setting We evaluate methods under two scenarios: video-driven and speech-driven, both operating on a one-shot basis with only a single portrait required. The primary distinction lies in the source of animation: image sequences for video-driven and audio signals for speech-driven scenarios. The detailed data flow for inference is illustrated in Figure 2. 
Additionally, each scenario is divided into two types: self-driven, where the source and target \f, 2024, Tao Liu, et al. share the same identity, and cross-driven, involving different identities. In speech-driven tasks, if posture information is needed, it is provided from the ground truth. Moreover, for our motion generator, unless specified otherwise, we use a consistent seed to generate all outcomes. To ensure a fair comparison, the output resolution for all algorithms is standardized to 256 \u00d7 256. Implementation Details In training the motion representation, our self-supervised training paradigm is primarily based on LIA [49]. Both the identity and motion encoders employ MLPs. Our training targets use the CLUB 1 for mutual information loss, in conjunction with AAM-Softmax [46]. This robust metric learning method utilizes angular distance and incorporates an increased number of negative samples to enhance the metric learning loss. In the second phase, the speech encoder and the Motion Generator utilize a four-layer and a two-layer conformer architecture, respectively, inspired by [11, 25]. This architecture integrates the conformer structure [16] and relative positional encoding [6]. A pre-trained HuBERT-large model [20] serves as the audio feature encoder, incorporating a downsampling layer to adjust the audio sampling rate from 50 Hz to 25 Hz to synchronize with the video frame rate. The training of the audio generation process spans 125 frames (5 seconds). Detailed implementation specifics and model structure are further elaborated in the supplementary materials. Evaluation Metric For objective metrics, we utilize Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [50], and Learned Perceptual Image Patch Similarity (LPIPS) [56] to quantify the similarity between generated and ground truth images. Cosine Similarity (CSIM) 2 measures facial similarity using a pretrained face recognition. Lip-sync Error Distance (LSE-D) [5] assesses the alignment between generated lip movements and the corresponding audio. Regarding subjective metrics, we employ the Mean Opinion Score (MOS) as our metric, with 10 participants rating our method based on Fidelity (F), Lip-sync (LS), Naturalness (N), and Motion Jittering (MJ). 4.2 Video Driven Methods Table 1: Quantitative comparisons with previous Face Reenactment methods. Method Self-Reenactment Cross-Reenactment PSNR\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 SSIM\u2191 LPIPS\u2193 CSIM\u2191 FOMM [41] 23.944 0.775 0.178 0.830 0.411 0.423 0.494 DPE [31] 27.239 0.861 0.151 0.912 0.445 0.410 0.567 MTIA [44] 28.435 0.870 0.122 0.929 0.393 0.456 0.448 Vid2Vid [48] 27.659 0.870 0.115 0.924 0.410 0.401 0.553 LIA [49] 25.854 0.831 0.137 0.916 0.421 0.406 0.522 FADM [54] 26.169 0.849 0.147 0.916 0.445 0.399 0.574 AniTalker 29.071 0.905 0.079 0.927 0.494 0.347 0.586 Quantitative Results We benchmarked our approach against several leading face reenactment methods [31, 41, 44, 48, 49, 54], all employing variations of self-supervised learning. The results are presented in Table 1. Due to the inherent challenges and the absence 1https://github.com/Linear95/CLUB/ 2https://github.com/dc3ea9f/vico_challenge_baseline of frame-by-frame ground truth in Cross-Reenactment (using another person\u2019s video for driving), the overall results tend to be lower compared to Self-Reenactment (using the current person\u2019s video). 
In Self-Reenactment, our algorithm achieved superior results for image structural metrics such as PSNR, SSIM, and LPIPS, validating the effectiveness of our motion representation in reconstructing images. Additionally, using the CSIM metric to measure face similarity, we observed that the similarity between the reconstructed face and the original portrait was the second highest, slightly behind MTIA [44], illustrating our model\u2019s identity preservation capabilities. For Cross-Reenactment, where the portrait serves as ground truth and considering cross-driven deformations, we focused on high-level metrics: SSIM and LPIPS. Our method demonstrated commendable performance. We also evaluated CSIM, which, unlike self-reenactment, showed a significant improvement, achieving the best results among these datasets. This highlights our algorithm\u2019s outstanding ability to disentangle identity and motion when driving with different individuals. Qualitative Results To highlight comparative results, we conducted a cross-reenactment scenario analysis with different algorithms, as presented in Figure 4. The objective was to deform the source portrait using the actions of the target. Each row in the figure represents a driving case. We observed that baseline methods exhibited varying degrees of identity leakage, where the identity information from the target contaminated the source portrait\u2019s identity. For example, as demonstrated in the fourth row, the slim facial structure of the driving portrait led to slimmer outcomes, which was unintended. However, our results consistently preserved the facial identity. Additionally, in terms of expression recovery, as evident in the first and third rows, our approach replicated the action of opening the eyes in the source portrait accurately, creating a natural set of eyes. In contrast, other algorithms either produced slight eye-opening or unnatural eyes. These qualitative findings highlight the advantage of decoupling ability. 4.3 Speech-driven Methods Table 2: Quantitative comparisons with previous speechdriven methods. The subjective evaluation is the mean option score (MOS) rated at five grades (1-5) in terms of Fidelity (F), Lip-Sync (LS), Naturalness (N), and Motion Jittering (MJ). Method Subjective Evaluation Objective Evaluation (Self) MOS-F\u2191 MOS-LS\u2191 MOS-N\u2191 MOS-MJ\u2191 SSIM\u2191 CSIM\u2191 Sync-D\u2193 MakeItTalk [62] 3.434 1.922 2.823 3.129 0.580 0.719 8.933 PC-AVS [61] 3.322 3.785 2.582 2.573 0.305 0.703 7.597 Audio2Head [47] 3.127 3.650 2.891 2.467 0.597 0.719 8.197 SadTalker [57] 3.772 3.963 2.733 3.883 0.504 0.723 7.967 AniTalker 3.832 3.978 3.832 3.976 0.671 0.725 8.298 We compare our method against existing state-of-the-art speechdriven approaches, including MakeItTalk [62], PC-AVS [61], Audio2Head [47], and SadTalker [57]. Quantitative results are presented in Table 2. From the subjective evaluation, our method consistently shows improvements in fidelity, lip-sync accuracy, naturalness, and a reduction in motion jittering, particularly noted for the enhanced naturalness of movements. These advancements can \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, Portrait (Source) FOMM Portrait (Target) DPE MTIA Vid2Vid LIA FADM AniTalker Figure 4: Cross-Reenactment Visualization: This task involves transferring actions from a target portrait to a source portrait to evaluate each algorithm\u2019s ability to separate motion and appearance. 
Starting from the third column, each column represents the output from a different algorithm. The results highlight our method\u2019s superior ability to preserve fidelity in both motion transfer and appearance retention. I /a\u026a/ State /\u02c8ste\u026at/ Believe / b\u026a\u02c8li\u02d0v / Climate /\u02c8kla\u026am\u0259t/ Self Driven Cross Driven Portrait MakeItTalk Audio Source: Audio2Head SadTalker AniTalker Figure 5: Visual comparison of the speech-driven method in selfand cross-driven scenarios. Phonetic sounds are highlighted in red. be attributed to our sophisticated universal motion representation. The objective evaluation involves driving the image with its audio. Compared to these methods, our approach shows significant improvements in SSIM and CSIM. However, our Sync-D metric shows a decrease, which we believe is due to two main reasons: (1) we do not use this metric as a supervisory signal, and (2) the Sync-D metric focuses on short-term alignment and does not adequately represent long-term information that is more crucial for the comprehensibility of generated videos. This is also corroborated by the qualitative results shown in Figure 5, highlighting our model\u2019s ability to produce convincingly synchronized lip movements to the given phonetic sounds. 4.4 Ablation Study Table 3: Quantitative comparisons of disentanglement methods and the HAL module in Self-Reenactment setting Method ML MID HAL PNSR \u2191 SSIM \u2191 CSIM \u2191 Baseline 25.854 0.849 0.916 Triplet [10] \u2713 26.455 0.860 0.911 AAM-Softmax [46] \u2713 27.922 0.894 0.923 AAM-Softmax + CLUB [4] \u2713 \u2713 28.728 0.900 0.924 AniTalker \u2713 \u2713 \u2713 29.071 0.905 0.927 4.4.1 Ablations on Disentanglement. To further validate the effectiveness of our disentanglement between motion and identity, we \f, 2024, Tao Liu, et al. conducted tests using various methods. Initially, to evaluate the performance of developing a reliable identity encoder using only Metric Learning (ML) without Mutual Information Disentanglement (MID), we assessed both Triplet loss [10] and AAM-Softmax [46]. Our results indicate that AAM-Softmax, an angle-based metric, achieves superior outcomes in our experiments. Additionally, by incorporating a mutual information decoupling module alongside AAM-Softmax, we noted further improvements in results. This enhancement encouraged the motion encoder to focus exclusively on motion-related information. These findings are comprehensively detailed in Table 3. Table 4: Different intermediate representations under the Self-Reenactment setting. \u2018Face Repr.\u2019 is short for face representation, and \u2018Dim.\u2019 represents the corresponding dimension. Method Face Repr. Dim. PSNR \u2191 SSIM \u2191 CSIM\u2191 EMOCA [7] 3DMM 50 20.911 0.670 0.768 PIPNet [22] Landmark 136 22.360 0.725 0.830 AniTalker Motion Latent 20 29.071 0.905 0.927 4.4.2 Ablation Study on Motion Representation. To compare our motion representation with commonly used landmark and 3D Morphable Model (3DMM) representations, we utilized 68 2D coordinates [22] (136 dimensions) for the landmark representation and expression parameters (50 dimensions) from EMOCA [7] for the 3DMM representation. In self-reenactment scenarios, all rendering methods were kept consistent, and different features were used to generate driven images. We observed several key points: (1) As shown in Table 4, our learned representation exhibits a more compact dimensionality, indicating a more succinct encoding of facial dynamics. 
(2) Our video comparisons show that, unlike these explicit representations, our implicit motion representation maintains frame stability without the need for additional smoothing. This can be attributed to our self-supervised training strategy of sampling adjacent frames, which effectively captures subtle dynamic changes while inherently ensuring temporal stability. 0 0.1 0.2 0.3 0.4 0.5 1 2 3 4 5 6 7 8 \u2026 \u2026 Image Encoder Layers Weights Figure 6: The weights of motion representation from different layers of the Image Encoder. 4.4.3 Ablations on HAL. To explore the significance of the Hierarchical Aggregation Layer (HAL) in dynamic representations, we conducted a series of ablation experiments focusing on the HAL layer. The results showed that models incorporating the HAL layer exhibited performance improvements, as detailed in the final row of Table 3. To analyze the impact and importance of different HAL layers on motion representation, we extracted and examined the softmax-normalized weights of each layer (a total of 8 layers in our experiment) in our Image Encoder as shown in Figure 6. It was found that the weights of the last layer contributed most significantly, likely because it represents global features that can effectively recover most motion information. Notably, the fourth layer\u2014situated in the middle of the image encoder feature map\u2014demonstrated a local maximum. Considering the receptive field size of this layer\u2019s patch is similar to the size of eyes and approximately half the size of the mouth, this finding suggests that the layer plays a potential role in simulating areas such as the mouth and eyes. These results not only confirm the pivotal role of the HAL layer in dynamic representation but also reveal the deep mechanisms of the model\u2019s ability to capture facial movements of different scales. Motion Manifold Turn Head Left Eye Closed Diversity Perturbation Speak with Homophones Figure 7: Motion Manifold of the continuous motion space. 5 DISCUSSION Discussion on Universal Motion Representation Our investigations into the model\u2019s ability to encode facial dynamics have highlighted a universal representation of human facial movements. As depicted in Figure 7, we observed that different individuals maintain consistent postures and expressions (such as turning the head left, speaking with homophones, and closing eyes) at each point within our motion space, demonstrating that our motion space forms a Motion Manifold. This manifold facilitates the representation of a continuous motion space, enabling the precise modeling of subtle facial feature variations and allowing for smooth transitions. Additionally, by integrating perturbations through diffusion noise, \fAniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding , 2024, our model can simulate random, minute motion changes that align with fundamental movement patterns, thus enhancing the diversity of generated expressions. These findings demonstrate that our motion representation has a robust capacity to capture and represent a wide array of human facial movements. Discussion on Generalization Ability Although our model is trained on real human faces, it demonstrates the ability to generalize to other images with facial structures, such as cartoons, sculptures, reliefs, and game characters. This underscores the model\u2019s excellent scalability. 
We primarily attribute this capability to the complete decoupling of identity and motion, which ensures that the model grasps the intrinsic nature of facial movements, thereby enhancing its generalization capability. 6 CONCLUSION The AniTalker framework represents a significant advancement in the creation of lifelike talking avatars, addressing the need for a fine-grained and universal motion representation in digital human animation. By integrating a self-supervised universal motion encoder and employing sophisticated techniques like metric learning and mutual information disentanglement, AniTalker effectively captures the subtleties of both verbal and non-verbal facial dynamics. The resulting framework not only achieves enhanced realism in facial animations but also demonstrates strong generalization capabilities across different identities and media. AniTalker sets a new benchmark for the realistic and dynamic representation of digital human faces, promising broad applications in entertainment, communication, and education. Limitation and Future Work While AniTalker shows promise in generalizing motion dynamics, it still faces challenges. Our rendering network generates frames individually, which can lead to inconsistencies in complex backgrounds. Additionally, limited by the performance of the warping technique, extreme cases where the face shifts to a large angle may result in noticeable blurring at the edges. Future work will focus on improving the temporal coherence and rendering effects of the rendering module."
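The Hierarchical Aggregation Layer (HAL) analyzed in the ablation above mixes features drawn from several image-encoder layers through softmax-normalized learnable weights, which is what Figure 6 plots. Below is a hedged sketch of that idea; the per-layer pooling, the linear projections to a common width, and all module names are assumptions made only for illustration, not the released code.

# Hedged sketch of a Hierarchical Aggregation Layer (HAL): a softmax-weighted sum over pooled
# features taken from several image-encoder layers. Pooling, projection and module names are
# assumptions made only for illustration.
import torch
import torch.nn as nn

class HierarchicalAggregation(nn.Module):
    def __init__(self, layer_dims, out_dim):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, out_dim) for d in layer_dims])  # map each layer to a common width
        self.layer_logits = nn.Parameter(torch.zeros(len(layer_dims)))          # one learnable weight per layer

    def forward(self, layer_feats):
        # layer_feats[i]: (batch, layer_dims[i]) pooled feature from encoder layer i
        w = torch.softmax(self.layer_logits, dim=0)                  # normalized per-layer weights (cf. Figure 6)
        projected = [p(f) for p, f in zip(self.proj, layer_feats)]
        return sum(wi * fi for wi, fi in zip(w, projected))          # weighted aggregation into one motion feature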
17
+ }
title_10K/test_title_short_2405.03133v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03133v1",
3
+ "title": "Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training",
4
+ "abstract": "Mixture-of-experts (MoE) models facilitate efficient scaling; however,\ntraining the router network introduces the challenge of optimizing a\nnon-differentiable, discrete objective. Recently, a fully-differentiable MoE\narchitecture, SMEAR, was proposed (Muqeeth et al., 2023), which softly merges\nexperts in the parameter space; nevertheless, its effectiveness was only\ndemonstrated in downstream fine-tuning on classification tasks. In this paper,\nwe present Lory, the first approach that scales such architectures to\nautoregressive language model pre-training. Lory introduces two key techniques:\n(1) a causal segment routing strategy that achieves high efficiency for expert\nmerging operations while preserving the autoregressive nature of language\nmodels; (2) a similarity-based data batching method that encourages expert\nspecialization by grouping similar documents in training instances. We\npre-train a series of Lory models on 150B tokens from scratch, with up to 32\nexperts and 30B (1.5B active) parameters. Experimental results show significant\nperformance gains over parameter-matched dense models on both perplexity\n(+13.9%) and a variety of downstream tasks (+1.5%-11.1%). Despite segment-level\nrouting, Lory models achieve competitive performance compared to\nstate-of-the-art MoE models with token-level routing. We further demonstrate\nthat the trained experts in Lory capture domain-level specialization without\nsupervision. Our work highlights the potential of fully-differentiable MoE\narchitectures for language model pre-training and advocates future research in\nthis area.",
5
+ "authors": "Zexuan Zhong, Mengzhou Xia, Danqi Chen, Mike Lewis",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL",
11
+ "cs.LG"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Mixture AND of AND Experts",
15
+ "gt": "Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training",
16
+ "main_content": "Introduction Mixture-of-experts (MoE) architectures with sparse activation enable the scaling of model sizes while maintaining high training and inference efficiency (Lepikhin et al., 2021; Fedus et al., 2022; Du et al., 2022; Zoph et al., 2022; Lewis et al., 2021; Zhou et al., 2022; Jiang et al., 2024; Xue et al., 2024; Shen et al., 2024). However, training the routing network in MoE architectures introduces the challenge of optimizing a non-differentiable, discrete objective (Shazeer et al., 2017; Zoph et al., 2022). Various techniques\u2014such as switch routing (Fedus et al., 2022), top-k expert-choice routing (Zhou et al., 2022), and linear programming (Lewis et al., 2021)\u2014have been developed to address this challenge, often requiring carefully designed load balancing objectives (Fedus et al., 2022) or introducing additional complexity in assignment algorithms (Lewis et al., 2021; Roller et al., 2021). Recent research has started to explore fully-differentiable MoE architectures as an alternative to overcome training difficulty. Notably, SMEAR (Muqeeth et al., 2023) is an approach that softly merges experts as a weighted average of all the experts\u2019 parameters, as opposed to activating the top-k experts. However, the effectiveness of SMEAR has only been demonstrated in small-scale fine-tuning experiments on downstream classification tasks (Wang et al., 2018). In this work, we propose Lory1, the first approach that scales such fully-differentiated MoE architectures to autoregressive language model pre-training. Unlike 1Lory is a tribe of parrots with rainbow-like colors, which resembles the spirit of \u2018soft\u2019 MoE. 1 arXiv:2405.03133v1 [cs.CL] 6 May 2024 \fPreprint Merged FFN Segment 1 ( ) T \u00d7 d Router FFN 1 FFN 2 FFN 3 FFN 4 FFN 1 FFN 2 FFN 3 FFN 4 Merged FFN FFN 1 FFN 2 FFN 3 FFN 4 Merged FFN Stop gradient Segment 2 ( ) T \u00d7 d Segment 3 ( ) T \u00d7 d Input of the MoE layer ( ) L \u00d7 d Output of the MoE layer ( ) L \u00d7 d Router Router Attention layer MoE layer doc 1 doc 2 \u2026 doc m Training instance: similar docs The Fields Medal is a prize awarded to two, three, \u2026 Huh was awarded 2022 Fields Medal \u2026 Similarity-based data batching Causal segment routing Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training Figure 1: We propose Lory, a fully differentiable MoE architecture designed for autoregressive language models based on expert merging (Section 2.2). We introduce two key techniques to train Lory: First, we propose the causal segment routing strategy, which conducts expert merging at the segment level and preserves the autoregressive property of language models. Second, we use the similarity-based data batching method to construct training instances, which steers the experts toward specializing in specific domains or topics. text classification tasks which only require routing each input sequence to different experts, language modeling makes predictions for each input token, and performing token-level routing is prohibitively expensive as the computational cost of merging operations scales linearly with the number of experts. Lory is based on two key techniques (Figure 1). We first propose causal segment routing. For a sequence of input tokens, we split them into multiple segments with a fixed length, and use the previous segment to determine the router\u2019s weights and calculate the merged expert for the subsequent segment. 
During inference, we can simply use the prompt to make a single routing decision throughout the generation. This segment-level routing strategy preserves the autoregressive nature of language models, while keeping the merging operations efficient. However, since the text data for pre-training language models usually concatenates random sets of documents, we find that such routing can lead to scenarios in which experts are not sufficiently specialized. Hence, we propose our second technique\u2014 similarity-based data batching for MoE training, which groups semantically similar documents to form consecutive segments. This idea has been recently proposed to train LMs to better reason across document boundaries (Shi et al., 2024), while we find that it leads to more effective training of expert routing. We pre-train a series of Lory models from scratch under a training budget of 150B tokens, with 0.3B and 1.5B active parameters, and 8, 16 or 32 experts (up to 6.8B and 29.5B full parameters; see Table 3). Experimental results show that our Lory models significantly outperform equal-sized dense models trained with the same amount of data, achieving performance gains on both perplexity (+13.9%), and a wide range of downstream tasks including commonsense reasoning (+3.7%), reading comprehension (+3.3%), closed-book QA (+1.5%), and text classification (+11.1%). Interestingly, despite that Lory uses segmentlevel routing, we find it achieves competitive performance compared to state-of-the-art MoE models with token-level, non-differentiable discrete routing (Zhou et al., 2022). Our analysis further shows that the trained experts in Lory capture domain-level specialization without any supervision, making it distinct from previous MoE LMs with token-level routing, which only exhibits local patterns uniformly distributed across different domains (Xue et al., 2024; Jiang et al., 2024). Together, we present the first fully-differentiated MoE model that is suitable for language model pre-training, and demonstrate its effectiveness at scale. We hope our work sheds light on the potential of fully differentiable MoE architectures in cultivating specialized experts and we seek to encourage continued exploration in this research field. 2 \fPreprint 2 Preliminaries 2.1 Sparsely-activated MoE Transformer-based MoE language models typically substitute feed-forward network (FFN) layers with sparsely-activiated MoE layers (Shazeer et al., 2017; Fedus et al., 2022; Zoph et al., 2022). Assume an MoE layer consists of E expert FFNs, each parameterized as FFN(\u00b7; \u03b81), . . . , FFN(\u00b7; \u03b8E), where the function FFN : Rd \u2192Rd defines a single expert module. For each token x in a sequence, an MoE layer takes the hidden representation hx \u2208Rd as the input and computes its output ox \u2208Rd by sparsely activating k experts in this layer and aggregating the outputs through a weighted sum: ox = E \u2211 i=1 ei \u00b7 FFN(hx; \u03b8i), where ei = Top-k(Softmax(R(hx)))i. (1) The routing weight ei for the i-th expert is measured by a routing network or router R, which takes hx as input and calculates the weight for each expert. In practice, to achieve sparsity and computational efficiency, only one (Fedus et al., 2022) or top-k (Lepikhin et al., 2021) experts with the highest routing weights are activated at each MoE layer. The weights of the remaining experts are set to 0 (i.e., ei = 0), eliminating the need to compute FFN(hx; \u03b8i) and effectively deactivating the i-th expert. 
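As a point of reference for the merging-based alternative introduced next, Equation (1) can be written as a naive top-k sparse MoE layer. The sketch below favors readability over efficiency (a plain Python dispatch loop, no capacity factor or load-balancing loss); the expert FFN shape and class names are illustrative assumptions.

# Readability-first sketch of the top-k sparse MoE layer in Eq. (1); no capacity factor,
# no load-balancing loss, and a plain Python dispatch loop. Class and layer names are illustrative.
import torch
import torch.nn as nn

class SparseMoELayer(nn.Module):
    def __init__(self, dim, num_experts, k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)            # R(h_x)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])
        self.k = k

    def forward(self, h):                                    # h: (num_tokens, dim)
        probs = torch.softmax(self.router(h), dim=-1)        # routing weights e_i
        topv, topi = probs.topk(self.k, dim=-1)              # keep only the top-k experts per token
        out = torch.zeros_like(h)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):        # naive dispatch, one expert at a time
                mask = topi[:, slot] == e
                if mask.any():
                    w = topv[mask, slot].unsqueeze(1)        # (num_selected_tokens, 1)
                    out[mask] = out[mask] + w * expert(h[mask])
        return out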
2.2 Fully Differentiable MoE Architectures via Expert Merging The primary challenges in training sparsely activated MoE models arise from the difficulty in training discrete routers. A promising direction is to design fully differentiable MoE architectures that do not depend on extra loss formulations for stablized training. A recent model architecture (Muqeeth et al., 2023) demonstrates the feasibility by computing a weighted average of all expert FFNs in the parameter space (Matena & Raffel, 2022; Wortsman et al., 2022), thereby creating a \"merged FFN\". Given an input x and its corresponding routing weights ei, the output ox of a merged FFN is computed as: ox = FFN(hx; E \u2211 i=1 ei \u00b7 \u03b8i), where ei = Softmax(R(hx))i. (2) However, naively extending it to autoregressive language models, which would require computing the merged FFN for each token in a sequence, would be infeasible as the computational costs of merging operations scales linearly with the number of experts. SMEAR (Muqeeth et al., 2023) has only been evaluated for downstream fine-tuning on text classification tasks, which makes routing decisions based on a pooling representation of the entire input sequence, i.e., ei = Softmax(R( \u2211L j=1 hxj L ))i. Such operations will disrupt the autoregressive property in language model pre-training. In this work, we address these challenges by developing a fully differentiable MoE architecture suitable for autoregressive language modeling, and pre-train such models at scale. 3 Our Approach: Lory In this section, we present Lory, an approach for pre-training fully differentiable MoE language models (Figure 1). The core technique that enables Lory to be fully differentiable is expert merging (Muqeeth et al., 2023, see details in Section 2.2). To make it computationally feasible, we propose a causal segment routing method that only merges experts once for each segment, effectively reducing the number of merging operations (Section 3.1). We also propose a data batching strategy of grouping semantically similar texts, which is crucial for effective training of the segment-level router (Section 3.2). Notations. We denote an input sequence of L tokens as X = (x1, x2, . . . , xL). By considering a segment size T, we divide the input sequence into N = \u2308L/T\u2309segments, denoted as 3 \fPreprint S1, S2, . . . , SN. We use R to denote the routing network (parameterized as a linear layer) that computes the weights for expert merging. Let hx represent the hidden representation of the token x. The parameters of the i-th expert FFN are denoted by \u03b8i. 3.1 Efficient Expert Merging via Causal Segment Routing Challenges. An intuitive way of reducing the computational cost is to use segment-level routing instead of token-level routing, which can reduce the number of merging operations from L to N times. However, simply using the current segment to compute the routing weights can cause information leakage. Training design. We propose causal segment routing to effectively route information across segments in an autoregressive manner.2 It merges FFNs in an MoE layer based on the previous segment\u2019s information, and uses it to process the current segment. Specifically, given a training instance X that consists of L tokens (e.g., L = 4096), we split the training instance into N segments, each of which contains T (e.g., T = 256) consecutive tokens. 
For the k-th segment Sk when k > 1, we compute the average of the hidden representations of its preceding segment Sk\u22121 , denoted as \u00af hk\u22121. Using the average hidden representation allows the model to adapt to prompts of varying lengths during inference. \u00af hk\u22121 is then utilized to determine the routing weights, resulting in a merged expert \u00af \u03b8: \u00af hk\u22121 = 1 T \u2211 x\u2208Sk\u22121 hx, ei = Softmax(R(\u00af hk\u22121)), \u00af \u03b8 = \u2211 i ei \u00b7 \u03b8i. (3) We then use the merged expert \u00af \u03b8 to process all the tokens in the current segment Sk, i.e., ox = FFN(hx; \u00af \u03b8), \u2200x \u2208Sk. This approach guarantees that the routing decisions made by the model are based exclusively on data from preceding positions. For the first segment S1, the representation of the segment itself is used to compute the merging weights for its own FFN. To prevent information leakage, we implement a stop-gradient operation on R(\u00af h1). As demonstrated in Appendix B, merging experts at the segment level incurs minimal overhead compared to the training of dense models. Prompt-only routing during inference. During inference, we begin with a given prompt and make a single routing decision per layer based on the average hidden representations of the prompt. This routing decision determines a merged FFN and it is used consistently throughout the entire generation process. It is important to note that this inference process is as simple and computationally efficient as dense models.3 3.2 Similarity-based Data Batching The standard practice of pre-training LMs is to randomly concatenate documents to construct training instances with a fixed length. This could lead to under-specialized experts, because tokens within adjacent segments may come from very different and irrelevant documents. To mitigate this issue, we employ a similarity-based data batching technique inspired by Shi et al. (2024), which sequentially concatenates similar documents to construct training instances. This encourages high similarity between adjacent segments, enabling the experts to specialize in different domains or topics. We measure document similarity using Contriever (Izacard et al., 2022) and concatenate similar documents based on a greedy 2A piece of pseudocode of the causal segment routing strategy can be found in Appendix A. 3In Appendix G.3, we compare the prompt-only routing strategy to using the causal segment routing strategy that faithfully follows the training design. We find that these two strategies do not lead to significant differences in performance on downstream tasks. Given the efficiency advantage of making a single routing decision, we adopt the prompt-only strategy as the default approach. In Appendix H.2, we discuss the potential of converting Lory to sparsely activated MoE models for memory-efficient inference, which we leave it as future work. 4 \fPreprint search algorithm (see Appendix C). Although we employ a data batching technique similar to Shi et al. (2024), our motivation differs from theirs. While their work aims to improve language models\u2019 reasoning across document boundaries, we find this technique effective in encouraging expert specialization in training MoE models. 4 Experiments In this section, we evaluate Lory by training a series of language models from scratch. We first describe the experimental setups (Section 4.1) and then present the results (Section 4.2). 4.1 Setups Models. 
We evaluate our approach by training decoder-only Transformer models which consist of 0.3B and 1.5B active parameters.4 For each FFN layer in the Transformer model, we replace it with MoE layers with E \u2208{8, 16, 32} experts with exactly the same architecture.5 Appendix D shows the configuration of model architectures as well as the total parameter count. We follow LLaMA (Touvron et al., 2023a) and use SwiGLU (Shazeer, 2020) as the activation function in FFNs. We use the same tokenizer as the LLaMA models (Touvron et al., 2023a;b). All models are trained with a 4096-token context window. In the causal segment routing strategy, we set the length of each segment to be T = 256. Training details. We employ the AdamW optimizer (Loshchilov & Hutter, 2019) with \u03b21 = 0.9 and \u03b22 = 0.95 and use a learning rate of 2e-4 with a cosine learning rate scheduler. All models with a batch size of 1 million tokens. We employ the data parallelism with the ZeRO optimization (Rajbhandari et al., 2020) for distributed training.6 At the beginning of training, we train a parameter-matched dense model and duplicate the FFN layers as initialization of the MoE model. In our experiments, we use the first 5% training steps as the warmup to initialize the MoE weights. We find that without warmup training, there may be more experts under-utilized (see Appendix G.4 for an ablation study). We also apply a linear warmup to the learning rate scheduler for the first 5% training steps. We train our models with up to 64 A100 GPUs. Training datasets. We randomly sample a subset of the Commoncrawl dataset (Wenzek et al., 2019) as the training data. The full training dataset consists of 150 billion tokens in total. We apply the similarity-based data batching method on this subset of construct all the training instances, following Shi et al. (2024). See Appendix C for details of the data batching method. Evaluation datasets. We evaluate all the models on language modeling tasks by measuring the perplexity of trained models on held-out evaluation datasets sampled from arXiv, Books, Wikipedia, C4 (Raffel et al., 2020), and Python code (a Python subset of Github). Each evaluation dataset contains 1K samples, each of which consists of 4096 tokens. We also evaluate models in downstream tasks with in-context learning (Brown et al., 2020), including common sense reasoning: BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), SIQA (Sap et al., 2019), HellaSwag (Zellers et al., 2019), WinoGrand (Sakaguchi et al., 2020); reading comprehension: RACE (Lai et al., 2017), ARC (Clark et al., 2018)); closedbook QA: Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017); and text classification: AGNews (Zhang et al., 2015), SST-2 Socher et al. (2013), Amazon and Yelp (Zhang et al., 2015), FEVER (Thorne et al., 2018), MRPC (Dolan & Brockett, 2005). For text classification tasks, we follow the evaluation setup of Min et al. (2022); for the rest of tasks, we follow the same setup as Touvron et al. (2023b). 4Here, \u201cactive parameters\u201d refers to the size of the model after merging at each MoE layer. 5In Appendix E, we additionally conduct experiments on a 7B dense model and a 7B/4E MoE model without using similarity-based data batching. Due to the limited computing resources, we are not able to train 7B models on the similarity-based batched dataset. 6In Appendix H.1, we discuss parallelism strategies when scaling up model sizes (e.g., > 100B). 
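Before the results, a compact sketch of the causal segment routing with expert merging described in Section 3.1 (Equations 2 and 3): the mean hidden state of the previous segment produces routing weights, the expert FFN parameters are averaged with those weights, and the merged FFN processes the current segment. Tensor layouts, the two-matrix FFN with GELU (the paper uses SwiGLU), and the stop-gradient handling of the first segment are simplifications, so treat this as an illustration rather than the reference implementation.

# Compact sketch of causal segment routing with expert merging (Sec. 3.1, Eqs. 2-3).
# expert_w1: (E, d, f) and expert_w2: (E, f, d) are stacked expert FFN weights; router_w: (d, E).
# A plain two-matrix FFN with GELU replaces the paper's SwiGLU purely for brevity.
import torch
import torch.nn.functional as F

def causal_segment_moe(h, router_w, expert_w1, expert_w2, seg_len=256):
    """h: (L, d) hidden states of one training instance; returns the MoE layer output."""
    L, _ = h.shape
    out = []
    for start in range(0, L, seg_len):
        seg = h[start:start + seg_len]
        if start == 0:
            ctx = seg.mean(dim=0).detach()                   # first segment routes itself, with stop-gradient
        else:
            ctx = h[start - seg_len:start].mean(dim=0)       # mean hidden state of the previous segment (Eq. 3)
        e = torch.softmax(ctx @ router_w, dim=-1)            # routing weights over the E experts
        w1 = torch.einsum("e,edf->df", e, expert_w1)         # merged FFN parameters (Eq. 2)
        w2 = torch.einsum("e,efd->fd", e, expert_w2)
        out.append(F.gelu(seg @ w1) @ w2)                    # one merged FFN applied to the whole segment
    return torch.cat(out, dim=0)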
5 \fPreprint 0 50 100 150 2.1 2.2 2.3 2.4 2.5 2.6 Log Perplexity 0.3B (dense) 0.3B/8E (Lory) 0.3B/16E (Lory) 0.3B/32E (Lory) 0 50 100 150 1.9 2.0 2.1 2.2 2.3 2.4 1.5B (dense) 1.5B/8E (Lory) 1.5B/16E (Lory) 1.5B/32E (Lory) Billion of tokens Model arXiv Books Wiki C4 Python 0.3B 8.4 18.0 10.3 13.8 15.2 0.3B/8E 7.4 16.0 9.2 13.3 12.5 0.3B/16E 7.2 15.7 9.1 13.1 12.2 0.3B/32E 7.2 15.5 8.9 13.0 11.7 1.5B 6.6 13.6 7.8 10.7 10.4 1.5B/8E 6.2 12.8 7.6 10.6 10.1 1.5B/16E 6.0 12.4 7.1 10.6 8.9 1.5B/32E 5.8 12.3 7.1 10.4 8.7 Figure 2: Left: training curves (log perplexity) of models with different sizes and experts. Right: Perplexity of trained models on different evaluation sets (arXiv, Books, Wikipedia, C4, and Python). We include the detailed model configurations and sizes in Appendix D. Commonsense Reasoning Reading Comprehension Model PIQA SIQA BoolQ HellaSwag WinoGrande RACE-m RACE-h ARC-e ARC-c 0.3B 65.8 42.7 44.6 34.6 51.2 41.7 30.9 51.5 21.3 0.3B/8E 67.5 41.2 41.2 34.8 54.4 43.1 31.4 52.4 22.1 0.3B/16E 67.2 44.1 56.6 34.9 54.1 43.9 31.1 54.8 24.9 0.3B/32E 68.2 43.0 58.0 34.7 53.4 42.7 32.0 57.4 26.3 1.5B 71.2 45.0 54.0 43.9 60.9 50.1 36.7 65.0 31.0 1.5B/8E 72.1 45.2 62.0 43.6 63.7 51.2 36.5 66.3 32.5 1.5B/16E 71.3 45.0 56.0 43.7 61.5 51.7 37.3 66.3 32.7 1.5B/32E 72.1 47.1 59.9 43.8 61.9 51.5 32.4 66.7 32.7 Closed-book QA Text Classification Avg Model NQ TQA AGNews Amazon SST-2 Yelp Fever MRPC 0.3B 4.7 8.8 30.3 53.6 54.6 66.0 47.6 62.0 41.8 0.3B/8E 5.3 9.0 38.4 52.3 54.6 62.6 56.6 59.0 42.7 0.3B/16E 6.0 10.2 36.3 75.6 53.3 64.0 57.0 65.0 45.8 0.3B/32E 5.3 10.2 47.3 64.0 55.3 73.3 55.7 56.0 46.0 1.5B 7.6 23.8 64.0 65.3 80.0 58.6 59.0 66.7 51.9 1.5B/8E 7.3 24.2 65.0 94.0 80.0 88.3 57.0 64.0 56.1 1.5B/16E 7.3 25.6 61.6 78.3 84.6 93.6 57.3 63.6 55.1 1.5B/32E 7.0 25.4 62.3 94.7 85.0 95.3 56.3 66.7 56.5 Table 1: We compare the Lory MoE models with the parameter-matched dense models on downstream tasks, including commonsense reasoning, reading comprehension, closed-book QA, and text classification. 4.2 Main Results Training efficiency and convergence. Figure 2 (left) shows the training loss curves of the dense model and our MoE models with different model sizes. First, we find that with the same amount of training tokens, our models clearly achieve better training loss compared to the dense model baseline. For the 0.3B and 1.5B models, our models with 32 experts achieve the same level of loss with fewer than half of the training tokens. This indicates that our approach achieves much better performance with the same training compute (see analysis of additional FLOPs from MoE layers in Appendix B). We also observe that when using more experts, we are able to gain more improvement. Language modeling. We evaluate trained models on language modeling evaluation sets. As shown in Figure 2 (right), our MoE models outperform the dense baseline in all domains, significantly reducing perplexity. For example, our 0.3B/32E model achieves a relative improvement of 13.9% on Books compared to the 0.3B dense model. We observe that the improvement is especially large in test domains that are markedly different from the domains of the training dataset (e.g. Python). We consider this as a strong indication of expert specialization in specific domains (We further study expert specialization in Section 5.4). 6 \fPreprint Downstream tasks. Table 1 shows the model performance on downstream tasks. We observe significant performance across all tasks. 
For example, our 0.3B/32E model achieves an average performance improvement of +3.7% in common sense reasoning, +3.3% in reading comprehension, +1.5% in reading comprehension, and +11.1% in text classification. 5 Analysis and Ablation Studies In this section, we conduct ablation studies and analysis to understand the essence of each component of our approach. 5.1 Importance of Causal Segment Routing We compare our causal segment routing strategy with an alternative prefix routing strategy for training. In prefix routing, expert merging is performed only once for each sequence based on the first segment. The merged FFN is then used to process the rest of the sequence without further updates. Figure 3 shows that using only a prefix for routing leads to much worse performance compared to causal segment routing. These results highlight the importance of using every segment to provide strong training signals for routers. 0 50 100 150 Billion of tokens 2.1 2.2 2.3 2.4 2.5 2.6 Log Perplexity 0.3B (dense) 0.3B/8E (causal segment routing) 0.3B/8E (prefix routing) Figure 3: Training curves of causal segment routing and prefix routing. The latter is a straightforward segment-level routing strategy that uses the first segment to route the entire input. 0 50 100 150 2.1 2.2 2.3 2.4 2.5 2.6 Log Perplexity 0.3B (sim batch) 0.3B/8E (sim batch) 0.3B (rand batch) 0.3B/8E (rand batch) 0 50 100 150 0.00 0.02 0.04 0.06 0.08 0.10 Loss Improvement (MoE over Dense) sim batch rand batch Billion of tokens Figure 4: Left: Training curves of similarity-based data batching (sim batch) or the standard random batching (rand batch). Right: Training loss difference between Lory and a dense model when using different batching strategies. Lory leads to a larger loss improvement over the dense model when using similarity-based data batching. 5.2 Importance of Similarity-based Data Batching To investigate the importance of similarity-based data batching, we compare the performance improvement of MoE models over dense models with and without this batching method. Figure 4 (left) shows the training loss of dense (0.3B) and MoE models with eight experts (0.3B/8E) using similarity-batched (sim batch) and randomly-batched (rand batch) data. MoE models consistently outperform dense models in both setups. However, the loss improvement (i.e., the difference in loss between dense and MoE models) is much larger with similarity-based batching, and this effect is amplified with more training data (Figure 4 (right)). These results strongly support the importance of similarity-based batching for effectively training our MoE model. 5.3 Comparison with Existing MoE Models We compare our approach with Expert Choice (EC) (Zhou et al., 2022), a state-of-theart MoE method that ensures balanced load during training by having each expert select top-k inputs according to the routing weights. We consider two variants of EC MoE models, both with a capacity factor of 1 to match the computation of our MoE models. First, we train a sparse EC MoE model using our segment routing strategy, where each expert selects top segments and processes all tokens within those segments. This variant allows us to directly compare our expert-merging strategy with the expert choice method while using the same segment-level routing approach. 
7 \fPreprint 0 50 100 150 Billion of tokens 2.1 2.2 2.3 2.4 2.5 2.6 Log Perplexity 0.3B/8E (Lory) 0.3B/8E (EC, segment-level) 0.3B/8E (EC, token-level) Figure 5: Comparison with the state-ofthe-art MoE training technique Expert Choice (EC) with a segment-level or token-level routing. For both EC models, we use the capacity factor of 1 with the same amount of FLOPs as our training method for the fair comparison. Second, we consider the original EC setting with token-level routing to provide an end-to-end comparison with state-of-the-art MoE models using the same amount of training computation. Figure 5 shows the training loss curves. We observe that Lory (blue curve) significantly outperforms segment-level EC (orange curve) with the same routing setting, suggesting that a fully differentiable architecture is more effective than a sparse MoE when using the same routing strategy. Comparing Lory with the token-level EC model (green curve), we find that Lory achieves competitive results despite using segmentlevel routing and not requiring any advanced training techniques. These results highlight the significant potential of Lory. In Appendix G.1, we compare Lory and EC on held-out evaluation sets. We find Lory achieves much better perplexity compared to the token-level EC model, while performing similarly on other domains (arXiv, Books, Wiki, C4). Our analysis in Section 5.4 demonstrates that Lory learns experts specialized in specific domains (e.g., Python code), potentially improving performance in less frequent domains. 5.4 Expert Utilization and Specialization Utilization: How many experts are actively utilized? One potential issue of training MoE models is the models may collapse to dense models because most experts are under-utilized (e.g., some experts have never been activated). In Appendix G.2, we show although without using any auxiliary loss on load balancing, Lory is able to achieve high expert utilization, preventing the MoE models from collapsing to dense models. Specialization: What do experts learn? In order to study the expert specialization, we investigate the averaged routing weights at different layers of the 0.3B/8E model, on different domains (Books, arXiv, Python, and Wikipedia). Figure 6 shows the routing weights at layer 0, 11, and 23 (the first, middle, and last layer) of the 0.3B/8E model.7 First, we find that there exists clear domain-level expert specialization in our trained MoE models, even though no additional domain-level supervision is used during training. For instance, expert 7 at layer 11 is specialized to process inputs in the arXiv domain. We also observe that routing weights on arXiv and Python code are more similar compared to Books and Wikipedia, likely because LaTex code and Python code are dissimilar to natural language. Second, experts at the middle or high layers are more specialized in specific domains, while the routing weights at lower layers are similar and flat across domains. 0 2 4 6 0.0 0.1 0.2 0.3 0.4 Layer 0 0 2 4 6 Expert ID Layer 11 0 2 4 6 Layer 23 Averaged Weights Books arXiv Python Wikipedia Figure 6: Averaged routing weights at layer {0, 11, 23} of the 0.3B/8E model on different domains (Books, arXiv, Python, Wikipedia). We observe that the experts in our MoE models learn domain-level specialization, especially at middle and higher layers. 7In Appendix F, we show the averaged routing weights at all layers of the 0.3B/8E model. 
8 \fPreprint It is worth noting that our learned experts behave differently from those of prior token-level MoE models, where shallow token-level specialization is observed. For example, some experts are specialized for a specific type of word (e.g., punctuations, articles), and few deep semantic features are captured by the learned routers (Jiang et al., 2024; Lewis et al., 2021; Zoph et al., 2022; Shazeer et al., 2017; Xue et al., 2024). Our models learn domain-level specialization, which we attribute to the segment-level routing strategy used during training. This strategy allows routers to capture global semantic features beyond the token level. The complementary nature of features captured by segment/sentence-level and token-level routing strategies suggests the possibility of combining them to build even stronger models, and we leave it for future work. 5.5 More Analysis and Discussion In Appendix G, we further show that (1) during inference of downstream tasks, routing the entire input prompt once or routing each segment does not make substantial differences on the tasks we evaluate; (2) warmup training is crucial to achieve high expert utilization, especially when training MoE models with a large number of experts. In addition, we discuss training parallelism strategies when further scaling up model sizes in Appendix H.1; and discuss the potential of converting Lory to sparse models for more efficient inference in Appendix H.2. 6 Related Work Mixture of Experts. Sparsely activated MoE models (Shazeer et al., 2017) have been proposed to demonstrate the potential of massively scaling up model sizes. GShard (Lepikhin et al., 2021) adapts the sparse MoE architecture into Transformer models and achieves strong results on machine translation. Recent work has extended it to general language models (Fedus et al., 2022; Zoph et al., 2022; Jiang et al., 2024; Dai et al., 2024; Zhou et al., 2022; Du et al., 2022; Artetxe et al., 2021; Xue et al., 2024). Traditional MoE models are trained to route given inputs to one or a few specialized expert modules, which introduces a non-differentiable, discrete decision-learning problem. These existing models are trained with the top-1 or top-2 routing strategy on a carefully designed load balancing objective (Lepikhin et al., 2021; Fedus et al., 2022; Zoph et al., 2022), or employ complicated assignment algorithms to distribute inputs (Lewis et al., 2021; Roller et al., 2021; Zhou et al., 2022). Training MoE models has been shown to be difficult, facing the issues of training instability, expert under-specialization, poor training efficiency (Zoph et al., 2022). Our approach enables end-to-end gradient back-propagation by employing fully differentiable MoE architectures. SMEAR (Muqeeth et al., 2023) proposes softly merging experts by taking a weighted average on the parameter space. However, SMEAR is only applied to text classification tasks with an encoder backbone. Although Lory shares a similar expert merging technique, it is the first approach that scales such architecture to autoregressive language model pre-training. Soft MoE (Puigcerver et al., 2024) is another fully-differentiable MoE architecture which enables end-to-end gradient back-propagation. However, it is only evaluated on vision tasks and does not apply to autoregressive language model pre-training either. We leave how to extend Soft MoE on decoder language models as the future work. Similarity-based data batching. 
There exists research that applies a similar data batching method during training. In-context pre-training (Shi et al., 2024) groups relevant documents together to encourage language models to leverage long-range contexts and improve the results of in-context learning and retrieval augmentation. Zhong et al. (2022) batch documents with high lexical similarity to collect more positive pairs in a contrastive learning framework to provide stronger training signals. Despite sharing a similar idea, the goal of our data batching method is to avoid routing irrelevant documents together, which may hurt the expert specialization. 9 \fPreprint 7 Conclusion In this paper, we propose Lory, a fully-differentiable MoE model designed for autoregressive language model pre-training. Our extensive experiments demonstrate that Lory significantly outperforms its dense counterpart on language modeling and downstream tasks. We also observe that trained experts are highly specialized and capable of capturing domain-level information. Future research includes further scaling up Lory, combining token-level routing and segment-level routing, and developing efficient decoding methods for Lory. Acknowledgements We appreciate useful comments and feedback from the members of the Princeton NLP group. We thank Weijia Shi for the help with the experiments and discussion related to the similarity-based data batching method. Ethics Statement This paper presents a new approach for building large language models. We would like to note that, similar to existing language models, the language models trained with our approach may have the same potential societal consequences. For example, language models can produce factually inaccurate outputs, which carry the risk of spreading misinformation (e.g., (Min et al., 2023)). Additionally, malicious users can extract training data used to train language models, potentially causing privacy and licensing issues (Carlini et al., 2021). We acknowledge these potential negative consequences and caution those who use our approach to build powerful language models."
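The similarity-based data batching discussed above (Section 3.2 and the related-work paragraph) relies on a greedy search whose exact form is given only in the paper's Appendix C. The sketch below is one plausible greedy ordering, assuming precomputed, L2-normalized document embeddings (for instance from Contriever); it is a stand-in for, not a reproduction of, the authors' procedure.

# One plausible greedy ordering for similarity-based batching (a stand-in for the procedure in
# Appendix C of the paper): start from an arbitrary document, repeatedly append the most similar
# unused document, then chunk the concatenated token stream into fixed-length training instances.
# doc_embs are assumed to be precomputed (e.g. Contriever) and L2-normalized.
import numpy as np

def greedy_similarity_order(doc_embs):
    n = doc_embs.shape[0]
    used = np.zeros(n, dtype=bool)
    order, cur = [0], 0
    used[0] = True
    for _ in range(n - 1):
        sims = doc_embs @ doc_embs[cur]      # cosine similarity to the current document
        sims[used] = -np.inf                 # never revisit a document
        cur = int(np.argmax(sims))
        used[cur] = True
        order.append(cur)
    return order

def build_instances(token_ids_by_doc, order, instance_len=4096):
    stream = [tok for i in order for tok in token_ids_by_doc[i]]   # concatenate similar documents
    return [stream[s:s + instance_len]
            for s in range(0, len(stream) - instance_len + 1, instance_len)]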
17
+ }
title_10K/test_title_short_2405.03150v1.json ADDED
The diff for this file is too large to render. See raw diff
 
title_10K/test_title_short_2405.03188v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03188v1",
3
+ "title": "Hyperbolic Geometric Latent Diffusion Model for Graph Generation",
4
+ "abstract": "Diffusion models have made significant contributions to computer vision,\nsparking a growing interest in the community recently regarding the application\nof them to graph generation. Existing discrete graph diffusion models exhibit\nheightened computational complexity and diminished training efficiency. A\npreferable and natural way is to directly diffuse the graph within the latent\nspace. However, due to the non-Euclidean structure of graphs is not isotropic\nin the latent space, the existing latent diffusion models effectively make it\ndifficult to capture and preserve the topological information of graphs. To\naddress the above challenges, we propose a novel geometrically latent diffusion\nframework HypDiff. Specifically, we first establish a geometrically latent\nspace with interpretability measures based on hyperbolic geometry, to define\nanisotropic latent diffusion processes for graphs. Then, we propose a\ngeometrically latent diffusion process that is constrained by both radial and\nangular geometric properties, thereby ensuring the preservation of the original\ntopological properties in the generative graphs. Extensive experimental results\ndemonstrate the superior effectiveness of HypDiff for graph generation with\nvarious topologies.",
5
+ "authors": "Xingcheng Fu, Yisen Gao, Yuecen Wei, Qingyun Sun, Hao Peng, Jianxin Li, Xianxian Li",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Hyperbolic Geometric Latent Diffusion Model for Graph Generation",
15
+ "main_content": "Introduction Graphs in the real world contain variety and important of topologies, and these topological properties often reflect 1Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, China 2Institute of Artificial Intelligence, Beihang University, Beijing, China 3School of Software, Beihang University, Beijing, China 4Beijing Advanced Innovation Center for Big Data and Brain Computing, School of Computer Science and Engineering, Beihang University, Beijing, China. Correspondence to: Xingcheng Fu <[email protected]>, Jianxin Li <[email protected]>, Xianxian Li <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). physical laws and growth patterns, such as rich-clubs, smallworlds, hierarchies, fractal structures, etc. Traditional random graph models based on graph theory, such as ErdosRenyi (Erd\u02dd os et al., 1960), Watts-Strogatz (Watts & Strogatz, 1998) and Barabasi-Albert (Barab\u00b4 asi & Albert, 1999), etc., need artificial heuristics to build the algorithms for single nature topologies and lack the flexibility to model various complex graphs. Therefore, many deep learning models have been developed for graph generation, such as Variational Graph Auto-Encoder (VGAE) (Kipf & Welling, 2016), Generative Adversarial Networks(GAN) (Goodfellow et al., 2014), and other technologies. Recently, the Denoising Diffusion Probabilistic Model(DDPM) (Ho et al., 2020) have demonstrated great power and potential in image generation, attracting huge attention from the community of graph learning. For graph generation, a straightforward idea involves designing discretized diffusion methods for the graph structural information. (Vignac et al., 2022; Jo et al., 2022; Luo et al., 2022), and the other way is to develop advanced graph encoders to preserve structural information throughout the diffusion process within a continuous potential space (Xu et al., 2021; 2023). However, because of the irregular and non-Euclidean structure of graph data, the realization of the diffusion model for graphs still has two main limitations: (1) High computational complexity. The core to graph generation is to handle the discreteness, sparsity and other topological properties of the non-Euclidean structure. Since the Gaussian noise perturbation used in the vanilla diffusion model is not suitable for discrete data, the discrete graph diffusion model usually has high time and space complexity due to the problem of structural sparsity. Moreover, the discrete graph diffusion model relies on a continuous Gaussian noise process to create fully connected, noisy graphs (Zhang et al., 2023; Ingraham et al., 2019) which loses structural information and underlying topological properties. (2) Anisotropy of non-Euclidean structure. Different from the regular structure data (e.g. pixel matrix or grid structure), the \u201dirregular\u201d non-Euclidean structure embeddings of graph data are anisotropic in continuous latent space (Elhag et al., 2022). As shown in Figure 1(b), the node embeddings of a graph in Euclidean space exhibit significant anisotropy in several specific directions. Recently, some studies (Yang et al., 2023) have shown that isotropic 1 arXiv:2405.03188v1 [cs.LG] 6 May 2024 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation (a) Original structure. (b) Euclidean latent space. (c) Hyperbolic latent space. Figure 1. 
Visualization of node embeddings by singular value decomposition (SVD); (a) Original structure visualization of the NCAA football graph and different colors indicate different labels(teams); (b) Visualization of node embeddings in 2D Euclidean space and planar projection; (c) Visualization of node embeddings in 2D hyperbolic space and Poincar\u00b4 e disk projection. diffusion of the node embedding of the graph in the latent space will treat the anisotropic structural information as noise, and this useful structural information will be lost in the denoising process. Hyperbolic geometric space is widely recognized as an ideal continuous manifold for representing discrete tree-like or hierarchical structures (Cannon et al., 1997; Ungar, 1999; Krioukov et al., 2010; Sun et al., 2024b), and has been widely studied and applied to various graph learning tasks (Sun et al., 2021; Tifrea et al., 2019; Nickel & Kiela, 2017; Sala et al., 2018; Chami et al., 2019; Sun et al., 2024a). Inspired by these studies, we find that hyperbolic geometry has great potential to address non-Euclidean structural anisotropy in graph latent diffusion processes. As shown in Figure 1(c), in hyperbolic space, we can observe that the distribution of node embeddings tends to be isotropic globally, while anisotropy is preserved locally. In addition, hyperbolic geometry unifies angular and radial measures of polar coordinates as shown in Figure 2(a), and can provide geometric measures with physical semantics and interpretability (Papadopoulos et al., 2012). It is exciting that hyperbolic geometry can provide a geometrically latent space with graph geometric priors, able to help deal with the anisotropy of graph structures by special geometric measures. Based on the above insights, we aim to establish a suitable geometrically latent space based on hyperbolic geometry to design an efficient diffusion process to the non-Euclidean structure for topology-preserving graph generation tasks. However, there are two primary challenges: (1) the additivity of continuous Gaussian distributions is undefined in hyperbolic latent space; (2) devising an effective anisotropic diffusion process for non-Euclidean structures. Contributions. To address the challenges, we propose a novel Hyperbolic Geometric Latent Diffusion (HypDiff) model for the graph generation. For the additive issue of continuous Gaussian distribution in hyperbolic space, we propose an approximate diffusion process based on radial measures. Then the angular constraint was utilized to constrain the anisotropic noise to preserve more structural prior, guiding the diffusion model to finer details of the graph structure. Our contributions are summarized as: \u2022 We are the first to study the anisotropy of nonEuclidean structures for graph latent diffusion models from a geometric perspective, and propose a novel hyperbolic geometric latent diffusion model HypDiff. \u2022 We proposed a novel geometrically latent diffusion process based on radial and angular geometric constraints in hyperbolic space, and addresses the additivity of continuous Gaussian distributions and the issue of anisotropic noise addition in hyperbolic space. \u2022 Extensive experiments on synthetic and real-world datasets demonstrate a significant and consistent improvement of HypDiff and provide insightful analysis for graph generation. 2. Related Works 2.1. 
Graph Generative Diffusion Model Different from that learn to generate samples once, like GAN (Goodfellow et al., 2014; Wang et al., 2018; Dai et al., 2018), VGAE (Yu et al., 2018; Xu & Durrett, 2018; Grattarola et al., 2019) or GraphRNN (You et al., 2018), the diffusion model (Ho et al., 2020) aims to gradually convert the sample into pure noise by a parameterized Markov chain process. Some recent works (Xu et al., 2021; 2023) employ advanced graph encoders to effectively preserve the inherent structural information throughout the diffusion process within a continuous potential space. Gaussian noise is added on the distribution of nodes and edges of the graph (Vignac et al., 2022), and Gaussian processes are performed on the neighborhood or spectral domain of the graph (Vignac et al., 2022; Jo et al., 2022; Luo et al., 2022). However, existing discrete diffusion models have many challenges in capturing the non-Euclidean structure and preserving underlying topological properties. 2 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation (a) Geometric interpretation. (b) Hyperbolic latent diffusion. Figure 2. (a) Geometric interpretation of the hyperbolic geometry, which unifies the radius and angle measurements in polar coordinates and interprets as popularity and similarity respectively; (b) Hyperbolic latent diffusion processing with isotropic/anisotropic noise; 2.2. Hyperbolic Graph Learning Hyperbolic geometric space was introduced into complex networks earlier to represent the small-world and scale-free complex networks (Krioukov et al., 2010; Papadopoulos et al., 2012). With high capacity and hierarchical-structurepreserving ability, hyperbolic geometry is also used in NLP (Nickel & Kiela, 2017; Tifrea et al., 2019) to learn word representations with hypernym structure. For graph neural networks, hyperbolic space is recently introduced into graph neural networks (Liu et al., 2019; Chami et al., 2019; Sun et al., 2021; 2022). P-VAE (Mathieu et al., 2019) and Hyper-ANE (Liu et al., 2018) extend VAE and GAN into the hyperbolic versions to learn the hierarchical representations. To sum up, hyperbolic geometry provides an intuitive and efficient way of understanding the underlying structural properties of the graph. 3. Methodology In this section, we present our Hyperbolic geometric latent Diffusion model (HypDiff) for addressing the two main challenges. The key insight is that we leverage hyperbolic geometry to abstract the implicit hierarchy of nodes in the graph and introduce two geometric constraints to preserve important topological proprieties, such as scale-free, navigability, and modularity. Considering the successful experiences of graph latent diffusion models (Xu et al., 2023), we adopt a two-stage training strategy framework in our practice. We first train the hyperbolic autoencoder to obtain the pre-trained node embeddings, and then train the hyperbolic geometric latent diffusion process. The architecture is shown in Figure 3. 3.1. Hyperbolic Geometric Autoencoding We first need to embed the graph data G = (X, A) into a low-dimensional hyperbolic geometric space to improve the graph latent diffusion process. Hyperbolic Encoder and Decoder. We consider a hyperbolic variant of the auto-encoder, consisting of the hyperbolic geometric encoder and the Fermi-Dirac decoder. 
Where the hyperbolic geometric encoder encodes the graph G = (X, A) into the hyperbolic geometric space to obtain a suitable hyperbolic representation, and the Fermi-Dirac decoder decodes the hyperbolic representation back into the graph data domain. The hyperbolic manifold Hd and the tangent space Tx can be mapped to each other via exponential map and logarithmic map (Ganea et al., 2018b). Then, we can leverage Multi-Layer Perceptrons(MLP) or Graph Neural Networks(GNNs) by exponential and logarithmic mapping as hyperbolic geometric encoders. In this paper, we use Hyperbolic Graph Convolutional Neural Networks(HGCN) (Chami et al., 2019) as the hyperbolic geometric encoder. Optimization of Autoencoding. Due to the additive failure of the Gaussian distribution in hyperbolic space, we cannot directly use Riemannian normal distribution or wrapped normal distribution. Instead of hyperbolic diffusion embedding (Lin et al.) using the product space of multiple manifolds, we propose a new diffusion process in hyperbolic space, which will be described in detail in Section 3.2. Following P-VAE (Mathieu et al., 2019), for compute efficiency, the Gaussian distribution of hyperbolic space is approximated by the Gaussian distribution of the tangent plane T\u00b5. The optimization of hyperbolic geometric auto3 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation Figure 3. An illustration of HypDiff architecture. encoding is as follows: LHAE = \u2212Eq\u03d5(zx|x)logmapc op\u03be (x|zx) , (1) where logc o is the logarithmic mapping of the north pole (origin) o of hyperbolic space to simplify the computation. 3.2. Hyperbolic Geometric Latent Diffusion Process Unlike the linear addition in Euclidean space, hyperbolic space utilizes M\u00a8 obius addition, posing challenges for diffusion over a hyperbolic manifold. Furthermore, the isotropic noise leads to a rapid reduction of signal-to-noise ratio making it difficult to preserve topological information, and for the detailed results and analysis please refer to Appendix B. In light of these issues, we propose a novel diffusion process to address both of them. Hyperbolic Anisotropic Diffusion. The anisotropy of the graph in the latent space contains an inductive bias of the graph structure, where the most critical challenge is how to determine the dominant directions of the anisotropic features. In additionally, on hyperbolic manifolds, neither the wrapped normal distribution of the isotropic setup nor the anisotropic setup satisfies this property: \u03b7 \u0338\u223c\u03b71 \u2295c \u03b72, \u03b7 \u223cN c H \u00000, (\u03c32 1 + \u03c32 2)I \u0001 , \u03b71 \u223cN c H \u00000, \u03c32 1I \u0001 , \u03b72 \u223cN c H \u00000, \u03c32 2I \u0001 . (2) where c is Hyperbolic curvature and N c H is the Wrapped Gaussian distribution. We propose a hyperbolic anisotropic diffusion framework to solve both challenges. The detailed proof process can be found in the Appendix C.1. The core idea is to select the main diffusion direction (i.e., angle) based on the similarity clustering of nodes, which is equivalent to dividing the hyperbolic latent space into multiple sectors. Then we project the nodes of each cluster onto its center\u2019s tangent plane for diffusion. Let h denote the embedding of the graph in the hyperbolic space and hi denote the i-th node in it. Let hi belong to the k-th cluster and its clustering center coordinates are \u00b5k, then the node hi is represented in the tangent space of \u00b5k as x0i: x0i = logmapc \u00b5k (hi) . 
(3) where \u00b5k is the central point of cluster k obtained by Hyperbolic-Kmeans (h-kmeans) (Hajri et al., 2019) algorithm. Note that the clusters can be obtained by any clustering algorithm based on similarity in the pre-processing stage. Moreover, the hyperbolic clustering parameter k has the following property: Theorem 3.1. Given the hyperbolic clustering parameter k \u2208[1, n], which represents the number of sectors dividing the hyperbolic space (disk). The hyperbolic anisotropic diffusion is equivalent to directional diffusion in the Klein model Kn c with multi-curvature ci\u2208|k|, which is an approximate projecting onto the tangent plane set Toi\u2208{|k|} of the centroids oi\u2208{|k|}. The proof is in the Appendix C.2. This property elegantly establishes the relationship between our approximation algorithm and the Klein model with multiple curvatures. Our algorithm exhibits specific behaviors based on the value of k, it allows for a more flexible and nuanced representation of anisotropy based on the underlying hyperbolic geometry, enabling improved accuracy and efficiency in subsequent noise addition and training. Geometric Constraints. Hyperbolic geometry can naturally and geometrically describe the connection pattern of nodes during graph growth (Papadopoulos et al., 2012). As shown in Figure 2(a), the popularity of a node can be abstracted by its radial coordinates and the similarity can be expressed by its angular coordinate distances in the hyperbolic space, and more detail can be referred to Appendix D. Our goal is to model a diffusion with geometric radial growth, and where this radial growth is consistent with hyperbolic properties. Considering that we need to maintain this kind of hyperbolic growth tendency in the tangent plane, 4 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation we use the following formulas: xt = \u221a\u03b1tx0 + \u221a 1 \u2212\u03b1t\u03f5 + \u03b4 tanh[\u221ac\u03bbc ot/T0]x0, (4) where \u03f5 is Gaussian noise and \u03b4 is the radial popularity coefficient that controls the diffusion strength of each node in hyperbolic space. T0 is a constant to control the speed of control of radial growth rate.\u03bbc x = 2 1+c\u2225x\u22252 Then, we discuss the content only on a cluster tangent plane. The main reason why the general diffusion model does not perform well on the graph is the decline of the fast signal-tonoise ratio. Inspired by directional diffusion model (Yang et al., 2023), we designate the direction of the geodesic between each cluster\u2019s center point and the north pole o as the target diffusion direction while imposing constraints for forward diffusion processes. Specifically, the angular similarity constraints for each node i can be obtained by: z = sgn (logmapc o (h\u00b5i)) \u2217\u03f5, \u03f5 \u223cN (0, I) , (5) where z represents the angle constrained noise,\u03f5 is the Gaussian noise, h\u00b5i is the clustering center corresponding to the i-th node. Combining the radial and angular constraints, our geometric diffusion process can be described as: xt = \u221a\u03b1tx0 + \u221a 1 \u2212\u03b1tz + \u03b4 tanh[\u221ac\u03bbc ot/T0]x0, (6) Theorem 3.2. Let xt indicate the node x at the t-step in the forward diffusion process Eq (6). As t \u2192\u221e, the lowdimensional latent representation xt of node x satisfies: lim t\u2192\u221ext \u223cNf (\u03b4x0, I) . (7) where Nf is an approximate folded normal distribution. More detail and proof can be referred to in the Appendix E. 
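To make the geometric forward process above concrete, the following short Python (NumPy) sketch implements one noising step of Eq. (6) under stated assumptions: the node embeddings are taken to be already projected onto their cluster centroid's tangent plane as in Eq. (3), alpha_bar is a standard cumulative noise schedule, and the function and argument names are ours for illustration, not the authors' released code.

import numpy as np

def hypdiff_forward_step(x0, t, centroid_logmaps, alpha_bar, c=1.0, delta=0.5, T0=1000):
    # x0: (n, d) tangent-plane node embeddings obtained via Eq. (3)
    # centroid_logmaps: (n, d) log-map (at the north pole o) of each node's cluster centroid;
    #   its sign pattern fixes the target diffusion direction of Eq. (5)
    # alpha_bar: (T,) cumulative schedule with values in (0, 1); t: integer step index
    eps = np.random.randn(*x0.shape)
    z = np.sign(centroid_logmaps) * eps                        # angular similarity constraint, Eq. (5)
    lam_o = 2.0 / (1.0 + c * 0.0)                              # conformal factor lambda^c at the origin o
    radial = delta * np.tanh(np.sqrt(c) * lam_o * t / T0) * x0 # radial popularity term, Eq. (4)
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * z + radial     # geometric diffusion step, Eq. (6)

With this form the injected noise stays aligned with each cluster's angular sector while the radial term keeps a hyperbolic growth tendency, which is exactly the anisotropy-preserving behaviour described above.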
Figure 2(b) illustrates examples of the diffusion process with/without geometric constraints in hyperbolic space. We can observe that by adding isotropic noise to the hyperbolic latent diffusion process, the final diffusion result is completely random noise. In contrast, the hyperbolic latent diffusion process with geometric constraints can significantly preserve the anisotropy of the graph. In other words, after the graph diffusion, the result still preserves the important inductive bias of the graph below rather than the completely random noise, which will directly affect the performance and generation quality of the denoising process Training and generation. Then, we follow the standard denoising process (Ho et al., 2020; Yang et al., 2023) and train a denoising network to simulate the process of reverse diffusion. We use a denoising network architecture of DDM based on UNET for training to predict x0, as follows: LHDM = E \u2225f\u03b8 (Xt, A, t) \u2212X0\u22252 . (8) Algorithm 1 Training of HypDiff Input: Graph G = {X, A}; Number of training epochs E; Parameter: \u03b8 initialization; Output:Predicted raw embedding \u02c6 xH Encoding node to hyperbolic space xH \u2190Eq. (1); Compute k-clusters by h-Kmeans; Project the embeddings onto each Toi\u2208{|k|} for e = 1 to E do Get the embeddings xHt of t-steps Eq. (6) ; Predict the raw embeddings \u02c6 xH ; Compute the loss L = LHDM\u2190Eq. (8); Update \u03b8 \u2190\u03b8 \u2212\u03b7\u2207\u03b8. end for Note that the loss function of our geometric diffusion model remains consistent with DDPM (Ho et al., 2020) based on Theorem 3.2. The proof refers to the Appendix F. Regarding the generation, we propose an efficient sampling method based on theorem 3.1. Furthermore, we demonstrate that it is possible to sample at once in the same tangent space instead of sampling in different cluster center tangent spaces to improve efficiency. As to the denoising process, we adopt a denoising process that can be used in generalized diffusion models(Yang et al., 2023). Specifically, where a recovery operator and a noise addition operator are abstracted for use in various diffusion methods. All the specifics regarding each stage of the diffusion process, along with the theoretical derivation, are documented in the Appendix F. Similar to other hyperbolic learning model (Krioukov et al., 2010; Chami et al., 2019; Ganea et al., 2018a), we utilize the Fermi-Dirac decoder (Krioukov et al., 2010; Nickel & Kiela, 2017) to compute the connection probability. The diffusion and reverse processes are summarized in Algorithm 1 and Algorithm 2. Complexity Analysis Let G = (X, E) be one of the graphs set Gs, where X is the n-dimensional node eigenvector and E is the m \u2217m-dimensional adjacency matrix of the graph. s is the number of graphs in the graph set Gs. Time complexity: The time complexity of hyperbolic graph encoding is O((1(t) + k)md). For the forward diffusion process, the complexity is O(md). The training of denoising networks is essentially the same as other diffusion models and does not require additional computing time as O(md)\u22171(t). Overall, the total time complexity of the diffusion process is O(1(t) \u22172md) + O((k + 2)md) in one epoch. Space complexity In our approach, since we embed the graphs in hyperbolic space, each graph is represented as a m \u2217ddimensional vector in the hyperbolic space, which means that our diffusion scale is O(smd). For a more detailed complexity analysis please refer to Appendix G. 
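As a complement to Algorithm 1, the generation side can be made concrete with a small self-contained NumPy sketch of the origin exponential/logarithmic maps on the Poincare ball and the Fermi-Dirac decoder used to turn generated hyperbolic embeddings into edge probabilities; the curvature c = 1 and the radius/temperature values below are illustrative defaults, not the paper's tuned settings.

import numpy as np

def exp_map_0(v, c=1.0):
    # Exponential map at the origin of the Poincare ball with curvature c.
    norm = np.clip(np.linalg.norm(v, axis=-1, keepdims=True), 1e-9, None)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def log_map_0(y, c=1.0):
    # Logarithmic map at the origin (inverse of exp_map_0).
    norm = np.clip(np.linalg.norm(y, axis=-1, keepdims=True), 1e-9, 1 - 1e-9)
    return np.arctanh(np.sqrt(c) * norm) * y / (np.sqrt(c) * norm)

def mobius_add(x, y, c=1.0):
    # Mobius addition on the Poincare ball.
    xy = np.sum(x * y, axis=-1, keepdims=True)
    x2 = np.sum(x * x, axis=-1, keepdims=True)
    y2 = np.sum(y * y, axis=-1, keepdims=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def fermi_dirac_edge_prob(h, c=1.0, r=2.0, t=1.0):
    # Pairwise edge probabilities 1 / (exp((d_H - r)/t) + 1) from hyperbolic embeddings h of shape (n, d).
    n = h.shape[0]
    probs = np.zeros((n, n))
    for i in range(n):
        diff = mobius_add(-h[i], h, c)  # (-h_i) (+_c) h_j for all j
        d = 2.0 / np.sqrt(c) * np.arctanh(
            np.clip(np.sqrt(c) * np.linalg.norm(diff, axis=-1), 0, 1 - 1e-9))
        probs[i] = 1.0 / (np.exp((d - r) / t) + 1.0)
    return probs

# Tiny usage example: decode 4 random tangent vectors into an edge-probability matrix.
z = 0.1 * np.random.randn(4, 2)
print(fermi_dirac_edge_prob(exp_map_0(z)))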
5 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation 4. Experiment In this section, we conduct comprehensive experiments to demonstrate the effectiveness and adaptability of HypDiff 1 in various datasets and tasks. We first presented the experimental settings and then showcased the results. 4.1. Datasets We estimate the capabilities of HypDiff in various downstream tasks while conducting experiments on synthetic and real-world datasets. In addition, we construct and apply node-level and graph-level datasets for node classification and graph generation tasks. Statistics of the real-world datasets Table H can be found in Appendix H. We elaborate on more details as follows. Synthetic Datasets. We first use two well-accepted graph theoretical models, Stochastic Block Model (SBM) and Barab\u00b4 asi-Albert (BA), to generate a node-level synthetic dataset with 1000 nodes for node classification, respectively. (1) SBM portrays five equally partitioned communities with the edge creation of intra-community p = 0.21 and intercommunity q = 0.025 probabilities. (2) BA is grown by attaching new nodes each with random edges between 1 and 10. Then we employ four generic datasets with different scales of nodes |V | for graph generation tasks. Then, four datasets are generated for the graph-level task. (3) Community contains 500 two-community small graphs with 12 \u2264|V | \u226420. Each graph is generated by the Erd\u02dd osR\u00b4 enyi model with the probability for edge creation p = 0.3 and added 0.05 |V | inter-community edges with uniform probability. (4) Ego comprises 1050 3-hop ego-networks extracted from the PubMed network with |V | \u226420. Nodes indicate documents and edges represent their citation relationship. (5) Barab\u00b4 asi-Albert (G) is a generated graphlevel dataset by the Barab\u00b4 asi-Albert model (aka. BA-G to distinct node-level BA) with 500 graphs where the degree of each node is greater than four. (6) Grid describes 100 standard 2D grid graphs which have each node connected to its four nearest neighbors. Real-world Datasets. We also carry out our experiments on several real-world datasets. For the node classification task, we utilize (1) two citation networks of academic papers including Cora and Citeseer, where nodes express documents and edges represent citation links, and (2) Polblogs dataset which is political blogs and is a larger size dataset we used. With the graph generation task, we exploit four datasets from different fields. (3) MUTAG is a molecular network whose each graph denotes a nitro compound molecule. (4) IMDB-B is a social network, symbolizing the co-starring of the actors. (5) PROTEINS is a protein network in which 1The code is available at https://github.com/ RingBDStack/HypDiff. nodes represent the amino acids and two nodes are connected by an edge if they are less than 6 Angstroms apart. (6) COLLAB is a scientific collaboration dataset, reflecting the collaboration of the scientists. 4.2. Experimental Setup Baselines. To evaluate the proposed HypDiff , we compare it with well-known or state-of-the-art graph learning methods which include: (1) Euclidean graph representation methods: VGAE (Kipf & Welling, 2016) designs a variational autoencoder for graph representation learning. ANE (Dai et al., 2018) trains a discriminator to align the embedding distribution with a predetermined fixed prior. GraphGAN (Wang et al., 2018) learns the sampling distribution for negative node sampling from the graph. 
(2) Hyperbolic graph representation learning: P-VAE (Mathieu et al., 2019) is a variational autoencoder utilizing the Poincar\u00b4 e ball model within hyperbolic geometric space. Hype-ANE (Liu et al., 2018) is a hyperbolic adversarial network embedding model that extends ANE into hyperbolic geometric space. (3) Deep graph generative models: VGAE (Kipf & Welling, 2016) can be used for graph generation tasks by treating each graph as a batch size. GraphRNN (You et al., 2018) is a deep auto-regressive generative model that focuses on graph representations under different node orderings. (4) Graph diffusion generative models: GDSS (Jo et al., 2022) simultaneously diffuses node features and adjacency matrices to learn their scoring functions within the neural network correspondingly. DiGress (Vignac et al., 2022) is a discrete denoising diffusion model that progressively recovers graph properties by manipulating edges. GraphGDP (Huang et al., 2022) is a position-enhanced graph score-based diffusion model for graph generation. EDGE (Chen et al., 2023) is a discrete diffusion process for large graph generation. Settings. A fair parameter setting for the baselines is the default value in the original papers and for the training on new datasets make appropriate adjustments. For HypDiff, the encoder is 2-layer HGCN with 256 representation dimensions, the edge dropping probability to 2%, the learning rate to 0.001, and hyperbolic curvature c = 1. Additionally, the diffusion processing set diffusion strength \u03b4 as 0.5, and the number of 6 latent layers in denoising is 64, 128, 256, 128, 256, 128. We use Adam as an optimizer and set L2 regularization strength as 1e-5. For the metric, we use the F1 scores of the node classification task and the maximum mean discrepancy scores of Degree, Cluster, and Spectre and the F1 score of precision-recall and density-coverage (F1 pr and F1 dc) to evaluate graph generation results. The richer experimental results under the other indicators are shown in Appendix J. All experiments adopt the implementations from the PyTorch Geometric Library and Deep 6 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation Table 1. Summary of node classification Micro-F1 and Macro-F1 scores (%) based on the average of five runs on synthetic and real-world datasets. (Result: average score \u00b1 standard deviation (rank); Bold: best; Underline: runner-up.) Method Synthetic Datasets Real-world Datasets Avg. R. SBM BA Cora Citeseer Polblogs Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 Mi-F1 Ma-F1 VGAE 20.5\u00b12.1 15.4\u00b11.1 37.4\u00b11.7 15.9\u00b12.3 79.7\u00b10.4 78.1\u00b10.2 63.8\u00b11.4 55.5\u00b11.3 79.4\u00b10.8 79.4\u00b10.8 4.6 ANE 39.9\u00b11.1 33.9\u00b11.8 46.0\u00b13.0 19.3\u00b12.7 69.3\u00b10.1 66.4\u00b10.1 50.2\u00b10.1 49.5\u00b10.6 80.8\u00b10.1 80.7\u00b10.1 4.3 GraphGAN 38.6\u00b10.5 38.9\u00b10.3 43.6\u00b10.6 24.6\u00b10.5 71.7\u00b10.1 69.8\u00b10.1 49.8\u00b11.0 45.7\u00b10.1 77.5\u00b10.6 76.9\u00b10.4 4.8 P-VAE 57.9\u00b11.3 53.0\u00b11.5 38.4\u00b11.4 20.0\u00b10.3 79.6\u00b12.2 77.5\u00b12.5 67.9\u00b11.7 60.2\u00b11.9 79.4\u00b10.1 79.4\u00b10.1 3.2 Hype-ANE 18.8\u00b10.3 11.9\u00b10.1 56.9\u00b12.4 31.6\u00b11.2 80.7\u00b10.1 79.2\u00b10.3 64.4\u00b10.3 58.7\u00b10.0 83.6\u00b10.4 83.6\u00b10.4 3.0 HypDiff 70.5\u00b10.1 69.4\u00b10.1 58.3\u00b10.1 40.0\u00b10.1 82.4\u00b10.1 81.2\u00b10.1 67.8\u00b10.2 60.4\u00b10.3 85.7\u00b10.1 85.4\u00b10.1 1.1 Table 2. 
Generation results about the MMD distance between the original and generated graphs. (Result: scores (rank) and average rank;Bold: best; Underline: runner-up.) Method Synthetic Datasets Real-world Datasets Community BA-G MUTAG PROTRINS Degree Cluster Spectre Degree Cluster Spectre Degree Cluster Spectre Degree Cluster Spectre VGAE 0.365 0.025 0.507 0.775 1.214 0.398 0.255 2.000 0.744 0.705 0.979 0.700 GraphRNN 0.002 0.027 0.004 0.122 0.262 0.007 0.537 0.013 0.476 0.009 0.071 0.017 GDSS 0.094 0.031 0.052 0.978 0.468 0.917 0.074 0.021 0.003 1.463 0.168 0.013 DiGress 0.226 0.158 0.194 0.654 1.171 0.268 0.100 0.351 0.082 0.108 0.062 0.079 GraphGDP 0.046 0.016 0.042 0.698 0.188 0.053 0.127 0.057 0.050 0.103 0.240 0.088 EDGE 0.021 0.013 0.040 0.282 0.010 0.090 0.024 0.597 0.468 0.033 0.523 0.024 HypDiff 0.002 0.010 0.028 0.216 0.021 0.004 0.048 0.001 0.040 0.133 0.004 0.012 Graph Library. The reported results are the average scores and standard deviations over 5 runs. All models were trained and tested on a single Nvidia A100 40GB GPU. 4.3. Performance Evaluation We show the F1 scores of the node classification task in Table 1 and the statistics of MMD distance and F1 scores between the original and generated graph in the graph generation task in Table 2 and Table C.4. A higher score reported in F1 indicates a more accurate prediction of the node and fidelity of the generated graph. At the same time, a smaller MMD distance suggests better generative capabilities of the model from the perspective of graph topological properties. Node classification. HypDiff demonstrates superior performance which outperforms nearly all baseline models, achieving the highest ranking and revealing excellent generalization. This implies that HypDiff can preserve essential properties within complex structures, enabling better distinctive and utility of the dependencies between nodes across hierarchical levels in hyperbolic space. Graph Generation. Successively, we focused on validating the graph generation capability of HypDiff. Using the finer-grained metrics, we consistently observed our approach\u2019s outstanding performance. More results are shown in Table C.3. We are further concerned with the fidelity and diversity of the generated results which yielded conclusions consistent with the previous and are reported in Table C.4. Specifically, HypDiff depicts superior overall performance compared to the state-of-the-art model autoregressive model GraphRNN and discrete diffusion method DiGress. Furthermore, our model can effectively capture the local structure through similarity constraints and achieve competitive performance on highly connected graph data (Community). 4.4. Analysis of HypDiff In this subsection, we present the experimental results to intuitively convey our discovery and initiate a series of discussions and analyses. Ablation Study. This study is to highlight the role of radial popularity diffusion and angular similarity diffusion constraints of HypDiff. We conducted experiments on three real-world datasets to validate the node classification performance and removed radial popularity (HypDiff (w/o P)), angular similarity (HypDiff (w/o S)) and total geometric prior(HypDiff (w/o PS)) components as the variant models. We show the results in Figure 4. The radial popularity is evident in facilitating hyperbolic diffusion processes, thereby showcasing the advantage of hyperbolic geometry in capturing the underlying graph topology. 
Furthermore, the angular similarity also significantly preserves the local structure of the graph, compensating for the limitations of hyperbolic space in capturing local connectivity patterns. In summary, the hyperbolic geometric prior plays a crucial role in capturing non-Euclidean structures. (Figure 4. Ablation study results. Figure 5. Sensitivity analysis of geometric constraints. Figure 6. Efficiency analysis on IMDB-B for graph generation; GPU memory and average time per 1000 diffusion timesteps: HypDiff 2519 MB / 11.2 s, GDSS 3501 MB / 12.5 s, DiGress 5800 MB / 12.1 s, GraphGDP 5902 MB / 13.6 s, EDGE 6205 MB / 11.8 s.) Sensitivity Analysis of Geometric Constraints. To investigate the impact of both the number of clusters k and the geometric prior coefficient \u03b4 on the model performance, we conducted the sensitivity analysis on the real-world and synthetic graph datasets, respectively. The number of clusters k can be understood as the strength of the angular constraint; the results on three datasets with different structures are shown in Fig 5 (Left). Specifically, Cora has a real-world connected structure, SBM has a complex community structure, and Fractal has self-similarity and hierarchy properties. It can be observed that k has different sensitivities on differently structured datasets, indicating that different graph structures have different approximate accuracies for anisotropy capture. Correspondingly, the geometric prior coefficient \u03b4 can be understood as the strength of the radial constraint; the results on three real-world datasets are shown in Fig 5 (Right). The stronger the constraint, the smaller the diffusion step in the radial direction of the hyperbolic space. It can be observed that datasets with a tree-like structure require lower radial constraints, while graphs with high connectivity require stronger radial constraints. For the experimental setup and a more detailed analysis of the results please refer to Appendix I. Diffusion Efficiency Analysis. We report the training time for our HypDiff and other graph diffusion baselines with the same configurations on IMDB-B. We conduct experiments with the hardware and software configurations listed in Section 4.2. We comprehensively report the results from the time and space costs of the diffusion process. As shown in Figure 6, our HypDiff comprehensively outperforms the other baselines in diffusion time and GPU memory cost. Compared with the discrete graph diffusion models, our model directly diffuses each node of the graph in a structure-preserving way based on the latent diffusion model, so the space complexity is much lower than that of the direct diffusion of discrete and sparse structural information (e.g., adjacency/Laplacian matrix). The performance on each dataset is reported in Appendix K. Visualization. We compare the contributions of two diffusion generation models, HypDiff and GDSS, to graph generation tasks by visualizing networks generated by five well-accepted graph theoretical models. We discuss and show the visualization as Figure C.3 in the Appendix J.3. 5. Conclusion In this paper, we introduce a hyperbolic geometric prior to resolve the conflict between discrete graph data and continuous diffusion models, and propose a novel hyperbolic geometric diffusion model named HypDiff. 
We propose an improved hyperbolic Gaussian noise generation method based on radial popularity to deal with the additive failure of Gaussian distributions in hyperbolic space. The geometric constraints of angular similarity are applied to the anisotropic diffusion process, to preserve as much various local structure information as possible. Extensive experiments conducted on both synthetic and real-world graphs demonstrate the comprehensive capability of HypDiff. 8 \fHyperbolic Geometric Latent Diffusion Model for Graph Generation 6. Acknowledgments The corresponding authors are Jianxin Li and Xianxian Li. This paper is supported by the National Science and Technology Major Project of China (No.2022ZD0117800), and the National Natural Science Foundation of China (No.U21A20474 and 62302023). We owe sincere thanks to all co-authors for their valuable efforts and contributions. 7. Impact Statements This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here."
16
+ }
title_10K/test_title_short_2405.03251v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03251v1",
3
+ "title": "Exploring the Frontiers of Softmax: Provable Optimization, Applications in Diffusion Model, and Beyond",
4
+ "abstract": "The softmax activation function plays a crucial role in the success of large\nlanguage models (LLMs), particularly in the self-attention mechanism of the\nwidely adopted Transformer architecture. However, the underlying learning\ndynamics that contribute to the effectiveness of softmax remain largely\nunexplored. As a step towards better understanding, this paper provides a\ntheoretical study of the optimization and generalization properties of\ntwo-layer softmax neural networks, providing theoretical insights into their\nsuperior performance as other activation functions, such as ReLU and\nexponential. Leveraging the Neural Tangent Kernel (NTK) framework, our analysis\nreveals that the normalization effect of the softmax function leads to a good\nperturbation property of the induced NTK matrix, resulting in a good convex\nregion of the loss landscape. Consequently, softmax neural networks can learn\nthe target function in the over-parametrization regime. To demonstrate the\nbroad applicability of our theoretical findings, we apply them to the task of\nlearning score estimation functions in diffusion models, a promising approach\nfor generative modeling. Our analysis shows that gradient-based algorithms can\nlearn the score function with a provable accuracy. Our work provides a deeper\nunderstanding of the effectiveness of softmax neural networks and their\npotential in various domains, paving the way for further advancements in\nnatural language processing and beyond.",
5
+ "authors": "Jiuxiang Gu, Chenyang Li, Yingyu Liang, Zhenmei Shi, Zhao Song",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Exploring the Frontiers of Softmax: Provable Optimization, Applications in Diffusion Model, and Beyond",
16
+ "main_content": "Introduction 3 2 Related Work 4 3 Preliminary 5 3.1 Neural Tangent Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 4 Main Results 7 5 Proof Sketch 8 6 Application in Di\ufb00usion 9 6.1 Preliminary of Di\ufb00usion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 6.2 Main Result of Di\ufb00usion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 7 Discussion and Future Work 11 8 Conclusion 12 A De\ufb01nition 13 B Basic Concentration 14 B.1 Some Concentration Basic Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 B.2 Kernel Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 C Induction 19 C.1 Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 C.2 Induction Part 1. For Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 C.3 Induction Part 2. For Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 C.4 Induction Part 3. For Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 D Induction Part 1: For Weights 22 D.1 Bounding the Gradient at any Time . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 D.2 Bounding the Initialization Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 E Induction Part 2: For Loss 23 E.1 Decomposition for \u2225vec(F(\u03c4 + 1) \u2212Y )\u22252 2 . . . . . . . . . . . . . . . . . . . . . . . . . 24 E.2 Choice of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 E.3 Bounding C0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 E.4 Bounding C1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 E.5 Bounding C2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 E.6 Bounding \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 F NTK Regression 35 F.1 Equivalence between Trained Net and Kernel Regression . . . . . . . . . . . . . . . . 35 1 \fG Di\ufb00usion 39 G.1 Main Result of Di\ufb00usion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 G.2 Tools From Previous Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 2 \f1 Introduction Large Language Models (LLMs) like GPT4 [AAA+23] from OpenAI and Claude 3 [Ant24] from Anthropic have widely and profoundly changed the world. Some researchers believe they split human history into two parts, the Pre-LLM Era and the LLM Era. The LLMs have been widely used in human activities, such as education [KSK+23], law [Sun23], \ufb01nance [LWDC23], bio-informatics [TTE+23], coding [HZL+24], and even top AI conference reviews such as ICML, ICLR, and NeurIPS [LIZ+24]. To make LLMs successful, one of the cores of LLMs is the Transformer model architecture [VSP+17], which has many advantages, including faster-parallelized inference rather than sequential inference like RNN [HS97]; being easy to scale up the model capacity to support the scaling laws in neural language models [KMH+20], i.e. since the input and output dimension of each Transformer blocks is the same, we can stack an arbitrary number of layers as we want. The kernel design of the Transformer block is self-attention layers, where each block has many attention heads and each head has its three important private parameter matrices for key, query, and value operation. 
Many papers believe that the self-attention operation is the critical reason for emergent ability [WTB+22], including in-context learning [OEN+22, Red24] and compositional ability to solve complex task [DLS+24, LPC+24]. The Transformer is so successful and has been widely certi\ufb01ed that this architecture can be adopted in many other modalities such as tabular data, image/video generation, e.g. the video di\ufb00usion model SORA [Ope24] from OpenAI using Transformer [PX23] as its backbone. When we delve into the self-attention mechanism, we \ufb01nd the softmax function plays a crucial role [VSP+17]. It enables the model to focus on the most related information among input sequences by giving higher attention scores to the positions that are more relevant for the current position\u2019s representation and to capture dependencies between positions. [CLJ20] \ufb01nd that softmax attention is more expressive and performs better than any convolutional layer. [DSZ23] exhibits softmax attention outperforms linear attention in most scenarios. Although the softmax function code has been executed every second on thousands of servers, there is a limited understanding of the following question: (\u2217) What is the learning mechanism that makes softmax so powerful? To demystify the black box, in this paper, we analyze the Gradient Descent (GD) training dynamics for two-layer Neural Networks (NN) with softmax activation function for multi-dimensional regression, i.e., F(W, x, a) \u2208Rd and F(W, x, a)\u2113:= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121 \u2200\u2113\u2208{1, . . . , d}, where m is number of hidden neurons, exp(\u00b7) is element-wise exponential function, a\u2113, W are the \ufb01rst and second layer weights respectively and x is the input data. Note that, the self-attention could be written as F(W KX, W QX, W V X) \u2208Rd\u00d7n\u2032, where W K, W Q, W V \u2208Rd\u00d7d denotes key, query, and value matrix and X \u2208Rd\u00d7n\u2032 is a sequence of n\u2032 tokens. Thus, studying the two-layer softmax network is the prerequisite to understanding self-attention. See more discussion in Section 7. There is a rich line of work studying two-layer NN learning trajectory under ReLU activation function ([LL18, DZPS19, AZLS19a, ADH+19a, SY19, MMM19, SYZ21, BPSW21, MOSW22, CB20, ZGJ21, LLWA21, CCBG22] and many more) or exponential activation function from the latest work [GMS23]. As far as we know, our work is the \ufb01rst to theoretically study the optimization and generalization of the two-layer softmax network and it is a \ufb01rst step on understanding the power of softmax. 3 \fReLU ([MOSW22]) exp ([GMS23]) Softmax (ours) m \u2126(\u03bb\u22122n2 log(n)) \u2126(\u03bb\u22122n2+o(1) log2(n)) \u2126(\u03bb\u22122n2+o(1) log2(n)) b T \u2126(\u03bb\u22122n2 log(n/\u01eb)) \u2126(\u03bb\u22122n2+o(1) log(n/\u01eb)) \u2126(\u03bb\u22122n2+o(1) log(n/\u01eb)) Table 1: Comparing hidden neuron number m in two-layer neural networks and training steps b T are required under di\ufb00erent activation functions to guarantee that, for any \u01eb > 0, with probability at least 0.99, the training loss is smaller or equal to \u01eb. Here, n is the number of training samples and \u03bb is the smallest eigenvalue for the matrix of neural tangent kernel, where n > 1 and \u03bb < 1. 
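Before comparing the three activations in Table 1, it may help to fix notation with a minimal NumPy sketch of the two-layer softmax network F(W, x, a) defined above; the toy dimensions and the i.i.d. plus/minus-one draw of a below are illustrative only (in particular, this sketch ignores the symmetric pairing used later in Definition 3.7).

import numpy as np

def two_layer_softmax_net(W, a, x):
    # F(W, x, a)_l = m * <a_l, exp(W^T x)> / <exp(W^T x), 1_m>  for l = 1..d.
    # W: (d, m) first-layer weights; a: (d, m) fixed second-layer +/-1 entries; x: (d,) input with ||x||_2 <= 1.
    m = W.shape[1]
    logits = W.T @ x                      # (m,) pre-activations w_r^T x
    s = np.exp(logits - logits.max())     # softmax numerator (shift does not change the softmax)
    s = s / s.sum()                       # S(W^T x): softmax weights over the m neurons
    return m * (a @ s)                    # each output coordinate is m * <a_l, S>

# Toy usage: d = 3 inputs/outputs, m = 8 hidden neurons.
rng = np.random.default_rng(0)
d, m = 3, 8
W = rng.normal(0.0, 1.0, size=(d, m))
a = rng.choice([-1.0, 1.0], size=(d, m))
x = rng.normal(size=d); x /= np.linalg.norm(x)
print(two_layer_softmax_net(W, a, x))

Because the softmax denominator normalizes the m neuron responses into a probability vector, each output coordinate is m times a convex combination of the plus/minus-one entries of a_l, which is the normalization effect the paper credits for the good perturbation property of the induced NTK (Lemma 5.1).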
We can see that the two-layer NN with softmax activation function requires almost the same number of neurons and training steps to converge as that with ReLU or exponential activation functions. More details: Theorem 3.6 in [MOSW22] for ReLU; Theorem 1.1 in [GMS23] for exp; Corollary 4.3 in our paper for softmax. One popular analysis method for studying over-parameterized NN is Neural Tangent Kernel (NTK) [JGH18], where overparameterized networks are approximately linear models around their initialization, so the network training is almost convex. To answer our (\u2217) question above, we adopt the powerful NTK analysis paradigm in this work. Our analysis shows that, because of the normalization e\ufb00ect of the denominator, the Neural Tangent Kernel induced by the softmax has a good perturbation property (Lemma 5.1), which means the loss landscape of softmax version has a large convex region. Thus, the softmax NN requires almost the same number of neurons and training steps to \ufb01t the data and converge as ReLU or exponential NN, which is illustrated in Table 1 clearly (Theorem 4.2). To demonstrate the broad applicability of our theoretical \ufb01ndings, we apply our analysis in a practical case study to show the generalization ability of softmax NN, where the task is learning score estimation functions in di\ufb00usion models with noisy labels, a promising approach for generative modeling, as we can smartly transfer it to a multi-dimensional regression task (Theorem 6.5). Thus, we show that gradient-based algorithms can learn the score function with a provable accuracy. Our paper\u2019s contributions are summarized as follows: \u2022 Softmax NTK: We build up the \ufb01rst NTK analysis framework for two-layer NN with softmax activation function. Furthermore, our multi-dimensional regression setting is more general than previous work [MOSW22, GMS23] (ReLU and exp) and can be degenerated to the linear regression setting. \u2022 Di\ufb00usion Models Case Study: We apply our results in learning score estimation functions in di\ufb00usion models with noisy labels to verify our analysis e\ufb00ectiveness. 2 Related Work Softmax and Attention in LLMs. Recently, signi\ufb01cant advances have been achieved in language modeling, particularly with the introduction of Transformer architectures and attention mechanisms [VSP+17]. Self-attention to capture long-range dependencies in text, revolutionizing the \ufb01eld of NLP, e.g., BERT [DCLT19], PaLM [CND+22], LLaMA [TLI+23], LLaMA 2 [TMS+23], ChatGPT [Ope22], GPT4 [AAA+23], Claude 3 [Ant24] and so on. Many works demonstrate the softmax is beyond other activation functions such as ReLU attention or linear attention in di\ufb00erent aspects, e.g, approximation power [DSZ23, SHT24, NLL+24, GLL+24a], prompt tuning [ORST23], in-context learning ability [GSX23, SWXL23, CPM+24, CSWY24], compositional ability[XSL24]. Many works study to generalize the softmax into high order attention [AS24b] or 4 \fto accelerate softmax computation [WLK+20, CLD+20, SZZ+21, QSD+21, AS23, BSZ24, AS24a, HJK+24, HLSL24, DSY24, SYZ24, GSY23, GSYZ23, KMZ23, GLL+24b]. Another line of work analyzes a one-layer softmax network trained on the linear regression task [LSX+23, DLMS23, DLS23, CSY24, GSWY23, SCWZ24], while our work studies a two-layer softmax setting. Neural Tangent Kernel. Recently many studies show that the analysis of optimization and generalization for deep learning should be interwoven together. 
One line of work uses the \ufb01rst-order Tyler expansion to study su\ufb03ciently over-parameterized neural networks around its initialization like NTK, e.g. [MRH+18, ZCZG18, JGH18, LL18, AZLS19b, ZG19, OS19, LXS+19, NXL+19, Yan19, SY19, DLL+19, AZLS19a, COB19, OFLS19, ADH+19a, CG19, JT19, AZLL19, OS20, CFW+20, ZCZG20, GSJW20, BPSW21, MZ22, MOSW22, GMS23, QSS23, QMS+23, QSY23, SY23, GQSW24, SZZ24] and more. Thus, the neural network optimization can be a convex problem. The NTK method has been widely used in di\ufb00erent scenarios, such as preprocessing analysis [SYZ21, HSWZ22, ALS+23, SCL+23, SSLL23, SSL24, GQSW24], federated learning [LSY23], LoRA adaptation [HWAZ+21, XSW+24, SMF+23] of LLMs [MWY+23], and learning score estimation functions in di\ufb00usion models [HRX24]. Di\ufb00usion Model. Score-based generative di\ufb00usion models can generate high-quality image samples comparable to GANs which requires adversarial optimization [HJA20, SSDK+21, KLL+24]. Based on the U-Net [RFB15], stable di\ufb00usion can successfully generate business-used images. Based on the softmax-based self-attention [PX23], OpenAI released a video di\ufb00usion model, SORA [Ope24], with a surprising performance. Another line of work studying how to train the di\ufb00usion models to have a better theoretical guarantee [SE19, SE20, SK21, SGSE20, SDME21, LLT22, KFL22, SDCS23, LKB+23, CLL23, CDD23, CHZW23, SCK23, YFZ+23, BDD23, GKL24, CCL+24, GLB+24, WCL+24, CKS24]. In this work, we adapt our analysis in di\ufb00usion models. 3 Preliminary We \ufb01rst introduce some notations. Then, we will introduce our problem setup. Notations. We use N(\u00b5, \u03a3) to denote the Gaussian distribution with \u00b5 and covariance \u03a3. For any positive integer n, we use [n] to denote set {1, 2, \u00b7 \u00b7 \u00b7 , n}. Let a vector z \u2208Rn. We denote the \u21132 norm as \u2225z\u22252 := (Pn i=1 z2 i )1/2, the \u21131 norm as \u2225z\u22251 := Pn i=1 |zi|, \u2225z\u22250 as the number of non-zero entries in z, \u2225z\u2225\u221eas maxi\u2208[n] |zi|. We use z\u22a4to denote the transpose of a z. We use \u27e8\u00b7, \u00b7\u27e9to denote the inner product. Let A \u2208Rn\u00d7d, we use vec(A) to denote a length nd vector. We denote the Frobenius norm as \u2225A\u2225F := (P i\u2208[n],j\u2208[d] A2 i,j)1/2. For a function f(x), we say f is L-Lipschitz if \u2225f(x)\u2212f(y)\u22252 \u2264L\u00b7\u2225x\u2212y\u22252. Let D denote a distribution. We use x \u223cD to denote that we sample a random variable x from distribution D. We use E[] to denote expectation and Pr[] to denote probability. We use p.s.d. to denote the positive-semide\ufb01nite matrix. As we have multiple index, to avoid confusion, we usually use i, j \u2208[n] to index the training data, \u2113\u2208[d] to index the output dimension, r \u2208[m] to index neuron number. Models. We consider a two-layer softmax neural network. The hidden layer has m neurons, and we use the softmax function as the activation function, F(W, \u00b7, a) : Rd1 \u2192Rd2 and F(W, x, a)\u2113:= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121 \u2200\u2113\u2208[d2], (1) where exp(\u00b7) is element-wise exponential function. We use m as a normalization factor. Note that we can reduce the d2 to 1 for the linear regression setting. To simplify the proof we let d1 = d2. 5 \fNote that our proof can generalize to di\ufb00erent d1, d2 easily. 
We only optimizing W and not both W and a simultaneously as many previous works to simplify optimization, e.g., [DZPS19, SY19, MOSW22], where x \u2208Rd represents the input, w1, \u00b7 \u00b7 \u00b7 , wm \u2208Rd are weight vectors in the \ufb01rst layer, i.e., W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m, and a1, \u00b7 \u00b7 \u00b7 , ad \u2208Rm are weights in the second layer. We can simplify the notation as F(W, x) when the context is clear. Data. We have n training data points Dn = {(xi, yi)}n i=1, where x \u2208Rd and y \u2208Rd.1 We denote X = [x1, . . . , xn] \u2208Rd\u00d7n and Y = [y1, . . . , yn] \u2208Rd\u00d7n. We assume that \u2225xi\u22252 \u22641 and \u2225yi\u22252 \u22641, \u2200i \u2208[n]. We have the softmax function S \u2208Rm\u00d7n, where Si \u2208Rm denotes \u27e8exp(W \u22a4xi), 1m\u27e9\u22121 \u00b7 exp(W \u22a4xi) and Si,r \u2208R denotes \u27e8exp(W \u22a4xi), 1m\u27e9\u22121 \u00b7exp(w\u22a4 r xi), \u2200r \u2208[m], \u2200i \u2208[n]. For simplicity, we denote \u03b1i as \u27e81m, exp(W \u22a4xi)\u27e9, expi as exp(W \u22a4xi) and expi,r as exp(w\u22a4 r xi), \u2200r \u2208[m], \u2200i \u2208[n], when the context is clear. Gradient Descent. We use er to denote a vector where the r-th coordinate is 1 and everywhere else is 0. \u2200r \u2208[m], \u2200\u2113\u2208[d], we have \u2202F (W,x,a)\u2113 \u2202wr \u2208Rd can be written as \u2202F(W, x, a)\u2113 \u2202wr = + m\u27e8a\u2113\u25e6er, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121x \u2212m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22122 \u00b7 \u27e8exp(W \u22a4x), er \u25e61m\u27e9x = + m\u27e8a\u2113\u25e6er, S\u27e9\u00b7 x \u2212m\u27e8a\u2113, S\u27e9\u00b7 \u27e8S, er \u25e61m\u27e9x. (2) We use W(\u03c4) to denote the weights of the \ufb01rst layer on the timestamp \u03c4 and similar for S(\u03c4) and F(\u03c4) when the context is clear. Now, we introduce some necessary de\ufb01nition used. De\ufb01nition 3.1 (F(\u03c4), dynamic prediction). We de\ufb01ne Fi(\u03c4) \u2208Rd, for any timestamp \u03c4, as F\u2113,i(\u03c4) := m\u27e8a\u2113, exp(W(\u03c4)\u22a4xi)\u27e9\u00b7 \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9\u22121. Here xi \u2208Rd. It can be rewritten as F\u2113,i(\u03c4) = m\u27e8a\u2113, Si(\u03c4)\u27e9. We consider d-dimensional MSE loss. De\ufb01nition 3.2 (Loss function over time). We de\ufb01ne the objective function L as below: L(W(\u03c4)) := 1 2 X i\u2208[n] X \u2113\u2208[d] (F\u2113,i(\u03c4) \u2212y\u2113,i)2. Thus, we de\ufb01ne the gradient of w. De\ufb01nition 3.3 (\u2206wr(\u03c4)). For any r \u2208[m], we de\ufb01ne \u2206wr(\u03c4) \u2208Rd as below: \u2206wr(\u03c4) := m n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 \u27e8a\u2113\u25e6er, Si(\u03c4)\u27e9\u2212\u27e8a\u2113, Si(\u03c4)\u27e9\u00b7 \u27e8Si(\u03c4), er \u25e61m\u27e9 \u0011 \u00b7 xi where Si(\u03c4) = \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9\u22121 \u00b7 exp(W(\u03c4)\u22a4xi) \u2208Rm. Note that we can simplify the gradient calculation by the fact 1 = \u27e81m, Si(\u03c4)\u27e9. Thus, we have the following claim. Claim 3.4. \u2206wr(\u03c4) := m Pn i=1 Pd \u2113=1(F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (\u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi. 1Our analysis can extend to xi \u2208Rd1 and yi \u2208Rd2 easily. 6 \fWe use the gradient descent (GD) algorithm with the learning rate \u03b7 to train the network. 
As we only train the hidden layer W and \ufb01x a, we have the following gradient update rule. De\ufb01nition 3.5 (Gradient descent). The gradient descent algorithm for optimizing the weight matrix W is de\ufb01ned as: W(\u03c4 + 1) = W(\u03c4) \u2212\u03b7\u2206W(\u03c4). where \u2206W(\u03c4) \u2208Rd\u00d7m and \u2206wr(\u03c4) \u2208Rd is the r-th column of \u2206W(\u03c4) de\ufb01ned in De\ufb01nition 3.3. 3.1 Neural Tangent Kernel Now, we are ready to introduce our key tools, Neural Tangent Kernel induced by the softmax. We de\ufb01ne the kernel with respect to timestamp \u03c4. De\ufb01nition 3.6 (Kernel function). For simplicity, we denote S(W \u22a4xi) as Si \u2208Rm \u22650 and v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm. We de\ufb01ne the function (Gram matrix) H : Rd\u00d7m \u2192Rnd\u00d7nd as following H(W) := \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 H1,1 H1,2 \u00b7 \u00b7 \u00b7 H1,d H2,1 H2,2 \u00b7 \u00b7 \u00b7 H2,d . . . . . . ... . . . Hd,1 Hd,2 \u00b7 \u00b7 \u00b7 Hd,d \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb, and for each \u21131, \u21132 \u2208[d], we have H\u21131,\u21132 \u2208Rn\u00d7n is de\ufb01ned as [H\u21131,\u21132]i,j(W) := 1 mx\u22a4 i xj m X r=1 \u27e8v\u21131,r, Si\u27e9\u00b7 mSi,r \u00b7 \u27e8v\u21132,r, Sj\u27e9\u00b7 mSj,r. For any timestamp \u03c4, for simplicity, we denote H(\u03c4) := H(W(\u03c4)) and denote H(0) as H\u2217. Note that H\u2217is a positive semi-de\ufb01nite matrix, and we denote its minimum eigenvalue as \u03bb := \u03bbmin(H\u2217). Initialization. We use symmetric initialization, which is widely used in previous works [DM20, DLS22, MOSW22, SWL22, SWL24]. De\ufb01nition 3.7 (Symmetric initialization). For each r \u2208[m/2], we initialize weights as below \u2022 We draw w2r\u22121 from N(0, \u03c32Id) and uniformly draw a2r\u22121 from {\u22121, +1}d. \u2022 We assign a2r = \u2212a2r\u22121 and w2r\u22121 = w2r. Due to symmetric initialization, we can easily see that F(W(0), x) = 0, \u2200x \u2208Rd. 4 Main Results We \ufb01rst de\ufb01ne a constant we used. De\ufb01nition 4.1. Let C > 10 denote a su\ufb03ciently large constant. We de\ufb01ne parameter B as follows B := max{C\u03c3 p log(nd/\u03b4), 1}. Now, we are ready to present our main result, whose complete proof is in Appendix C.1. 7 \fTheorem 4.2 (Main result). Let \u03bb = \u03bbmin(H\u2217) > 0, m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)), \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) and b T = \u2126((m\u03b7\u03bb)\u22121 log(nd/\u01eb)) = \u2126(\u03bb\u22122n2d2 exp(16B) \u00b7 log(nd/\u01eb)). For any \u01eb, \u03b4 \u2208(0, 0.1), after b T iterations, with probability at least 1 \u2212\u03b4, we have \u2225F( b T) \u2212Y \u22252 F \u2264\u01eb. If we \ufb01x \u03b4 and \u03c3 in B de\ufb01ned in the De\ufb01nition 4.1, since exp(\u0398(B)) = (nd)o(1), we can simplify the m = \u2126(\u03bb\u22122(nd)2+o(1)) and b T = \u2126(\u03bb\u22122(nd)2+o(1)). The Theorem 4.2 means that as we have poly(nd) number of neurons and training steps, the softmax NN can \ufb01t any training datasets with n number of d-dim training samples on d-dim regression task. Corollary 4.3. Consider the 1-dimension linear regression setting, i.e., d1 = d and d2 = 1. Let \u03bb = \u03bbmin(H\u2217) > 0, m = \u2126(\u03bb\u22122n2 exp(18B) log2(n/\u03b4)), \u03b7 = 0.1\u03bb/(mn2 exp(16B)) and b T = \u2126((m\u03b7\u03bb)\u22121 log(n/\u01eb)) = \u2126(\u03bb\u22122n2 exp(16B) \u00b7 log(n/\u01eb)). 
For any \u01eb, \u03b4 \u2208(0, 0.1), after b T iterations, with probability at least 1 \u2212\u03b4, we have \u2225F( b T ) \u2212Y \u22252 2 \u2264\u01eb. Proof. Directly follow Theorem 4.2. As shown in Table 1, our two-layer softmax network needs the same number of training steps b T and number of neurons m as two-layer ReLU networks or two-layer exponential networks. 5 Proof Sketch We \ufb01rst show a key Lemma below, showing that the weight w perturbation will not change the Neural Tangent Kernel too much. Lemma 5.1 (Weight value perturbation \u21d2kernel value perturbation). Let R \u2208(0, 0.01). If the following conditions hold \u2022 Let f W = [ e w1, \u00b7 \u00b7 \u00b7 , e wm] \u2208Rd\u00d7m, where e w1, \u00b7 \u00b7 \u00b7 , e wm are i.i.d. draw from N(0, \u03c32Id). \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m and satisfy \u2225e wr \u2212wr\u22252 \u2264R for any r \u2208[m]. Then with probability at least 1 \u2212\u03b4, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rnd exp(10B). Please see Appendix B.2 for the proof of Lemma 5.1. We can see that the kernel matrix has a small perturbation when the weights w perturb. Note that in Lemma 4.2 [MOSW22], they have \u2225H(W)\u2212H(f W )\u2225F \u22642Rn for the ReLU activation function and in Lemma 6.7 [GMS23], they have \u2225H(W)\u2212H(f W)\u2225F \u22643Rn1+o(1) for the exp activation function. When we consider the 1-dimension linear regression task, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rn1+o(1), which is almost the same as the other two cases. Remark 5.2. In the proof of Lemma B.2, we do not use concentration bound as previous work [SY19, MOSW22, GMS23]. The reason is that we consider the worst case. In general, E[H(W)\u2212H(f W)] \u0338= 0nd\u00d7nd. Thus, using the concentration bound may not gain any bene\ufb01ts. 8 \fBased on Lemma 5.1, we can use math induction to \ufb01nish the proof of our main Theorem. We show the induction statement below. Lemma 5.3 (Induction). Let \u03c4 be a \ufb01xed integer. Assume the same condition as Theorem 4.2. Let D be de\ufb01ned as De\ufb01nition A.2 and D < R. If the following conditions hold \u2022 Weights Property. \u2225wr(i) \u2212wr(0)\u22252 \u2264R, \u2200i \u2208[\u03c4] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 for all r \u2208[m], \u2200i \u2208[\u03c4] Then, for \u03c4 + 1 and \u2200r \u2208[m], we have \u2022 Weights Induction. \u2225wr(\u03c4 + 1) \u2212wr(0)\u22252 \u2264D. \u2022 Loss Induction. \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/4)\u03c4+1 \u00b7 \u2225F(0) \u2212Y \u22252 F . \u2022 Gradient Induction. \u03b7\u2225\u2206wr(\u03c4 + 1)\u22252 \u22640.01, \u2200r \u2208[m]. Please refer to Appendix C.2, Appendix C.3 and Appendix C.4 for the proof of weights, loss, gradient induction in Lemma 5.3 respectively. Lemma 5.3 means that, at a \ufb01xed timestamp \u03c4, if the weights w(\u03c4) is close to its initialization, the loss is decreasing and the gradient is also small, then we can conclude at timestamp \u03c4 + 1, these conditions still hold as local convexity proved by Lemma 5.1. Thus, after checking the initial condition, we can conclude Theorem 4.2. 6 Application in Di\ufb00usion Now, we apply our results in learning score estimation functions in di\ufb00usion models with noisy labels. 
We introduce problem setup in Section 6.1 and show our results in Section 6.2. 6.1 Preliminary of Di\ufb00usion In this section, we brie\ufb02y introduce the di\ufb00usion model proposed in [SSDK+21]. Forward Process. During the forward process, we progressively inject the noise into the original data distribution, which can be characterized by the following Stochastic Di\ufb00erential Equation (SDE) [SE20, HJA20]: dx(t) = \u22121 2g(t)x(t) dt + p g(t)dBt, x(0) \u223cp0, (3) where x(t) is the data at the di\ufb00usion process time t, g(t) > 0 is a deterministic weighting function; and (Bt)t\u22650 is a standard d-dimensional Brownian motion/noise. The p0 represents the original/target data distribution that we learn, and we only have few number of accesses to it, i.e., n times. We denote pt as the distribution of x(t) at di\ufb00usion process time t. Then, we can write the explicit solution to Eq. (3) as x(t) = e\u2212 R t 0 1 2g(s)dsx(0) + e\u2212 R t 0 1 2g(s)ds Z t 0 e R s 0 1 2g(u)dup g(s)dBs. 9 \fBackward Process. We denote y(t) = x(T \u2212t) to reverse the forward process in time [HP86, F\u00a8 ol05, CCGL21] that transforms noise into samples from the target distribution. We have a backward process associated to Eq. (3) as: dy(t) = (1 2g(T \u2212t)y(t) + g(T \u2212t)\u2207log pT\u2212t(y(t)))dt + p g(T \u2212t)d \u00af Bt, y(0) \u223cq0. (4) where ( \u00af Bt)t\u22650 is another d-dim Brownian motion/noise. Following the literature, we call \u2207log pt(\u00b7) as \u201cscore function\u201d [SSDK+21]. We have q0 is the initial distribution of the backward process and the score function \u2207log pt(\u00b7) as the gradient of log density of x(t). However, In practice, Eq.(4) cannot be directly used as both the score function and the distribution pT are unknown. To solve the problem, we (1) randomly select a noise distribution as the initial distribution of the backward process pT ; (2) replace the ground-truth score function \u2207log pt(x(t)) by an estimator s\u03b8(x(t), t). The parameterized estimator s\u03b8 is learned by a neural network such as U-Net [HJA20, RBL+22] and Transformer [PX23]. Thus, we obtain a practically implementable approximation of the backward SDE: dy(t) = (1 2g(T \u2212t)y(t) + g(T \u2212t)s\u03b8(y(t), t))dt + p g(T \u2212t)d \u00af Bt, y(0) \u223cN(0, Id), which can be used for sampling/data generation [SE20, CHZW23, CCL+23] Score Matching. When estimate the score function, usually we use L2 loss between the estimated and actual score: min \u03b8 1 T Z T 0 \u03bb(t)E[\u2225s\u03b8(x(t), t) \u2212\u2207log pt(x(t))\u22252 2]dt, (5) where \u03bb(t) is the weighting function that captures time inhomogeneity. As the hardness of estimate \u2207log pt term in Eq. (5), equivalently, we minimize the following denoising score matching [Vin11]: min \u03b8 1 T \u2212T0 Z T T0 \u03bb(t)E[\u2225s\u03b8(x(t), t) \u2212\u2207log pt|0(x(t) | x(0))\u22252 2]dt. (6) In practice, the estimator of the score function is parameterized by a neural network and we have the following sampling procedure for any i \u2208[n], x(0)i \u223cp0, ti \u223cUnif(0, T), x(ti)i \u223cpti|0(\u00b7|x(0)i), and we get the training dataset {x(0)i, (ti, x(ti)i)}n i=1, where x(0)i \u2208Rd and (ti, x(ti)i) \u2208Rd+1. We denote x(0) as the noisy label and E[x(0)|x(t)] as the true label. For simplicity, we denote x(0)i as yi \u2208Rd and (ti, x(ti)i) as xi \u2208Rd+1 and the training dataset as Dn = {(xi, yi)}n i=1. 
Here, y denotes the image from a dataset and x denotes the noised image with its di\ufb00usion process time t. Neural Network Parameterization. Recall that we consider a two-layer network with softmax activation function as the di\ufb00usion model in Eq. (1), satisfying \u2200\u2113\u2208[d], F(W, x, a)\u2113= m\u27e8a\u2113, exp(W \u22a4x)\u27e9\u00b7 \u27e8exp(W \u22a4x), 1m\u27e9\u22121. Note that, we do not train the top-layer weights a, so we can denote it as Fnn(W, x). Then, similar as [HJA20, HRX24], our loss function Eq. (6) can be rewrite as min W L(W) := 1 2 N X j=1 \u2225Fnn(W, xj) \u2212yj\u22252 2. We denote the target function as F\u2217(t, x(t)) := E[y | (t, x(t))]. Let H be the reproducing Hilbert space (RKHS) induced by the NTK [CDVTU10, JGH18] and let FH in the RKHS H such that \u2225FH\u22252 H \u2264RH. 10 \f6.2 Main Result of Di\ufb00usion We \ufb01rst introduce some natural assumptions we used. Assumption 6.1. Based on normalization, we assume \u2225yi\u22252 \u22641, \u2225xi\u22252 \u22641, \u2200i \u2208[n]. Assumption 6.2. Assume \u03bb = \u03bbmin(H\u2217) > 0. Assumption 6.3. The function g is almost everywhere continuous and bounded on [0, \u221e). Assumption 6.4. For all (t, x(t)) \u2208(0, \u221e) \u00d7 Rd, the function F\u2217(t, x(t)) is \u03b2x-Lipschitz in x, i.e., \u2225F\u2217(t, x(t)) \u2212F\u2217(t, x\u2032(t))\u22252 \u2264\u03b2x\u2225x(t) \u2212x\u2032(t)\u22252. We denote A(RH) := c1\u039b( \u221aRH \u039b )\u22122 d log( \u221aRH \u039b ) and \u039b = O( \u221a d) and \u0393\u03b4 := 2d2A(RH) \u03bb log3/2(e(dn)3/2A(RH) \u03bb ) + 1 \u221an !2 + d2A2(RH) \u03bb2 (log(1/\u03b4) + log(log n)). Now, we are ready to present our main Theorem for di\ufb00usion. Theorem 6.5 (Main results of score estimation and generalization). Suppose Assumptions 6.1, 6.2, 6.3, 6.4 hold and we set m = \u2126(\u03bb\u22122n3d3 exp(18B) log2(nd/\u03b4)) and \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)). Moreover, suppose b T satis\ufb01es Assumption G.3 with corresponding \u01eb(n, b T). Then for large enough RH, with probability at least 1 \u2212\u03b4, it holds that 1 T Z T 0 \u03bb(t)E[\u2225sW ( b T)(t, x(t)) \u2212\u2207log pt(Xt)\u22252 2]dt \u2264O \u0012 1 \u03bb\u221an + \u01eb(n, b T) + dA2(RH) + dA(RH) + p dA(RH)\u0393\u03b4 + \u0393\u03b4 \u0013 . Please refer to Appendix G.1 for the complete proof. Here we provide a proof sketch. Proof sketch of Theorem 6.5. In Theorem F.2, we show the \u201cequivalence\u201d between softmax NN learning and corresponding neural tangent kernel regression, i.e., the gap between them is always small. Then, we can borrow the generalization ability of kernel regression to the generalization ability of two-layer softmax NN. On the other hand, by Claim G.1, we can decompose the loss into a coupling gap, a label mismatch gap, an early stopping gap, and an approximation gap. By using our Theorem 4.2, Theorem F.2 with some tools from [HRX24], we \ufb01nish the proof. From Theorem 6.5, we know that, under some natural assumptions, the GD algorithm trained two-layer softmax NN can learn a provable accuracy on the score estimation functions in the di\ufb00usion model with noisy labels. We use this practical case study to demonstrate the broad applicability of our theoretical \ufb01ndings. 7 Discussion and Future Work Self-attention Learning. 
The self-attention can be written as F(W KX, W QX, W V X) \u2208Rd\u00d7n\u2032, (7) where W K, W Q, W V \u2208Rd\u00d7d denotes key, query, and value matrix respectively and X \u2208Rd\u00d7n\u2032 is a sequence of n\u2032 tokens. As our work is a \ufb01rst step to understanding softmax, it is natural to consider 11 \fhow to extend our results to self-attention. It is well-known that using two reformulation tricks: tensor-trick and SVM-trick [GSWY23, GSX23, AS24a], any analysis for softmax function can be naturally generalized to attention function F(W KX, W QX, W V X). Therefore, we conjecture that we can borrow the idea from [GSWY23, GSX23, AS24a] to decouple Eq (7) into the value term and the softmax term. And, we can alternatively optimize the weights for the softmax term (W k, W Q) and the value term (W V ). We leave this valuable direction as a future work. Feature Learning. Recently, there is a line of work showing that feature learning may be beyond NTK on sample complexity or time complexity, e.g., [AZL19, WLLM19, HN19, AZLL19, DM20, CBL+20, YH20, HY20, LMZ20, GMMM20, RGKZ21, MKAS21, LXMZ21, DLS22, SWL22, SWL24] and many more. It is worth studying the feature learning ability of two-layer softmax NN to \ufb01gure out what feature pattern the softmax prefers to learn and how it happens. We leave this valuable direction as a future work. 8 Conclusion This paper provides a theoretical analysis of the optimization and generalization properties of twolayer neural networks with softmax activation function. We apply our results in learning score estimation functions in di\ufb00usion models with noisy labels to verify our analysis e\ufb00ectiveness. Our \ufb01ndings contribute to a deeper understanding of the power of softmax neural networks and their potential to self-attention, advance LLMs, and generative modeling. Acknowledgement Research is partially supported by the National Science Foundation (NSF) Grants 2023239-DMS, CCF-2046710, and Air Force Grant FA9550-18-1-0166. The authors would like to thank Yufa Zhou for his helpful suggestions and feedback. 12 \fAppendix Roadmap. In Section A, we introduce some de\ufb01nitions that will be used in the proof. In Section B, we provide the basic concentration. In Section C, we provide the proof of our inductions. In Section D, we establish a bound for the weight of induction Part 1. In Section E, we establish a bound for the loss of induction Part 2. In Section F, we introduce the NTK regression. In Section G, we introduce the di\ufb00usion. A De\ufb01nition Claim A.1 (Restatement of Claim 3.4). We have \u2206wr(\u03c4) := m n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (\u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi Proof of Claim 3.4. We can show that \u2206wr(\u03c4)/m = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 (\u27e8a\u2113\u25e6er \u2212a\u2113\u00b7 Si,r(\u03c4), Si(\u03c4)\u27e9)xi = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 (a\u2113,r \u2212\u27e8a\u2113, Si(\u03c4)\u27e9) \u00b7 Si,r(\u03c4) \u0011 \u00b7 xi = n X i=1 d X \u2113=1 (F\u2113,i(\u03c4) \u2212y\u2113,i) \u00b7 \u0010 \u27e8a\u2113,r \u00b7 1m \u2212a\u2113 | {z } m\u00d71 , Si(\u03c4) | {z } m\u00d71 \u27e9\u00b7 Si,r(\u03c4) \u0011 \u00b7 xi, where the \ufb01rst step follows from the de\ufb01nition of \u2206wr(\u03c4), the second step follows from \u27e8a\u2113\u25e6er, x\u27e9= a\u2113,rxr, and the last step is due to the Fact A.4. 
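As a quick sanity check on this closed form, the following minimal numpy sketch compares $\Delta w_r$ from Claim A.1 with a finite-difference gradient of the squared loss $\frac{1}{2}\|F - Y\|_F^2$ on a tiny random instance; the sizes, the $\pm 1$ top-layer signs $a_{\ell,r}$, and all variable names are illustrative assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 4, 3, 8                              # tiny illustrative sizes
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # ||x_i||_2 = 1
Y = rng.normal(size=(n, d))
W = 0.5 * rng.normal(size=(d, m))              # columns are the hidden weights w_r
A = rng.choice([-1.0, 1.0], size=(d, m))       # fixed top-layer weights a_{l,r}

def softmax_rows(Z):
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def forward(W):
    # F_{l,i} = m <a_l, S_i> with S_i = softmax(W^T x_i), as in Eq. (1)
    return m * softmax_rows(X @ W) @ A.T       # shape (n, d)

def loss(W):
    return 0.5 * np.sum((forward(W) - Y) ** 2)

# closed-form gradient of Claim A.1
S = softmax_rows(X @ W)                        # row i is S_i
R = forward(W) - Y                             # residual F - Y
grad = np.zeros_like(W)
for r in range(m):
    # <a_{l,r} 1_m - a_l, S_i> = a_{l,r} - <a_l, S_i>
    coef = (R * (A[:, r][None, :] - S @ A.T)).sum(axis=1)   # shape (n,)
    grad[:, r] = m * (coef * S[:, r]) @ X

# finite-difference gradient of the same loss
eps, fd = 1e-6, np.zeros_like(W)
for k in range(d):
    for r in range(m):
        Wp, Wm = W.copy(), W.copy()
        Wp[k, r] += eps
        Wm[k, r] -= eps
        fd[k, r] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(np.max(np.abs(grad - fd)))               # should be close to zero (finite-difference noise level)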
We present the following de\ufb01nition to simplify the notation. De\ufb01nition A.2. We de\ufb01ne D D := 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F Fact A.3. For any vectors u, v \u2208Rn, the squared Euclidean distance between u and v can be expressed as: \u2225u \u2212v\u22252 2 = \u2225u\u22252 2 \u22122u\u22a4v + \u2225v\u22252 2. Fact A.4. Let 1m be a vector of dimension m consisting of all ones, and Si(\u03c4) \u2208Rm \u22650 be the indicator of some function \u03c4 at position i. We have: 1 = \u27e81m, Si(\u03c4)\u27e9 Fact A.5. For any real number |x| \u22640.1, the following inequality holds: (1 \u2212x)1/2 \u22641 \u22120.5x 13 \fFact A.6. For any real number |x| \u22640.1, we have | exp(x) \u22121| \u22642|x| Fact A.7. For any x \u2208(0, 0.1), we have \u221e X i=0 xi \u2264 1 1 \u2212x Fact A.8. For any |x| \u22640.01, we have exp(x) = 1 + x + \u0398(1)x2 We state the standard Hoe\ufb00ding inequality, Lemma A.9 (Hoe\ufb00ding inequality [Hoe63]). If the below conditions are true \u2022 Let x1, \u00b7 \u00b7 \u00b7 , xn denote n independent variables \u2022 xi \u2208[\u03b1i, \u03b2i], for all i \u2208[n] \u2022 Let x = Pn i=1 xi. Then we have Pr[|x \u2212E[x]| \u2265t] \u22642 exp \u2212 2t2 P i\u2208[n](\u03b2i \u2212\u03b1i)2 ! . Lemma A.10 (Hanson-Wright inequality [HW71, RV13]). Let x \u2208Rn denote a random vector with independent entries xi with E[xi] = 0 and |xi| \u2264K. Let A be an n \u00d7 n matrix. Then, for every t \u22650, Pr[|x\u22a4Ax \u2212E[x\u22a4Ax]| > t] \u22642 \u00b7 exp(\u2212c min{t2/(K4\u2225A\u22252 F ), t/(K2\u2225A\u2225)}). B Basic Concentration In Section B.1, we introduce some concentration basic tools. In Section B.2, given w perturbation within a small ball, we bound the changes of H. B.1 Some Concentration Basic Tools The goal of this section is to prove Lemma B.1. Lemma B.1. If the following conditions hold \u2022 Let B > 1 denote a parameter be de\ufb01ned as De\ufb01nition 4.1. \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] and wr be random Gaussian vectors from N(0, \u03c32Id). \u2022 Let V = [v1, \u00b7 \u00b7 \u00b7 , vm] and vr denote the vector where \u2225vr \u2212wr\u22252 \u2264R, \u2200r \u2208[m]. \u2022 Let xi \u2208Rd and \u2225xi\u22252 \u22641, \u2200i \u2208[n]. \u2022 Let R \u2208(0, 0.01). 14 \f\u2022 Let Si and e Si be the softmax function corresponding to W and V respectively. \u2022 Let \u03b1i = \u27e81m, exp(W \u22a4xi)\u27e9and e \u03b1i = \u27e81m, exp(V \u22a4xi)\u27e9, \u2200i \u2208[n]. Then, with probability at least 1 \u2212\u03b4/ poly(nd), we have \u2022 Standard inner product \u2013 Part 1. |\u27e8wr, xi\u27e9| \u2264B, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 2. |\u27e8vr, xi\u27e9| \u2264B + R, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 3. |\u27e8wr \u2212vr, xi + xj\u27e9| \u22642R, \u2200i, j \u2208[n], \u2200r \u2208[m] \u2022 exp function \u2013 Part 4. exp(\u2212B) \u2264exp(\u27e8wr, xi\u27e9) \u2264exp(B), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 5. exp(\u2212B \u2212R) \u2264exp(\u27e8vr, xi\u27e9) \u2264exp(B + R), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 6. | exp(\u27e8wr \u2212vr, xi + xj\u27e9) \u22121| \u22644R, \u2200i, j \u2208[n], \u2200r \u2208[m] \u2013 Part 7. | exp(\u27e8wr, xi\u27e9) \u2212exp(\u27e8vr, xi\u27e9)| \u2264R exp(B + R), \u2200i \u2208[n], \u2200r \u2208[m] \u2022 softmax S function \u2013 Part 8. |\u03b1i \u2212e \u03b1i| \u2264mR exp(B + R), \u2200i \u2208[n] \u2013 Part 9. 
|\u03b1\u22121 i \u2212e \u03b1\u22121 i | \u2264R m exp(3B + 2R), \u2200i \u2208[n] \u2013 Part 10. |Si,r| \u2264exp(2B)/m, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 11. | e Si,r| \u2264exp(2B + 2R)/m, \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 12. |Si,r \u2212e Si,r| \u2264R m exp(4B + 3R), \u2200i \u2208[n], \u2200r \u2208[m] \u2013 Part 13. for any z \u2208Rm and \u2225z\u2225\u221e\u22641, we have |\u27e8z, Si\u27e9\u2212\u27e8z, e Si\u27e9| \u2264R exp(4B+3R), \u2200i \u2208 [n] Proof. As eventually we choose m = poly(nd), we use B > 0 de\ufb01ned in De\ufb01nition 4.1. Proof of Part 1, 2, 4 and 5. We can get the proof by Gaussian tail bound. Proof of Part 3 and 6. Due to \u2225xi\u22252 \u22641 and \u2225xj\u22252 \u22641 and \u2225\u2206wr\u22252 \u2264R, we can have |\u27e8\u2206wr, (xi + xj)\u27e9| \u22642R \u22640.1. (8) Then, we have | exp(\u27e8\u2206wr, (xi + xj)\u27e9) \u22121| \u22642|\u27e8\u2206wr, (xi + xj)\u27e9| \u22644R where the \ufb01rst step follows from the Fact A.6, and the last step follows from Eq. (8). Proof of Part 7. Because \u2225xi\u22252 \u22641 and \u2225\u2206wr\u22252 \u2264R, we can have |\u27e8\u2206wr, xi\u27e9| \u2264R \u22640.1. (9) By convex increasing property of exp function, we have | exp(\u27e8wr, xi\u27e9) \u2212exp(\u27e8vr, xi\u27e9)| \u2264max{exp\u2032(\u27e8wr, xi\u27e9), exp\u2032(\u27e8vr, xi\u27e9} \u00b7 |\u27e8\u2206wr, xi\u27e9| 15 \f\u2264exp(B + R) \u00b7 |\u27e8\u2206wr, xi\u27e9| \u2264exp(B + R)R. where the \ufb01rst step follows from Taylor expansion and exp\u2032 denote the derivative of exp, the second step follows from Part 4 and Part 5 and the last step follows from Eq. (9). Proof of Part 8. |\u03b1i \u2212e \u03b1i| = | X r\u2208[m] expi,r \u2212g X r\u2208[m]expi,r| \u2264 X r\u2208[m] |expi,r \u2212g expi,r| \u2264mR exp(B + R), where the third step is due to Part 7. Proof of Part 9. Similarly, we have |\u03b1\u22121 i \u2212e \u03b1\u22121 i | = | e \u03b1i \u2212\u03b1i \u03b1ie \u03b1i | \u2264mR exp(B + R) |\u03b1ie \u03b1i| \u2264 mR exp(B + R) |m exp(\u2212B)m exp(\u2212B \u2212R)| = R m exp(3B + 2R). where the \ufb01rst step is due to simple algebra, the second step is from Part 8, the third step follows Part 4, 5, and the last step is because of simple algebra. Proof of Part 10 and 11. Trivially follows Part 4 and Part 5. Proof of Part 12. |Si,r \u2212e Si,r| = |\u03b1\u22121 i expi,r \u2212e \u03b1\u22121 i g expi,r| \u2264|\u03b1\u22121 i expi,r \u2212\u03b1\u22121 i g expi,r| + |\u03b1\u22121 i g expi,r \u2212e \u03b1\u22121 i g expi,r| For the \ufb01rst part, we have |\u03b1\u22121 i expi,r \u2212\u03b1\u22121 i g expi,r| = \u03b1\u22121 i | expi,r \u2212g expi,r| \u2264\u03b1\u22121 i exp(B + R)R \u2264exp(B + R)R m exp(\u2212B) = R m exp(2B + R), where the second step follows Part 7 and the third step follows Part 4. 16 \fFor the second part, we have |\u03b1\u22121 i g expi,r \u2212e \u03b1\u22121 i g expi,r| = g expi,r|\u03b1\u22121 i \u2212e \u03b1\u22121 i | \u2264g expi,r R m exp(3B + 2R) \u2264exp(B + R) R m exp(3B + 2R) = R m exp(4B + 3R), where the second step follows Part 9, and the third step follows Part 5. Thus, we have |Si,r \u2212e Si,r| \u2264R m exp(4B + 3R). Proof of Part 13. Note that \u2225z\u2225\u221e\u22641. 
We have |\u27e8z, Si\u27e9\u2212\u27e8z, e Si\u27e9| = |\u27e8z, Si \u2212e Si\u27e9| \u2264m\u2225Si \u2212e Si\u2225\u221e \u2264R exp(4B + 3R) where the \ufb01rst step follows from simple algebra, the second step follows from |\u27e8a, b\u27e9| \u2264m \u00b7 maxi\u2208[m] |aibi|, and the last step is due to Part 12. B.2 Kernel Perturbation The purpose of this section is to prove Lemma B.2. In the proof, we do not use concentration inequality. Please see Remark 5.2 for more details. Lemma B.2 (Restatement of Lemma 5.1). If the following conditions hold \u2022 Let B \u22651 denote a parameter be de\ufb01ned as De\ufb01nition 4.1. \u2022 Let R \u2208(0, 0.01). \u2022 Let xi \u2208Rd and \u2225xi\u22252 \u22641 for all i \u2208[n]. \u2022 Let f W = [ e w1, \u00b7 \u00b7 \u00b7 , e wm] \u2208Rd\u00d7m, where e w1, \u00b7 \u00b7 \u00b7 , e wm are are i.i.d. draw from N(0, \u03c32Id). \u2022 Let W = [w1, \u00b7 \u00b7 \u00b7 , wm] \u2208Rd\u00d7m and satisfy \u2225e wr \u2212wr\u22252 \u2264R for any r \u2208[m]. \u2022 Let v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm, for any \u2113\u2208[d] and for any r \u2208[m]. Note that a\u2113,r is the r-th in a\u2113. \u2022 Let \u03b1i = \u27e81m, exp(W \u22a4xi)\u27e9and e \u03b1i = \u27e81m, exp(V \u22a4xi)\u27e9, \u2200i \u2208[n]. \u2022 Let H be de\ufb01ned as De\ufb01nition 3.6. Then, we have \u2022 Part 1. Then with probability at least 1 \u2212\u03b4/ poly(nd), |[H\u21131,\u21132]i,j(W) \u2212[H\u21131,\u21132]i,j(f W)| \u2264R \u00b7 exp(10B). 17 \f\u2022 Part 2. Then with probability at least 1 \u2212\u03b4, we have \u2225H(W) \u2212H(f W)\u2225F \u2264Rnd \u00b7 exp(10B). Proof of Lemma 5.1. We de\ufb01ne \ufb01ve real numbers B1, B2, B3, B4, B5 \u2208R as follows, B1 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9expi,r expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9g expi,rg expj,r B2 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B3 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B4 := \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212\u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r B5 := \u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r \u2212e \u03b1\u22121 i e \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r Thus, we have |[H\u21131,\u21132]i,j(W) \u2212[H\u21131,\u21132]i,j(f W)|/m2 \u2264|B1| + |B2| + |B3| + |B4| + |B5|. To bound B1 We rewrite B1 as B1 = \u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9(exp(w\u22a4 r (xi + xj)) \u2212exp( e w\u22a4 r (xi + xj))). Recall that \u2225v\u21131,r\u2225\u221e\u22642 and \u2225Si\u22251 \u22641. Thus, |\u27e8v\u21131,r, Si\u27e9| \u22642. By Fact A.4, we know that |\u27e8v\u21131,r, Si\u27e9\u27e8v\u21132,r, Sj\u27e9| \u22642 \u00b7 2 = 4. 
By Part 4 of Lemma B.1, with probability 1 \u2212\u03b4/ poly(nd), we know that |\u03b1\u22121 i | \u22641 m exp(B). We will condition on the above event is holding in the rest of the proof. By Part 7 of Lemma B.1, | exp( e w\u22a4 r (xi + xj)) \u2212exp(w\u22a4 r (xi + xj))| \u22642R exp(2B + 2R). Finally, we know that |B1| \u22648R m2 exp(5B). To bound B2 and B3 We can rewrite B2 as follows |B2| = |\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 \u27e8v\u21131,r, Si\u27e9g expi,rg expj,r(\u27e8v\u21132,r, Sj\u27e9\u2212\u27e8v\u21132,r, e Sj\u27e9)| \u2264\u03b1\u22121 i \u03b1\u22121 j 1 m m X r=1 |\u27e8v\u21131,r, Si\u27e9|g expi,rg expj,r|(\u27e8v\u21132,r, Sj\u27e9\u2212\u27e8v\u21132,r, e Sj\u27e9)|. 18 \fFollowing the similar strategy as B1, by Part 13 of Lemma B.1, we know that |B2| \u22641 m exp(B) \u00b7 1 m exp(B) \u00b7 2 \u00b7 exp(B + R) \u00b7 exp(B + R) \u00b7 4R exp(4B + 3R) \u22648R m2 exp(9B). Similarly, we have |B3| \u22648R m2 exp(9B). To bound B4 and B5 For the term B4, we can rewrite |B4| = |(\u03b1\u22121 j \u2212e \u03b1\u22121 j ) \u00b7 \u03b1\u22121 i 1 m m X r=1 \u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9g expi,rg expj,r| \u2264|\u03b1\u22121 j \u2212e \u03b1\u22121 j | \u00b7 \u03b1\u22121 i 1 m m X r=1 |\u27e8v\u21131,r, e Si\u27e9\u27e8v\u21132,r, e Sj\u27e9|g expi,rg expj,r. Thus, by Part 9 of Lemma B.1, using similar proof strategy as B1 as know |B4| \u2264R m exp(3B + 2R) \u00b7 1 m exp(B) \u00b7 2 \u00b7 2 \u00b7 exp(B + R) \u00b7 exp(B + R) \u22644R m2 exp(7B). Similarly, we have |B5| \u22644R m2 exp(7B). C Induction In Section C.1, we provide the proof of our main result. In Section C.2, we provide an induction lemma for weights part. In Section C.3, we provide an induction lemma for loss part. In Section C.4, we provide an induction lemma for gradient part. C.1 Main Result Our main result is presented as follows. Theorem C.1 (Main result. Restatement of Theorem 4.2). For any \u01eb, \u03b4 \u2208(0, 0.1), if the following conditions hold \u2022 Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 Let m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Let \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) 19 \f\u2022 Let b T = \u2126((m\u03b7\u03bb)\u22121 log(nd/\u01eb)) = \u2126(\u03bb\u22122n2d2 exp(16B) \u00b7 log(nd/\u01eb)) Then, after b T iterations, we have \u2225F( b T) \u2212Y \u22252 F \u2264\u01eb. Proof of Theorem 4.2. Let \u03c3 = 1. We have \u2225F(0) \u2212Y \u22252 F \u2264nd by Lemma D.3. Using the choice of b T, it follows directly from the alternative application of Lemma C.3 and Lemma C.2. Since exp(\u0398(B)) = (nd)o(1), we can simplify the nd exp(\u0398(B)) = (nd)1+o(1). C.2 Induction Part 1. For Weights We provide an induction lemma for weights part. Lemma C.2 (Induction Part 1. For Weights). Let \u03c4 be a \ufb01xed integer. If the below conditions are true \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. \u2225wr(i) \u2212wr(0)\u22252 \u2264R for all i \u2208[\u03c4] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. 
\u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then, for \u03c4 + 1 and \u2200r \u2208[m], we have \u2225wr(\u03c4 + 1) \u2212wr(0)\u22252 \u2264D. Proof. We have \u03b7 \u221e X i=0 (1 \u2212m\u03b7\u03bb/2)i/2 \u2264\u03b7 \u221e X i=0 (1 \u2212m\u03b7\u03bb/4)i \u2264\u03b7 1 m\u03b7\u03bb/4 \u2264 4 m\u03bb (10) where the \ufb01rst step is due to the Fact A.5, the second stepis due to the Fact A.7, the last step is because of simple algebra. 20 \fWe use the gradient\u2019s norm to measure the weights di\ufb00erence: \u2225wr(0) \u2212wr(\u03c4 + 1)\u22252 \u2264\u03b7 \u03c4 X i=0 \u2225\u2206wr(i)\u22252 \u2264\u03b7 \u03c4 X i=0 exp(3B) \u221a nd \u00b7 \u2225F(i) \u2212Y \u2225F \u2264\u03b7 exp(3B) \u221a nd \u03c4 X i=0 (1 \u2212m\u03b7\u03bb/2)i/2 \u00b7 \u2225F(0) \u2212Y \u2225F \u22644m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F = D where the \ufb01rst step follows from wr(i + 1) \u2212wr(i) = \u03b7 \u00b7 \u2206wr(i), the second step follows from Lemma D.1 for \u03c4 times, the third step follows from Loss Property in Lemma statement, the fourth step follows from Eq. (10), the last step is from General Property 3 in Lemma statement. C.3 Induction Part 2. For Loss We provide an induction lemma for loss part. Lemma C.3 (Induction Part 2. For Loss). Let \u03c4 be a \ufb01xed integer. If the following conditions hold \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264D < R, \u2200r \u2208[m] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then we have \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/4)\u03c4+1 \u00b7 \u2225F(0) \u2212Y \u22252 F . Proof. We have \u2225F(\u03c4) \u2212Y \u22252 F \u2264\u2225F(\u03c4 \u22121) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2) which follows Lemma E.2. Thus, we complete the proof by induction. 21 \fC.4 Induction Part 3. For Gradient We provide an induction lemma for gradient part. Lemma C.4 (Induction Part 3. For Gradient). Let \u03c4 be a \ufb01xed integer. If the following conditions hold \u2022 General Property 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 General Property 2. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 General Property 3. Let D be de\ufb01ned as De\ufb01nition A.2 \u2022 General Property 4. D < R = \u03bb/(2nd exp(10B)) \u2022 General Property 5. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Weights Property. \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264D < R, \u2200r \u2208[m] \u2022 Loss Property. \u2225F(i) \u2212Y \u22252 F \u2264\u2225F(0) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2)i, \u2200i \u2208[\u03c4] \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01 \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then we have \u03b7\u2225\u2206wr(\u03c4 + 1)\u22252 \u22640.01, \u2200r \u2208[m] Proof. This is trivially follows from Lemma D.1 and Lemma D.2. 
D Induction Part 1: For Weights In Section D.1, we propose the lemma for bounding gradient and its corresponding proof. In Section D.2, we propose the bounding initialization loss and its corresponding proof. D.1 Bounding the Gradient at any Time In this section, we bound the gradient. Lemma D.1. If the following condition hold, \u2022 Let B > 1 denote a parameter be de\ufb01ned as De\ufb01nition 4.1 \u2022 Let R \u2208(0, 0.01) \u2022 \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264R \u2022 Let v\u2113,r = a\u2113,r \u00b7 1m \u2212a\u2113\u2208Rm, for any \u2113\u2208[d] and for any r \u2208[m] For any timestamp \u03c4, we have \u2225\u2206wr(\u03c4)\u22252 \u2264exp(3B) \u221a nd \u00b7 \u2225F(\u03c4) \u2212Y \u2225F . 22 \fProof. We have \u2225\u2206wr(\u03c4)\u22252 = \r \r \r \r \rm n X i=1 d X \u2113=1 (y\u2113,i \u2212F\u2113,i) \u00b7 xi \u00b7 \u27e8v\u2113,r, Si(\u03c4)\u27e9\u00b7 Si,r(\u03c4) \r \r \r \r \r 2 \u2264exp(3B) n X i=1 d X \u2113=1 |y\u2113,i \u2212F\u2113,i(\u03c4)| \u2264exp(3B) \u221a nd \u00b7 \u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step follows from Claim 3.4 and De\ufb01nition 3.3, the second step follows from |\u27e8v\u2113,r, Si\u27e9| \u22642 and |Si,r| \u2264exp(2B + 2R)/m by Part 11 of Lemma B.1, the last step follows from Cauchy-Schwartz inequality. Lemma D.2. If the following conditions hold, \u2022 \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 \u2225wr(\u03c4) \u2212wr(0)\u22252 \u2264R Then, for any timestamp \u03c4, we have \u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01 Proof. This trivially follows from Lemma D.1 and choice of \u03b7. D.2 Bounding the Initialization Loss In this section, we bound the initialization loss. Lemma D.3. We have \u2225F(0) \u2212Y \u2225F \u2264O( \u221a nd). Proof. This trivially follows from \u2225yi\u2225\u22641, \u2200i \u2208[n] and symmetric initialization from De\ufb01nition 3.7. E Induction Part 2: For Loss In Section E.1, we decompose the loss \u2225F(k + 1) \u2212Y \u22252 F into four parts, namely C0, C1, C2, and C3. In Section E.2, we show our choices of m and \u03b7. In Section E.3, we establish bounds for C0. In Section E.4, we establish bounds for C1. In Section E.5, we establish bounds for C2. In Section E.6, we establish bounds for C3. 23 \fE.1 Decomposition for \u2225vec(F(\u03c4 + 1) \u2212Y )\u22252 2 Here, we decompose the loss \u2225vec(F(\u03c4 + 1) \u2212Y )\u22252 2 into four parts C0, C1, C2 and C3. Lemma E.1. Assuming the following condition is met: \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9. \u2022 Let scalar v0,\u2113,i \u2208R be de\ufb01ned as follows v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) \u2022 Let scalar v1,\u2113,i \u2208R be de\ufb01ned as follows v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u2022 Let scalar v2,\u2113,i \u2208R be de\ufb01ned as follows v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 \u2022 Gradient Property. 
\u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] \u2022 C0 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v0)\u27e9 \u2022 C1 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v1)\u27e9 \u2022 C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 \u2022 C3 = \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F then \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(t) \u2212Y \u22252 F + C0 + C1 + C2 + C3. Proof. The expression \u2225Y \u2212F(\u03c4 + 1)\u22252 F = \u2225vec(Y \u2212F(\u03c4 + 1))\u22252 2 can be rewritten in the following: \u2225vec(Y \u2212F(\u03c4 + 1))\u22252 2 = \u2225vec(Y \u2212F(\u03c4) \u2212(F(\u03c4 + 1) \u2212F(\u03c4)))\u22252 2 = \u2225vec(Y \u2212F(\u03c4))\u22252 2 \u22122 vec(Y \u2212F(\u03c4))\u22a4vec(F(\u03c4 + 1) \u2212F(\u03c4)) + \u2225vec(F(\u03c4 + 1) \u2212F(\u03c4))\u22252 2. (11) where the \ufb01rst step follows from simple algebra, the last step follows from Fact A.3. Recall the update rule (De\ufb01nition 3.5), wr(\u03c4 + 1) = wr(\u03c4) \u2212\u03b7 \u00b7 \u2206wr(\u03c4) In the following manner, \u2200\u2113\u2208[d], we can express F\u2113(\u03c4 + 1) \u2212F\u2113(\u03c4) \u2208Rn: 24 \fF\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4) = m X r\u2208[m] a\u2113,r \u00b7 (\u03b1i(\u03c4 + 1)\u22121 exp(\u27e8wr(\u03c4 + 1), xi\u27e9) \u2212\u03b1i(\u03c4)\u22121 exp(\u27e8wr(\u03c4), xi\u27e9)) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r\u03b1i(\u03c4)\u22121 \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9) \u2212exp(\u27e8wr(\u03c4), xi\u27e9)) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (exp(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u22121) = + m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) + m X r\u2208[m] a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((wr(\u03c4)\u22a4xi) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9+ \u0398(1)\u03b72\u27e8\u2206wr(\u03c4), xi\u27e92) = v0,\u2113,i + v1,\u2113,i + v2,\u2113,i where the \ufb01rst step is due to the de\ufb01nition of F\u2113,i(\u03c4), the second step is from the simple algebra, the third step is due to |\u03b7\u2206wr(\u03c4)\u22a4xi| \u22640.01 (due to Gradient Property and \u2225xi\u22252 \u22641), the fourth step follows from the Fact A.8, the last step follows from v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 Here v0,\u2113,i and v1,\u2113,i are linear in \u03b7 and v2,\u2113,i is quadratic in \u03b7. Thus, v0,\u2113,i and v1,\u2113,i are the \ufb01rst order term, and v2,\u2113,i is the second order term. We can rewrite the second term in the Eq. 
(11) above as below: \u27e8vec(Y \u2212F(\u03c4)), vec(F(\u03c4 + 1) \u2212F(\u03c4))\u27e9 = \u27e8vec(Y \u2212F(\u03c4)), vec(v0 + v1 + v2)\u27e9 = \u27e8vec(Y \u2212F(\u03c4)), vec(v0)\u27e9+ \u27e8vec(Y \u2212F(\u03c4)), vec(v1)\u27e9+ \u27e8vec(Y \u2212F(\u03c4)), vec(v2)\u27e9 Therefore, we can conclude that \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(\u03c4) \u2212Y \u22252 F + C0 + C1 + C2 + C3. 25 \fE.2 Choice of Parameters Here, we show our choice of parameters m, \u03b7, R, B. Lemma E.2. If the below conditions are true \u2022 Condition 1. Let \u03bb = \u03bbmin(H\u2217) > 0 \u2022 Condition 2. m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)) \u2022 Condition 3. \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)) \u2022 Condition 4. R = \u03bb/(2nd exp(10B)) \u2013 Required by Claim E.5 \u2022 Condition 5. B = max{C\u03c3 p log(nd/\u03b4), 1} \u2022 Condition 6. D = 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F \u2022 Condition 7. D < R \u2022 Condition 8. \u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01, \u2200r \u2208[m] \u2013 Required by Lemma E.1, Claim E.3 and Claim E.7 Then it holds that \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264\u2225F(\u03c4) \u2212Y \u22252 F \u00b7 (1 \u2212m\u03b7\u03bb/2) holds with probability at least 1 \u2212\u03b4. Proof. We can show \u2225F(\u03c4 + 1) \u2212Y \u22252 F = \u2225F(\u03c4) \u2212Y \u22252 F + C0 + C1 + C2 + C3 \u2264(1 \u22120.8m\u03b7\u03bb + 0.1m\u03b7\u03bb + 2m\u03b72n2d2 exp(9B) + \u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F \u2264(1 \u22120.7m\u03b7\u03bb + 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . where the \ufb01rst step follows from Lemma E.1, the second step follows from Lemma E.3 for C0, Lemma E.4, Claim E.5 for C1, Claim E.6 for C2 and Claim E.7 for C3, the last step follows from the simple algebra. Choice of \u03b7. Next, we want to choose \u03b7 such that (1 \u22120.7m\u03b7\u03bb + 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B)) \u2264(1 \u2212m\u03b7\u03bb/2). (12) Using the choice of \u03b7 in Condition 3 2\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u22640.2m\u03b7\u03bb This indicates: \u2225F(\u03c4 + 1) \u2212Y \u22252 F \u2264(1 \u2212m\u03b7\u03bb/2) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . (13) 26 \fLower bound for m, over-parametrization size. We require the following conditions \u2022 m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)) (required by Lemma E.3) \u2022 m \u2265\u2126(\u03bb\u22122n2d exp(12B) log2(nd/\u03b4)) (required by Lemma E.4) \u2022 D = 4m\u22121\u03bb\u22121 exp(3B) \u221a nd \u00b7 \u2225F(0) \u2212Y \u2225F < R = \u03bb/(2nd exp(10B))} (required by Condition 7.) Therefore, by \u2225Y \u2212F(0)\u2225F = O( \u221a nd) from Lemma D.3, it su\ufb03ces to choose: m = \u2126(\u03bb\u22122n2d2 exp(18B) log2(nd/\u03b4)). E.3 Bounding C0 Here, we explain about how to bound C0. Lemma E.3. If the following conditions hold \u2022 Let scalar v0,\u2113,i \u2208R be de\ufb01ned as follows v0,\u2113,i := m X r\u2208[m] a\u2113,r(\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121) \u00b7 (exp(\u27e8wr(\u03c4 + 1), xi\u27e9)) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9. \u2022 Let m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)) \u2022 Gradient Property. 
\u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] \u2022 We de\ufb01ne C0 as follows C0 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v0)\u27e9 Here vec(v0) \u2208Rnd is the vectorization of v0 \u2208Rn\u00d7d and vec(F(\u03c4) \u2212Y ) \u2208Rnd is the vectorization of F(\u03c4) \u2212Y \u2208Rn\u00d7d. Then we have |C0| \u22640.1m\u03b7\u03bb \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F Proof. We can rewrite v0,\u2113,i as follows: v0,\u2113,i = m m X r=1 a\u2113,r((\u03b1i(\u03c4 + 1))\u22121 \u2212\u03b1i(\u03c4)\u22121) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121 \u00b7 (\u27e81m, exp(W(\u03c4 + 1)xi) \u2212exp(W(\u03c4)xi)\u27e9) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121( m X r2=1 exp(wr2(\u03c4 + 1)xi) \u2212exp(wr2(\u03c4)xi)) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) 27 \f= m m X r=1 a\u2113,r\u03b1i(\u03c4 + 1)\u22121\u03b1i(\u03c4)\u22121 m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9exp(wr2(\u03c4)xi) exp(\u27e8wr(\u03c4 + 1), xi\u27e9) = m( m X r=1 a\u2113,r m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) | {z } \ufb01rst order term + \u03b72\u22062 | {z } second order term ) (14) where the \ufb01rst step follows from lemma statement, the second step follows from a\u22121 \u2212b\u22121 = b\u2212a ab , the third step follows from simple algebra, the fourth step follows from simple algebra, and the last step follows from |\u03b7\u2206wr(\u03c4)\u22a4xi| \u22640.01 (due to Gradient Property and \u2225xi\u22252 \u22641). The second order term \u03b72\u22062 in Eq. (14) can be bounded in a similar way as the proof of Claim E.6. Further, we can rewrite the \ufb01rst-order term in Eq. (14) m m X r=1 a\u2113,r m X r2=1 \u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) = m2(Q1,i,\u2113+ Q2,i,\u2113) (15) where Q1,i,\u2113:= 1 m m X r=1 a\u2113,r(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9)Si,r(\u03c4) \u00b7 Si,r(\u03c4 + 1) Q2,i,\u2113:= 1 m m X r=1 a\u2113,r X r2\u0338=r (\u2212\u03b7\u27e8\u2206wr2(\u03c4), xi\u27e9)Si,r2(\u03c4) \u00b7 Si,r(\u03c4 + 1) Let us consider how to handle the \ufb01rst term in Eq. (14), Q1,i,\u2113= 1 m m X r=1 a\u2113,r(\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9)Si,r(\u03c4) \u00b7 Si,r(\u03c4 + 1) = m X r=1 a\u2113,rSi,r \u00b7 Si,r(\u03c4 + 1)(\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi where the second step follows from computing \u2206wr(\u03c4) explicitly (see Claim 3.4). Similarly as proof of Lemma E.4, we can use concentration to bound n X i=1 d X \u2113=1 Q1,i,\u2113(F\u2113,i \u2212y\u2113,i) Note that 0 < Sj,r < exp(3B) m by Part 11 of Lemma B.1. The above small term is equivalent to \u2212\u03b7exp(9B) m3 \u00b7 n X i=1 n X j=1 m X r=1 d X \u2113=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u03c3i,j,r,\u2113,\u21132 \u00b7 Ci,j,r,\u2113,\u21132 \u00b7 (F\u2113,i(\u03c4) \u2212y\u2113,i), where \u03c3i,\u2113,\u21132,j,r \u223c[\u22121, +1] and |Ci,\u2113,\u21132,j,r| \u226410. 
We de\ufb01ne P1,r,\u2113,\u21132 := (F\u21132,j \u2212y\u21132,j)\u03c3i,j,r,\u2113,\u21132Ci,j,r,\u2113,\u21132(F\u2113,i \u2212y\u2113,i) 28 \fSimilarly as Lemma E.4, for each \ufb01xed i, j \u2208[n], using Hanson-Wright inequality (Lemma A.10), we can show Pr[| m X r=1 d X \u2113=1 d X \u21132=1 P1,r,\u2113,\u21132| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 \u221a md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). By mean inequality, we have n X i=1 n X j=1 \u2225Fj \u2212yj\u22252 \u00b7 \u2225Fi \u2212yi\u22252 \u2264n\u2225F \u2212y\u22252 F . Thus, we have the \ufb01rst term with probability at least 1 \u2212poly(nd), such that | n X i=1 d X \u2113=1 Q1,i,\u2113(F\u2113,i \u2212y\u2113,i)| \u2264\u03b7n exp(9B) m3 \u2225F \u2212y\u22252 F \u221a md log(nd/\u03b4) Similarly, we can compute n X i=1 d X \u2113=1 Q2,i,\u2113(F\u2113,i \u2212y\u2113,i) Using Hanson-Wright inequality (Lemma A.10), we have the second term with probability at least 1 \u2212poly(nd), such that | n X i=1 d X \u2113=1 Q2,i,\u2113(F\u2113,i \u2212y\u2113,i)| \u2264\u03b7n exp(9B) m2 \u2225F \u2212y\u22252 F \u221a md log(nd/\u03b4) Thus, we can complete the proof by the Lemma statement m \u2265\u2126(\u03bb\u22122n2d exp(18B) log2(nd/\u03b4)). E.4 Bounding C1 Here, we give the bound of the \ufb01rst order term C1. Note that this term is making progress. Lemma E.4. Assuming the following condition is met: \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9 \u2022 Let m \u2265\u2126(\u03bb\u22122n2d exp(12B) log2(nd/\u03b4)) \u2022 Let scalar v1,\u2113,i \u2208R be de\ufb01ned as follows v1,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 (\u2212\u03b7\u27e8\u2206wr(\u03c4), xi\u27e9) \u2022 C1 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v1)\u27e9 29 \fthen C1 \u2264\u22121.6m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ). Proof. To simplify the notation, we omit writing (\u03c4) in Si,r(\u03c4). Then, we can express v1,\u2113,i \u2208R as follows: v1,\u2113,i = m X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u2212\u03b7\u27e8xi, \u2206wr(\u03c4)\u27e9) = m2 X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi = m2(Q1,\u2113,i + Q2,\u2113,i) (16) where the second step using equation for \u2206wr(\u03c4) (see Claim 3.4). Note that \u27e8a\u2113,r \u00b7 1m, Si\u27e9= a\u2113,r, so in the above equation, Q1,\u2113,i := X r\u2208[m] \u27e8a\u2113,r \u00b7 1m \u2212a\u2113, Si\u27e9\u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi Q2,\u2113,i := X r\u2208[m] \u27e8a\u2113, Si\u27e9\u00b7 Si,r \u00b7 (\u2212\u03b7 n X j=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u0010 (\u27e8a\u21132,r \u00b7 1m \u2212a\u21132, Sj\u27e9) \u00b7 Sj,r \u0011 \u00b7 x\u22a4 j )xi The quantity P i\u2208[n] P \u2113\u2208[d] Q1,\u2113,i(F\u2113,i \u2212Y\u2113,i) is corresponding to \ufb01rst term (Q1,\u2113,i) in Eq. (16). 
It is X i\u2208[n] X \u2113\u2208[d] Q1,\u2113,i(F\u2113,i \u2212Y\u2113,i) = \u22121 m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4)\u22a4vec(F(\u03c4) \u2212Y ) (17) The quantity P i\u2208[n] P \u2113\u2208[d] Q2,\u2113,i(F\u2113,i \u2212Y\u2113,i) is corresponding to second term (Q2,\u2113,i) in Eq. (16). Note that 0 < Sj,r < exp(3B) m by Part 11 of Lemma B.1. The quantity, X i\u2208[n] X \u2113\u2208[d] Q2,\u2113,i(F\u2113,i \u2212Y\u2113,i) (18) is equivalent to \u2212\u03b7exp(6B) m2 \u00b7 n X i=1 n X j=1 m X r=1 d X \u2113=1 d X \u21132=1 (F\u21132,j(\u03c4) \u2212y\u21132,j) \u00b7 \u03c3i,j,r,\u2113,\u21132 \u00b7 Ci,j,r,\u2113,\u21132 \u00b7 (F\u2113,i(\u03c4) \u2212y\u2113,i), where \u03c3i,j,r,\u2113,\u21132 \u2208{\u22121, +1} and |Ci,j,r,\u2113,\u21132| \u226410. Note that there are four cases \u2022 i = j, \u2113= \u21132, this is a p.s.d. case that always makes progress, thus we can drop it. \u2022 i \u0338= j, \u2113= \u21132 we will use random variable P1 to handle \u2022 i = j, \u2113\u0338= \u21132 we will use random variable P2 to handle \u2022 i \u0338= j, \u2113\u0338= \u21132 we will use random variable P2 to handle 30 \fFor each \ufb01xed i, j \u2208[n]. We de\ufb01ne P1,r,\u2113:= (F\u2113,j \u2212y\u2113,j)\u03c3i,j,r,\u2113Ci,j,r,\u2113(F\u2113,i \u2212y\u2113,i) P2,r,\u2113,\u21132 := (F\u21132,j \u2212y\u21132,j)\u03c3i,j,r,\u2113,\u21132Ci,j,r,\u2113,\u21132(F\u2113,i \u2212y\u2113,i) The random variables related to P1,r,\u2113are the following m X r=1 d X \u2113=1 P1,r,\u2113 The random variables related to P2,r,\u2113,\u21132 are the following m X r=1 d X \u2113=1 d X \u21132=1 P2,r,\u2113,\u21132 For each i \u0338= j \u2208[n] and \u2113= \u21132, using Hoe\ufb00ding inequality (see Lemma A.9), we can show Pr[| m X r=1 d X \u2113=1 P1,r,\u2113| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 p md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). Similarly, we consider i = j and \u2113\u0338= \u21132 by Hanson-Wright inequality (Lemma A.10), we have Pr[| m X r=1 d X \u2113=1 d X \u21132=1 P2,r,\u2113,\u21132| \u2264100\u2225Fj \u2212yj\u22252\u2225Fi \u2212yi\u22252 \u00b7 \u221a md log(nd/\u03b4)] \u22651 \u2212\u03b4/ poly(nd). By mean inequality, we have n X i=1 n X j=1 \u2225Fj \u2212yj\u22252 \u00b7 \u2225Fi \u2212yi\u22252 \u2264n\u2225F \u2212y\u22252 F . Note that by Lemma condition, we have 1 m\u03bb \u2273n exp(6B) m2 \u00b7 \u221a md log(nd/\u03b4) \u21d0 \u21d2m \u2273\u03bb\u22122, the equation (Eq. (17) and the bound for Eq. (18)) above indicates that \u27e8vec(Y \u2212F(\u03c4)), vec(v1)\u27e9 can be expressed as vec(v1)\u22a4vec(Y \u2212F(\u03c4)) \u22650.8m\u03b7 \u00b7 vec(F(\u03c4) \u2212Y )\u22a4 | {z } 1\u00d7nd H(\u03c4)\u22a4 | {z } nd\u00d7nd vec(F(\u03c4) \u2212Y ). (19) We \ufb01nish the proof. Claim E.5. If the below conditions are true \u2022 Let B \u22651 be de\ufb01ned as De\ufb01nition 4.1 \u2022 Let \u03bb = \u03bbmin(H\u2217) > 0 31 \f\u2022 C1 = \u2212m\u03b7 vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ). \u2022 R = \u03bb/(2nd exp(10B)) Then, we have C1 \u2264\u22121 2m\u03b7\u03bb \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F and \u03bbmin(H(\u03c4)) \u2265\u03bb/2. holds with probability at least 1 \u2212\u03b4. Proof. 
By Lemma 5.1, with probability at least 1 \u2212\u03b4, we have \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264Rnd \u00b7 exp(10B) \u2264\u03bb/2 (20) where the \ufb01rst step follows from the de\ufb01nition of H(\u03c4), the last step comes from choice of \u03bb (see Claim Statement). Given that \u03bb = \u03bbmin(H\u2217), by eigenvalue perturbation theory \u03bbmin(H(\u03c4)) \u2265\u03bbmin(H\u2217) \u2212\u2225H\u2217\u2212H(\u03c4)\u2225 \u2265\u03bbmin(H\u2217) \u2212\u2225H\u2217\u2212H(\u03c4)\u2225F \u2265\u03bbmin(H\u2217) \u2212\u03bb/2 \u2265\u03bb/2. where the \ufb01rst step comes from triangle inequality, the second step is due to Frobenius norm, the third step is due to Eq.(20), the last step follows from \u03bbmin(H\u2217) = \u03bb. Finally, we have vec(F(\u03c4) \u2212Y )\u22a4H(\u03c4) vec(F(\u03c4) \u2212Y ) \u2265\u03bb/2 \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . Thus, we complete the proof. E.5 Bounding C2 Here, we give the bound of the second order term C2. Claim E.6. If the below conditions are true \u2022 Let \u03bb = \u03bbmin(H\u2217) \u2022 Let \u03b1i(\u03c4) := \u27e8exp(W(\u03c4)\u22a4xi), 1m\u27e9 \u2022 Let scalar v2,\u2113,i \u2208R be de\ufb01ned as follows v2,\u2113,i := m m X r=1 a\u2113,r \u00b7 \u03b1i(\u03c4)\u22121 exp((\u27e8wr(\u03c4), xi\u27e9) \u00b7 \u03b72 \u00b7 \u0398(1) \u00b7 \u27e8\u2206wr(\u03c4), xi\u27e92 32 \f\u2022 C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 Then we can conclude that C2 \u22642m\u03b72n2d2 exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F . with probability at least 1 \u2212n \u00b7 exp(\u2212mR). Proof. Let pi,r \u2208[\u22121, 1]. We have |v2,\u2113,i| = m X r\u2208[m] a\u2113,r \u00b7 Si,r \u00b7 (\u03b72pi,r\u27e8xi, \u2206wr(\u03c4)\u27e92) \u2264m\u03b72nd exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F , where the last step follows Lemma D.1 and Part 11 of Lemma B.1. Thus, C2 = 2\u27e8vec(F(\u03c4) \u2212Y ), vec(v2)\u27e9 \u22642\u2225F(\u03c4) \u2212Y \u2225F \u2225v2\u2225F \u22642m\u03b72n2d2 exp(9B)\u2225F(\u03c4) \u2212Y \u22252 F , where the \ufb01rst step follows Cauchy-Schwartz inequality, and the second step follows \u2225F(\u03c4)\u2212Y \u2225F \u2264 O( \u221a nd) by induction statement (See Lemma C.3). E.6 Bounding \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F Here, we give the bound of the third order term C3. Claim E.7. If the below conditions are true \u2022 Let B \u22651 be de\ufb01ned as De\ufb01nition 4.1 \u2022 C3 = \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F . \u2022 R \u2208(0, 0.01) \u2022 Gradient Property. \u03b7\u2225\u2206wr(i)\u22252 \u22640.01, \u2200r \u2208[m], \u2200i \u2208[\u03c4] Then with probability at least 1 \u2212\u03b4, we have C3 \u2264\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F . Proof. Note that we denote \u03b1i as \u27e81m, exp(W \u22a4xi)\u27e9. According to de\ufb01nition of F\u2113,i(\u03c4), we have F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4) = ma\u22a4 \u2113( + \u03b1i(\u03c4 + 1)\u22121 exp((W(\u03c4 + 1)\u22a4xi) \u2212\u03b1i(\u03c4)\u22121 exp((W(\u03c4 + 1)\u22a4xi) + \u03b1i(\u03c4)\u22121 exp((W(\u03c4 + 1)\u22a4xi) \u2212\u03b1i(\u03c4)\u22121 exp((W(\u03c4)\u22a4xi) ) 33 \fThen we have |F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4)| (21) \u2264m m X r=1 |\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| exp(wr(\u03c4 + 1)\u22a4xi) + m m X r=1 \u03b1i(\u03c4)\u22121 exp(wr(\u03c4)\u22a4xi) \u00b7 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| where it follows from triangle inequality. For the second term in Eq. 
(21), we have m m X r=1 \u03b1i(\u03c4)\u22121 exp(wr(\u03c4)\u22a4xi) \u00b7 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| \u2264exp(B + R) exp(B + R) m X r=1 | exp(\u2212\u03b7\u2206wr(\u03c4)\u22a4xi) \u22121| \u2264exp(2B + 2R) m X r=1 2\u03b7\u2225\u2206wr(\u03c4)\u22252 = 2\u03b7 exp(2B + 2R) m X r=1 \u2225\u2206wr(\u03c4)\u22252 \u22642\u03b7 exp(2B + 2R) \u00b7 m \u00b7 exp(3B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F \u2264\u03b7m exp(6B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step comes from Lemma B.1, the second step is due to \u03b7\u2225\u2206wr(\u03c4)\u22252 \u22640.01 (this is stated in Claim assumption) and Fact A.8, the third step is from simple algebra, the fourth step is due to Lemma D.1, the last step follows from simple algebra. Similarly, for the \ufb01rst term in Eq. (21) we have m m X r=1 |\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| exp(wr(\u03c4 + 1)\u22a4xi) \u2264m2 exp(B + R)|\u03b1i(\u03c4 + 1)\u22121 \u2212\u03b1i(\u03c4)\u22121| \u2264m exp(B + R)|\u03b7\u2206wr(\u03c4)\u22a4xi| exp(3B + 2R) \u2264\u03b7m exp(4B + 3R)\u2225\u2206wr(\u03c4)\u22252 \u2264\u03b7m exp(7B + 3R) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F where the \ufb01rst step follows from Part 5 of Lemma B.1, the second step follows from Part 9 of Lemma B.1 where R = |\u03b7\u2206wr(\u03c4)\u22a4xi|, the third step follows from simple algebra, and the last step follows from Lemma D.1. Thus we have |F\u2113,i(\u03c4 + 1) \u2212F\u2113,i(\u03c4)| \u2264\u03b7m exp(8B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F . (22) Finally, we get \u2225F(\u03c4 + 1) \u2212F(\u03c4)\u22252 F \u2264nd \u00b7 (\u03b7m exp(8B) \u221a nd\u2225F(\u03c4) \u2212Y \u2225F )2 \u2264\u03b72m2 \u00b7 n2d2 \u00b7 exp(16B) \u00b7 \u2225F(\u03c4) \u2212Y \u22252 F where the \ufb01rst step is because of Eq. (22), the last step comes from simple algebra. 34 \fF NTK Regression In this section, we introduce the NTK regression, as we will show that the neural network is \u201cequivalent\u201d to this regression so that we can give a \ufb01nal guarantee on the test data. To clarify the function, we use Fnn to denote F as a neural network function. We use xte \u2208Rd to denote the test data. We would like to control the error between the neural network Fnn and the function Fntk. For convenience, we call this error \u201ccoupling error\u201d, which is the di\ufb00erence between the trained neural network and its corresponding NTK regression. Recall that, by De\ufb01nition 3.6, we have the H\u2217= H(W(0)). Recall [H\u2217]i,j \u2208Rd\u00d7d is the kernel between xi and xj. Similarly, \u2200\u21131, \u21132 \u2208[d], for test data, we can de\ufb01ne the NTK induced feature map as [K\u2217 \u21131,\u21132]te,j := 1 mx\u22a4 texj m X r=1 \u27e8v\u21131,r, Ste(0)\u27e9\u00b7 mSte,r(0) \u00b7 \u27e8v\u21132,r, Sj(0)\u27e9\u00b7 mSj,r(0) [K(\u03c4)\u21131,\u21132]te,j := 1 mx\u22a4 texj m X r=1 \u27e8v\u21131,r, Ste(\u03c4)\u27e9\u00b7 mSte,r(\u03c4) \u00b7 \u27e8v\u21132,r, Sj(\u03c4)\u27e9\u00b7 mSj,r(\u03c4), where K\u2217 te, Kte(\u03c4) \u2208Rd\u00d7nd. Similarly, we have K\u2217 i = [H\u2217]i \u2208Rd\u00d7nd, Ki(\u03c4) = [H(\u03c4)]i \u2208Rd\u00d7nd for training data xi. Then, we de\ufb01ne the kernel regression predictor. De\ufb01nition F.1 (NTK regression predictor). We de\ufb01ne NTK regression predictor as Fntk(\u03b3(\u03c4), xte) :=mK\u2217 te\u03b3(\u03c4), (23) where \u03b3(\u03c4) \u2208Rnd is the parameter at timestamp \u03c4. 
Recall that we have a training dataset Dn = {(xi, yi)}n i=1. Then, we denote the corresponding objective function for Fntk as Lntk(\u03b3(\u03c4)) = 1 2 n X i=1 \u2225Fntk(\u03b3(\u03c4), xi) \u2212yi\u22252 2. (24) Thus, based on Eq. (24), the gradient desent (GD) updating rule of \u03b3(\u03c4) is given by \u03b3(\u03c4 + 1) | {z } nd\u00d71 = \u03b3(\u03c4) |{z} nd\u00d71 \u2212\u03b7 \u00b7 (m H\u2217 |{z} nd\u00d7nd \u03b3(\u03c4) |{z} nd\u00d71 \u2212vec(Y ) | {z } nd\u00d71 ), \u03b3(0) = 0nd, (25) where the Eq. (25) is according to \u03b3(\u03c4 + 1) = \u03b3(\u03c4) \u2212\u03b7\u2207\u03b3Lntk(\u03b3(\u03c4)). F.1 Equivalence between Trained Net and Kernel Regression We provide a stronger bound between Fntk and Fnn result compared to Lemma F.1 in [ADH+19b]. Our following statement is stronger in the two following senses: their result only holds when t \u2192\u221e, and our result holds for all t \u2208[0, \u221e); also their result only works for 1 dimension output space, our result holds arbitrary d dimensional output space. Theorem F.2 (Kernel value perturbation \u21d2prediction perturbation). Fix \u01ebH \u22641 2\u03bb. If for all \u03c4 \u22650, \u2225K\u2217 \u2113,te \u2212K\u2113,te(\u03c4)\u2225F \u2264\u01eb\u2113,test and \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264\u01ebH, then for any xte \u2208Rd, \u2113\u2208[d] and \u03c4 \u22650, we have |Fntk(\u03b3(\u03c4), xte)\u2113\u2212Fnn(W(\u03c4), xte)\u2113| \u2264O \u221a nd \u03bb \u01eb\u2113,test + \u221a nd \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . 35 \fProof of Theorem F.2. Our proof relies on a careful analysis of the trajectories induced by gradient \ufb02ow for optimizing the neural network predictor Fnn and the NTK predictor Fntk. Then, we can have a similar argument to gradient descent at any timestamp \u03c4. Recall that for any xte, xi \u2208Rd, we have K\u2217 te, K\u2217 i \u2208Rd\u00d7nd be the feature map induced by NTK. For any x \u2208Rd, we de\ufb01ne \u03c6(x) \u2208Rd\u00d7d as following, for any \u2113\u2208[d], \u03c6(x)\u2113= 1 \u221amx m X r=1 \u27e8v\u2113,r, S(0)\u27e9\u00b7 mSr(0). We denote \u03c6(X) \u2208Rd\u00d7nd as the stack of feature map of X \u2208Rd\u00d7n. Note the optimal solution in Eq. (23) can be rewritten as min \u03b3 \u2225\u03b3\u22252 such that mK\u2217 i \u03b3 = yi for i = 1, . . . , n. We have the optimal solution for kernel regression is \u03b3\u2217:= m\u22121(H\u2217)\u22121 vec(Y ) and its corresponding prediction for xte will be Fntk(\u03b3(\u03c4), xte) = K\u2217 te(H\u2217)\u22121 vec(Y ). The solution to this program can be rewritten as applying gradient \ufb02ow on the min \u03b2 n X i=1 \u2225\u221am\u03c6(xi)\u22a4\u03b2 \u2212yi\u22252 2 with initialization \u03b2(0) = 0d. We use \u03b2(\u03c4) to denote this parameter at timestamp \u03c4 trained by gradient \ufb02ow. We denote Fntk2(\u03b2(\u03c4), xte) := \u221am\u03c6(xte)\u22a4\u03b2(\u03c4) where Fntk2(\u03b2(\u03c4), xte) be the predictor for xte at time \u03c4. Then we have Fntk2(\u03b2(\u03c4), xte) = \u221am \u03c6(xte)\u22a4 | {z } Rd\u00d7d \u03b2(\u03c4) |{z} Rd = \u221am \u03c6(xte)\u22a4 | {z } Rd\u00d7d (\u221am \u03c6(X) | {z } Rd\u00d7nd ) \u03b3(\u03c4) |{z} Rnd = m K\u2217 te |{z} Rd\u00d7nd \u03b3(\u03c4) = Fntk(\u03b3(\u03c4), xte) where the second step follows \u03b2(\u03c4) = \u221am\u03c6(X)\u03b3(\u03c4) the third step follows K\u2217 te = \u03c6(xte)\u22a4\u03c6(X). 
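Before giving the proof, the following minimal Python sketch illustrates the two predictors that Theorem F.2 couples: the softmax network of Eq. (1) trained by GD (Definition 3.5, using the gradient of Claim A.1) and the kernel-regression parameter $\gamma$ trained by the GD rule of Eq. (25), both with the same step size. The concrete sizes, seed, and step size are illustrative assumptions, and the paired-neuron construction below is one common form of symmetric initialization, assumed here since Definition 3.7 is only referenced; it makes $F_{nn}(W(0), \cdot) = 0$, as used in the proof.

import numpy as np

rng = np.random.default_rng(1)
n, d, m, T = 6, 3, 1024, 2000                  # illustrative sizes
eta = 0.5 / (m * n * d)                        # small ad hoc step size
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = 0.5 * rng.normal(size=(n, d))
x_te = rng.normal(size=d)
x_te /= np.linalg.norm(x_te)

# symmetric initialization (assumed form): paired neurons share w and carry
# opposite top-layer signs a, so F_nn(W(0), x) = 0 for every x
W0 = rng.normal(size=(d, m))
W0[:, m // 2:] = W0[:, : m // 2]
A = rng.choice([-1.0, 1.0], size=(d, m // 2))
A = np.concatenate([A, -A], axis=1)

def softmax_rows(Z):
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def f_nn(W, Z):
    return m * softmax_rows(Z @ W) @ A.T       # (num_points, d), as in Eq. (1)

def grad_W(W):
    S = softmax_rows(X @ W)
    R = f_nn(W, X) - Y
    # C[i, r] = sum_l (F - Y)_{i,l} * (a_{l,r} - <a_l, S_i>), as in Claim A.1
    C = R @ A - (R * (S @ A.T)).sum(axis=1, keepdims=True)
    return m * X.T @ (C * S)

def ntk_features(W, Z):
    # row (i, l) stacks sqrt(m) * S_{i,r} * (a_{l,r} - <a_l, S_i>) * z_i over r,
    # so that inner products reproduce the kernel entries defined above
    S = softmax_rows(Z @ W)
    coef = np.sqrt(m) * S[:, None, :] * (A[None, :, :] - (S @ A.T)[:, :, None])
    phi = coef[:, :, :, None] * Z[:, None, None, :]
    return phi.reshape(Z.shape[0] * d, m * d)

Phi = ntk_features(W0, X)                      # training features at initialization
Phi_te = ntk_features(W0, x_te[None, :])       # test features at initialization
H_star = Phi @ Phi.T                           # H* on the training data
K_te = Phi_te @ Phi.T                          # K*_te between x_te and the data

W, gamma = W0.copy(), np.zeros(n * d)
for _ in range(T):
    W = W - eta * grad_W(W)                                  # network GD (Definition 3.5)
    gamma = gamma - eta * (m * H_star @ gamma - Y.reshape(-1))  # kernel GD, Eq. (25)

print(f_nn(W, x_te[None, :]).ravel())          # F_nn(W(T), x_te)
print(m * K_te @ gamma)                        # F_ntk(gamma(T), x_te), Eq. (23)
# the two predictions should be close, and closer as the width m grows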
With these notations, as \u03c4 goes to in\ufb01nity, we denote, for any \u2113\u2208[d], Fntk2(xte)\u2113= Z \u221e \u03c4=0 dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 d\u03c4 where we have used the fact that the initial prediction is 0 as \u03b2(0) = 0d. Similarly for Fnn(xte)\u2113. Let Fntk2,i(\u03c4) = Fntk2(\u03b2(\u03c4), xi) and Fntk2(\u03c4) \u2208Rd\u00d7n. Similarly, for the NN predictor Fnn. Now we take a closer look at the time derivative: dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 = \u001c\u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , d\u03b2(\u03c4) d\u03c4 \u001d 36 \f= \u001c\u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , \u2212\u2202L(\u03b2(\u03c4), {xi}n i=1) \u2202\u03b2(\u03c4) \u001d = \u2212 * \u2202Fntk2(\u03b2(\u03c4), xte)\u2113 \u2202\u03b2(\u03c4) , n X i=1 d X \u21132=1 (Fntk2,i,\u21132(\u03c4) \u2212yi,\u21132) \u2202Fntk2(\u03b2(\u03c4), xi)\u21132 \u2202\u03b2(\u03c4) + = \u2212m * \u03c6(xte)\u2113, n X i=1 d X \u21132=1 (Fntk2,i,\u21132(\u03c4) \u2212yi,\u21132)\u03c6(xi)\u21132 + = \u2212m vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) (26) where the \ufb01rst step follows from simple algebra, the second step follows from ODE formulation (we remark that this is a very standard step in all the NTK literature), the third step follows from Eq. (24), the fourth step follows from the de\ufb01nition of \u03c6(xte)\u2113, the last step follows from simple algebra. We can obtain a time derivative of the same form for Fnn. dFnn(W(\u03c4), xte)\u2113 d\u03c4 = \u001c\u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , dW(\u03c4) d\u03c4 \u001d = \u001c\u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , \u2212\u2202L(W(\u03c4), {xi}n i=1) \u2202W(\u03c4) \u001d = \u2212 * \u2202Fnn(W(\u03c4), xte)\u2113 \u2202W(\u03c4) , n X i=1 d X \u21132=1 (Fnn,i,\u21132(\u03c4) \u2212yi,\u21132)\u2202Fnn(W(\u03c4), xi)\u21132 \u2202W(\u03c4) + = \u2212m vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) (27) where the \ufb01rst step follows from simple algebra, the second step is standard in NTK literature, the third step follows from Eq. (24), the last step follows from simple algebra. 
Thus we analyze the di\ufb00erence between the NN predictor and NTK predictor via this integral form |Fnn(xte)\u2113\u2212Fntk2(xte)\u2113| = \f \f \f \fFnn(W(0), xte)\u2113+ Z \u221e \u03c4=0 \u0012dFnn(W(\u03c4), xte)\u2113 d\u03c4 \u2212dFntk2(\u03b2(\u03c4), xte)\u2113 d\u03c4 \u0013 d\u03c4 \f \f \f \f = |Fnn(W(0), xte)\u2113| + \f \f \f \f\u2212m Z \u221e \u03c4=0 \u0010 vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) \u2212vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) \u0011 d\u03c4 \f \f \f \f = \f \f \f \f\u2212m Z \u221e \u03c4=0 \u0010 vec(K\u2113,te(\u03c4))\u22a4vec(Fnn(\u03c4) \u2212Y ) \u2212vec(K\u2217 \u2113,te)\u22a4vec(Fntk2(\u03c4) \u2212Y ) \u0011 d\u03c4 \f \f \f \f \u2264m \f \f \f \f Z \u221e \u03c4=0 vec(K\u2113,te(\u03c4) \u2212K\u2217 \u2113,te)\u22a4vec(Fnn(\u03c4) \u2212Y )d\u03c4 \f \f \f \f + m \f \f \f \f Z \u221e \u03c4=0 vec(K\u2217 \u2113,te)\u22a4vec(Fnn(\u03c4) \u2212Fntk2(\u03c4))d\u03c4 \f \f \f \f \u2264m max 0\u2264t\u2264\u221e\u2225K\u2113,te(\u03c4) \u2212K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + m max 0\u2264t\u2264\u221e\u2225K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264m\u01eb\u2113,test Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + m max 0\u2264t\u2264\u221e\u2225K\u2217 \u2113,te\u2225F Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4, where the \ufb01rst step follows from the di\ufb00erence between the NN predictor and NTK predictor, the second step follows from Eq. (26) and Eq. (27), the third step follows |Fnn(W(0), xte)\u2113| = 0 by 37 \fsymmetric initialization from De\ufb01nition 3.7, the fourth step follows from simple algebra, the \ufb01fth step follows from Frobenius norm, the last step follows from simple algebra. For the \ufb01rst term, recall \u2225H\u2217\u2212H(\u03c4)\u2225F \u2264\u01ebH and, by Claim E.5, we have \u03bbmin(H(\u03c4)) \u22651 2\u03bb. Using this fact we know \u2225Fnn(\u03c4) \u2212Y \u2225F \u2264exp(\u2212m 2 \u03bb\u03c4)\u2225Fnn(0) \u2212Y \u2225F (The reason to obtain this is due to solve ODE). Therefore, by Lemma D.3, we can bound Z \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 = Z \u221e \u03c4=0 exp \u0010 \u2212m 2 \u03bb\u03c4 \u0011 \u2225Fnn(0) \u2212Y \u2225F d\u03c4 = O( \u221a nd m\u03bb ). To bound R \u221e \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4, we observe that Fnn(\u03c4) \u2192y and Fntk2(\u03c4) \u2192y with linear convergence rate. Therefore, we can choose some \u03c40 = C m\u03bb log \u0010 nd \u01ebH\u00b7m\u03bb \u0011 so that Z \u221e \u03c40 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264 Z \u221e \u03c40 \u2225Fnn(\u03c4) \u2212Y \u2225F d\u03c4 + Z \u221e \u03c40 \u2225Fntk2(\u03c4) \u2212Y \u2225F d\u03c4 \u2264O \u0012 1 m\u03bb(\u2225Fnn(\u03c40) \u2212Y \u2225F + \u2225Fntk2(\u03c40) \u2212Y \u2225F ) \u0013 \u2264O \u221a nd m\u03bb exp (\u2212m\u03bb\u03c40) ! \u2264O(\u01ebH). where the \ufb01rst step follows from simple algebra, the second step follows from integral range is \u03c40, the third step follows from Lemma D.3, the last step follows from choice of \u03c40. Thus it su\ufb03ces to bound R \u03c40 \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264\u03c40 max0\u2264t\u2264\u03c40 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F . 
First observe that \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F \u2264\u2225Fnn(0)\u2225F + Z \u03c4 s=0 \r \r \r \r d(Fnn(s) \u2212Fntk2(s)) ds \r \r \r \r F ds = Z \u03c4 s=0 \r \r \r \r d(Fnn(s) \u2212Fntk2(s)) ds \r \r \r \r F ds, where the last step follows symmetric initialization from De\ufb01nition 3.7. Note d(Fnn(\u03c4) \u2212Fntk2(\u03c4)) d\u03c4 = \u2212mH(\u03c4) vec(Fnn(\u03c4) \u2212Y ) + mH\u2217vec(Fntk2(\u03c4) \u2212Y ) = \u2212mH\u2217vec(Fnn(\u03c4) \u2212Fntk2(\u03c4)) + m(H\u2217\u2212H(\u03c4)) vec(Fnn(\u03c4) \u2212Y ) where the \ufb01rst step follows from de\ufb01nition of Fnn and Fntk2. Since H\u2217is positive semide\ufb01nite, \u2212H\u2217vec(Fnn(\u03c4) \u2212Fntk2(\u03c4)) term only makes \u2225Fnn(\u03c4) \u2212 Fntk2(\u03c4)\u2225F smaller. Therefore, we have \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F \u2264m Z \u03c4 s=0 \u2225Fnn(s) \u2212Y \u2225F \u2225H(\u03c4) \u2212H\u2217\u2225F ds 38 \f\u2264m\u03c4\u2225Fnn(0) \u2212Y \u2225F \u01ebH \u2264O \u0010 m\u03c4 \u221a nd\u01ebH \u0011 , where the last step is by Lemma D.3. Therefore, we have Z \u03c40 \u03c4=0 \u2225Fnn(\u03c4) \u2212Fntk2(\u03c4)\u2225F d\u03c4 \u2264O \u0010 m\u03c4 2 0 \u221a nd\u01ebH \u0011 = O \u221a nd m\u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . where the \ufb01rst step follows from integral range is \u03c40, the second step follows from the choice of \u03c40. Lastly, as Fntk2(xte)\u2113= Fntk(xte)\u2113, we put things together and get |Fntk(xte)\u2113\u2212Fnn(xte)\u2113| \u2264O \u221a nd \u03bb \u01eb\u2113,test + \u221a nd \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH ! . From the above, after we change the integration from (0, \u221e) to (0, \u03c4), the statement still holds. Then, based on the gradient \ufb02ow version, we can have a gradient descent version with a constant error factor by replacing integral with geometric summarization (for example P\u221e i=0 ai < 2, when a \u2208(0, 0.5) ). G Di\ufb00usion In Section G.1, we provide the proof of our main result of di\ufb00usion. In Section G.2, we provide some tools from previous works. We \ufb01rst de\ufb01ne an auxiliary function e Fntk of the same functional form as Fntk, but trained on a pseudo dataset e S := {e yi, xi}n i=1 with e yi := FH(xi) + \u01ebi and \u01ebi := yi \u2212F\u2217(xi). Then, we have the following claim. Claim G.1 (Loss decomposition). We can decompose our target function as the following 1 T Z T 0 E[\u2225Fnn(W(\u03c4), (t, x(t))) \u2212F\u2217(t, x(t))\u22252 2]dt \u2264Z1 + Z2 + Z3 + Z4, where Z1 = 1 T Z T 0 E[\u2225Fnn(W(\u03c4), (t, x(t))) \u2212Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt (coupling) Z2 = 1 T Z T 0 E[\u2225Fntk(\u03b3(\u03c4), (t, x(t))) \u2212e Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt (label mismatch) Z3 = 1 T Z T 0 E[\u2225e Fntk(\u03b3(\u03c4), (t, x(t))) \u2212FH(t, x(t))\u22252 2]dt (early stopping) Z4 = 1 T Z T 0 E[\u2225FH(t, x(t)) \u2212F\u2217(t, x(t))\u22252 2]dt. (approximation). The coupling error term is the gap between neural networks Fnn and a kernel function Fntk. The approximation error term is the gap between the target function F\u2217and its corresponding RKHS function FH. These two terms transfer the problem of neural networks training into the problem of kernel regression. 39 \fG.1 Main Result of Di\ufb00usion In this section, we prove the main result of di\ufb00usion. Theorem G.2 (Restatement of Theorem 6.5). 
Suppose Assumptions 6.1, 6.2, 6.3, 6.4 hold and we set m = \u2126(\u03bb\u22122n3d3 exp(18B) log2(nd/\u03b4)) and \u03b7 = 0.1\u03bb/(mn2d2 exp(16B)). Moreover, suppose b T satis\ufb01es Assumption G.3 with corresponding \u01eb(n, b T). Then for large enough RH, with probability at least 1 \u2212\u03b4, it holds that 1 T Z T 0 \u03bb(t)E[\u2225sW ( b T)(t, x(t)) \u2212\u2207log pt(Xt)\u22252 2]dt \u2264O \u0012 1 \u03bb\u221an + \u01eb(n, b T) + dA2(RH) + dA(RH) + p dA(RH)\u0393\u03b4 + \u0393\u03b4 \u0013 . Proof of Theorem 6.5. Note that the m and \u03b7 satisfy the conditions in Theorem 4.2. The reason about a di\ufb00erent m is that we choose a di\ufb00erent R and apply Lemma E.2 one more time. Recall the \u01eb\u2113,test and \u01ebH are de\ufb01ned in Theorem F.2. Note that H\u2217= H(0). By Lemma 5.1, Part 2, let R = \u03bb/(2n2d2 exp(10B)), we have with probability at least 1 \u2212\u03b4 such that \u2225H\u2217 |{z} nd\u00d7nd \u2212H(\u03c4) | {z } nd\u00d7nd \u2225F \u2264\u01ebH = \u03bb 2nd. Note that K\u2217 \u2113,te and K\u2113,te share the same weight perturbation as H\u2217and H(\u03c4). Thus, by using the same proof as Lemma 5.1, Part 1, we have \u2225K\u2217 \u2113,te |{z} n\u00d7d \u2212K\u2113,te |{z} n\u00d7d \u2225F \u2264\u01eb\u2113,test = \u03bb 2n1.5d1.5 . We have \u2225Fntk(\u03b3(\u03c4), xte) \u2212Fnn(W(\u03c4), xte)\u22252 \u2264 \u221a d max \u2113\u2208d |Fntk(\u03b3(\u03c4)\u2113, xte) \u2212Fnn(W(\u03c4), xte)\u2113| \u2264O \u0012\u221and \u03bb max \u2113\u2208[d] \u01eb\u2113,test + \u221and \u03bb2 log2 \u0012 nd \u01ebHm\u03bb \u0013 \u01ebH \u0013 \u2264O \u0012\u221and \u03bb \u03bb n1.5d1.5 + \u221and \u03bb2 log2 \u0012 nd m\u03bb \u0013 \u03bb nd \u0013 \u2264O \u0012 1 \u03bb\u221an log2 \u0012 nd m\u03bb \u0013\u0013 \u2264O \u0012 1 \u03bb\u221an \u0013 where the \ufb01rst step follows from simple algebra, the second step is by Theorem F.2. Thus, we \ufb01nish the proof by Claim G.1, where coupling is from above, label mismatch is from Theorem G.5, early stopping is from Assumption G.3 and approximation is from Theorem G.4. 40 \fG.2 Tools From Previous Works We have the following assumption and statements from previous works [HRX24]. Assumption G.3 (Assumption 3.11 in [HRX24]). Fix any FH \u2208H with \u2225FH\u22252 H \u2264RH and assume labels are generated as e yj = FH(xj) + \u01ebj. Suppose e Fntk(\u03b3( b T), \u00b7) is obtained by GD-trained kernel regression with the number of iterations b T. We assume there exists \u01eb such that 1 T Z T 0 E[ e Fntk(\u03b3( b T ), (t, x(t))) \u2212FH(t, x(t))\u22252 2]dt \u2264\u01eb(n, b T), and \u01eb(n, b T) \u21920 as n \u2192\u221e. Theorem G.4 (Theorem 3.6 in [HRX24], universal approximation of score function). Suppose Assumptions 6.1, 6.3 and 6.4 hold. Let RH be larger than a constant c1, i.e., C(d + 1, 0) in Proposition 6 of [Bac17], which depends only on d. There exists a function FH \u2208H such that \u2225FH\u22252 H \u2264dRH and 1 T Z T 0 E[\u2225FH(t, x(t)) \u2212F\u2217(t, x(t))\u22252 2]dt \u2264dA2(RH). Theorem G.5 (Theorem 3.10 in [HRX24], label mismatch). Suppose Assumptions 6.1 and 6.2 hold. If we initialize both Fntk and e Fntk properly, then with probability at least 1 \u2212\u03b4 it holds simultaneously for all \u03c4 that 1 T Z T 0 E[\u2225Fntk(\u03b3(\u03c4), (t, x(t))) \u2212e Fntk(\u03b3(\u03c4), (t, x(t)))\u22252 2]dt \u2264dA(RH) + C0( p dA(RH)\u0393\u03b4 + \u0393\u03b4) where C0 is a constant de\ufb01ned in Theorem 1 of [RK20]."
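As a concrete reading of the loss decomposition in Claim G.1, the sketch below estimates the coupling, label-mismatch, early-stopping, and approximation terms by Monte Carlo. The five functions are toy stand-ins for F_nn, F_ntk, the pseudo-label predictor, F_H, and F*, and the way (t, x(t)) is sampled here is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dim, n_samples = 1.0, 3, 4000

# Toy stand-ins for the five functions in Claim G.1, chosen as nearby smooth
# functions so that each gap term is small but non-zero.
def F_star(t, x):      return np.sin(x) * np.exp(-t)
def F_H(t, x):         return F_star(t, x) + 0.03 * np.cos(x)
def F_ntk_tilde(t, x): return F_H(t, x) + 0.02 * x * t
def F_ntk(t, x):       return F_ntk_tilde(t, x) + 0.02 * np.sin(3 * x)
def F_nn(t, x):        return F_ntk(t, x) + 0.01 * np.cos(2 * x) * t

def mean_sq_gap(f, g):
    """Monte-Carlo estimate of (1/T) * int_0^T E ||f(t, x(t)) - g(t, x(t))||^2 dt."""
    t = rng.uniform(0.0, T, size=(n_samples, 1))
    x = rng.normal(size=(n_samples, dim))        # stand-in for the diffused sample x(t)
    return np.mean(np.sum((f(t, x) - g(t, x)) ** 2, axis=1))

Z1 = mean_sq_gap(F_nn, F_ntk)            # coupling
Z2 = mean_sq_gap(F_ntk, F_ntk_tilde)     # label mismatch
Z3 = mean_sq_gap(F_ntk_tilde, F_H)       # early stopping
Z4 = mean_sq_gap(F_H, F_star)            # approximation
total = mean_sq_gap(F_nn, F_star)        # left-hand side of Claim G.1

for name, val in [("Z1 coupling", Z1), ("Z2 label mismatch", Z2),
                  ("Z3 early stopping", Z3), ("Z4 approximation", Z4),
                  ("total gap", total)]:
    print(f"{name}: {val:.4f}")
```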
17
+ }
title_10K/test_title_short_2405.03280v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03280v1",
3
+ "title": "Animate Your Thoughts: Decoupled Reconstruction of Dynamic Natural Vision from Slow Brain Activity",
4
+ "abstract": "Reconstructing human dynamic vision from brain activity is a challenging task\nwith great scientific significance. The difficulty stems from two primary\nissues: (1) vision-processing mechanisms in the brain are highly intricate and\nnot fully revealed, making it challenging to directly learn a mapping between\nfMRI and video; (2) the temporal resolution of fMRI is significantly lower than\nthat of natural videos. To overcome these issues, this paper propose a\ntwo-stage model named Mind-Animator, which achieves state-of-the-art\nperformance on three public datasets. Specifically, during the fMRI-to-feature\nstage, we decouple semantic, structural, and motion features from fMRI through\nfMRI-vision-language tri-modal contrastive learning and sparse causal\nattention. In the feature-to-video stage, these features are merged to videos\nby an inflated Stable Diffusion. We substantiate that the reconstructed video\ndynamics are indeed derived from fMRI, rather than hallucinations of the\ngenerative model, through permutation tests. Additionally, the visualization of\nvoxel-wise and ROI-wise importance maps confirms the neurobiological\ninterpretability of our model.",
5
+ "authors": "Yizhuo Lu, Changde Du, Chong Wang, Xuanliu Zhu, Liuyun Jiang, Huiguang He",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Animate Your Thoughts: Decoupled Reconstruction of Dynamic Natural Vision from Slow Brain Activity",
16
+ "main_content": "Introduction Researchers in computational neuroscience and the field of artificial intelligence have long sought to decipher and simulate the brain\u2019s visual information processing mechanisms to advance the development of brain-inspired models [1\u20133]. In recent years, functional magnetic resonance imaging (fMRI) has emerged as a reliable tool for measuring brain activity due to its high spatial resolution as a non-invasive brain signal recording technique [4]. fMRI-based neural decoders, which map brain signals to visual stimuli, facilitate a deeper understanding of the human visual perception system. Neural decoding can be categorized into classification, identification, and reconstruction, with this study focusing on the most challenging aspect: reconstruction. Prior research has made significant strides in the classification [3, 5\u20138] and identification [4, 9\u201311] of static stimulus images, and remarkably, some researchers have advanced to the point where they can reconstruct [12\u201322] images from brain signals that closely resemble the original stimulus images. \u2217Equal contributions \u2020Huiguang He is the corresponding author. Preprint. Under review. arXiv:2405.03280v1 [cs.CV] 6 May 2024 \fIn reality, the majority of visual stimuli we encounter in daily life are continuous and dynamic. As depicted in Figure 1, when a subject views dynamic scenes, the primary visual cortex firstly processes low-level structural information like location, shape, size, and color [23], leading to the preliminary recognition of a black silhouette at the edge of a yellow background. Subsequently, motion information of the object is perceived [24], noting that the silhouette is moving from right to left. Lastly, in the higher visual cortex, the interpretation of category and description gives rise to high-level semantic understanding [25], comprehending the scene as a soldier walking from right to left in a desert. What is this scenario? (High-level Semantic Information) -A soldier is walking in a desert. Where is the object in the scenario, what is its size, color, shape? (Low-level Structure Information) -A black figure on the edge of a yellow background. How the objects in the scene move? (Motion Information) -The figure moves from right to left in the movie. Figure 1: The human brain\u2019s comprehension of dynamic visual scenes. When receiving dynamic visual information, human brain gradually comprehends low-level structural details such as position, shape and color in the primary visual cortex, discerns motion information, and ultimately constructs high-level semantic information in the higher visual cortex, such as an overall description of the scene. Due to the inherent nature of fMRI, which relies on the slow blood oxygenation level dependent (BOLD) [26, 27]signal, the sampling frequency is restricted to around 0.5Hz. This frequency is notably lower than the typical 30Hz frame rate of most videos. As a result, a significant discrepancy exists between the temporal resolution of fMRI and the nature video. In fact, each fMRI signal integrates information from approximately 60 video frames. This disparity makes the task of reconstructing video from fMRI signals an exceedingly complex challenge. To address this challenge, Nishimoto [28] transforms the video reconstruction task into a identification problem, employing the Motion-Energy model [29] and Bayesian inference to reconstruct videos from a predefined video library. 
Subsequently, Han [30] and Wen [31] et al. map brain responses to the feature spaces of deep neural network (DNN) to reconstruct down-sampled (with the frame rate reduced to 1Hz) video stimuli. Wang [32] et al. develope an f-CVGAN that learns temporal and spatial information in fMRI through separate discriminators [33]. To mitigate the scarcity of video-fMRI data, Kupershmidt [34] et al. utilize self-supervised learning [35] to incorporate a large amount of unpaired video data. These efforts have validated the feasibility of video reconstruction from fMRI, albeit with a lack of explicit semantic information in the results. Chen [36] et al. utilize contrastive learning to map fMRI to the Contrastive Language-Image Pre-Training (CLIP) [37] representation space and fine-tuned inflated Stable Diffusion [38, 39] on a video-text dataset as a video generation model, successfully reconstructing coherent videos with clear semantic information for the first time. However, this work does not consider structure information such as color and position, and it is uncertain whether the motion information in the reconstructed videos originated from the fMRI or the video generation model. Method Semantic Structure Motion Frame rate Resolution Nishimoto [28] (Current Biology 2011) \u00d7 \u00d7 \u2713 \u2014\u2014 \u2014\u2014 Wen [31] (Cerebral Cortex 2017) \u00d7 \u2713 \u00d7 \u2014\u2014 64x64 Han [30] (NeuroImage 2019) \u00d7 \u2713 \u00d7 \u2014\u2014 128x128 Kupershmidt [34] \u00d7 \u2713 (\u2713) 4Hz 112x112 Wang [32] (Cerebral Cortex 2022) \u00d7 \u2713 \u2713 4Hz 64x64 Chen [36] (NeurIPS 2023 Oral) \u2713 \u00d7 (\u2713) 3Hz 256x256 Ours \u2713 \u2713 \u2713 4Hz 512x512 Table 1: Comparison of modal information used in Mind-Animator and related works. Parentheses indicate the utilization of external video data in the decoding of this feature. In summary, current video reconstruction models face two challenges: (1) As shown in Table 1, they fail to simultaneously capture semantic, structure, and motion information within the reconstructed videos. Moreover, the resolution of the video is low. 2 \f(2) The reliance on external video datasets and video generation models introduces uncertainty regarding the true source of motion information, leading to the possibility that the reconstructed videos may represent a \"hallucination\" of the video generation model rather than an accurate dynamic decoding from the fMRI data. To address the aforementioned issues, we introduce Mind-Animator, a video reconstruction model that decouples semantic, structure, and motion information from fMRI, as illustrated in Figure 2. Specifically, we map fMRI to the CLIP representation space and the Vector Quantized-Variational Autoencoder (VQ-VAE) [40] latent space to capture semantic and structure information. We design a Transformer-based [41] motion decoder to extract motion information frame by frame from fMRI through a next-frame-prediction task. Finally, the decoded semantic, structure, and motion information is fed into an inflated Stable Diffusion [38, 39] without any fine-tuning to generate each frame of the video, ensuring that all information is derived solely from the fMRI data. The contributions are summarized as follows: (1) We propose Mind-Animator, which for the first time successfully decouples semantic, structure, and motion information from fMRI to enable video reconstruction. 
To extract the motion and spatial information from fMRI, we propose temporal and spatial attention modules respectively, which decode subtle but significant motion information. (2) We validate through a permutation test that the motion information in our reconstructed videos indeed originates from the fMRI, rather than being a \"hallucination\" generated by the video generation model. (3) We introduce seven evaluation metrics that comprehensively assess the reconstruction results of our model and all previous models across three dimensions\u2014semantic, structure, and spatiotemporal consistency\u2014on three publicly available high-quality video-fMRI datasets. Our model achieves state-of-the-art (SOTA) performance in five of these metrics and secures second place in the remaining two, with a notable 76% improvement in Structural Similarity Index (SSIM) over the previous SOTA. This establishes our work as the first comprehensive and unbiased benchmark for subsequent researchers. The code and data have been anonymously released at: https://github.com/Zuskd/Mind-Animator. 2 Methodology 2.1 Overview Figure 2 presents the overall architecture of the proposed Mind-Animator, a video reconstruction model based on fMRI. The model consists of two stages: fMRI-to-feature and feature-to-video. In the fMRI-to-feature stage, as depicted in Figure 1, we begin by emulating the human visual system\u2019s approach to interpreting dynamic visual stimuli. This process involves the decomposition of video stimuli into high-level semantic feature, low-level structural feature, and motion feature. Then three separate decoders are trained to decode these features from fMRI: (a) for decoding semantic feature, we employ a contrastive learning loss to map fMRI into the visual-linguistic embedding space of CLIP[37], (b) we utilize the frame token extracted by VQ-VAE[40] as the video\u2019s structural feature[42], followed by a simple Multi-Layer Perceptron (MLP) to fit it, and (c) we design a Transformer-based Consistency Motion Generator for decoding motion information. After training with a next-frame-prediction task, this module sequentially generates each subsequent frame token based on the first frame token decoded in section (b). In the feature-to-video stage, depicted in Figure 2 (d), the decoded features are input into an inflated Text-to-Image (T2I) model, facilitating the reconstruction of the stimulus video without the interference of external training videos. 2.2 Problem Statement We aim to decode videos from brain activity recorded with fMRI when healthy participants watch a sequence of natural videos. Let X and Y denote the voxel space and pixel space, respectively. 
Let Xi \u2208R1\u00d7n be the fMRI signal when a video Vi,j \u2208R1\u00d73\u00d7512\u00d7512 is presented to the participant, where n is the number of fMRI voxels, j is the frame ID of video i and i \u2208[1, N], j \u2208[1, 8], with N 3 \fConsistency Motion Generator Inflated T2I Denoised U-net \u00d7T VQ-VAE Decoder Semantic Decoder Structure Decoder \ud835\udc6a \ud835\udc81 \ud835\udc81\ud835\udc84 \ud835\udc6e\ud835\udc82\ud835\udc96\ud835\udc94\ud835\udc94\ud835\udc8a\ud835\udc82\ud835\udc8f\ud835\udc8f\ud835\udc90\ud835\udc8a\ud835\udc94\ud835\udc86 Video stimulus Reconstructed video fMRI signal (d) Inference pipeline of Mind-Animator \ud835\udc53 1 \u2219\ud835\udc63\ud835\udc5b \ud835\udc53 \ud835\udc5b\u2219\ud835\udc631 \ud835\udc53 \ud835\udc5b\u2219\ud835\udc632 \ud835\udc53 \ud835\udc5b\u2219\ud835\udc633 \u22ef \ud835\udc87\ud835\udc8f\u2219\ud835\udc97\ud835\udc8f \u22ef \ud835\udc53 2 \u2219\ud835\udc63\ud835\udc5b \ud835\udc53 3 \u2219\ud835\udc63\ud835\udc5b \ud835\udc87\ud835\udfcf\u2219\ud835\udc95\ud835\udfcf\ud835\udc53 1 \u2219\ud835\udc612 \ud835\udc53 1 \u2219\ud835\udc613 \u22ef \ud835\udc53 1 \u2219\ud835\udc61\ud835\udc5b \ud835\udc53 2 \u2219\ud835\udc611\ud835\udc87\ud835\udfd0\u2219\ud835\udc95\ud835\udfd0\ud835\udc53 2 \u2219\ud835\udc613 \u22ef \ud835\udc53 2 \u2219\ud835\udc61\ud835\udc5b \ud835\udc53 3 \u2219\ud835\udc611 \ud835\udc53 3 \u2219\ud835\udc612 \ud835\udc87\ud835\udfd1\u2219\ud835\udc95\ud835\udfd1\u22ef \ud835\udc53 3 \u2219\ud835\udc61\ud835\udc5b \ud835\udc53 \ud835\udc5b\u2219\ud835\udc611 \ud835\udc53 \ud835\udc5b\u2219\ud835\udc612 \ud835\udc53 \ud835\udc5b\u2219\ud835\udc613 \u22ef \ud835\udc87\ud835\udc8f\u2219\ud835\udc95\ud835\udc8f \u22ef \u22ef \u22ef \u22ef \u22ef Semantic Decoder CLIP vision encoder CLIP text encoder \ud835\udc95\ud835\udfcf \ud835\udc95\ud835\udfd0 \ud835\udc95\ud835\udfd1 \ud835\udc95\ud835\udc8f \u22ef \ud835\udc97\ud835\udc8f \ud835\udc87\ud835\udfcf \ud835\udc87\ud835\udfd0 \ud835\udc87\ud835\udfd1 \ud835\udc87\ud835\udc8f \u22ef \ud835\udc87\ud835\udfcf \ud835\udc87\ud835\udfd0 \ud835\udc87\ud835\udfd1 \ud835\udc87\ud835\udc8f \u22ef fMRI signals \u22ef Video stimuli Text captions \ud835\udc3f\ud835\udc60\ud835\udc52\ud835\udc5a\ud835\udc4e\ud835\udc5b\ud835\udc61\ud835\udc56\ud835\udc50=\ud835\udefc\u2219\ud835\udc3f\ud835\udc53\ud835\udc61+ (1 \u2212\ud835\udefc) \u2219\ud835\udc3f\ud835\udc53\ud835\udc63 Structure Decoder VQ-VAE Encoder fMRI signals The first frame of video \ud835\udc3f\ud835\udc46\ud835\udc61\ud835\udc5f\ud835\udc62\ud835\udc50\ud835\udc61\ud835\udc62\ud835\udc5f\ud835\udc52 \ud835\udc6a \ud835\udc81 Consistency Motion Generator Spatial Module Temporal Module Embedding Module Masking Random Masking \ud835\udc3f\ud835\udc50\ud835\udc5c\ud835\udc5b\ud835\udc60\ud835\udc56\ud835\udc60\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc50\ud835\udc66 \ud835\udc81\ud835\udc84 fMRI signal Semantic feature Structure feature Motion feature Trainable Frozen (a) Training: Semantic Decoder (b) Training: Structure Decoder (c) Training: Consistency Motion Generator Stage 1: fMRI-to-feature Stage 2: feature-to-video Figure 2: The overall architecture of Mind-Animator, a two-stage video reconstruction model based on fMRI. As illustrated in subfigures (a), (b), and (c), three decoders are trained during the fMRI-tofeature stage to disentangle semantic, structural, and motion information from fMRI, respectively. 
Subfigure (d) demonstrates that, in the feature-to-video stage, the decoded information is input into an inflated Text-to-Image (T2I) model for video reconstruction. the total number of videos. Let Z(k) denote the feature space, k \u2208{semantic, structure, motion}. The goal of fMRI-to-feature stage is to train decoders D(k) : X \u2192Z(k), and the goal of featureto-video stage is to construct a video generation model G : Z(semantic) \u00d7 Z(structure) \u00d7 Z(motion) \u2192Y , without introducing motion information from external video data. 2.3 fMRI-to-feature Stage Semantic Decoder Due to the low signal-to-noise ratio of the fMRI signal Xi and the substantial dimension discrepancy with the text condition ci \u2208R1\u00d720\u00d7768, directly learning a mapping between them is prone to overfitting. Considering the robust semantic information embedded in the latent space of CLIP[43], and given that CLIP has been shown to outperform various single-modal DNNs in explaining cortical activity[44, 45], we employ bidirectional InfoNCE loss to align the fMRI with the latent space of CLIP (Vit-B/32)\u2208R512, followed by a two-layer MLP to map it to text condition ci, LBiInfoNCE = \u22121 B B X i=1 \u0010 log exp(s(\u02c6 zi, zi)/\u03c4) PB j=1 exp(s(\u02c6 zi, zj)/\u03c4) + log exp(s(\u02c6 zi, zi)/\u03c4) PB k=1 exp(s(\u02c6 zi, zk)/\u03c4) \u0011 . (1) where s is the cosine similarity, zi and \u02c6 zi are the latent representation from two modalities, B is the batch size, and \u03c4 is a learned temperature parameter. Then, given f \u2208RB\u00d7512, v \u2208RB\u00d7512, and t \u2208RB\u00d7512 as the respective representations of fMRI, video, and text embeddings, the fMRI-visionlanguage trimodal loss is: LSemantic = \u03b1 \u00b7 LBiInfoNCE(f, t) + (1 \u2212\u03b1) \u00b7 LBiInfoNCE(f, v). (2) Subsequently, to map the fMRI embedding fi to the text condition ci for the purpose of conditioning generative image models, a projection loss is utilized, LP rojection = 1 B B X i=1 \u2225MLP(fi) \u2212ci\u22252 2. (3) Finally, we combine the Semantic and Projection losses using tuned hyperparameters \u03bb1, \u03bb2, LCombined = \u03bb1 \u00b7 LSemantic + \u03bb2 \u00b7 LP rojection. (4) 4 \fStructure Decoder For a short video clip, it can be assumed that the low-level information (size, shape, and color) contained in each frame remains largely consistent with that of the first frame. Consequently, we utilize the token extracted from the first frame by VQ-VAE as structural information and train the structural decoder using the standard mean squared error (MSE) loss function. Let \u03a6 denote the encoder of VQVAE, the structure loss is defined as: LStructure = 1 B B X i=1 \u2225DStructure(fi) \u2212\u03a6(Vi,1)\u22252 2. (5) Consistency Motion Generator Inspired by natural language processing, we treat each video frame token as a word embedding, and develop an L-layer Transformer-based Consistency Motion Generator. For a more detailed introduction, please refer to Appendix B.1. Visible frame tokens Positional Encoding Layer Norm Sparse Causal Self-attention Temporal Module Layer Norm Cross attention Spatial Module Predicted tokens fMRI signal FFN Q K V Training mask Inference mask Fixed Masking: -\u221e Random Masking\uff1a-\u221e Unmasking: 0 \u00d7 L Embedding layer Q K V Figure 3: The detailed architectural diagram of the Consistency Motion Generator. 
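A minimal NumPy sketch of the bidirectional InfoNCE loss of Eq. (1) and its combination in Eqs. (2)-(4) is given below, assuming precomputed fMRI, video, and text embeddings. The symmetric two-direction form is inferred from the text's description of a bidirectional loss, and the batch size, α, λ1, λ2, and the linear stand-in for the two-layer MLP are illustrative placeholders rather than the authors' settings.

```python
import numpy as np

def logsumexp(x, axis):
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def bi_infonce(z_a, z_b, tau=0.07):
    """Bidirectional InfoNCE: row i of z_a is the positive pair of row i of z_b."""
    a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = a @ b.T / tau                          # cosine similarity / temperature
    log_ab = sim - logsumexp(sim, axis=1)        # each a_i matched against all b_j
    log_ba = sim - logsumexp(sim, axis=0)        # each b_i matched against all a_k
    idx = np.arange(len(z_a))
    return -(log_ab[idx, idx] + log_ba[idx, idx]).mean()

B, D = 8, 512
rng = np.random.default_rng(0)
f = rng.normal(size=(B, D))          # fMRI embeddings
v = rng.normal(size=(B, D))          # video embeddings (CLIP vision side)
t = rng.normal(size=(B, D))          # text embeddings (CLIP text side)

alpha, lam1, lam2 = 0.5, 1.0, 1.0    # placeholder hyperparameters
L_semantic = alpha * bi_infonce(f, t) + (1 - alpha) * bi_infonce(f, v)   # Eq. (2)

W = 0.01 * rng.normal(size=(D, 20 * 768))        # stand-in for the two-layer MLP of Eq. (3)
c_gt = rng.normal(size=(B, 20 * 768))            # placeholder text conditions c_i
L_projection = np.mean(np.sum((f @ W - c_gt) ** 2, axis=1))              # Eq. (3)
L_combined = lam1 * L_semantic + lam2 * L_projection                     # Eq. (4)
print(L_semantic, L_projection, L_combined)
```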
In the Temporal Module, visible video frame tokens \u03a6(Vi) \u2208Rm\u00d7dtoken and positional encoding Epos \u2208Rm\u00d7dtoken are jointly input into a Sparse Causal Self-Attention (SCSA) layer to learn inter-frame temporal information. This attention layer incorporates a specially designed Sparse Causal Mask to ensure sparsity between frames and accelerate training. As illustrated in Figure 3, the mask is divided into fixed and random components. The fixed mask ensures that each frame cannot access information from subsequent frames, while the random mask maintains sparsity among visible frames, preventing the model from taking shortcuts[52]. During inference, we eliminate the random mask. As shown in Eq. 6, the model also applies residual connections and layer normalization (LN) to the variable zl, z0 =[\u03a6(Vi,1), \u03a6(Vi,2), . . . , \u03a6(Vi,m)] + Epos, zl =LN(SCSA(zl\u22121)) + zl\u22121. l = 1, 2, . . . , L (6) As shown in Eq. 7, in the Spatial Module, the embedding of the visible frames zl serves as the Query, while the fMRI signal f, after passing through an embedding layer, serves as the Key and Value in the cross-attention block. Following residual connections and layer normalization, zl is input into the Feed Forward Network (FFN) to predict the subsequent unseen frame tokens \u02c6 \u03a6(Vi,j), j \u2208[m + 1, 8]: zl =CrossAttention(Q, K, V ), l = 1, 2, . . . , L (7) Q =W l Q \u00b7 zl, K = W l K \u00b7 Emb(f), V = W l V \u00b7 Emb(f), zl =FFN(LN(zl) + zl\u22121). l = 1, 2, . . . , L (8) Then, the final motion consistency loss is defined as: LConsistency = 1 B B X i=1 8 X j=m+1 \u2225 \u02c6 \u03a6(Vi,j) \u2212\u03a6(Vi,j)\u22252 2. (9) 2.4 Feature-to-video Stage Inflated Stable Diffusion for Video Reconstruction Despite the rapid development of video generation models capable of producing vivid videos from text conditions, it is crucial to emphasize that the objective of our project is to disentangle semantic, structural, and motion information from 5 \ffMRI to fully reconstruct the stimulus video. Utilizing pre-trained video generation models could obscure whether the motion information in the reconstructed video originates from the fMRI or external video data. To address this issue, we employ the network inflation[39, 46, 47] technique to implement an inflated Stable Diffusion, which is used to reconstruct each frame of the video without introducing additional motion information. For further details, please refer to the Appendix B.2. 3 Experiment 3.1 Datasets In this study, we utilize three publicly available video-fMRI datasets, which encompass paired stimulus videos and their corresponding fMRI responses. As depicted in Table 2, these datasets collectively comprise brain signals recorded from multiple healthy subjects while they are viewing the videos. The video stimuli are diverse, covering animals, humans, and natural scenery. For detailed information on the datasets and preprocessing steps, please refer to Appendix C. Dataset Adopted participants TR Train samples Test samples CC2017[31] 3 2s 4320 1200 HCP[48] 3 1s 2736 304 Algonauts2021[49] 10 1.75s 900 100 Table 2: Characteristics of the video-fMRI datasets used in our experiments 3.2 Evaluation Metrics To comprehensively and fairly evaluate the performance of our model, we propose the following evaluation metrics. Semantic-level metrics Following prior studies[19, 36], we use the N-way top-K accuracy classification test and VIFI-score as the semantics-level metrics. 
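Before turning to the evaluation protocol, here is a small sketch of the Sparse Causal Mask described for the Consistency Motion Generator: a fixed causal part that blocks attention to future frames, plus random blocking of past frames during training to keep inter-frame attention sparse. The drop probability and the choice to always keep the diagonal unmasked are assumptions.

```python
import numpy as np

def sparse_causal_mask(num_frames, drop_prob=0.3, training=True, rng=None):
    """Additive attention mask: 0 = attend, -inf = blocked."""
    if rng is None:
        rng = np.random.default_rng()
    mask = np.zeros((num_frames, num_frames))
    mask[np.triu_indices(num_frames, k=1)] = -np.inf        # fixed causal part: no future frames
    if training:
        rand_block = rng.random((num_frames, num_frames)) < drop_prob
        rand_block &= np.tri(num_frames, k=-1, dtype=bool)   # only strictly-past positions
        mask[rand_block] = -np.inf                            # random sparsity part
    return mask

# Example: softmax attention weights for 8 frames under the training-time mask.
F = 8
mask = sparse_causal_mask(F, training=True, rng=np.random.default_rng(0))
scores = np.random.default_rng(1).normal(size=(F, F)) + mask
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
print(np.round(weights, 2))
```

At inference time the same function is called with training=False, which leaves only the causal part of the mask, matching the description of removing the random mask.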
For the classification test, we compare the ground truth (GT) against the predicted video (PV) classifications using a classifier. A trial is successful if the GT class ranks within the top-K probabilities from the PV\u2019s classification among N randomly selected classes. Additionally, we implement two modes: image-based (2-way-I) and video-based (2-way-V). We describe this evaluation method in Algorithm 2. For the VIFI-score, we utilize VIFICLIP[53]\u2014a CLIP model fine-tuned on the video dataset\u2014to extract features from both the GT and the PV, followed by the calculation of cosine similarity. Pixel-level metrics We employ the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and hue-based Pearson correlation coefficient (Hue-pcc) as pixel-level metrics. Spaciotemporal (ST) -level metric We adopt the CLIP-pcc, a common metric in the field of video editing, which involves computing CLIP image embeddings on each frame of the predicted videos and reporting the average cosine similarity between all pairs of adjacent video frames. 4 Results 4.1 Comparative Experimental Results We compare our model with all previous video reconstruction models3 on the CC2017 dataset. Visual comparisons are presented in Figure 4, while quantitative comparisons are detailed in Table 3. In the computation of quantitative metrics, the results of Wen et al.[31] pertain to the first segment of the test set, whereas the results of other researchers are derived from the whole test set. The findings on HCP and Algonauts2021 datasets are elaborated in Appendices E.2 and E.3, respectively. 3It should be noted that when replicating the results of Nishimoto et al.[28] on the CC2017 dataset, we utilized videos from the training sets of both CC2017 and HCP as the natural movie prior. 6 \fGT Ours Chen (NeurIPS 2023 Oral) Kupershmit, 2022 Wang (Cerebral Cortex 2022) Wen (Cerebral Cortex 2017) and Nishimoto, 2011 Figure 4: Reconstruction results of CC2017 dataset. Our reconstructed results are highlighted with red boxes, while those of Wen and Nishimoto are delineated by blue and green boxes, respectively. Semantic-level \u2191 Pixel-level \u2191 ST-level \u2191 2-way-I 2-way-V VIFI-score SSIM PSNR Hue-pcc CLIP-pcc Nishimoto[28] 0.727\u00b10.04 \u2014\u2014 \u2014\u2014 0.116\u00b10.09 8.012\u00b12.31 0.753\u00b10.12 \u2014\u2014 Wen[31] 0.758\u00b10.03 \u2014\u2014 \u2014\u2014 0.114\u00b10.15 7.646\u00b13.48 0.647\u00b10.11 \u2014\u2014 Wang[32] 0.713\u00b10.04 0.773\u00b10.03 0.596\u00b10.07 0.118\u00b10.08 11.432\u00b12.42 0.589\u00b10.18 0.402\u00b10.41 Kupershmidt[34] 0.764\u00b10.03 0.771\u00b10.03 0.585\u00b10.08 0.135\u00b10.08 8.761\u00b12.22 0.606\u00b10.14 0.386\u00b10.47 Chen[36] 0.792\u00b10.03 0.853\u00b10.03 0.587\u00b10.08 0.171\u00b10.08 8.662\u00b11.52 0.760\u00b10.10 0.408\u00b10.46 Ours(sub1) 0.809\u00b10.03 0.837\u00b10.02 0.602\u00b10.07 0.301\u00b10.09 9.134\u00b11.48 0.768\u00b10.12 0.425\u00b10.42 Ours(sub2) 0.804\u00b10.29 0.832\u00b10.03 0.604\u00b10.08 0.287\u00b10.11 9.049\u00b11.45 0.795\u00b10.12 0.426\u00b10.42 Ours(sub3) 0.792\u00b10.03 0.833\u00b10.03 0.600\u00b10.08 0.349\u00b10.11 9.306\u00b11.54 0.791\u00b10.12 0.415\u00b10.39 Table 3: Quantitative comparison of reconstruction results. All metrics indicate superior performance with larger values, with the best results highlighted in bold and the second-best results underlined. Table 3 indicates that our model achieves SOTA performance in 5 out of 7 metrics, securing the second place in the remaining two. 
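The N-way top-K classification test can be read as the following sketch: in each trial the ground-truth class is pooled with N−1 randomly drawn distractor classes, and the trial counts as a hit if the ground-truth class lands in the top-K of the predicted video's class probabilities restricted to that pool. The paper's Algorithm 2 is not reproduced here, so details such as how distractor classes are sampled are assumptions.

```python
import numpy as np

def n_way_top_k(pv_probs, gt_class, n_way=2, top_k=1, trials=100, rng=None):
    """Estimate N-way top-K accuracy for one reconstructed sample.

    pv_probs : classifier probabilities over all classes for the predicted video.
    gt_class : class index assigned to the ground-truth video.
    """
    if rng is None:
        rng = np.random.default_rng()
    num_classes = len(pv_probs)
    hits = 0
    for _ in range(trials):
        distractors = rng.choice(
            [c for c in range(num_classes) if c != gt_class],
            size=n_way - 1, replace=False)
        subset = np.concatenate(([gt_class], distractors))
        ranked = subset[np.argsort(pv_probs[subset])[::-1]]   # subset classes sorted by probability
        hits += int(gt_class in ranked[:top_k])
    return hits / trials

probs = np.random.default_rng(0).dirichlet(np.ones(400))      # toy 400-class classifier output
print(n_way_top_k(probs, gt_class=7, n_way=2, top_k=1))
```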
Specifically, our model outperforms the previous SOTA model by 76% in terms of SSIM, which underscores the benefits of incorporating structural information. Specifically, as depicted in Figure 4, our reconstruction results contain richer semantic information compared to earlier models, such as a girl and a yellow dog being held in someone\u2019s arms. In contrast to Mind-video by Chen et al.[36], our results are more consistent with the ground truth in terms of fine-grained structural and motion information. For instance, the reconstructed girl\u2019s clothing color, the dog\u2019s fur color, and the positioning of the forest along the coastline are closer to the stimulus videos. Regarding motion information, our results depict the dog being petted and a noticeable camera movement in the coral reef scene. Additional reconstruction results on other subjects, as well as instances of reconstruction failure, are presented in Appendix E.1. 4.2 Ablation Study In this subsection, we conduct a detailed ablation study to assess the effectiveness of the three decoders we proposed and to evaluate the impact of various hyperparameters on video reconstruction (See Appendix E.4). First, we present the results obtained using the full model. Then, on the basis of the full model, we separately eliminate the semantic decoder (w/o Semantic) and the structure decoder (w/o Structure) by replacing their outputs with random noise. For the consistency motion generator, we replaced it with 8 simple MLPs to model each frame individually (w/o Motion). Table 4 demonstrates that the removal of any decoder results in a significant decline in the model\u2019s performance across nearly all metrics, which shows the efficacy of our proposed decoders. 7 \fSemantic-level\u2191 Pixel-level\u2191 ST-level \u2191 2-way-I 2-way-V VIFI-score SSIM PSNR Hue-pcc CLIP-pcc w/o Semantic 0.679\u00b10.04 0.766\u00b10.04 0.523\u00b10.07 0.097\u00b10.09 8.005\u00b11.57 0.737\u00b10.11 0.123\u00b10.31 w/o Structure 0.789\u00b10.03 0.814\u00b10.03 0.555\u00b10.08 0.184\u00b10.08 8.712\u00b11.37 0.791\u00b10.11 0.260\u00b10.41 w/o Motion 0.674\u00b10.04 0.789\u00b10.03 0.585\u00b10.08 0.136\u00b10.13 8.611\u00b12.43 0.715\u00b10.14 0.376\u00b10.42 Full Model 0.809\u00b10.03 0.837\u00b10.02 0.602\u00b10.07 0.301\u00b10.10 9.134\u00b11.51 0.768\u00b10.11 0.425\u00b10.41 Table 4: Ablation study on our proposed decoders. 100 repetitions are conducted on the metrics 2-way-I and 2-way-V, while 5 trials are performed on other metrics, with the results being averaged across all samples in test set and trials. Colors reflect statistical significance (paired t-test) compared to the Full Model. p < 0.0001 (purple); p < 0.01 (pink); p < 0.05 (yellow); p > 0.05 (green). 5 Interpretability Analysis 5.1 Have we truly decoded motion information from fMRI? This work focuses on the video reconstruction from fMRI, aiming for motion consistency between the reconstructed and stimulus videos. We specifically design a Consistency Motion Generator (CMG) to decode motion information from fMRI. Following the work of Wang et al. [32], we perform a permutation test on 3 subjects from the CC2017 dataset to ascertain whether this module decodes the correct motion information from fMRI. Specifically, for each 8-frame reconstructed video clip from each subject, we randomly shuffle the frame order 100 times and compute pixel-level and spaciotemporal-level metrics between the actual and shuffled frames. 
Subsequently, we estimate the P-value by the following formula: P = P100 i=1 \u03b4i/100, where \u03b4i = 1 if the ith permutation outperforms the reconstruction result in the original order based on the metrics; otherwise, \u03b4i = 0. A lower P-value signifies a closer alignment between the sequential order of the reconstructed video and the ground truth. We repeat the permutation test 5 times under conditions with and without the CMG, as illustrated in Figure 5. It can be observed that the P-value significantly increased across nearly all metrics for all subjects when the CMG is removed, suggesting that we truly decodes motion information from fMRI. *** *** *** NS *** *** *** *** *** *** *** * (a) sub01 (b) sub02 (c) sub03 Figure 5: The result of permutation test on the CC2017 dataset. The experiment is repeated 5 times on 3 subjects, with the mean and std presented in subplots (a), (b), and (c), respectively. Paired t-tests are performed, with significance denoted as p < 0.001(\u2217\u2217\u2217), p < 0.01(\u2217\u2217), p < 0.05(\u2217), and p > 0.05(NS) for non-significant results. 5.2 Which brain regions are responsible for decoding different features, respectively? To investigate voxels in which brain regions are responsible for decoding different features (semantic, structure, motion) during the fMRI-to-feature stage, we compute the voxel-wise importance maps in the visual cortex. Specifically, for a trained decoder, we multiply the weight matrix of the linear layers, then average the result across the feature dimension, and normalize it to estimate the importance weight for each voxel. A higher weight indicates that the voxel plays a more significant role in feature decoding. We project the importance maps of subject 1\u2019s voxels from the CC2017 dataset onto the visual cortex, as depicted in Figure 6. To obtain ROI-wise importance maps, we calculate the average of the importance weights of voxels contained within each Region of Interest (ROI), with the results presented in Figure 7. The results from other subjects are presented in Appendix E.5. 8 \fDorsal Ventral Anterior Anterior 0 1 Normalized weights for decoding semantic feature Dorsal Ventral Anterior Anterior 0 1 Normalized weights for decoding structure feature Dorsal Ventral Anterior Anterior 0 1 Normalized weights for spatial-temporal attention (a) Semantic (b) Structure (c) Motion Figure 6: Voxel-wise importance maps projected onto the visual cortex of subject 1. The lighter the color, the greater the weight of the voxel in the interpretation of feature. Figure 6 (a) indicates that high-level visual cortex (HVC, such as MT, MST and TPOJ) contribute more significantly to the decoding of semantic feature, with a calculated weight of 2.588, accounting for 60.5% of the total, as shown in Figure 7 (a). In contrast, low-level visual cortex (LVC, such as V1, V2, V3) have a weight of 1.685, representing 39.5%. Although it is not immediately apparent from Figure 6 (b) which ROI contributes most to the decoding of structural feature, Figure 7 (b) reveals that V1 and V2 have the greatest weight, with HVC having a weight of 1.279 (36.03%), and LVC weighing 2.271 (63.97%). Considering the aforementioned findings, our results are plausible from the perspective of cognitive neuroscience. It is generally believed that LVC is predominantly responsible for processing low-level information of visual stimuli [4, 54, 55], such as orientation and contour. Meanwhile, V4 is involved in color processing [56, 57]. 
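A minimal sketch of the permutation test from Section 5.1 follows: the frame order of a reconstructed clip is shuffled a number of times, and the P-value is the fraction of shuffles that score at least as well as the original order under a chosen metric. The toy data and the negative-MSE metric are placeholders for the paper's pixel-level and spatiotemporal-level metrics.

```python
import numpy as np

def permutation_p_value(recon, gt, metric, n_perm=100, rng=None):
    """Fraction of frame-order permutations of `recon` scoring >= the original
    order under `metric` (higher = better); a low value suggests the decoded
    frame order genuinely matches the stimulus."""
    if rng is None:
        rng = np.random.default_rng()
    base = metric(recon, gt)
    better = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(recon))
        better += int(metric(recon[perm], gt) >= base)
    return better / n_perm

# Toy example with 8 "frames" and a negative-MSE metric (higher is better).
rng = np.random.default_rng(0)
gt = rng.normal(size=(8, 64, 64, 3))
recon = gt + 0.1 * rng.normal(size=gt.shape)      # order-preserving reconstruction
neg_mse = lambda a, b: -np.mean((a - b) ** 2)
print(permutation_p_value(recon, gt, neg_mse, rng=np.random.default_rng(1)))
```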
In contrast, HVC is responsible for processing high-level semantic information of objects[58], including category. (a) Semantic (b) Structure (c) Motion Figure 7: ROI-wise importance maps in the visual cortex of subject 1. Figure 6 (c) indicates that both LVC and HVC contribute to the decoding of motion information, with significant weight attributed to V1 and MT. As derived from Figure 7 (c), the weight distribution between LVC and HVC is comparable, accounting for 42.4% and 57.6%, respectively. This observation is consistent with previous work[59], which validates the function of MT in visual motion processing. Furthermore, our findings affirm that the spatial and temporal modules designed in CMG effectively capture spatiotemporal information from across both LVC and HVC. 6 Conclusion In this paper, we introduce a video reconstruction model (Mind-Animator) that decouples semantic, structural, and motion information from fMRI, achieving state-of-the-art performance across 3 public 9 \fdatasets. We mitigate the interference of external video data on motion information decoding through a rational experimental design. The results of the permutation test demonstrate that the motion information we decoded indeed originates from fMRI, rather than being a \"hallucination\" from generative model. Additionally, the visualization of voxel-wise and ROI-wise importance maps substantiate the neurobiological interpretability of our model. Acknowledgments and Disclosure of Funding We would like to express our gratitude to Prof.Jack L. Gallant and Prof.Shinji Nishimoto for their pioneering exploration in the field of video reconstruction and for their high-quality code. We are grateful to Prof.Juan Helen Zhou and Dr.Zijiao Chen for their patient answers to our questions and for making all the results of the Mind-video test set public. We also extend our thanks to Prof.Michal Irani, Dr.Ganit Kupershmidt, and Dr.Roman Beliy for providing us with all the reconstruction results of their models on the test set. We would like to express our appreciation to Prof.Zhongming Liu and Dr.Haiguang Wen for their open-sourced high-quality video-fMRI dataset and the preprocessing procedures. Our gratitude also goes to the Human Connectome Project (HCP) for providing a large-scale fMRI dataset and cortical visualization tools. We are thankful to the Algonauts2021 competition for providing a set of pre-processed video-fMRI data from multiple subjects. We are thankful to the Stable Diffusion team for their high-performance text-to-image model, and we also appreciate the Tune-a-video team for their open-source video editing framework, which allows us to reconstruct videos without introducing additional motion information."
17
+ }
title_10K/test_title_short_2405.03485v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03485v1",
3
+ "title": "LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model",
4
+ "abstract": "In this paper, we introduce LGTM, a novel Local-to-Global pipeline for\nText-to-Motion generation. LGTM utilizes a diffusion-based architecture and\naims to address the challenge of accurately translating textual descriptions\ninto semantically coherent human motion in computer animation. Specifically,\ntraditional methods often struggle with semantic discrepancies, particularly in\naligning specific motions to the correct body parts. To address this issue, we\npropose a two-stage pipeline to overcome this challenge: it first employs large\nlanguage models (LLMs) to decompose global motion descriptions into\npart-specific narratives, which are then processed by independent body-part\nmotion encoders to ensure precise local semantic alignment. Finally, an\nattention-based full-body optimizer refines the motion generation results and\nguarantees the overall coherence. Our experiments demonstrate that LGTM gains\nsignificant improvements in generating locally accurate, semantically-aligned\nhuman motion, marking a notable advancement in text-to-motion applications.\nCode and data for this paper are available at https://github.com/L-Sun/LGTM",
5
+ "authors": "Haowen Sun, Ruikun Zheng, Haibin Huang, Chongyang Ma, Hui Huang, Ruizhen Hu",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.GR"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model",
16
+ "main_content": "INTRODUCTION In this paper, we address the problem of text-to-motion, i.e., given a textual description of movements for a character, we aim to automatically generate plausible and realistic 3D human motions. The successful automation of this process holds significant potential for a variety of downstream applications, including the creation of content for augmented and virtual reality environments, advancements in robotics, and enhancements in human-machine interactions [Chen et al. 2021; Lan et al. 2023; Scanlon et al. 2023; Zhao et al. 2022]. As a longstanding challenge at the confluence of natural language processing, machine learning, and computer graphics, textto-motion generation has garnered significant attention in recent research [Jiang et al. 2023; Petrovich et al. 2022; Tevet et al. 2022a]. The advent of diffusion models, as highlighted in various studies [Alexanderson et al. 2023; Poole et al. 2022; Rombach et al. 2022], has propelled notable advancements in this field [Tevet et al. 2022b]. Despite these strides, the task of generating motions that are both locally semantic accurate and globally coherent from textual descriptions remains a formidable hurdle. Current methods often face difficulties in effectively capturing the nuanced local semantics embedded in motion descriptions and in producing motions that align accurately with these semantic cues. In particular, existing approaches in text-to-motion synthesis often encounter issues such as local semantic leakage and missing elements [Chen et al. 2023a; Tevet et al. 2022b]. For instance, when prompted with a description like \u201ca man kicks something with his left leg\u201d, these methods might erroneously generate a motion that arXiv:2405.03485v1 [cs.CV] 6 May 2024 \fHaowen Sun, Ruikun Zheng, Haibin Huang, Chongyang Ma, Hui Huang, and Ruizhen Hu corresponds to a \u201cright kick\u201d. Similarly, prompts involving complex actions requiring coordination of multiple body parts frequently result in motions with certain parts omitted. Our observations reveal two primary shortcomings in these methods. Firstly, most existing techniques utilize a single global text descriptor for all local body motions. This approach requires the network to learn the association between local motion semantics and respective body parts from a unified global text source. This process proves challenging, especially when the textual content bears similarity across different body parts, leading to difficulties in differentiating specific actions for each part. Secondly, the text encoders used in these methods exhibit limited effectiveness in encoding motionrelated text. This limitation is apparent in the high feature similarity observed among different motion texts, as detailed in recent studies [Petrovich et al. 2023]. This homogeneity in encoded text features further exacerbates the network\u2019s struggle to discern and accurately represent subtle variations in local textual semantics. Towards this end, we present a novel diffusion-based text-tomotion generation architecture, LGTM, adept at producing motions that are both in alignment with textual descriptions and precise in local semantic accuracy. LGTM operates through a local-to-global approach, structured in two main stages. The first stage implements an efficient strategy to tackle the issue of local semantic accuracy. 
Here, we introduce a partition module that employs large language models (LLMs) to dissect global motion descriptions into narratives specific to each body part. Subsequently, dedicated body-part motion encoders independently process these part-specific narratives. This focused approach effectively circumvents local semantic inaccuracies by reducing redundant information and preventing semantic leakage, thus maintaining a sharp focus on relevant local semantics. However, as each body-part motion encoder functions independently, without awareness of other parts\u2019 movements, it is imperative to synchronize these individual motions to avoid fullbody coordination issues. To address this, the second stage of LGTM introduces an attention-based full-body optimizer. This component is specifically designed to facilitate the integration of information among different body parts, ensuring that the overall motion is not only locally precise but also globally coherent and fluid. To evaluate the effectiveness of LGTM, we further conduct experiments on text-driven motion generation and provide both quantitative and qualitative results. Our experiments show that our proposed LGTM can generate faithful motions that better align with the input text both locally and globally, and outperform stateof-the-art methods. To summarize, our contributions are as follows: \u2022 We present LGTM, a novel diffusion-based architecture that translate textual descriptions into accurate and coherent human motions, marking a significant improvement over previous text-to-motion approaches. \u2022 LGTM introduces a unique partition module that utilizes LLMs to decompose complex motion descriptions into part-specific narratives. This significantly enhances local semantic accuracy in motion generation. \u2022 Our experiments demonstrate the effective integration of independent body-part motion encoders with an attention-based full-body optimizer, ensuring both local precision and global coherence in generated motions, providing a promising improvement for text-to-motion generation. 2 RELATED WORK The generation of motion sequences is a longstanding challenge within the domain of computer graphics, where the objective is to produce a series of motion frames guided by conditional control signals. Given that our approach is centered on body-partition-based text-to-motion synthesis, we explore relevant literature across two primary aspects: body partition modeling and text-to-motion generation. Part-based motion modeling. Partitioning the human body into distinct segments facilitates the control of motion synthesis at a more granular level, allowing for localized adjustments. Several studies have explored the concept of combining motions of individual body parts to synthesize novel motions. [Hecker et al. 2008] introduced a retargeting algorithm that composes motions at the level of individual body parts to generate diverse character animations. [Jang et al. 2008] divided motions into upper and lower body segments, merging them through an algorithm to augment their motion database. [Soga et al. 2016] synthesized dance motions from existing datasets by focusing on body partitions. [Jang et al. 2022] performed style transfer at the part level, utilizing a graph convolutional network to assemble different body part motions into new, coherent sequences, preserving local styles while transferring them to specific body parts without compromising the integrity of other parts or the entire body. 
However, these methods rely on pre-existing motion data, and hence are more accurately described as synthesis rather than generation. For more detailed local control, [Starke et al. 2020] proposed a local phase model based on body partitions used to generate basketball player movements, achieving higher local fidelity compared to global phase approaches [Starke et al. 2019; Zhang et al. 2018]. [Starke et al. 2021] introduced a neural animation layering technique that combines trajectories of different body parts produced by control modules, providing animators with more granular control and enabling the creation of high-quality motion. [Lee et al. 2022] developed an algorithm for reassembling physically-based part motions, allowing the combination of partial movements from characters with varying skeletal structures. By operating in a physically simulated virtual environment, they employed part-wise timewarping and optimization-based assembly to ensure improved spatial and temporal alignment. [Bae et al. 2023] utilized part-wise motion discriminators to enhance motion variety and a global control policy to maintain the physical realism of the movements. Text-to-motion generation. Text provides a user-friendly interface for directing motion generation due to its ease of use and editing capabilities. However, a significant challenge arises from the difficulty in precisely controlling the outcome of the generated motion through text. In this subsection, we examine text-to-motion generation techniques and identify their limitations. Certain text-to-motion approaches are founded on the encoderdecoder architecture and focus on aligning modalities within a unified latent space. [Ahuja and Morency 2019] trained their network by alternating between encoding motions and texts, then \fLGTM: Local-to-Global Text-Driven Human Motion Diffusion Model Motion Rearrange Partition Module \u201cbends over\u201d \u201cFlaps arm\u201d Attention Encoder + Global Text Encoder Part Motion Encoders Full-Body Motion Optimizer \ud835\udc5b \ud835\udc5b Linear Part Text Encoder Conformer Smooth Net \u201ca person bends over and flaps his arms.\u201d torso left_arm noised motion \ud835\udc0c\ud835\udc5b cleaned motion \ud835\udc0c\ud835\udfce Figure 2: Overview of our LGTM framework, which consists of three major components. (1) The partition module utilizes ChatGPT to deconstruct motion descriptions \ud835\udc47into body part level text \ud835\udc47part, and decomposes full-body motion M to body part motion Mpart; (2) The part motion encoders encodes part-level motions with corresponding part-level text independently and a diffusion time step \ud835\udc5b; (3) The full-body motion optimizer utilizes an attention module to optimize fused body part motion with full-body text semantic. decoding them back into motion, thereby implicitly aligning the two modalities. [Ghosh et al. 2021; Petrovich et al. 2022] encoded text and motion concurrently and decoded them into motion, employing additional loss functions to bring the modalities closer within the latent space. These methods struggle with generating motions from lengthy textual descriptions. [Athanasiou et al. 2022] tackled long motion generation by producing short motion clips in an auto-regressive fashion, but this requires manual segmentation of long textual descriptions into shorter segments and specification of action duration. To utilize visual priors, [Tevet et al. 2022a] employed a frozen CLIP [Radford et al. 
2021] text encoder to encode motion descriptions and aligned the motion latent space with that of CLIP. Nevertheless, the images used for alignment, rendered from random motion frames, can confuse the network when the frames are not representative. Moreover, [Petrovich et al. 2023] observed that motion descriptions tend to cluster closely in the CLIP latent space, as the distribution of motion-related text is narrower than that of the broader text datasets used to train CLIP. Recent developments in neural diffusion models for image generation have inspired text-to-motion methods that leverage these models to achieve superior quality. [Tevet et al. 2022b; Zhang et al. 2022] utilized Transformer to denoise motion conditioned on text. [Chen et al. 2023b] introduced a U-Net-based DDIM generative model to denoise motion in latent space, resulting in expedited generation. However, these methods lack the ability to control localized motion generation through masking. Additionally, they struggle to learning correct mapping of the local semantics because all body parts share the same textual information, which potentially lead to semantically mismatched part motions. An alternative approach to motion generation involves processing motion in a discrete space through token prediction [Guo et al. 2022b; Jiang et al. 2023; Yao et al. [n. d.]]. But the limitations of these works are that the expressive capacity of the codebook can restrict the diversity of the generated motions, potentially causing the text input to be mapped to unintended motions. The challenges in controlling local motion semantics stem from: (1) the sharing of textual information across all body parts, and (2) the difficulty networks face in distinguishing text latent codes encoded by CLIP. These factors contribute to the difficulty of achieving precise local semantic control in motion generation, leading to issues such as semantic leakage. Drawing inspiration from the technological advancements and challenges identified in prior research, we propose a novel framework that combines body-part partitioning with independent local motion semantic injection and a global semantic joint optimization strategy. This framework is designed to enhance the fidelity and controllability of text-to-motion synthesis, addressing the need for more nuanced and accurate motion generation. 3 METHOD In this section, we delve into the specifics of LGTM, as illustrated in Figure 2. LGTM is structured as a local-to-global generation framework that initially creates local, part-level motion, followed by a global fusion and optimization process to produce the final full-body motion. At its core, LGTM operates by subdividing the fullbody text and motion spaces into body-part-specific subspaces. Such subdivision is adeptly handled by a dedicated Partition Module. For each of these subspaces, we have developed specialized part motion encoders. These encoders are trained to learn independently a series of mappings between part-level motions and part-level text. This strategy effectively mitigates the issues of incorrect local semantic mapping seen in previous methods. Following the localized encoding, LGTM introduces a full-body motion optimizer to establish correlations among the various subspaces and ensure the consistency and coherence of the final full-body motion. Below, we provide a detailed explanation of the functionalities and details of each module in LGTM. 
\fHaowen Sun, Ruikun Zheng, Haibin Huang, Chongyang Ma, Hui Huang, and Ruizhen Hu 3.1 Preliminary: Human Motion Diffusion Model Input representation. We define the input pair for our method as (M,\ud835\udc47), where M represents full-body motion data and \ud835\udc47denotes the raw full-body text description. Specifically, we use the HumanML3D representation proposed by [Guo et al. 2022a] as our motion data representation, which is calculated from the SMPL motion data [Loper et al. 2015] and includes redundant motion features that are helpful for network training. A full-body motion data M contains \ud835\udc39frames and \ud835\udc3d= 22 joints. Specifically, we denote M = [\u00a4 rroot, \ud835\udc63root,\u210e, p, r, v, c], where \u00a4 rroot \u2208R\ud835\udc39\u00d71, \ud835\udc63root \u2208R\ud835\udc39\u00d72 and \u210e\u2208R\ud835\udc39\u00d71 are the angular velocity around y-axis, linear velocity on x-z plane, and height of the root joint, p \u2208R\ud835\udc39\u00d7(\ud835\udc3d\u22121)\u00d73 and r \u2208R\ud835\udc39\u00d7(\ud835\udc3d\u22121)\u00d76 are local position and 6D rotation [Zhang et al. 2018] of all joints except root joint, v \u2208R\ud835\udc39\u00d7\ud835\udc3d\u00d73 is the local velocity of all joints, and c \u2208R\ud835\udc39\u00d74 is the contact signal of feet. Diffusion model. Our method is built upon a text-conditional diffusion model. In the training stage, this model adds noise to a clean motion M following the Markov process and trains a network to predict the added noise with an L2 loss. In the sampling stage, this model gradually reduces noise from a purely noised motion M\ud835\udc5bwith the predicted noise. We use the DDIM [Song et al. 2022] as our diffusion model to accelerate the sampling process. More details is provided in the supplementary material. 3.2 Partition Module The Partition Module is designed to inject local semantics into each body part for Part Motion Encoders. In practice, an input pair (M,\ud835\udc47) is divided into six parts, including head, left arm, right arm, torso, left leg, and right leg. The motion M is decomposed as follows: Mhead = [phead, rhead, vhead] \u2208R\ud835\udc39\u00d724 Mleft_arm = \u0002 pleft_arm, rleft_arm, vleft_arm \u0003 \u2208R\ud835\udc39\u00d748 Mright_arm = \u0002 pright_arm, rright_arm, vright_arm \u0003 \u2208R\ud835\udc39\u00d748 Mtorso = [ptorso, rtorso, vtorso, \u00a4 rroot, \ud835\udc63root,\u210e] \u2208R\ud835\udc39\u00d743 Mleft_leg = \u0002 pleft_leg, rleft_leg, vleft_leg, cleft_leg \u0003 \u2208R\ud835\udc39\u00d750 Mright_leg = \u0002 pright_leg, rright_leg, vright_leg, cright_leg \u0003 \u2208R\ud835\udc39\u00d750, where the subscript indicates where the feature from. For example, pright_leg includes all local positions of joints from the right leg. For the motion description \ud835\udc47, we leverage the knowledge inference capabilities of LLMs to decompose it into six parts: \ud835\udc47head, \ud835\udc47left_arm,\ud835\udc47right_arm,\ud835\udc47torso,\ud835\udc47left_leg and\ud835\udc47right_leg using crafted prompts. The prompt includes three sections: task definition, output requirements, and some output examples. The task definition instructs LLMs to extract principal descriptions for each motion part. The output requirements tell LLMs that we need structured output such as JSON format, body part naming, etc. Then, we employ a few-shot approach to guide LLMs in generating the desired output. More details of our prompts can be found in the supplementary materials. 
A decomposed description example is shown in Table 1. Table 1: An example of decomposing full-body motion description: \u201ca person waves the right hand and then slightly bends down to the right and takes a few steps forward.\u201d Part name Part description head dose nothing left arm dose nothing right arm waves hand torso slightly bends down left leg takes a few steps forward right leg takes a few steps forward Figure 3: The structure of an attention encoder block. 3.3 Part Motion Encoders The part motion encoders, {\ud835\udc38head, . . . , \ud835\udc38right_leg}, aim to learn local semantic mapping from part-level input pairs \u0010 M\ud835\udc5b part,\ud835\udc47part \u0011 independently. Since each encoder obtains information only from its corresponding part-level input pair and cannot access information from other body parts, the issue of semantic leakage is effectively alleviated. We denote the part-level encoding process as follows: z\ud835\udc5b part = \ud835\udc38part \u0010 M\ud835\udc5b part,\ud835\udc47part,\ud835\udc5b \u0011 , (1) where each part motion encoder, \ud835\udc38part, consists of three components: a linear layer, a text encoder, and a Conformer [Gulati et al. 2020]. The linear layer aims to align the size of the latent dimension with that of the text encoder. We use six different frozen part-level TMR text encoders [Petrovich et al. 2023], each corresponding to one of the six body parts, which are pretrained on part-level motiontext pairs \u0000Mpart,\ud835\udc47part \u0001 respectively. Since the TMR model is trained only on motion description and motion data, and not on large visual datasets, the motion-related text embedding encoded by TMR is easier for the network to distinguish than that by CLIP. The projected motion and text embedding are then fused and processed by a Conformer[Gulati et al. 2020]. The Conformer incorporates convolution blocks into the Transformer [Vaswani et al. 2017] architecture to better capture temporal local features. Moreover, previous work [Alexanderson et al. 2023] shows the success of Conformer on music-to-dance task. 3.4 Full-Body Motion Optimizer Since each part\u2019s motion and text are independently encoded to n z\ud835\udc5b head, \u00b7 \u00b7 \u00b7 , z\ud835\udc5b left_leg o independently, the network will ignore the correlations between the different body parts, therefore, we propose that the full-body motion optimizer \ud835\udc3aestablishes correlations by adjusting the movements of each body part based on full-body text information. \fLGTM: Local-to-Global Text-Driven Human Motion Diffusion Model Figure 4: Example results generated by our method. Specifically, we first concatenate all body part latent codes into a full-body latent code z\ud835\udc5bwhose shape is (\ud835\udc39,\ud835\udc46) = (\ud835\udc39, 6 \u00d7 128), and then fuse it with the global text embedding encoded by freezing the full-body level TMR text encoder. Next, we use an attention encoder [Vaswani et al. 2017] to compute a delta that adjusts each part in the latent code z\ud835\udc5b. The attention encoder is where the exchange of spatio-temporal information actually occurs. It consists of several attention encoder blocks, each containing a multi-head attention block and a feed-forward layer, as shown in Figure 3. 
Since the latent code z\ud835\udc5bis processed by a multi-head attention block on the temporal dimension \ud835\udc39, and feed-forward layers (FFN) operate on the spatial dimension \ud835\udc46, the latent code for each body part can continuously exchange temporal and spatial information. Next, we use a SmoothNet [Zeng et al. 2022] to reduce jitter, which contains a stacked MLP with residual connections and operates on the temporal dimension, acting as a low-pass filter in the latent space. Finally, we project the latent code to origin feature dimension, and get a clean motion \u02c6 M0. The full-body motion optimizer can be formulated as \u02c6 M0 = \ud835\udc3a \u0010 z\ud835\udc5b head, \u00b7 \u00b7 \u00b7 , z\ud835\udc5b left_leg,\ud835\udc47 \u0011 = Linear(SoothNet(z\ud835\udc5b+ AttentionEncoder(ztext + z\ud835\udc5b))) (2) 4 RESULTS In this section, we present the motions generated by our method and conduct a comparative analysis with other text-driven motion generation methods. Additionally, we perform several ablation studies to highlight the contributions of individual components within our framework. 4.1 Implementation Details The part-level motion description is generated by ChatGPT. (gpt3.5-turbo-1106) model. Our model is trained with AdamW optimizer with learning rate decaying strategy of fast warm cosine decay. The initial learning rate is 10\u22124 and the batch size is 64. The number of diffusion steps is 1K. The training time of our model on the HumanML3D dataset is about 8 hours on 3 NVIDIA RTX 4090 GPUs. 4.2 Qualitative Results Figure 4 shows several example results generated by our method. We can see that our method can generate motion with precise local semantics, such as body part semantic correspondence and action timing order, as our method injects local semantic information into corresponding parts independently, and the whole-body optimizer builds correct relationships between body parts in both spatial and temporal domains. For example, the result of \u201ca man leans forward and jumps high\u201d shows that the character does lean and jump in the correct order. The result of\u201ca man lock his hands to his face, and do a dance move net with his legs\u201d shows that the character keeps correct spatial relationship between hand and face while dancing. The result of \u201ca person doing air kicks with his right feet\u201d shows that the character do kick with correct body part. We also provide some visual comparisons to two baselines, including MDM [Tevet et al. 2022b] and MLD [Chen et al. 2023b]. Figure 5 shows that our method can generate more semantic wellmatched motion. In the first row, the character can pick something with both hands in our result, but with just left hand in MDM. In the second row, the character only jumps on the left foot correctly in our result, but jumps on both feet in MDM and dose not jump in MLD. In the third row, the result of MDM contains weird pose and the MLD dose not contain \u201cclaps\u201d, but our result is more correct. The last row shows that, for more complex text inputs, our method is able to generate more semantic accurate results than those two baselines. 4.3 Quantitative Evaluation Evaluation metrics. To quantitatively evaluate our method, we use the metrics suggested by [Guo et al. 2022a] which includes \fHaowen Sun, Ruikun Zheng, Haibin Huang, Chongyang Ma, Hui Huang, and Ruizhen Hu Figure 5: Qualitative comparison of results generated by our method with those from MDM [Tevet et al. 2022b] and MLD [Chen et al. 
2023b]. (1) Fr\u00e9chet Inception Distance (FID) that evaluates the generated motion quality against real motion distribution; (2) Diversity (DIV) that calculates the variance of generated motion; (3) R Precision that calculates the top-n matching accuracy between generated motion and the corresponding text description; (4) Multi-Modal Distance (MM Dist) that calculates the distance between paired motion and text; (5) Part-level Multi-Modal Similarity (PMM Sim) that calculates the normalized cosine similarity between part-level paired motion and text. These metrics are calculated in the latent space using the text encoder and motion encoder from T2M [Guo et al. 2022a] as in previous works. As our method provides detailed control of generated motions, we also compare our method to baselines in terms of part-level motion quality using Part-level Multi-Modal Similarity (PMM Sim), by training both partlevel text encoder and motion encoder with contrastive learning as in TMR [Petrovich et al. 2023], which we believe makes motion samples in the latent space more dispersed allowing dissimilar motions can be distinguished more easily. Specifically, we calculate the PMM Sim in the TMR latent space as follows: \ud835\udc60part = 1 2 \ud835\udc67M part \u00b7 \ud835\udc67\ud835\udc47 part \u2225\ud835\udc67M part\u2225\u2225\ud835\udc67\ud835\udc47 part\u2225 + 1 ! (3) where both \ud835\udc67M part and \ud835\udc67\ud835\udc47 part are obtained by encoding part-level motion and text through TMR encoders. Although we mainly focus on semantically controllable generation, we also evaluate common artifacts in text-to-motion synthesis. We assess the generated motions using three specific metrics: sliding, penetration, and floating, as introduced by [Yuan et al. 2022]. Comparison results. The comparison results for full-body motion are presented in Tables 2, and the comparison results for part-level motion are presented in Table 3. The FID and DIV in Tables 2 indicate that our method generates more realistic and diverse motion. The R Precision and MM Dist indicate that our method can generate better globally semantically matching motion. Table 3 also shows that our method achieves the best local semantic matching, with performance very close to that of real data. Our local-to-global design injects local semantic information independently into body parts and refines it with global semantics, which provides more accurate and structured semantic information to the network to help generation and thus achieve higher quality. For artifact evaluation, as shown in Table 4, we can see that each method exhibits performance very close to the ground truth (the Real row) at the millimeter scale. The artifacts can be attributed to the dataset\u2019s intrinsic quality variances. 4.4 Ablation Studies We have designed two main experiments to assess the impact of different components of our approach. The first experiment investigates the influence of different text encoders on the motion quality. The second experiment evaluates the effect of the full-body motion optimizer on the the quality of motions generated by our method. The importance of text encoder. We test our method by replacing our pre-trained text encoder with CLIP as an alternative, demonstrating that the TMR text encoder we use can capture more detailed semantics. Furthermore, we also present the results obtained by MDM using either CLIP or the TMR text encoder for comparison. 
\fLGTM: Local-to-Global Text-Driven Human Motion Diffusion Model Table 2: Comparison of the visual quality and degree of semantic matching between input text and output full-body motion. These metrics are computed in the latent space of the T2M model [Guo et al. 2022a]. Method FID \u2193 DIV\u2191 R Precision\u2191 MM Dist \u2193 Top 1 Top 2 Top 3 Real 0.000 9.831 0.513 0.708 0.804 2.939 MotionDiffuse[2022] 0.687 8.894 0.318 0.531 0.677 3.118 MDM[2022b] 0.747 9.462 0.390 0.581 0.695 3.635 MLD[2023b] 1.753 8.970 0.383 0.573 0.687 3.682 Ours (LGTM) 0.218 9.638 0.490 0.689 0.788 3.013 Table 3: Comparison of text-to-motion generation using PMM Sim. These metrics are calculated in the latent space of the part-level TRM encoder. Higher values indicate better performance. Method head left arm right arm torso left leg right leg Real 0.803 0.716 0.723 0.759 0.755 0.760 MotionDiffuse[2022] 0.789 0.687 0.712 0.735 0.728 0.739 MDM[2022b] 0.783 0.699 0.691 0.740 0.717 0.723 MLD[2023b] 0.771 0.675 0.702 0.717 0.723 0.726 Ours (LGTM) 0.799 0.719 0.724 0.763 0.755 0.763 Table 4: Comparison of text to motion generation using metrics on artifact. Method sliding (cm/s) \u2193 penetration (cm) \u2193 floating (cm) \u2193 Real 0.743 1.442 0.079 MotionDiffuse[2022] 1.359 1.783 0.051 MDM[2022b] 0.721 1.622 0.102 MLD[2023b] 0.949 2.392 0.064 Ours (LGTM) 0.854 1.247 0.046 Table 5: Comparison of the impact of different text encoders on full-body metrics computed in the latent space of T2M model [Guo et al. 2022a]. Method FID \u2193 DIV\u2191 R Precision\u2191 MM Dist \u2193 Top 1 Top 2 Top 3 MDM + CLIP 0.747 9.462 0.390 0.581 0.695 3.635 MDM + TMR 0.403 9.687 0.455 0.653 0.759 3.266 Ours + CLIP 0.331 9.386 0.391 0.569 0.674 3.699 Ours + TMR 0.218 9.638 0.490 0.689 0.788 3.013 Table 5 and Table 6 evaluate full-body and part-level motion quality, respectively. In general, we observe that using the TMR text encoder consistently produces better results than using CLIP, for both our method and MDM as well as both local and global quality. When comparing our method to MDM using the same text encoder, our method generally performs better, further demonstrating the superiority of our local-to-global design. Table 6: Comparison of the impact of different text encoders on PMM Sim computed using the part-level TRM encoder. The greater the value, the better. Method head left arm right arm torso left leg right leg MDM + CLIP 0.783 0.699 0.691 0.740 0.717 0.723 MDM + TMR 0.803 0.704 0.707 0.756 0.734 0.743 Ours + CLIP 0.795 0.693 0.694 0.752 0.725 0.732 Ours + TMR 0.799 0.719 0.724 0.763 0.755 0.763 Table 7: Comparison of the impact of using Conformer versus Transformer in Part Motion Encoders on global quality. Method FID \u2193 DIV\u2191 R Precision\u2191 MM Dist \u2193 Top 1 Top 2 Top 3 Transformer 1.814 8.578 0.373 0.567 0.680 3.688 Conformer 0.218 9.638 0.490 0.689 0.788 3.013 Table 8: Comparison of the impact of using Conformer versus Transformer in Part Motion Encoders on PMM Sim. Higher values indicate better performance. Method head left arm right arm torso left leg right leg Transformer 0.784 0.712 0.718 0.750 0.728 0.732 Conformer 0.799 0.719 0.724 0.763 0.755 0.763 The impact of Conformer. The goal of replacing Transformer with Conformer in Part Motion Encoders is to improve the motion quality. To validate the improvement, we compare both configurations on global quality metrics. 
From Table 7 and Table 8, we observe that LGTM with Conformer can achieves better quality and semantic matching performance than with Transformer. This improvement can be attributed to the convolution blocks of Conformer, which capture local features better than self-attention. The importance of full-body motion optimizer. The goal of our fullbody motion optimizer is to establish correlations among different body part movements and improve the coordination of full-body movements. To validate the effect, we compare it to the setting \u201cw/o opt\u201d, where we remove the key component of our full-body optimizer, namely, the attention encoder From Table 9 and Table 10, we can see that the local motion quality drops, and the full-body motion quality is also much worse without the optimizer; see Figure 6 for one example result. Without the full-body optimizer, the character\u2019s two feet cannot coordinate well to step alternately during movement due to the lack of information exchange. 5 CONCLUSION In this study, we propose LGTM for text-to-motion generation, which significantly improves the accuracy and coherence of 3D human motion derived from textual descriptions. By integrating large language models with a local-to-global generation framework, our method effectively addresses key challenges in semantic mapping and motion coherence. \fHaowen Sun, Ruikun Zheng, Haibin Huang, Chongyang Ma, Hui Huang, and Ruizhen Hu Table 9: Comparison of the impact of attention encoder on global quality. These metric are calculated using the T2M model [Guo et al. 2022a]. Method FID \u2193 DIV\u2191 R Precision\u2191 MM Dist \u2193 Top 1 Top 2 Top 3 w/o opt 7.384 11.552 0.219 0.360 0.454 5.227 w/ opt 0.218 9.638 0.490 0.689 0.788 3.013 Table 10: Comparison of the impact of attention encoder on PMM Sim using the part-level TMR encoder. The greater the value, the better. Method head left arm right arm torso left leg right leg w/o opt 0.783 0.715 0.700 0.735 0.699 0.709 w/ opt 0.799 0.719 0.724 0.763 0.755 0.763 (a) With full-body optimizer. (b) Without full-body optimizer. Figure 6: Motions generation by our method with and without the full-body optimizer for \u201ca person walks upstairs, turns left, and walks back downstairs.\u201d Figure 7: A failure case. The corresponding input prompt is someone imitating a golf swing. Limitation and future work. As we use ChatGPT for motion description decomposition, the local semantic mapping depends on the reasoning ability of ChatGPT. Incorrect decomposition or mapping may lead to unsatisfactory motion generation results. For example, when generating the \u201cgolf swing\u201d motion, which requires high-level and full-body coordination, LGTM struggles because ChatGPT identifies that the right hand swings the golf club but fails to decompose this reasoning into a series of low-level actions for each body part. The result is that the network generates an implausible motion, as shown in Figure 7. Also, ambiguous texts in the dataset can confuse the network during training. For example, the phrase \u201ca person performs action A and action B\u201d could imply that these actions occur simultaneously or sequentially, leading to output that may not align with user expectations. This issue could be mitigated by providing more detailed temporal descriptions. Furthermore, due to the limited length of samples in the dataset, our current framework cannot consistently generate long-term motions with high quality. 
For future work, one promising direction is to incorporate our local-to-global idea with those VQ-VAE based approaches such as TM2T [Guo et al. 2022b] and MotionGPT [Jiang et al. 2023] by onstructing part-level motion clips as motion tokens for more detailed motion generation with different part-level motion combinations. ACKNOWLEDGMENTS We thank the anonymous reviewers for their valuable comments. This work was supported in parts by NSFC (62322207, 62161146005, U2001206), Guangdong Natural Science Foundation (2021B1515020085), Shenzhen Science and Technology Program (RCYX20210609103121030), DEGP Innovation Team (2022KCXTD025), Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ) and Scientific Development Funds of Shenzhen University."
+ }
title_10K/test_title_short_2405.03549v1.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "url": "http://arxiv.org/abs/2405.03549v1",
+ "title": "Bridging discrete and continuous state spaces: Exploring the Ehrenfest process in time-continuous diffusion models",
+ "abstract": "Generative modeling via stochastic processes has led to remarkable empirical\nresults as well as to recent advances in their theoretical understanding. In\nprinciple, both space and time of the processes can be discrete or continuous.\nIn this work, we study time-continuous Markov jump processes on discrete state\nspaces and investigate their correspondence to state-continuous diffusion\nprocesses given by SDEs. In particular, we revisit the $\\textit{Ehrenfest\nprocess}$, which converges to an Ornstein-Uhlenbeck process in the infinite\nstate space limit. Likewise, we can show that the time-reversal of the\nEhrenfest process converges to the time-reversed Ornstein-Uhlenbeck process.\nThis observation bridges discrete and continuous state spaces and allows to\ncarry over methods from one to the respective other setting. Additionally, we\nsuggest an algorithm for training the time-reversal of Markov jump processes\nwhich relies on conditional expectations and can thus be directly related to\ndenoising score matching. We demonstrate our methods in multiple convincing\nnumerical experiments.",
+ "authors": "Ludwig Winkler, Lorenz Richter, Manfred Opper",
+ "published": "2024-05-06",
+ "updated": "2024-05-06",
+ "primary_cat": "stat.ML",
+ "cats": [
+ "stat.ML",
+ "cs.LG",
+ "math.DS",
+ "math.PR"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Bridging discrete and continuous state spaces: Exploring the Ehrenfest process in time-continuous diffusion models",
+ "main_content": "Introduction Generative modeling based on stochastic processes has led to state-of-the-art performance in multiple tasks of interest, all aiming to sample artificial data from a distribution that is only specified by a finite set of training data (Nichol & Dhariwal, 2021). The general idea is based on the concept of time-reversal: we let the data points diffuse until *Equal contribution (the author order was determined by numpy.random.rand(1)) 1Technical University of Berlin 2Zuse Institute Berlin 3dida Datenschmiede GmbH 4University of Birmingham 5University of Potsdam. Correspondence to: Ludwig Winkler <[email protected]>, Lorenz Richter <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). they are close to the equilibrium distribution of the process, from which we assume to be able to sample readily, such that the time-reversal then brings us back to the desired target distribution (Sohl-Dickstein et al., 2015). In this general setup, one can make several choices and take different perspectives. While the original attempt considers discrete-time, continuous-space processes (Ho et al., 2020), one can show that in the small step-size limit the models converge to continuous-time, continuous-space processes given by stochastic differential equations (SDEs) (Song et al., 2021). This continuous time framework then allows fruitful connections to mathematical tools such as partial differential equations, path space measures and optimal control (Berner et al., 2024). As an alternative, one can consider discrete state spaces in continuous time via Markov jump processes, which have been suggested for generative modeling in Campbell et al. (2022). Those are particularly promising for problems that naturally operate on discrete data, such as, e.g., text, images, graph structures or certain biological data, to name just a few. While discrete in space, an appealing property of those models is that timediscretization is not necessary \u2013 neither during training nor during inference1. While the connections between Markov jump processes and state-continuous diffusion processes have been studied extensively (see, e.g., Kurtz (1972)), a relationship between their time-reversals has only been looked at recently, where an exact correspondence is still elusive (Santos et al., 2023). In this work, we make this correspondence more precise, thus bridging the gap between discrete-state generative modeling with Markov jump processes and the celebrated continuous-state score-based generative modeling. A key ingredient will be the so-called Ehrenfest process, which can be seen as the discrete-state analog of the OrnsteinUhlenbeck process, that is usually employed in the continuous setting, as well as a new loss function that directly translates learning rate functions of a time-reversed Markov jump process to score functions in the continuous-state ana1Note that this is not true for the timeand space-continuous SDE case, where training can be done simulation-free, however, inference relies on a discretization of the reverse stochastic process. However, see Section 4.2 for high-dimensional settings in Markov jump processes. 1 arXiv:2405.03549v1 [stat.ML] 6 May 2024 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models log. 
Our contributions can be summarized as follows: \u2022 We propose a loss function via conditional expectations for training state-discrete diffusion models, which exhibits advantages compared to previous loss functions. \u2022 We introduce the Ehrenfest process and derive the jump moments of its time-reversed version. \u2022 Those jump moments allow an exact correspondence to score-based generative modeling, such that, for the first time, the two methods can now be directly linked to one another. \u2022 In consequence, the bridge between discrete and continuous state space brings the potential that one setting can benefit from the respective other. This paper is organized as follows. After listing related work in Section 1.1 and defining notation in Section 1.2, we introduce the time-reversal of Markov jump processes in Section 2 and propose a loss function for learning this reversal in Section 2.1. We define the Ehrenfest process in Section 3 and study its convergence to an SDE in Section 3.1. In Section 3.2 we then establish the connection between the time-reversed Ehrenfest process and score-based generative modeling. Section 4 is devoted to computational aspects and Section 5 provides some numerical experiments that demonstrate our theory. Finally, we conclude in Section 6. 1.1. Related work Starting with a paper by Sohl-Dickstein et al. (2015), a number of works have contributed to the success of diffusionbased generative modeling, all in the continuous-state setting, see, e.g., Ho et al. (2020); Song & Ermon (2020); Kingma et al. (2021); Nichol & Dhariwal (2021); Vahdat et al. (2021). We shall highlight the work by Song et al. (2021), which derives an SDE formulation of score-based generative modeling and thus builds the foundation for further theoretical developments (Berner et al., 2024; Richter & Berner, 2024). We note that the underlying idea of timereversing a diffusion process dates back to work by Nelson (1967); Anderson (1982). Diffusion models on discrete state spaces have been considered by Hoogeboom et al. (2021) based on appropriate binning operations of continuous models. Song et al. (2020) proposed a method for discrete categorical data, however, did not perform any experiment. A purely discrete diffusion model, both in time and space, termed Discrete Denoising Diffusion Probabilistic Models (D3PMs) has been introduced in Austin et al. (2021). Continuous-time Markov jump processes on discrete spaces have first been applied to generative modeling in Campbell et al. (2022), where, however, different forward processes have been considered, for which the forward transition probability is approximated by solving the forward Kolmogorov equation. Sun et al. (2022) introduced the idea of categorical ratio matching for continuous-time Markov Chains by learning the conditional distribution occurring in the transition ratios of the marginals when computing the reverse rates. Recently, in a similar setting, Santos et al. (2023) introduced a pure death process as the forward process, for which one can derive an alternative loss function. Further, they formally investigate the correspondence between Markov jump processes and SDEs, however, in contrast to our work, without identifying a direct relationship between the corresponding learned models. Finally, we refer to the monographs Gardiner et al. (1985); Van Kampen (1992); Br\u00b4 emaud (2013) for a general introduction to Markov jump processes. 1.2. 
Notation For transition probabilities of a Markov jump process M we write pt|s(x|y) := P (M(t) = x|M(s) = y) for s, t \u2208[0, T] and x, y \u2208\u2126. With pt(x) we denote the (unconditional) probability of the process at time t. We use pdata := p0. With \u03b4x,y we denote the Kronecker delta. For a function f, we say that f(x) \u2208o(g(x)) if limx\u21920 f(x) g(x) = 0. 2. Time-reversed Markov jump processes We consider Markov jump processes M(t) that run on the time interval [0, T] \u2282R and are allowed to take values in a discrete set \u2126\u223c \u2282Zd. Usually, we consider \u2126\u223c = {0, . . . , S}d such that the cardinality of our space is |\u2126| = (S + 1)d. Jumps between the discrete states appear randomly, where the rate of jumping from state y to x at time t is specified by the function rt(x|y). The jump rates determine the jump probability in a time increment \u2206t via the relation pt+\u2206t|t(x|y) = \u03b4x,y + rt(x|y)\u2206t + o(\u2206t), (1) i.e. the higher the rate and the longer the time increment, the more likely is a transition between two corresponding states. For a more detailed introduction to Markov jump processes, we refer to Appendix B.1. In order to simulate the process backwards in time, we are interested in the rates of the time-reversed process \u20d7 M(t), which determine the backward transition probability via pt\u2212\u2206t|t(x|y) = \u03b4x,y + \u20d7 rt(x|y)\u2206t + o(\u2206t). (2) The following lemma provides a formula for the rates of the time-reversed process, cf. Campbell et al. (2022). Lemma 2.1. For two states x, y \u2208\u2126, the transition rates of the time-reversed process \u20d7 M(t) are given by \u20d7 rt(y|x) = Ex0\u223cp0|t(x0|x) \u0014pt|0(y|x0) pt|0(x|x0) \u0015 rt(x|y), (3) 2 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models where rt is the rate of the forward process M(t). Proof. See Appendix A. Remark 2.2 (Conditional expectation). We note that the expectation appearing in (3) is a conditional expectation, conditioned on the value M(t) = x. This can be compared to the SDE setting, where the time-reversal via the score function can also be written as a conditional expectation, namely \u2207x log pSDE t (x) = Ex0\u223cpSDE 0|t (x0|x) h \u2207x log pSDE t|0 (x|x0) i , see Lemma A.1 in the appendix for more details. We will elaborate on this correspondence in Section 3.2. While the forward transition probability pt|0 can usually be approximated (e.g. by solving the corresponding master equation, see Appendix B.1), the time-reversed transition function p0|t is typically not tractable, and we therefore must resort to a learning task. One idea is to approximate p0|t \u2248p\u03b8 0|t by a distribution parameterized in \u03b8 \u2208Rp (e.g. via neural networks), see, e.g. Campbell et al. (2022) and Appendix C.2. We suggest an alternative method in the following. 2.1. Loss functions via conditional expectations Recalling that any conditional expectation can be written as an L2 projection (see Lemma A.2 in the appendix), we define the loss Ly(\u03c6y) = E \"\u0012 \u03c6y(x, t) \u2212pt|0(y|x0) pt|0(x|x0) \u00132# , (4) where the expectation is over x0 \u223cpdata, t \u223cU(0, T), x \u223c pt|0(x|x0). Assuming a sufficiently rich function class F, it then holds that the minimizer of the loss equals the conditional expectation in Lemma 2.1 for any y \u2208\u2126, i.e. arg min \u03c6y\u2208F Ly(\u03c6y) = Ex0\u223cp0|t(x0|x) \u0014pt|0(y|x0) pt|0(x|x0) \u0015 . 
(5) We can thus directly learn the conditional expectation. In contrast to approximating the reverse transition probability p0|t, this has the advantage that we do not need to model a distribution, but a function, which is less challenging from a numerical perspective. Furthermore, we will see that the conditional expectation can be directly linked to the score function in the SDE setting, such that our approximating functions \u03c6y can be directly linked to the approximated score. We note that the loss has already been derived in a more general version in Meng et al. (2022) and applied to the setting of Markov jump processes in Lou et al. (2023), however, following a different derivation. A potential disadvantage of the loss (4), on the other hand, is that we may need to approximate different functions \u03c6y for different y \u2208\u2126. This, however, can be coped with in two ways. On the one hand, we may focus on birth-death processes, for which r(y|x) is non-zero only for y = x \u00b1 1, such that we only need to learn 2 instead of S \u22121 functions \u03c6y. In the next section we will argue that birth-death process are in fact favorable for multiple reasons. On the one hand, we can do a Taylor expansion such that for certain processes it suffices to only consider one approximating function, as will be shown in Remark 3.3. 3. The Ehrenfest process In principle, we are free to choose any forward process M(t) for which we can compute the forward transition probabilities pt|0 and which is close to its stationary distribution after a not too long run time T. In the sequel, we argue that the Ehrenfest process is particularly suitable \u2013 both from a theoretical and practical perspective. For notational convenience, we make the argument in dimension d = 1, noting, however, that a multidimensional extension is straightforward. For computational aspects in high-dimensional spaces we refer to Section 4.1. We define the Ehrenfest process2 as ES(t) := S X i=1 Zi(t), (6) where each Zi is a process on the state space \u2126= {0, 1} with transition rates r(0|1) = r(1|0) = 1 2 (sometimes called telegraph or Kac process). We note that the Ehrenfest process is a birth-death process with values in {0, . . . , S} and transition rates r(x + 1|x) = 1 2(S \u2212x), r(x \u22121|x) = x 2 . (7) We observe that we can readily transform the timeindependent rates in (7) to time-dependent rates rt(x \u00b1 1|x) := \u03bbt r(x \u00b1 1|x) (8) via a time transformation, where \u03bb : [0, T] \u2192R, see Appendix B.2. Without loss of generality, we will focus on the time-independent rates (7) in the sequel. One compelling property of the Ehrenfest process is that we can sample without needing to simulate trajectories. Lemma 3.1. Assuming ES(0) = x0, the Ehrenfest process can be written as ES(t) = E0,S(t) + E1,S(t), (9) 2The Ehrenfest process was introduced by the Russian-Dutch and German physicists Tatiana and Paul Ehrenfest to explain the second law of thermodynamics, see Ehrenfest & EhrenfestAfanassjewa (1907). 3 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models where E0,S(t) \u223cB(S \u2212x0, 1 \u2212f(t)) and E1,S(t) \u223c B(x0, f(t)) are independent binomial random variables and f(t) := 1 2 (1 + e\u2212t). Consequently, the forward transition probability is given by the discrete convolution pt|0(x|x0) = X z\u2208\u2126 P (E0,S(t) = z) P (E1,S(t) = x \u2212z) . (10) Proof. See Appendix A. 
We note that the sum in (10) can usually be numerically evaluated without great effort. 3.1. Convergence properties in the infinite state space limit It is known that certain (appropriately scaled) Markov jump processes converge to state-continuous diffusion processes when the state space size S + 1 tends to infinity (see, e.g., Kurtz (1972); Gardiner et al. (1985)). For the Ehrenfest process, this convergence can be studied quite rigorously. To this end, let us introduce the scaled Ehrenfest process e ES(t) := 2 \u221a S \u0012 ES(t) \u2212S 2 \u0013 (11) with transition rates r \u0012 x \u00b1 2 \u221a S \f \f \f \fx \u0013 = \u221a S 4 ( \u221a S \u2213x), (12) now having values in \u2126= n \u2212 \u221a S, \u2212 \u221a S + 2 \u221a S , . . . , \u221a S o . We are interested in the large state space limit S \u2192\u221e, noting that this implies 2 \u221a S \u21920 for the transition steps, thus leading to a refinement of the state space. The following convergence result is shown in Sumita et al. (2004, Theorem 4.1). Proposition 3.2 (State space limit of Ehrenfest process). In the limit S \u2192\u221e, the scaled Ehrenfest process e ES(t) converges in law to the Ornstein-Uhlenbeck process Xt for any t \u2208[0, T], where Xt is defined via the SDE dXt = \u2212Xt dt + \u221a 2 dWt, (13) with Wt being standard Brownian motion. For an illustration of the convergence we refer to Figure 1. Note that the convergence of the scaled Ehrenfest process to the Ornstein-Uhlenbeck process implies pt|0(x|x0) \u2248pOU t|0 (x|x0) := N(x; \u00b5t(x0), \u03c32 t ) (14) with \u00b5t(x0) = x0e\u2212t and \u03c32 t = (1\u2212e\u22122t). For the quantity in the conditional expectation (3) we can thus compute pt|0 \u0010 x \u00b1 \u03b4 \f \f \fx0 \u0011 pt|0(x|x0) \u2248exp \u0012\u22132(x \u2212\u00b5t(x0))\u03b4 \u2212\u03b42 2\u03c32 t \u0013 (15a) \u2248exp \u0012 \u2212\u03b42 2\u03c32 t \u0013 1 \u2213(x \u2212\u00b5t(x0))\u03b4 \u03c32 + ((x \u2212\u00b5t(x0))\u03b4)2 2\u03c34 ! , (15b) where we used the shorthand \u03b4 := 2 \u221a S . Remark 3.3 (Learning of conditional expectation). Note that the approximation (15b) allows us to define the loss LGau\u00df(\u03c6) := E \"\u0012 \u03c6(x, t) \u2212exp \u0012\u22132(x \u2212\u00b5t(x0))\u03b4 \u2212\u03b42 2\u03c32 t \u0013\u00132# . (16) Further, we can write Ex0 \uf8ee \uf8f0 pt|0 \u0010 x \u00b1 \u03b4 \f \f \fx0 \u0011 pt|0(x|x0) \uf8f9 \uf8fb\u2248exp \u0012 \u2212\u03b42 2\u03c32 t \u0013 \uf8eb \uf8ed1 \u2213(x \u2212Ex0 [\u00b5t(x0)])\u03b4 \u03c32 + Ex0 h ((x \u2212\u00b5t(x0))\u03b4)2i 2\u03c34 \uf8f6 \uf8f8, (17) where x0 \u223cp0|t(x0|x). In consequence, this allows us to consider the loss functions LTaylor(\u03c61) := E h (\u03c61(x, t) \u2212\u00b5t(x0))2i , (18) and LTaylor,2(\u03c62) := E \u0014\u0010 \u03c62(x, t) \u2212((x \u2212\u00b5t(x0))\u03b4)2\u00112\u0015 , (19) where the expectations are over x0 \u223c pdata, t \u223c U(0, T), x \u223cpt|0(x|x0). We can also only consider the first order term in the Taylor expansion (15b), such that we then only have to approximate one instead of two functions. Since the scaled forward Ehrenfest process converges to the Ornstein-Uhlenbeck process, we can expect the timereversed scaled Ehrenfest process to converge to the timereversal of the Ornstein-Uhlenbeck process. We shall study this conjecture in more detail in the sequel. 3.2. 
Connections between time-reversal of Markov jump processes and score-based generative modeling Inspecting Lemma 2.1, which specifies the rate function of a backward Markov jump process, we realize that the 4 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models time-reversal essentially depends on two things, namely the forward rate function with switched arguments as well as the conditional expectation of the ratio between two forward transition probabilities. To gain some intuition, let us first assume that the state space size S + 1 is large enough and that the transition density pt|0 can be extended to R (which we call pt|0) such that it can be approximated via a Taylor expansion. We can then assume that r \u0012 x \u00b1 2 \u221a S \f \f \f \fx \u0013 \u2248r \u0012 x \f \f \f \fx \u2213 2 \u221a S \u0013 (20) as well as pt|0 \u0010 x \u00b1 2 \u221a S \f \f \fx0 \u0011 pt|0(x|x0) \u2248 pt|0(x|x0) \u00b1 2 \u221a S \u2207pt|0(x|x0) pt|0(x|x0) (21a) = 1 \u00b1 2 \u221a S \u2207log pt|0(x|x0), (21b) where the conditional expectation of \u2207log pt|0(x|x0) is reminiscent of the score function in SDE-based diffusion models (cf. Lemma A.1 in the appendix). This already hints at a close connection between the time-reversal of Markov jump processes and score-based generative modeling. Further, note that (21a) corresponds to (15b) for large enough S and pt|0 \u2248pOU t|0 . We shall make the above observation more precise in the following. To this end, let us study the first and second jump moments of the Markov jump process, given as b(x) = X y\u2208\u2126,y\u0338=x (y \u2212x)r(y|x), (22) D(x) = X y\u2208\u2126,y\u0338=x (y \u2212x)2r(y|x), (23) see Appendix B.3. For the scaled Ehrenfest process (11) we can readily compute b(x) = \u2212x, D(x) = 2, (24) which align with the drift and diffusion coefficient (which is the square root of D) of the Ornstein-Uhlebeck process in Proposition 3.2. In particular, we can show the following relation between the jump moments of the forward and the backward Ehrenfest processes, respectively. Proposition 3.4. Let b and D be the first and second jump moments of the scaled Ehrenfest process e ES. The first and second jump moments of the time-reversed scaled Ehrenfest \u20d7 e ES are then given by \u20d7 b(x, t) = \u2212b(x) + D(x) Ex0\u223cp0|t(x0|x) \u0014\u2206Spt|0(x|x0) pt|0(x|x0) \u0015 + o(S\u22121/2), (25) \u20d7 D(x) = D(x) + o(S\u22121/2), (26) where \u2206Spt|0(x|x0) := pt|0(x + 2 \u221a S |x0) \u2212pt|0(x|x0) 2 \u221a S (27) is a one step difference and pt|0 and p0|t are the forward and reverse transition probabilities of the scaled Ehrenfest process. Proof. See Appendix A. Remark 3.5 (Convergence of the time-reversed Ehrenfest process). We note that Proposition 3.4 implies that the timereversed Ehrenfest process in expected to converge in law to the time-reversed Ornstein-Uhlenbeck process. This can be seen as follows. For S \u2192\u221e, we know via Proposition 3.2 that the forward Ehrenfest process converges to the Ornstein-Uhlenbeck process, i.e. pt|0 converges to pOU t|0 , where pOU t|0 (x|x0) is the transition density of the OrnsteinUhlenbeck process (13) starting at X0 = x0. Together with the fact that the finite difference approximation operator \u2206S converges to the first derivative, this implies that Ex0\u223cp0|t(x0|x) h \u2206Spt|0(x|x0) pt|0(x|x0) i is expected to converge to Ex0\u223cpOU 0|t (x0|x) h \u2207log pOU t|0 (x|x0) i . 
Now, Lemma A.1 in the appendix shows that this conditional expectation is the score function of the Ornstein-Uhlenbeck process, i.e. \u2207log pOU t (x) = Ex0\u223cpOU 0|t (x0|x) h \u2207log pOU t|0 (x|x0) i . Finally, we note that the first and second jump moments converge to the drift and the square of the diffusion coefficient of the limiting SDE, respectively (Gardiner et al., 1985). Therefore, the scaled time-reversed Ehrenfest process \u20d7 e ES(t) is expected to converge in law to the process Yt given by dYt = \u0000Yt + 2\u2207log pOU T \u2212t(Yt) \u0001 dt + \u221a 2 dWt, (28) which is the time-reversal of the Ornstein-Uhlenbeck process stated in (13). Note that we write (28) as a forward process from t = 0 to t = T, where Wt is a forward Brownian motion, which induces the time-transformation t 7\u2192T \u2212t in the score function. Remark 3.6 (Generalizations). Following the proof of Proposition 3.4, we expect that the formulas for the first two jump moments of the time-reversed Markov jump process, stated 5 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models in (25) and (26), are valid for any (appropriately scaled) birth-death process whose transition rates fulfill 1 S (r(x \u00b1 \u03b4|x) \u2212r(x|x \u2213\u03b4)) = o(S\u22121), (29) where \u03b4 is a jump step size that decreases with the state space size S + 1. Crucially, Remark 3.5 shows that we can directly link approximations in the (scaled) state-discrete setting to standard state-continuous score-based generative modeling via Ex0\u223cp0|t(x0|x) \"pt|0(x \u00b1 2 \u221a S |x0) pt|0(x|x0) # \u22481\u00b1 2 \u221a S \u2207log pOU t (x), (30) see also the proof of Proposition 3.4 in Appendix A. In particular, this allows for transfer learning between the two cases. E.g., we can train a discrete model and use the approximation of the conditional expectation (up to scaling) as the score function in a continuous model. Likewise, we can train a continuous model and approximate the conditional expectation by the score. We have illustrated the latter approach in Figure 1, where we have used the (analytically available) score function that transports a standard Gaussian to a multimodal Gaussian mixture in a discrete-state Ehrenfest process that starts at a binomial distribution which is designed in such a way that it converges to the standard Gaussian for S \u2192\u221e. Similar to (4), the correspondence (30) motivates to train a state-discrete scaled Ehrenfest model with the loss defined by LOU(e \u03c6) := E \u0014\u0010 e \u03c6(x, t) \u2212\u2207log pOU t|0 (x|x0) \u00112\u0015 (31a) = E \"\u0012 e \u03c6(x, t) + (x \u2212\u00b5t(x0)) \u03c32 t \u00132# , (31b) where the expectation is over x0 \u223cpdata, t \u223cU(0, T), x \u223c pt|0(x|x0) and where \u00b5t(x0) = x0e\u2212t and \u03c32 t = (1\u2212e\u22122t), as before. In fact, this loss is completely analog to the denoising score matching loss in the state-continuous setting. We later set \u03c6 = 1 \u00b1 2 \u221a S e \u03c6\u2217, where e \u03c6\u2217is the minimizer of (31), to get the approximated conditional expectation. Remark 3.7 (Ehrenfest process as discrete-state DDPM). To make the above considerations more precise, note that we can directly link the discrete-space Ehrenfest process to pretrained score models in continuous space, such as, e.g., the celebrated denoising diffusion probabilistic models (DDPM) (Ho et al., 2020). 
Those models usually transport a standard Gaussian to the target density that is supported on [\u22121, 1]d. In order to cope with the fact that the scaled Ehrenfest process terminates (approximately) at a standard Gaussian irrespective of the size S + 1, we typically choose 4 2 0 2 4 x 0.0 0.1 0.2 0.3 0.4 Prior and target distribution 2.0 1.5 1.0 0.5 0.0 t 4 2 0 2 4 x Time-reversed Ornstein-Uhlenbeck process 4 2 0 2 4 x 0.0 0.1 0.2 0.3 0.4 Prior and target distribution 2.0 1.5 1.0 0.5 0.0 t 4 2 0 2 4 x Time-reversed Ehrenfest process Figure 1. We display two time-reversed processes from t = 2 to t = 0 that transport a standard Gaussian (left panels, in green) to a multimodal Gaussian mixture model (left panels, in orange), or a binomial distribution to a binomial mixture, respectively, once using a diffusion process in continuous space (upper panel) and once a time-reversed (scaled) Ehrenfest process in discrete space with S = 100 (lower panel). Crucially, in both cases we use the (state-continuous) score function to employ the time-reversal, which for this problem is known analytically, see Appendix D.1. The plots demonstrate that the distributions of the processes seem indeed very close one another, implying that the approximation (30) is quite accurate even for a moderate state space size S + 1. S = 2552 such that the interval [\u22121, 1] contains 256 states that correspond to the RGB color values of images, recalling that the increments between the states are 2 \u221a S . Further, noting the actual Ornstein-Uhlenbeck process that DDPM is trained on, we employ the time scaling \u03bbt = 1 2\u03b2(t), where \u03b2 and further details are stated in Appendix D.2, and choose the (time-dependent) rates rt \u0012 x \u00b1 2 \u221a S \f \f \f \fx \u0013 = \u03b2(t) \u221a S 8 ( \u221a S \u2213x), (32) according to (8) and (12). 4. Computational aspects In this section, we comment on computational aspects that are necessary for the training and simulation of the timereversal of our (scaled) Ehrenfest process. For convenience, we refer to Algorithm 1 and Algorithm 2 in Appendix C.1 for the corresponding training and sampling algorithms, respectively. 4.1. Modeling of dimensions In order to make computations feasible in high-dimensional spaces \u2126d, we typically factorize the forward process, such that each dimension propagates independently, cf. Camp6 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models bell et al. (2022). Note that this is analog to the OrnsteinUhlenbeck process in score-based generative modeling, in which the dimensions also do not interact, see, e.g., (13). We thus consider pt|0(x|y) = d Y i=1 p(i) t|0(x(i)|y(i)), (33) where p(i) t|0 is the transition probability for dimension i \u2208 {1, . . . , d} and x(i) is the i-th component of x \u2208\u2126d. In Campbell et al. (2022) it is shown that the forward and backward rates then translate to rt(x|y) = d X i=1 r(i) t (x(i)|y(i))\u0393x\u00aci,y\u00aci, (34) where \u0393x\u00aci,y\u00aci is one if all dimensions except the i-th dimension agree, and \u20d7 rt(x|y) = d X i=1 E \" pt|0(y(i)|x(i) 0 ) pt|0(x(i)|x(i) 0 ) # r(i) t (x(i)|y(i))\u0393x\u00aci,y\u00aci, (35) where the expectation is over x(i) 0 \u223cp0|t(x(i) 0 |x). Equation (35) illustrates that the time-reversed process does not factorize in the dimensions even though the forward process does. 
Note with (34) that for a birth-death process a jump appears only in one dimension at a time, which implies that rt(x \u00b1 \u03b4i|x) = r(i) t (x(i) \u00b1 \u03b4(i) i |x(i)), (36) where now \u03b4i = (0, . . . , 0, \u03b4(i) i , 0, . . . , 0)\u22a4with \u03b4(i) i being the jump step size in the i-th dimension. Likewise, (35) becomes \u20d7 rt(x \u00b1 \u03b4i|x) = E \" pt|0(y(i)|x(i) 0 ) pt|0(x(i)|x(i) 0 ) # r(i) t (x(i)|x(i) + \u03b4(i) i ), (37) where the expectation is over x(i) 0 \u223cp0|t(x(i) 0 |x), which still depends on all dimensions. For each dimension i \u2208{1, . . . , d} we can therefore approximate the conditional expectation appearing in (37) via the loss function (4) with two functions \u03c6i,b : Rd \u00d7 [0, T] \u2192R and \u03c6i,d : Rd \u00d7 [0, T] \u2192R. Alternatively, we can learn just two functions \u03c6b/d : Rd \u00d7 [0, T] \u2192Rd for the entire space and identify \u03c6i,b/d = \u03c6(i) b/d. 4.2. \u03c4-leaping The fact that jumps only happen in one dimension at a time implies that the naive implementation of changing component by component (e.g. by using the Gillespie\u2019s algorithm, Figure 2. We plot histograms of 500.000 samples from the timereversed scaled Ehrenfest process at different times. The processes have been trained with three different losses. see Gillespie (1976)) would require a very long sampling time. As suggested in Campbell et al. (2022), we can therefore rely on \u03c4-leaping for an approximate simulation methods (Gillespie, 2001). The general idea is to not simulate jump by jump, but wait for a time interval of length \u03c4 and apply all jumps at once. One can show that the number of jumps is Poisson distributed with a mean of \u03c4 \u20d7 rt (x|y). For further details we refer to Algorithm 2. 5. Numerical experiments In this section, we demonstrate our theoretical insights in numerical experiments. If not stated otherwise, we always consider the scaled Ehrenfest process defined in (11). We will compare the different variants of the loss (4), namely LGauss defined in (16), LTaylor defined in (18) and LOU defined in (31). 5.1. Illustrative example Let us first consider an illustrative example, for which the data distribution is tractable. We consider a process in d = 2 with S = 32, where the (S + 1)d = 332 different state combinations in pdata are defined to be proportional to the pixels of an image of the letter \u201cE\u201d. Since the dimensionality is d = 2, we can visually inspect the entire distribution at any time t \u2208[0, T] by plotting 2-dimensional histograms of the simulated processes. With this experiment we can in particular check that modeling the dimensions of the forward process independently from one another (as explained in Section 4.1) is no restriction for the backward process. Indeed Figure 2 shows that the time-reversed process, which is learned with (versions of) the loss (4), can transport the prior distribution (which is approximately binomial, or, loosely speaking, a binned Gaussian) to the specified target. Again, note that this plot does not display single realizations, but 7 \fBridging discrete and continuous state space: Exploring the Ehrenfest process in time-continuous diffusion models entire distributions, which, in this case, are approximated with 500.000 samples. We realize that in this simple problem LGau\u00df performs slightly better than LOU and LTaylor. As expected, the approximations work sufficiently well even for a moderate state space size S + 1. 
As argued in Section 3.1, this should get even better with growing S. For further details, we refer to Appendix D.3. 5.2. MNIST For a basic image modeling task, we consider the MNIST dataset, which consists of gray scale pixels and was resized to 32 \u00d7 32 to match the required input size of a U-Net neural network architecture3, such that d = 32 \u00d7 32 = 1024 and S = 255. As before, we train our time-reversed Ehrenfest model by using the variants of the loss introduced in Section 2.1. In Figure 3 we display generated samples from a model trained with LOU. The models with the other losses look equally good, so we omit them. For further details, we refer to Appendix D.4. Figure 3. MNIST samples obtained with the time-reversed scaled Ehrenfest process which was trained with LOU. 5.3. Image modeling with CIFAR-10 As a more challenging task, we consider the CIFAR-10 data set, with dimension d = 3 \u00d7 32 \u00d7 32 = 3072, each taking 256 different values (Krizhevsky et al., 2009). In the experiments we again compare our three different losses, however, realize that LGau\u00df did not produce satisfying results and had convergence issues, which might follow from numerical issues due to the exponential term appearing in (16). Further, we consider three different scenarios: we train a model from scratch, we take the U-Net model that was pretrained in the state-continuous setting, and we take the same model and further train it with our state-discrete training algorithm (recall Remark 3.7, which describes how to link the Ehrenfest process to DDPM). We display the metrics in Table 1. When using only transfer 3Taken from the repository https://github.com/ w86763777/pytorch-ddpm. learning, the different losses indicate different ways of incorporating the pretrained model, see Appendix D.2. We realize that both losses produce comparable results, with small advantages for LOU. Even without having invested much time in finetuning hyperparameters and sampling strategies, we reach competitive performance with respect to the alternative methods LDR (Campbell et al., 2022) and D3PM (Austin et al., 2021). Remarkably, even the attempt with transfer learning returns good results, without having applied any further training. For further details, we refer to Appendix D.5, where we also display more samples in Figures 6-9. Figure 4. CIFAR-10 samples from the Ehrenfest process with a pretrained model, further finetuned with LOU. Figure 5. CIFAR-10 samples from the Ehrenfest process with a pretrained model, further finetuned with LTaylor. IS (\u2191) FID (\u2193) Ehrenfest LOU 8.75 11.57 (transfer learning) LTaylor 8.68 11.72 Ehrenfest LOU 9.50 5.08 (from scratch) LTaylor 9.66 5.12 LTaylor2 9.40 5.44 Ehrenfest LOU 9.14 6.63 (pretrained) LTaylor 9.06 6.91 \u03c4-LDR (0) 8.74 8.10 Alternative \u03c4-LDR (10) 9.49 3.74 methods D3PM Gauss 8.56 7.34 D3PM Absorbing 6.78 30.97 Table 1. Performance in terms of Inception Score (IS) (Salimans et al., 2016) and Frechet Inception Distance (FID) (Heusel et al., 2017) on CIFAR-10 over 50.000 samples. We compare two losses and consider three different scenarios: we train a model from scratch, we take the U-Net model that was pretrained in the statecontinuous setting (called \u201ctransfer learning\u201d) or we take the same model and further train it with our state-discrete training algorithm (called \u201cpretraining\u201d). 6. 
Conclusion In this work, we have related the time-reversal of discrete-space Markov jump processes to continuous-space score-based generative modeling, such that, for the first time, one can directly link models of the respective settings to one another. While we have focused on the theoretical connections, our numerical experiments demonstrate that we can already reach competitive performance with the new loss function that we proposed. We suspect that further tuning and the now possible transfer learning between discrete and continuous state space will further enhance the performance. On the theoretical side, we anticipate that the convergence of the time-reversed jump processes to the reversed SDE can be generalized even further, which we leave to future work. Acknowledgements L.W. acknowledges support by the Federal Ministry of Education and Research (BMBF) for BIFOLD (01IS18037A). The research of L.R. has been partially funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1114 \u201cScaling Cascades in Complex Systems\u201d (project A05, project number 235221301). M.O. has been partially funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1294 \u201cData Assimilation\u201d (project number 318763901). Impact statement The goal of this work is to advance the theoretical understanding of generative modeling based on stochastic processes, eventually leading to improvements in applications as well. While there are potential societal consequences of our work in principle, we do not see any concrete issues and thus believe that we do not specifically need to highlight any."
19
+ }
title_10K/test_title_short_2405.03606v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03606v1",
3
+ "title": "Strang Splitting for Parametric Inference in Second-order Stochastic Differential Equations",
4
+ "abstract": "We address parameter estimation in second-order stochastic differential\nequations (SDEs), prevalent in physics, biology, and ecology. Second-order SDE\nis converted to a first-order system by introducing an auxiliary velocity\nvariable raising two main challenges. First, the system is hypoelliptic since\nthe noise affects only the velocity, making the Euler-Maruyama estimator\nill-conditioned. To overcome that, we propose an estimator based on the Strang\nsplitting scheme. Second, since the velocity is rarely observed we adjust the\nestimator for partial observations. We present four estimators for complete and\npartial observations, using full likelihood or only velocity marginal\nlikelihood. These estimators are intuitive, easy to implement, and\ncomputationally fast, and we prove their consistency and asymptotic normality.\nOur analysis demonstrates that using full likelihood with complete observations\nreduces the asymptotic variance of the diffusion estimator. With partial\nobservations, the asymptotic variance increases due to information loss but\nremains unaffected by the likelihood choice. However, a numerical study on the\nKramers oscillator reveals that using marginal likelihood for partial\nobservations yields less biased estimators. We apply our approach to\npaleoclimate data from the Greenland ice core and fit it to the Kramers\noscillator model, capturing transitions between metastable states reflecting\nobserved climatic conditions during glacial eras.",
5
+ "authors": "Predrag Pilipovic, Adeline Samson, Susanne Ditlevsen",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "stat.ME",
9
+ "cats": [
10
+ "stat.ME",
11
+ "math.ST",
12
+ "stat.TH"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "Strang Splitting for Parametric Inference in Second-order Stochastic Differential Equations",
17
+ "main_content": "Introduction Second-order stochastic differential equations (SDEs) are an effective instrument for modeling complex systems showcasing both deterministic and stochastic dynamics, which incorporate the second derivative of a variable the acceleration. These models are extensively applied in many fields, including physics (Rosenblum and Pikovsky, 2003), molecular dynamics (Leimkuhler and Matthews, 2015), ecology (Johnson et al., 2008; Michelot and Blackwell, 2019), paleoclimate research (Ditlevsen et al., 2002), and neuroscience (Ziv et al., 1994; Jansen and Rit, 1995). arXiv:2405.03606v1 [stat.ME] 6 May 2024 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT The general form of a second-order SDE in Langevin form is given as follows: \u00a8 Xt = F(Xt, \u02d9 Xt, \u03b2) + \u03a3\u03bet. (1) Here, Xt \u2208Rd denotes the variable of interest, the dot indicates derivative with respect to time t, drift F represents the deterministic force, and \u03bet is a white noise representing the system\u2019s random perturbations around the deterministic force. We assume that \u03a3 is constant, that is the noise is additive. The main goal of this study is to estimate parameters in second-order SDEs. We first reformulate the d-dimensional second-order SDE (1) into a 2d-dimensional SDE in It\u00f4\u2019s form. We define an auxiliary velocity variable, and express the second-order SDE in terms of its position Xt and velocity Vt: dXt = Vt dt, X0 = x0, dVt = F (Xt, Vt; \u03b2) dt + \u03a3 dWt, V0 = v0, (2) where Wt is a standard Wiener process. We refer to Xt and Vt as the smooth and rough coordinates, respectively. A specific example of model (2) is F(x, v) = \u2212c(x, v)v \u2212\u2207U(x), for some function c(\u00b7) and potential U(\u00b7). Then, model (2) is called a stochastic damping Hamiltonian system. This system describes the motion of a particle subjected to potential, dissipative, and random forces (Wu, 2001). An example of a stochastic damping Hamiltonian system is the Kramers oscillator introduced in Section 2.1. Let Yt = (X\u22a4 t , V\u22a4 t )\u22a4, e F(x, v; \u03b2) = (v\u22a4, F(x, v; \u03b2)\u22a4)\u22a4and e \u03a3 = (0\u22a4, \u03a3\u22a4)\u22a4. Then (2) is formulated as dYt = e F (Yt; \u03b2) dt + e \u03a3 dWt, Y0 = y0. (3) The notation e over an object indicates that it is associated with process Yt. Specifically, the object is of dimension 2d or 2d \u00d7 2d. When it exists, the unique solution of (3) is called a diffusion or diffusion process. System (3) is usually not fully observed since the velocity Vt is not observable. Thus, our primary objective is to estimate the underlying drift parameter \u03b2 and the diffusion parameter \u03a3, based on discrete observations of either Yt (referred to as complete observation case), or only Xt (referred to as partial observation case). Diffusion Yt is said to be hypoelliptic since the matrix e \u03a3e \u03a3\u22a4= \u00140 0 0 \u03a3\u03a3\u22a4 \u0015 (4) is not of full rank, while Yt admits a smooth density. Thus, (2) is a subclass of a larger class of hypoelliptic diffusions. Parametric estimation for hypoelliptic diffusions is an active area of research. Ditlevsen and S\u00f8rensen (2004) studied discretely observed integrated diffusion processes. They proposed to use prediction-based estimating functions, which are suitable for non-Markovian processes and which do not require access to the unobserved component. 
They proved consistency and asymptotic normality of the estimators for N \u2192\u221e, but without any requirements on the sampling interval h. Certain moment conditions are needed to obtain results for fixed h, which are often difficult to fulfill for nonlinear drift functions. The estimator was applied to paleoclimate data in Ditlevsen et al. (2002), similar to the data we analyze in Section 5. Gloter (2006) also focused on parametric estimation for discretely observed integrated diffusion processes, introducing a contrast function using the Euler-Maruyama discretization. He studied the asymptotic properties as the sampling interval h \u21920 and the sample size N \u2192\u221e, under the so-called rapidly increasing experimental design Nh \u2192\u221e and Nh2 \u21920. To address the ill-conditioned contrast from the Euler-Maruyama discretization, he suggested using only the rough equations of the SDE. He proposed to recover the unobserved integrated component through the finite difference approximation (Xtk+1 \u2212Xtk)/h. This approximation makes the estimator biased and requires a correction factor of 3/2 in one of the terms of the contrast function for partial observations. Consequently, the correction increases the asymptotic variance of the estimator of the diffusion parameter. Samson and Thieullen (2012) expanded the ideas of (Gloter, 2006) and proved the results of (Gloter, 2006) in more general models. Similar to (Gloter, 2006), their focus was on contrasts using the Euler-Maruyama discretization limited to only the rough equations. Pokern et al. (2009) proposed an It\u00f4-Taylor expansion, adding a noise term of order h3/2 to the smooth component in the numerical scheme. They argued against the use of finite differences for approximating unobserved components. Instead, he suggested using the It\u00f4-Taylor expansion leading to non-degenerate conditionally Gaussian approximations of the transition density and using Markov Chain Monte Carlo (MCMC) Gibbs samplers for conditionally imputing missing components based on the observations. They found out that this approach resulted in a biased estimator of the drift parameter of the rough component. 2 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Ditlevsen and Samson (2019) focused on both filtering and inference methods for complete and partial observations. They proposed a contrast estimator based on the strong order 1.5 scheme (Kloeden and Platen, 1992), which incorporates noise of order h3/2 into the smooth component, similar to (Pokern et al., 2009). Moreover, they retained terms of order h2 in the mean, which removed the bias in the drift parameters noted in (Pokern et al., 2009). They proved consistency and asymptotic normality under complete observations, with the standard rapidly increasing experimental design Nh \u2192\u221eand Nh2 \u21920. They adopted an unconventional approach by using two separate contrast functions, resulting in marginal asymptotic results rather than a joint central limit theorem. The model was limited to a scalar smooth component and a diagonal diffusion coefficient matrix for the rough component. Melnykova (2020) developed a contrast estimator using local linearization (LL) (Ozaki, 1985; Shoji and Ozaki, 1998; Ozaki et al., 2000) and compared it to the least-squares estimator. 
She employed local linearization of the drift function, providing a non-degenerate conditional Gaussian discretization scheme, enabling the construction of a contrast estimator that achieves asymptotic normality under the standard conditions Nh \u2192\u221eand Nh2 \u21920. She proved a joint central limit theorem, bypassing the need for two separate contrasts as in Ditlevsen and Samson (2019). The models in Ditlevsen and Samson (2019) and Melnykova (2020) allow for parameters in the smooth component of the drift, in contrast to models based on second-order differential equations. Recent work by Gloter and Yoshida (2020, 2021) introduced adaptive and non-adaptive methods in hypoelliptic diffusion models, proving asymptotic normality in the complete observation regime. In line with this work, we briefly review their non-adaptive estimator. It is based on a higher-order It\u00f4-Taylor expansion that introduces additional Gaussian noise onto the smooth coordinates, accompanied by an appropriate higher-order mean approximation of the rough coordinates. The resulting estimator was later termed the local Gaussian (LG), which should be differentiated from LL. The LG estimator can be viewed as an extension of the estimator proposed in Ditlevsen and Samson (2019), with fewer restrictions on the class of models. Gloter and Yoshida (2020, 2021) found that using the full SDE to create a contrast reduces the asymptotic variance of the estimator of the diffusion parameter compared to methods using only rough coordinates in the case of complete observations. The most recent contributions are Iguchi et al. (2023a,b); Iguchi and Beskos (2023), building on the foundation of the LG estimator and focusing on high-frequency regimes addressing limitations in earlier methods. Iguchi et al. (2023b) presented a new closed-form contrast estimator for hypoelliptic SDEs (denoted as Hypo-I) based on Edgeworth-type density expansion and Malliavin calculus that achieves asymptotic normality under the less restrictive condition of Nh3 \u21920. Iguchi et al. (2023a) focused on a highly degenerate class of SDEs (denoted as Hypo-II) where smooth coordinates split into further sub-groups and proposed estimators for both complete and partial observation settings. Iguchi and Beskos (2023) further refined the conditions for estimators asymptotic normality for both Hypo-I and Hypo-II under a weak design Nhp \u21920, for p \u22652. The existing methods are generally based on approximations with varying degrees of refinements to correct for possible nonlinearities. This implies that they quickly degrade for highly nonlinear models if the step size is increased. In particular, this is the case for Hamiltonian systems. Instead, we propose to use splitting schemes, more precisely the Strang splitting scheme. Splitting schemes are established techniques initially developed for solving ordinary differential equations (ODEs) and have proven to be effective also for SDEs (Ableidinger et al., 2017; Buckwar et al., 2022; Pilipovic et al., 2024). These schemes yield accurate results in many practical applications since they incorporate nonlinearities in their construction. This makes them particularly suitable for second-order SDEs, where they have been widely used. 
Early work in dissipative particle dynamics (Shardlow, 2003; Serrano et al., 2006), applications to molecular dynamics (Vanden-Eijnden and Ciccotti, 2006; Melchionna, 2007; Leimkuhler and Matthews, 2015) and studies on internal particles (Pavliotis et al., 2009) all highlight the scheme\u2019s versatility. Burrage et al. (2007), Bou-Rabee and Owhadi (2010), and Abdulle et al. (2015) focused on the long-run statistical properties such as invariant measures. Bou-Rabee (2017); Br\u00e9hier and Gouden\u00e8ge (2019) and Adams et al. (2022) used splitting schemes for stochastic partial differential equations (SPDEs). Despite the extensive use of splitting schemes in different areas, statistical applications have been lacking. We have recently proposed statistical estimators for elliptic SDEs (Pilipovic et al., 2024). The straightforward and intuitive schemes lead to robust, easy-to-implement estimators, offering an advantage over more numerically intensive and less user-friendly state-of-the-art methods. We use the Strang splitting scheme to approximate the transition density between two consecutive observations and derive the pseudo-likelihood function since the exact likelihood function is often unknown or intractable. Then, to estimate parameters, we employ maximum likelihood estimation (MLE). However, two specific statistical problems arise due to hypoellipticity and partial observations. First, hypoellipticity leads to degenerate Euler-Maruyama transition schemes, which can be addressed by constructing the pseudo-likelihood solely from the rough equations of the SDE, referred to as the rough likelihood hereafter. The 3 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Strang splitting technique enables the estimator to incorporate both smooth and rough components (referred to as the full likelihood). It is also possible to construct Strang splitting estimators using only the rough likelihood, raising the question of which estimator performs better. Our results are in line with Gloter and Yoshida (2020, 2021) in the complete observation setting, where we find that using the full likelihood reduces the asymptotic variance of the diffusion estimator. We found the same results in the simulation study for the LL estimator proposed by Melnykova (2020). Second, we suggest to treat the unobserved velocity by approximating it using finite difference methods. While Gloter (2006) and Samson and Thieullen (2012) exclusively use forward differences, we investigate also central and backward differences. The forward difference approach leads to a biased estimator unless it is corrected. One of the main contributions of this work is finding suitable corrections of the pseudo-likelihoods for different finite difference approximations such that the Strang estimators are asymptotically unbiased. This also ensures consistency of the diffusion parameter estimator, at the cost of increasing its asymptotic variance. When only partial observations are available, we explore the impact of using the full likelihood versus the rough likelihood and how different finite differentiation approximations influence the parametric inference. We find that the choice of likelihood does not affect the asymptotic variance of the estimator. However, our simulation study on the Kramers oscillator suggests that using the full likelihood in finite sample setups introduce more bias than using only the rough marginal likelihood, which is the opposite of the complete observation setting. 
Finally, we analyze a paleoclimate ice core dataset from Greenland using a second-order SDE. The main contributions of this paper are: 1. We extend the Strang splitting estimator of (Pilipovic et al., 2024) to hypoelliptic models given by second-order SDEs, including appropriate correction factors to obtain consistency. 2. When complete observations are available, we show that the asymptotic variance of the estimator of the diffusion parameter is smaller when maximizing the full likelihood. In contrast, for partial observations, we show that the asymptotic variance remains unchanged regardless of using the full or marginal likelihood of the rough coordinates. 3. We discuss the influence on the statistical properties of using the forward difference approximation for imputing the unobserved velocity variables compared to using the backward or the central difference. 4. We evaluate the performance of the estimators through a simulation study of a second-order SDE, the Kramers oscillator. Additionally, we show numerically in a finite sample study that the marginal likelihood for partial observations is more favorable than the full likelihood. 5. We fit the Kramers oscillator to a paleoclimate ice core dataset from Greenland and estimate the average time needed to pass between two metastable states. The structure of the paper is as follows. In Section 2, we introduce the class of SDE models, define hypoellipticity, introduce the Kramers oscillator, and explain the Strang splitting scheme and its associated estimators. The asymptotic properties of the estimator are established in Section 3. The theoretical results are illustrated in a simulation study on the Kramers Oscillator in Section 4. Section 5 illustrates our methodology on the Greenland ice core data, while the technical results and the proofs of the main theorems and properties are in Section 6 and Supplementary Material S1, respectively. Notation. We use capital bold letters for random vectors, vector-valued functions, and matrices, while lowercase bold letters denote deterministic vectors. \u2225\u00b7 \u2225denotes both the L2 vector norm in Rd. Superscript (i) on a vector denotes the i-th component, while on a matrix it denotes the i-th column. Double subscript ij on a matrix denotes the component in the i-th row and j-th column. The transpose is denoted by \u22a4. Operator Tr(\u00b7) returns the trace of a matrix and det(\u00b7) the determinant. Id denotes the d-dimensional identity matrix, while 0d\u00d7d is a d-dimensional zero square matrix. We denote by [ai]d i=1 a vector with coordinates ai, and by [bij]d i,j=1 a matrix with coordinates bij, for i, j = 1, . . . , d. For a real-valued function g : Rd \u2192R, \u2202x(i)g(x) denotes the partial derivative with respect to x(i) and \u22022 x(i)x(j)g(x) denotes the second partial derivative with respect to x(i) and x(j). The nabla operator \u2207x denotes the gradient vector of g with respect of x, that is, \u2207xg(x) = [\u2202x(i)g(x)]d i=1. H denotes the Hessian matrix of function g, Hg(x) = [\u2202x(i)x(j)g(x)]d i,j=1. For a vector-valued function F : Rd \u2192Rd, the differential operator Dx denotes the Jacobian matrix DxF(x) = [\u2202x(i)F (j)(x)]d i,j=1. Let R represent a vector (or a matrix) valued function defined on (0, 1) \u00d7 Rd (or (0, 1) \u00d7 Rd\u00d7d), such that, for some constant C, \u2225R(a, x)\u2225< aC(1 + \u2225x\u2225)C for all a, x. When denoted by R, it refers to a scalar function. For an open set A, the bar A indicates closure. 
We write P \u2212 \u2192for convergence in probability P. 4 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2 Problem setup Let Y = (Yt)t\u22650 in (3) be defined on a complete probability space (\u2126, F, P\u03b8) with a complete right-continuous filtration F = (Ft)t\u22650, and let the d-dimensional Wiener process W = (Wt)t\u22650 be adapted to Ft. The probability measure P\u03b8 is parameterized by the parameter \u03b8 = (\u03b2, \u03a3). Rewrite equation (3) as follows: dYt = e A(\u03b2)(Yt \u2212e b(\u03b2)) dt + e N (Yt; \u03b2) dt + e \u03a3 dWt, (5) where e A(\u03b2) = \u0014 0d\u00d7d Id Ax(\u03b2) Av(\u03b2) \u0015 , e b(\u03b2) = \u0014 b(\u03b2) 0d \u0015 , e N(x, v; \u03b2) = \u0014 0d N(x, v; \u03b2) \u0015 . (6) Function F in (2) is thus split as F(x, v; \u03b2) = Ax(\u03b2)(x \u2212b(\u03b2)) + Av(\u03b2)v + N(x, v; \u03b2). Let \u0398\u03b2 \u00d7 \u0398\u03a3 = \u0398 denote the closure of the parameter space with \u0398\u03b2 and \u0398\u03a3 being two convex open bounded subsets of Rr and Rd\u00d7d, respectively. The function N : R2d \u00d7 \u0398\u03b2 \u2192Rd is assumed locally Lipschitz; functions Ax and Av are defined on \u0398\u03b2 and take values in Rd\u00d7d; and the parameter matrix \u03a3 takes values in Rd\u00d7d. The matrix \u03a3\u03a3\u22a4is assumed to be positive definite, shaping the variance of the rough coordinates. As any square root of \u03a3\u03a3\u22a4induces the same distribution, \u03a3 is identifiable only up to equivalence classes. Hence, estimation of the parameter \u03a3 means estimation of \u03a3\u03a3\u22a4. The drift function e F in (3) is divided into a linear part given by the matrix e A and a nonlinear part given by e N. The true value of the parameter is denoted by \u03b80 = (\u03b20, \u03a30), and we assume that \u03b80 \u2208\u0398. When referring to the true parameters, we write Ax,0, Av,0, b0, N0(x), F0(x) and \u03a3\u03a3\u22a4 0 instead of Ax(\u03b20), Av(\u03b20), b(\u03b20), N(x; \u03b20), F(x; \u03b20) and \u03a30\u03a3\u22a4 0 , respectively. We write Ax, Av, b, N(x), F(x), and \u03a3\u03a3\u22a4for any parameter \u03b8. 2.1 Example: The Kramers oscillator The abrupt temperature changes during the ice ages, known as the Dansgaard\u2013Oeschger (DO) events, are essential elements for understanding the climate (Dansgaard et al., 1993). These events occurred during the last glacial era spanning approximately the period from 115,000 to 12,000 years before present and are characterized by rapid warming phases followed by gradual cooling periods, revealing colder (stadial) and warmer (interstadial) climate states (Rasmussen et al., 2014). To analyze the DO events in Section 5, we propose a stochastic model of the escape dynamics in metastable systems, the Kramers oscillator (Kramers, 1940), originally formulated to model the escape rate of Brownian particles from potential wells. The escape rate is related to the mean first passage time \u2014 the time needed for a particle to exceed the potential\u2019s local maximum for the first time, starting at a neighboring local minimum. This rate depends on variables such as the damping coefficient, noise intensity, temperature, and specific potential features, including the barrier\u2019s height and curvature at the minima and maxima. We apply this framework to quantify the rate of climate transitions between stadial and interstadial periods. 
This provides an estimate on the probability distribution of the ocurrence of DO events, contributing to our understanding of the global climate system. Following Arnold and Imkeller (2000), we introduce the Kramers oscillator as the stochastic Duffing oscillator an example of a second-order SDE and a stochastic damping Hamiltonian system. The Duffing oscillator (Duffing, 1918) is a forced nonlinear oscillator, featuring a cubic stiffness term. The governing equation is given by: \u00a8 xt + \u03b7 \u02d9 xt + d dxU(xt) = f(t), where U(x) = \u2212ax2 2 + bx4 4 , with a, b > 0, \u03b7 \u22650. (7) The parameter \u03b7 in (7) indicates the damping level, a regulates the linear stiffness, and b determines the nonlinear component of the restoring force. In the special case where b = 0, the equation simplifies to a damped harmonic oscillator. Function f represents the driving force and is usually set to f(t) = \u03b7 cos(\u03c9t), which introduces deterministic chaos (Korsch and Jodl, 1999). When the driving force is f(t) = \u221a2\u03b7T\u03be(t), where \u03be(t) is white noise, equation (7) characterizes the stochastic movement of a particle within a bistable potential well, interpreting T > 0 as the temperature of a heat bath. Setting \u03c3 = \u221a2\u03b7T, equation (7) can be reformulated as an It\u00f4 SDE for variables Xt and Vt = \u02d9 Xt, expressed as: dXt = Vt dt, dVt = \u0012 \u2212\u03b7Vt \u2212d dxU(Xt) \u0013 dt + \u03c3 dWt, (8) 5 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT where Wt denotes a standard Wiener process. The parameter set of SDE (8) is \u03b8 = {\u03b7, a, b, \u03c32}. The existence and uniqueness of the invariant measure \u03bd0(dx, dy) of (8) is proved in Theorem 3 in (Arnold and Imkeller, 2000). The invariant measure \u03bd0 is linked to the invariant density \u03c00 through \u03bd0(dx, dy) = \u03c00(x, v) dx dy. Here we write \u03c00(x, v) instead of \u03c0(x, v; \u03b80), and \u03c0(x, v) instead of \u03c0(x, v; \u03b8). The Fokker-Plank equation for \u03c0 is given by \u2212v \u2202 \u2202x\u03c0(x, v) + \u03b7\u03c0(x, v) + \u03b7v \u2202 \u2202v \u03c0(x, v) + d dxU(x) \u2202 \u2202v \u03c0(x, v) + \u03c32 2 \u22022 \u2202v2 \u03c0(x, v) = 0. (9) The invariant density that solves the Fokker-Plank equation is: \u03c0(x, v) = C exp \u0012 \u22122\u03b7 \u03c32 U(x) \u0013 exp \u0010 \u2212\u03b7 \u03c32 v2\u0011 , (10) where C is the normalizing constant. The marginal invariant probability of Vt is thus Gaussian with zero mean and variance \u03c32/(2\u03b7). The marginal invariant probability of Xt is bimodal driven by the potential U(x): \u03c0(x) = C exp \u0012 \u22122\u03b7 \u03c32 U(x) \u0013 . (11) At steady state, for a particle moving in any potential U(x) and driven by random Gaussian noise, the position x and velocity v are independent of each other. This is reflected by the decomposition of the joint density \u03c0(x, v) into \u03c0(x)\u03c0(v). Fokker-Plank equation (9) can also be used to derive the mean first passage time \u03c4 which is inversely related to Kramers\u2019 escape rate \u03ba (Kramers, 1940): \u03c4 = 1 \u03ba \u2248 2\u03c0 \u0012q 1 + \u03b72 4\u03c92 \u2212 \u03b7 2\u03c9 \u0013 \u2126 exp \u0012\u2206U T \u0013 , where xbarrier = 0 is the local maximum of U(x) and xwell = \u00b1 p a/b are the local minima, \u03c9 = p |U \u2032\u2032(xbarrier)| = \u221aa, \u2126= p U \u2032\u2032(xwell) = \u221a 2a, and \u2206U = U(xbarrier) \u2212U(xwell) = a2/4b, . 
The formula is derived assuming strong friction, or an over-damped system (\u03b7 \u226b\u03c9), and a small parameter T/\u2206U \u226a1, indicating sufficiently deep potential wells. For the potential defined in (7), the mean waiting time \u03c4 is then approximated by \u03c4 \u2248 \u221a 2\u03c0 q a + \u03b72 4 \u2212\u03b7 2 exp \u0012 a2\u03b7 2b\u03c32 \u0013 . (12) 2.2 Hypoellipticity The SDE (5) is said to be hypoelliptic if its quadratic diffusion matrix e \u03a3e \u03a3\u22a4is not of full rank, while its solutions admit a smooth transition density with respect to the Lebesgue measure. According to H\u00f6rmander\u2019s theorem (Nualart, 2006), this is fulfilled if the SDE in its Stratonovich form satisfies the weak H\u00f6rmander condition. Since \u03a3 does not depend on y, the It\u00f4 and Stratonovich forms coincide. We begin by recalling the concept of Lie brackets: for smooth vector fields f, g : R2d \u2192R2d, the i-th component of the Lie bracket, [f, g](i), is defined as [f, g](i) := D\u22a4 y g(i)(y)f(y) \u2212D\u22a4 y f (i)(y)g(y). We define the set H of vector fields by initially including e \u03a3(i), i = 1, 2, ..., 2d, and then recursively adding Lie brackets H \u2208H \u21d2[e F, H], [e \u03a3(1), H], . . . , [e \u03a3(2d), H] \u2208H. The weak H\u00f6rmander condition is met if the vectors in H span R2d at every point y \u2208R2d. The initial vectors span {(0, v) \u2208R2d | v \u2208Rd}, a d-dimensional subspace. We therefore need to verify the existence of some H \u2208H with a non-zero first element. The first iteration of the system yields [e F, e \u03a3(i)](1) = \u2212\u03a3(i), [e \u03a3(i), e \u03a3(j)](1) = 0, for i, j = 1, 2, ..., 2d. The first equation is non-zero, as are all subsequent iterations. Thus, the second-order SDE defined in (5) is always hypoelliptic. 6 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.3 Assumptions The following assumptions are a generalization of those presented in (Pilipovic et al., 2024). Let T > 0 be the length of the observed time interval. We assume that (5) has a unique strong solution Y = {Yt | t \u2208[0, T]}, adapted to F = {Ft | t \u2208[0, T]}, which follows from the following first two assumptions (Theorem 2 in Alyushina (1988), Theorem 1 in Krylov (1991), Theorem 3.5 in Mao (2007)). We need the last three assumptions to prove the properties of the estimators. (A1) Function N is twice continuously differentiable with respect to both y and \u03b8, i.e., N \u2208C2. Moreover, it is globally one-sided Lipschitz continuous with respect to y on R2d \u00d7 \u0398\u03b2. That is, there exists a constant C > 0 such that for all y1, y2 \u2208R2d, (y1 \u2212y2)\u22a4(N(y1; \u03b2) \u2212N(y2; \u03b2)) \u2264C\u2225y1 \u2212y2\u22252. (A2) Function N exhibits at most polynomial growth in y, uniformly in \u03b8. Specifically, there exist constants C > 0 and \u03c7 \u22651 such that for all y1, y2 \u2208R2d, \u2225N (y1; \u03b2) \u2212N (y2; \u03b2) \u22252 \u2264C \u00001 + \u2225y1\u22252\u03c7\u22122 + \u2225y2\u22252\u03c7\u22122\u0001 \u2225y1 \u2212y2\u22252. Additionally, its derivatives exhibit polynomial growth in y, uniformly in \u03b8. (A3) The solution Y to SDE (5) has invariant probability \u03bd0(dy). (A4) \u03a3\u03a3\u22a4is invertible on \u0398\u03a3. (A5) \u03b2 is identifiable, that is, if F(y, \u03b21) = F(y, \u03b22) for all y \u2208R2d, then \u03b21 = \u03b22. 
Assumption (A1) ensures finiteness of the moments of the solution X (Tretyakov and Zhang, 2013), i.e., E[ sup t\u2208[0,T ] \u2225Yt\u22252p] < C(1 + \u2225y0\u22252p), \u2200p \u22651. (13) Assumption (A3) is necessary for the ergodic theorem to ensure convergence in distribution. Assumption (A4) ensures that the model (5) is hypoelliptic. Assumption (A5) ensures the identifiability of the drift parameter. 2.4 Strang splitting scheme Consider the following splitting of (5): dY[1] t = e A(Y[1] t \u2212e b) dt + e \u03a3 dWt, Y[1] 0 = y0, (14) dY[2] t = e N(Y[2] t ) dt, Y[2] 0 = y0. (15) There are no assumptions on the choice of e A and e b, and thus the nonlinear function e N. Indeed, we show that the asymptotic results hold for any choice of e A and e b in both the complete and the partial observation settings. This extends the results in Pilipovic et al. (2024), where it is shown to hold in the elliptic complete observation case, as well. While asymptotic results are invariant to the choice of e A and e b, finite sample properties of the scheme and the corresponding estimators are very different, and it is important to choose the splitting wisely. Intuitively, when the process is close to a fixed point of the drift, the linear dynamics are dominating, whereas far from the fixed points, the nonlinearities might be dominating. If the drift has a fixed point y\u22c6, we therefore suggest setting e A = Dye F(y\u22c6) and e b = y\u22c6. This choice is confirmed in simulations (for more details see Pilipovic et al. (2024)). Solution of SDE (14) is an Ornstein\u2013Uhlenbeck (OU) process given by the following h-flow: Y[1] tk = \u03a6[1] h (Y[1] tk\u22121) = e \u00b5h(Y[1] tk\u22121; \u03b2) + e \u03b5h,k, (16) e \u00b5h(y; \u03b2) := e e Ah(y \u2212e b) + e b, (17) e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du, (18) where e \u03b5h,k i.i.d \u223cN2d(0, e \u2126h) for k = 1, . . . , N. It is useful to rewrite e \u2126h in the following block matrix form, e \u2126h = \" \u2126[SS] h \u2126[SR] h \u2126[RS] h \u2126[RR] h # , (19) 7 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT where S in the superscript stands for smooth and R stands for rough. The Schur complement of e \u2126h with respect to \u2126[RR] h and the determinant of e \u2126h are given by: \u2126[S|R] h := \u2126[SS] h \u2212\u2126[SR] h (\u2126[RR] h )\u22121\u2126[RS] h , det e \u2126h = det \u2126[RR] h det \u2126[S|R] h . Assumptions (A1)-(A2) ensure the existence and uniqueness of the solution of (15) (Theorem 1.2.17 in Humphries and Stuart (2002)). Thus, there exists a unique function e fh : R2d \u00d7 \u0398\u03b2 \u2192R2d, for h \u22650, such that Y[2] tk = \u03a6[2] h (Y[2] tk\u22121) = e fh(Y[2] tk\u22121; \u03b2). (20) For all \u03b2 \u2208\u0398\u03b2, the h-flow e fh fulfills the following semi-group properties: e f0(y; \u03b2) = y, e ft+s(y; \u03b2) = e ft( e fs(y; \u03b2); \u03b2), t, s \u22650. For y = (x\u22a4, v\u22a4)\u22a4, we have: e fh(x, v; \u03b2) = \u0014 x fh(x, v; \u03b2) \u0015 , (21) where fh(x, v; \u03b2) is the solution of the ODE with vector field N(x, v; \u03b2). We introduce another assumption needed to define the pseudo-likelihood based on the splitting scheme. (A6) Inverse function e f \u22121 h (y; \u03b2) is defined asymptotically for all y \u2208R2d and all \u03b2 \u2208\u0398\u03b2, when h \u21920. 
Then, the inverse of \u02dc fh can be decomposed as: e f \u22121 h (x, v; \u03b2) = \u0014 x f \u22c6\u22121 h (x, v; \u03b2) \u0015 , (22) where f \u22c6\u22121 h (x, v; \u03b2) is the rough part of the inverse of e f \u22121 h . It does not equal f \u22121 h since the inverse does not propagate through coordinates when fh depends on x. We are now ready to define the Strang splitting scheme for model (5). Definition 2.1 (Strang splitting) Let Assumptions (A1)-(A2) hold. The Strang approximation of the solution of (5) is given by: \u03a6[str] h (Y[str] tk\u22121) = (\u03a6[2] h/2 \u25e6\u03a6[1] h \u25e6\u03a6[2] h/2)(Y[str] tk\u22121) = e fh/2(e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k). (23) Remark 1 The order of composition in the splitting schemes is not unique. Changing the order in the Strang splitting leads to a sum of 2 independent random variables, one Gaussian and one non-Gaussian, whose likelihood is not trivial. Thus, we only use the splitting (23). 2.5 Strang splitting estimators In this section, we introduce four estimators, all based on the Strang splitting scheme. We distinguish between estimators based on complete observations (denoted by C when both X and V are observed) and partial observations (denoted by P when only X is observed). In applications, we typically only have access to partial observations, however, the full observation estimator is used as a building block for the partial observation case. Additionally, we distinguish the estimators based on the type of likelihood function employed. These are the full likelihood (denoted by F) and the marginal likelihood of the rough component (denoted by R). We furthermore use the conditional likelihood based on the smooth component given the rough part (denoted by S | R) to decompose the full likelihood. 2.5.1 Complete observations Assume we observe the complete sample Y0:tN := (Ytk)N k=1 from (5) at time steps 0 = t0 < t1 < ... < tN = T. For notational simplicity, we assume equidistant step size h = tk \u2212tk\u22121. Strang splitting scheme (23) is a nonlinear transformation of a Gaussian random variable e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k. We define: e Zk,k\u22121(\u03b2) := e f \u22121 h/2(Ytk; \u03b2) \u2212e \u00b5h( e fh/2(Ytk\u22121; \u03b2); \u03b2), (24) 8 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT and apply change of variables to get: p(ytk | ytk\u22121) = pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121)| det Dy e f \u22121 h/2(ytk)|. Using \u2212log | det Dy e f \u22121 h/2 (y; \u03b2) | = log | det Dy e fh/2 (y; \u03b2) | and det Dy e fh/2 (y; \u03b2) = det Dvfh/2 (y; \u03b2), together with the Markov property of Y0:tN , we get the following objective function based on the full log-likelihood: L[CF](Y0:tN ; \u03b8) := N X k=1 \u0010 log det e \u2126h(\u03b8) + e Zk,k\u22121(\u03b2)\u22a4e \u2126h(\u03b8)\u22121e Zk,k\u22121(\u03b2) + 2 log | det Dvfh/2(Ytk; \u03b2)| \u0011 . (25) Now, split e Zk,k\u22121 from (24) into the smooth and rough parts e Zk,k\u22121 = ((Z[S] k,k\u22121)\u22a4, (Z[R] k,k\u22121)\u22a4)\u22a4defined as: Z[S] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]d i=1 = Xtk \u2212\u00b5[S] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (26) Z[R] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]2d i=d+1 = f \u22c6\u22121 h/2 (Ytk; \u03b2) \u2212\u00b5[R] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (27) where \u00b5[S] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]d i=1, \u00b5[R] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]2d i=d+1. 
(28) We also define the following sequence of vectors Z[S|R] k,k\u22121(\u03b2) := Z[S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z[R] k,k\u22121(\u03b2). (29) The formula for jointly normal distributions yields: pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121) = pN (0,\u2126[RR] h )(z[R] k,k\u22121 | ytk\u22121) \u00b7 pN (\u2126[SR] h (\u2126[RR] h )\u22121z[R] k,k\u22121,\u2126[S|R] h )(z[S] k,k\u22121 | z[R] k,k\u22121, ytk\u22121). This leads to dividing the full log-likelihood L[CF] into a sum of the marginal log-likelihood L[CR](Y0:tN ; \u03b8) and the smooth-given-rough log-likelihood L[CS|R](Y0:tN ; \u03b8): L[CF](Y0:tN ; \u03b8) = L[CR](Y0:tN ; \u03b8) + L[CS|R](Y0:tN ; \u03b8), where L[CR] (Y0:tN ; \u03b8) := N X k=1 log det \u2126[RR] h (\u03b8) + Z[R] k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z[R] k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 (Ytk; \u03b2) \f \f ! , (30) L[CS|R] (Y0:tN ; \u03b8) := N X k=1 \u0010 log det \u2126[S|R] h (\u03b8) + Z[S|R] k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z[S|R] k,k\u22121(\u03b2) \u0011 . (31) The terms containing the drift parameter in L[CR] in (30) are of order h1/2, as in the elliptic case, whereas the terms containing the drift parameter in L[CS|R] in (31) are of order h3/2. Consequently, under a rapidly increasing experimental design where Nh \u2192\u221eand Nh2 \u21920, the objective function (31) is degenerate for estimating the drift parameter. However, it contributes to the estimation of the diffusion parameter when the full objective function (25) is used. We show in later sections that employing (25) results in a lower asymptotic variance for the diffusion parameter making it more efficient in complete observation scenarios. The estimators based on complete observations are then defined as: \u02c6 \u03b8[obj] N := arg min \u03b8 L[obj] (Y0:tN ; \u03b8) , obj \u2208{[CF], [CR]}. (32) Although the full objective function is based on twice as many equations as the marginal likelihood, its implementation complexity, speed, and memory requirements are similar to the marginal objective function. Therefore, if the complete observations are available, we recommend using the objective function (25) based on the full likelihood. 9 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.5.2 Partial observations Assume we only observe the smooth coordinates X0:tN := (Xtk)N k=0. The observed process Xt alone is not a Markov process, although the complete process Yt is. To approximate Vtk, we define the backward difference process: \u2206hXtk := Xtk \u2212Xtk\u22121 h . (33) From SDE (2) it follows that \u2206hXtk = 1 h Z tk tk\u22121 Vt dt. (34) We propose to approximate Vtk using \u2206hXtk by any of the three approaches: 1. Backward difference approximation: Vtk \u2248\u2206hXtk; 2. Forward difference approximation: Vtk \u2248\u2206hXtk+1; 3. Central difference approximation: Vtk \u2248 \u2206hXtk +\u2206hXtk+1 2 . The forward difference approximation performs best in our simulation study, which is also the approximation method employed in Gloter (2006) and Samson and Thieullen (2012). In the field of numerical approximations of ODEs, backward and forward finite differences have the same order of convergence, whereas the central difference has a higher convergence rate. However, the diffusion parameter estimator based on the central difference (Xtk+1 \u2212Xtk\u22121)/2h is less suitable because this approximation skips a data point and thus increases the estimator\u2019s variance. 
For further discussion, see Remark 6. Thus, we focus exclusively on forward differences, following Gloter (2006); Samson and Thieullen (2012), and all proofs are done for this approximation. Similar results also hold for the backward difference, with some adjustments needed in the conditional moments due to filtration issues. We start by approximating e Z for the case of partial observations denoted by e Z: e Zk+1,k,k\u22121(\u03b2) := e f \u22121 h/2(Xtk, \u2206hXtk+1; \u03b2) \u2212e \u00b5h( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2). (35) The smooth and rough parts of e Z are thus equal to: Z [S] k,k\u22121(\u03b2) := Xtk \u2212\u00b5[S] h ( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2), (36) Z [R] k+1,k,k\u22121(\u03b2) := f \u22c6\u22121 h/2 (Xtk, \u2206hXtk+1; \u03b2) \u2212\u00b5[R] h ( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2), (37) and Z [S|R] k+1,k,k\u22121(\u03b2) := Z [S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z [R] k+1,k,k\u22121(\u03b2). (38) Compared to Z[R] k,k\u22121 in (27), Z [R] k+1,k,k\u22121 in (37) depends on three consecutive data points, with the additional point Xtk+1 entering through \u2206hXtk+1. Furthermore, Xtk enters both f \u22c6\u22121 h/2 and e \u00b5[R] h , rending them coupled. This coupling has a significant influence on later derivations of the estimator\u2019s asymptotic properties, in contrast to the elliptic case where the derivations simplify. While it might seem straightforward to incorporate e Z, Z [S] k,k\u22121 and Z [R] k,k\u22121 into the objective functions (25), (30) and (31), it introduces bias in the estimators of the diffusion parameters, as also discussed in (Gloter, 2006; Samson and Thieullen, 2012). The bias arises because Xtk enters in both f \u22c6\u22121 h/2 and e \u00b5[R] h , and the covariances of e Z, Z [S] k,k\u22121, and Z [R] k,k\u22121 differ from their complete observation counterparts. To eliminate this bias, Gloter (2006); Samson and Thieullen (2012) applied a correction of 2/3 multiplied to log det of the covariance term in the objective functions, which is log det \u03a3\u03a3\u22a4in the Euler-Maruyama discretization. We also need appropriate corrections to our objective functions (25), (30) and (31), however, caution is necessary because log det e \u2126h(\u03b8) depends on both drift and diffusion parameters. To counterbalance this, we also incorporate an adjustment to h in \u2126h. Moreover, we add the term 4 log | det Dvfh/2| to objective function (31) to obtain consistency of the drift estimator under partial observations. The detailed derivation of these correction factors will be elaborated in the following sections. 
10 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT We thus propose the following objective functions: L[PF](X0:tN ; \u03b8) := 4 3(N \u22122) log det e \u21263h/4(\u03b8) (39) + N\u22121 X k=1 \u0010e Zk+1,k,k\u22121(\u03b2)\u22a4e \u2126h(\u03b8)\u22121e Zk+1,k,k\u22121(\u03b2) + 6 log | det Dvfh/2(Xtk, \u2206hXtk+1; \u03b2)| \u0011 , L[PR] (X0:tN ; \u03b8) := 2 3(N \u22122) log det \u2126[RR] 3h/2(\u03b8) (40) + N\u22121 X k=1 \u0010 Z [R] k+1,k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z [R] k+1,k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001\f \f \u0011 , L[PS|R] (X0:tN ; \u03b8) := 2(N \u22122) log det \u2126[S|R] h (\u03b8) (41) + N\u22121 X k=1 \u0010 Z [S|R] k+1,k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z [S|R] k+1,k,k\u22121(\u03b2) + 4 log | det Dvfh/2(Xtk, \u2206hXtk+1; \u03b2)| \u0011 . (42) Remark 2 Due to the correction factors in the objective functions, we now have that L[PF](X0:tN ; \u03b8) \u0338= L[PR](X0:tN ; \u03b8) + L[PS|R](X0:tN ; \u03b8). (43) However, when expanding the objective functions (39)-(41) using Taylor series to the lowest necessary order in h, their approximations will satisfy equality in (43), as shown in Section 6. Remark 3 Adding the extra term 4 log | det Dvfh/2| in (41) is necessary to keep the consistency of the drift parameter. However, this term is not initially present in objective function (31), making this correction somehow artificial. This can potentially make the objective function further from the true log-likelihood. The estimators based on the partial sample are then defined as: \u02c6 \u03b8[obj] N := arg min \u03b8 L[obj] (X0:tN ; \u03b8) , obj \u2208{[PF], [PR]}. (44) In the partial observation case, the asymptotic variances of the diffusion estimators are identical whether using (39) or (40), in contrast to the complete observation scenario. This variance is shown to be 9/4 times higher than the variance of the estimator \u02c6 \u03b8[CF] N , and 9/8 times higher than that of the estimator based on the marginal likelihood \u02c6 \u03b8[CR] N . The numerical study in Section 4 shows that the estimator based on the marginal objective function (40) is less biased than the one based on the full objective function (39) in finite sample scenarios with partial observations. A potential reason for this is discussed in Remark 3. Therefore, we recommend using the objective function (40) for partial observations. 3 Main results This section states the two main results \u2013 consistency and asymptotic normality of all four proposed estimators. The key ideas for proofs are presented in Supplementary Materials S1. First, we state the consistency of the estimators in both complete and partial observation cases. Let L[obj] N be one of the objective functions (25), (30), (39) or (40) and b \u03b8[obj] N the corresponding estimator. Thus, obj \u2208{[CF], [CR], [PF], [PR]}. We use superscript [C\u00b7] to refer to any objective function in the complete observation case. Likewise, [\u00b7R] stands for an objective function based on the rough marginal likelihood either in the complete or the partial observation case. Theorem 3.1 (Consistency of the estimators) Assume (A1)-(A6), h \u21920, and Nh \u2192\u221e. Then under the complete observation or partial observation case, it holds: b \u03b2[obj] N P\u03b80 \u2212 \u2212 \u2192\u03b20, d \u03a3\u03a3 [obj] N P\u03b80 \u2212 \u2212 \u2192\u03a3\u03a3\u22a4 0 . 
11 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Remark 4 We split the full objective function (25) into the sum of the rough marginal likelihood (30) and the conditional smooth-given-rough likelihood (31). Even if (31) cannot identify the drift parameter \u03b2, it is an important intermediate step in understanding the full objective function (25). This can be seen in the proof of Theorem 3.1, where we first establish consistency of the diffusion estimator with a convergence rate of \u221a N, which is faster than \u221a Nh, the convergence rate of the drift estimators. Then, under complete observations, we show that 1 Nh(L[CR] N (\u03b2, \u03c30) \u2212L[CR] N (\u03b20, \u03c30)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 Z (F0(y) \u2212F(y))\u22a4(\u03a3\u03a3\u22a4)\u22121(F0(y) \u2212F(y)) d\u03bd0(y). (45) The right-hand side of (45) is non-negative, with a unique zero for F = F0. Conversely, for objective function (31), it holds: 1 Nh(L[CS|R] N (\u03b2, \u03c3) \u2212L[CS|R] N (\u03b20, \u03c3)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0. (46) Hence, (46) does not have a unique minimum, making the drift parameter unidentifiable. Similar conclusions are drawn in the partial observation case. Now, we state the asymptotic normality of the estimator. First, we need some preliminaries. Let \u03c1 > 0 and B\u03c1 (\u03b80) = {\u03b8 \u2208\u0398 | \u2225\u03b8 \u2212\u03b80\u2225\u2264\u03c1} be a ball around \u03b80. Since \u03b80 \u2208\u0398, for sufficiently small \u03c1 > 0, B\u03c1(\u03b80) \u2208\u0398. For \u02c6 \u03b8[obj] N \u2208B\u03c1 (\u03b80), the mean value theorem yields: \u0012Z 1 0 HL[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt \u0013 (\u02c6 \u03b8[obj] N \u2212\u03b80) = \u2212\u2207\u03b8L[obj] N (\u03b80) . (47) Define: C[obj] N (\u03b8) := \uf8ee \uf8ef \uf8f0 h 1 Nh\u22022 \u03b2(i1)\u03b2(i2)L[obj] N (\u03b8) ir i1,i2=1 h 1 N \u221a h\u22022 \u03b2(i)\u03c3(j)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u221a h\u22022 \u03c3(j)\u03b2(i)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u22022 \u03c3(j1)\u03c3(j2)L[obj] N (\u03b8) is j1,j2=1 \uf8f9 \uf8fa \uf8fb, (48) s[obj] N := \"\u221a Nh( \u02c6 \u03b2[obj] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[obj] N \u2212\u03c30) # , \u03bb[obj] N := \uf8ee \uf8ef \uf8f0 \u2212 1 \u221a Nh \u2207\u03b2L[obj] N (\u03b80) \u22121 \u221a N \u2207\u03c3L[obj] N (\u03b80) \uf8f9 \uf8fa \uf8fb, (49) and D[obj] N := R 1 0 C[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt. Then, (47) is equivalent to D[obj] N s[obj] N = \u03bb[obj] N . Let: [C\u03b2(\u03b80)]i1,i2 := Z (\u2202\u03b2(i1)F0(y))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03b2(i2)F0(y)) d\u03bd0(y), 1 \u2264i1, i2 \u2264r, (50) [C\u03c3(\u03b80)]j1,j2 := Tr((\u2202\u03c3(j1)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03c3(j2)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121), 1 \u2264j1, j2 \u2264s. (51) Theorem 3.2 Let assumptions (A1)-(A6) hold, and let h \u21920, Nh \u2192\u221e, and Nh2 \u21920. 
Then under complete observations, it holds: \"\u221a Nh( \u02c6 \u03b2[CR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[CF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. If only partial observations are available and the unobserved coordinates are approximated using the forward or backward differences, then \"\u221a Nh( \u02c6 \u03b2[PR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[PF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. 12 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Here, we only outline the proof. According to Theorem 1 in Kessler (1997) or Theorem 1 in S\u00f8rensen and Uchida (2003), Lemmas 3.3 and 3.4 below are enough for establishing asymptotic normality of \u02c6 \u03b8N. For more details, see proof of Theorem 1 in S\u00f8rensen and Uchida (2003). Lemma 3.3 Let CN(\u03b80) be defined in (48). For h \u21920 and Nh \u2192\u221e, it holds: C[CR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015 , C[PR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2 3C\u03c3(\u03b80) \u0015 , C[CF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015 , C[PF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 8 3C\u03c3(\u03b80) \u0015 . Moreover, let \u03c1N be a sequence such that \u03c1N \u21920, then in all cases it holds: sup \u2225\u03b8\u2225\u2264\u03c1N \u2225C[obj] N (\u03b80 + \u03b8) \u2212C[obj] N (\u03b80)\u2225 P\u03b80 \u2212 \u2212 \u21920. Lemma 3.4 Let \u03bbN be defined (49). For h \u21920, Nh \u2192\u221eand Nh2 \u21920, it holds: \u03bb[CR] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015\u0013 , \u03bb[PR] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015\u0013 , \u03bb[CF] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 4C\u03c3(\u03b80) \u0015\u0013 , \u03bb[PF] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 16C\u03c3(\u03b80) \u0015\u0013 , under P\u03b80. Now, the two previous lemmas suggest s[obj] N = (D[obj] n )\u22121\u03bb[obj] N d \u2212 \u2192C[obj] N (\u03b80)\u22121\u03bb[obj] N . The previous line is not completely formal, but it gives the intuition. For more details on formally deriving the result, see Section 7.4 in Pilipovic et al. (2024) or proof of Theorem 1 in S\u00f8rensen and Uchida (2003). 4 Simulation study This Section illustrates the simulation study of the Kramers oscillator (8), demonstrating the theoretical aspects and comparing our proposed estimators against estimators based on the EM and LL approximations. 
We chose to compare our proposed estimators to these two, because the EM estimator is routinely used in applications, and the LL estimator has shown to be one of the best state-of-the-art methods, see Pilipovic et al. (2024) for the elliptic case. The true parameters are set to \u03b70 = 6.5, a0 = 1, b0 = 0.6 and \u03c32 0 = 0.1. We outline the estimators specifically designed for the Kramers oscillator, explain the simulation procedure, describe the optimization implemented in the R programming language R Core Team (2022), and then present and interpret the results. 4.1 Estimators used in the study For the Kramers oscillator (8), the EM transition distribution is: \u0014 Xtk Vtk \u0015 | \u0014 Xtk\u22121 Vtk\u22121 \u0015 = \u0014 x v \u0015 \u223cN \u0012\u0014 x + hv v + h \u0000\u2212\u03b7v + ax \u2212bx3\u0001 \u0015 , \u0014 0 0 0 h\u03c32 \u0015\u0013 . The ill-conditioned variance of this discretization restricts us to an estimator that only uses the marginal likelihood of the rough coordinate. The estimator for complete observations directly follows from the Gaussian distribution. The estimator for partial observations is defined as (Samson and Thieullen, 2012): b \u03b8[PR] EM = arg min \u03b8 ( 2 3(N \u22123) log \u03c32 + 1 h\u03c32 N\u22122 X k=1 (\u2206hXtk+1 \u2212\u2206hXtk \u2212h(\u2212\u03b7\u2206hXtk\u22121 + aXtk\u22121 \u2212bX3 tk\u22121))2 ) . To our knowledge, the LL estimator has not previously been applied to partial observations. Given the similar theoretical and computational performance of the Strang and LL discretizations, we suggest (without formal proof) to adjust the LL objective functions with the same correction factors as used in the Strang approach. The numerical evidence indicates 13 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT that the LL estimator has the same asymptotic properties as those proved for the Strang estimator. We omit the definition of the LL estimator due to its complexity (see Melnykova (2020); Pilipovic et al. (2024) and accompanying code). To define S estimators based on the Strang splitting scheme, we first split SDE (8) as follows: d \u0014 Xt Vt \u0015 = \u0014 0 1 \u22122a \u2212\u03b7 \u0015 | {z } A \u0014 Xt Vt \u0015 \u2212 \u0014 x\u22c6 \u00b1 0 \u0015 | {z } b ! dt + \u0014 0 aXt \u2212bX3 t + 2a(Xt \u2212x\u22c6 \u00b1) \u0015 | {z } N(Xt,Vt) dt + \u0014 0 \u03c3 \u0015 dWt, where x\u22c6 \u00b1 = \u00b1 p a/b are the two stable points of the dynamics. Since there are two stable points, we suggest splitting with x\u22c6 +, when Xt > 0, and x\u22c6 \u2212, when Xt < 0. This splitting follows the guidelines from (Pilipovic et al., 2024). Note that the nonlinear ODE driven by N(x, v) has a trivial solution where x is a constant. To obtain Strang estimators, we plug in the corresponding components in the objective functions (25), (30), (39) and (40). 4.2 Trajectory simulation We simulate a sample path using the EM discretization with a step size of hsim = 0.0001 to ensure good performance. To reduce discretization errors, we sub-sample from the path at wider intervals to get time step h = 0.1. The path has N = 5000 data points. We repeat the simulations to obtain 250 data sets. 4.3 Optimization in R For optimizing the objective functions, we proceed as in Pilipovic et al. (2024) using the R package torch (Falbel and Luraschi, 2022), which allows automatic differentiation. The optimization employs the resilient backpropagation algorithm, optim_rprop. 
We use the default hyperparameters and limit the number of optimization iterations to 2000. The convergence criterion is set to a precision of 10\u22125 for the difference between estimators in consecutive iterations. The initial parameter values are set to (\u22120.1, \u22120.1, 0.1, 0.1). 4.4 Results The results of the simulation study are presented in Figure 1. Figure 1A) presents the distributions of the normalized estimators in the complete and partial observation cases. The S and LL estimators exhibit nearly identical performance, particularly in the complete observation scenario. In contrast, the EM method displays significant underperformance and notable bias. The variances of the S and LL rough-likelihood estimators of \u03c32 are higher compared to those derived from the full likelihood, aligning with theoretical expectations. Interestingly, in the partial observation scenario, Figure 1A) reveals that estimators employing the full likelihood display greater finite sample bias compared to those based on the rough likelihood. Possible reasons for this bias are discussed in Remark 3. However, it is noteworthy that this bias is eliminated for smaller time steps, e.g. h = 0.0001 (not shown), thus confirming the theoretical asymptotic results. This observation suggests that the rough likelihood is preferable under partial observations due to its lower bias. Backward finite difference approximations of the velocity variables perform similarly to the forward differences and are therefore excluded from the figure for clarity. We closely examine the variances of the S estimators of \u03c32 in Figure 1B). The LL estimators are omitted due to their similarity to the S estimators, and because the computation times for the LL estimators are prohibitive. To align more closely with the asymptotic predictions, we opt for h = 0.02 and conduct 1000 simulations. Additionally, we set \u03c32 0 = 100 to test different noise levels. Atop each empirical distribution, we overlay theoretical normal densities that match the variances as per Theorem 3.2. The theoretical variance is derived from C\u03c32(\u03b80) in (51), which for the Kramers oscillator in (8) is: C\u03c32(\u03b80) = 1 \u03c34 0 . (52) Figure 1 illustrates that the lowest variance of the diffusion estimator is observed when using the full likelihood with complete observations. The second lowest variance is achieved using the rough likelihood with complete observations. The largest variance is observed in the partial observation case; however, it remains independent of whether the full or rough likelihood is used. Once again, we observe that using the full likelihood introduces additional finite sample bias. In Figure 1C), we compare running times calculated using the tictoc package in R. Running times are measured from the start of the optimization step until convergence. The figure depicts the median over 250 repetitions to mitigate the influence of outliers. The EM method is notably the fastest; however, the S estimators exhibit only slightly slower performance. The LL estimators are 10-100 times slower than the S estimators, depending on whether complete or partial observations are used and whether the full or rough likelihood is employed. 14 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Figure 1: Parameter estimates in a simulation study for the Kramers oscillator, eq. (8). The color code remains consistent across all three figures. 
A) Normalized distributions of parameter estimation errors (\u02c6 \u03b8N \u2212\u03b80) \u2298\u03b80 in both complete and partial observation cases, based on 250 simulated data sets with h = 0.1 and N = 5000. Each column corresponds to a different parameter, while the color indicates the type of estimator. Estimators are distinguished by superscripted objective functions (F for full and R for rough). B) Distribution of b \u03c32 N estimators based on 1000 simulations with h = 0.02 and N = 5000 across different observation settings (complete or partial) and likelihood choices (full or rough) using the Strang splitting scheme. The true value of \u03c32 is set to \u03c32 0 = 100. Theoretical normal densities are overlaid for comparison. Theoretical variances are calculated based on C\u03c32(\u03b80), eq. (52). C) Median computing time in seconds for one estimation of various estimators based on 250 simulations with h = 0.1 and N = 5000. Shaded color patterns represent times in the partial observation case, while no color pattern indicates times in the complete observation case. 15 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Figure 2: Ice core data from Greenland. Left: Trajectories over time (in kilo years) of the centered negative logarithm of the Ca2+ measurements (top) and forward difference approximations of its rate of change (bottom). The two vertical dark red lines represent the estimated stable equilibria of the double-well potential function. Green points denote upand down-crossings of level \u00b10.6, conditioned on having crossed the other level. Green vertical lines indicate empirical estimates of occupancy in either of the two metastable states. Right: Empirical densities (black) alongside estimated invariant densities with confidence intervals (dark red), prediction intervals (light red), and the empirical density of a simulated sample from the estimated model (blue). 5 Application to Greenland Ice Core Data During the last glacial period, significant climatic shifts known as Dansgaard-Oeschger (DO) events have been documented in paleoclimatic records (Dansgaard et al., 1993). Proxy data from Greenland ice cores, particularly stable water isotope composition (\u03b418O) and calcium ion concentrations (Ca2+), offer valuable insights into these past climate variations (Boers et al., 2017, 2018; Boers, 2018; Ditlevsen et al., 2002; Lohmann and Ditlevsen, 2019; Hassanibesheli et al., 2020). The \u03b418O ratio, reflecting the relative abundance of 18O and 16O isotopes in ice, serves as a proxy for paleotemperatures during snow deposition. Conversely, calcium ions, originating from dust deposition, exhibit a strong negative correlation with \u03b418O, with higher calcium ion levels indicating colder conditions. Here, we prioritize Ca2+ time series due to its finer temporal resolution. In Greenland ice core records, the DO events manifest as abrupt transitions from colder climates (stadials) to approximately 10 degrees warmer climates (interstadials) within a few decades. Although the waiting times between state switches last a couple of thousand years, their spacing exhibits significant variability. The underlying mechanisms driving these changes remain largely elusive, prompting discussions on whether they follow cyclic patterns, result from external forcing, or emerge from noise-induced processes (Boers, 2018; Ditlevsen et al., 2007). We aim to determine if the observed data can be explained by noise-induced transitions of the Kramers oscillator. 
The measurements were conducted at the summit of the Greenland ice sheet as part of the Greenland Icecore Project (GRIP) (Anklin et al., 1993; Andersen et al., 2004). Originally, the data were sampled at 5 cm intervals, resulting in a non-equidistant time series due to ice compression at greater depths, where 5 cm of ice core spans longer time periods. For our analysis, we use a version of the data transformed into a uniformly spaced series through 20-year binning and averaging. This transformation simplifies the analysis and highlights significant climatic trends. The dataset is available in the supplementary material of (Rasmussen et al., 2014; Seierstad et al., 2014). 16 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT To address the large amplitudes and negative correlation with temperature, we transform the data to minus the logarithm of Ca2+, where higher values of the transformed variable indicate warmer climates at the time of snow deposition. Additionally, we center the transformed measurements around zero. With the 20-year binning, to obtain one point per 20 years, we average across the bins, resulting in a time step of h = 0.02kyr (1kyr = 1000 years). Additionally, we addressed a few missing values using the na.approx function from the zoo package. Following the approach of Hassanibesheli et al. (2020), we analyze a subset of the data with a sufficiently good signal-to-noise ratio. Hassanibesheli et al. (2020) examined the data from 30 to 60kyr before present. Here, we extend the analysis to cover 30kyr to 80kyr, resulting in a time interval of T = 50kyr and a sample size of N = 2500. We approximate the velocity of the transformed Ca2+ by the forward difference method. The trajectories and empirical invariant distributions are illustrated in Figure 2. We fit the Kramers oscillator to the \u2212log Ca2+ time series and estimate parameters using the Strang estimator. Following Theorem 3.2, we compute C\u03b2(\u03b80) from (50). Applying the invariant density \u03c00(x, v) from (10), which decouples into \u03c00(x) (11) and a Gaussian zero-mean and \u03c32 0/(2\u03b70) variance, leads us to: C\u03b2(\u03b80) = \uf8ee \uf8ef \uf8ef \uf8f0 1 2\u03b70 0 0 0 1 \u03c32 0 R \u221e \u2212\u221ex2\u03c00(x) dx \u22121 \u03c32 0 R \u221e \u2212\u221ex4\u03c00(x) dx 0 \u22121 \u03c32 0 R \u221e \u2212\u221ex4\u03c00(x) dx 1 \u03c32 0 R \u221e \u2212\u221ex6\u03c00(x) dx \uf8f9 \uf8fa \uf8fa \uf8fb. (53) Thus, to obtain 95% confidence intervals (CI) for the estimated parameters, we plug b \u03b8N into (52) and (53). The estimators and confidence intervals are shown in Table 1. We also calculate the expected waiting time \u03c4, eq. (12), of crossing from one state to another, and its confidence interval using the Delta Method. Parameter Estimate 95% CI \u03b7 62.5 59.4 \u221265.6 a 296.7 293.6 \u2212299.8 b 219.1 156.4 \u2212281.7 \u03c32 9125 8589 \u22129662 \u03c4 3.97 3.00 \u22124.94 Table 1: Estimated parameters of the Kramers oscillator from Greenland ice core data. The model fit is assessed in the right panels of Figure 2. Here, we present the empirical distributions of the two coordinates along with the fitted theoretical invariant distribution and a 95% confidence interval. Additionally, a prediction interval for the distribution is provided by simulating 1000 datasets from the fitted model, matching the size of the empirical data. 
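To make the plug-in construction behind the confidence intervals in Table 1 concrete, a minimal sketch is given below. The code and all names are ours, not the authors' implementation; it takes estimates of \(C_\beta\) and \(C_{\sigma^2}\) from (53) and (52) as inputs, and the scaling constant for the diffusion block must be chosen according to Theorem 3.2, depending on the observation scheme and the likelihood used.

```python
import numpy as np

def plugin_ci(beta_hat, C_beta_hat, sigma2_hat, C_sigma2_hat, N, h,
              c_sigma=9/4, z=1.96):
    """95% plug-in confidence intervals based on the limit laws of Theorem 3.2.

    C_beta_hat   : r x r matrix C_beta(theta_hat), cf. eq. (53)
    C_sigma2_hat : scalar C_sigma2(theta_hat) = 1 / sigma2_hat**2, cf. eq. (52)
    c_sigma      : diffusion-block scaling constant (2, 1 or 9/4 in Theorem 3.2)
    """
    drift_sd = np.sqrt(np.diag(np.linalg.inv(C_beta_hat)) / (N * h))
    sigma_sd = np.sqrt(c_sigma / (N * C_sigma2_hat))
    beta_ci = np.column_stack([beta_hat - z * drift_sd, beta_hat + z * drift_sd])
    sigma_ci = (sigma2_hat - z * sigma_sd, sigma2_hat + z * sigma_sd)
    return beta_ci, sigma_ci
```

For the ice core fit, N = 2500 and h = 0.02 kyr; the interval for the mean waiting time requires the Delta method in addition and is not shown in the sketch.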
We estimate the empirical distributions for each simulated dataset and construct a 95% prediction interval using the pointwise 2.5th and 97.5th percentiles of these estimates. A single example trace is included in blue. While the fitted distribution for \u2212log Ca2+ appears to fit well, even with this symmetric model, the velocity variables are not adequately captured. This discrepancy is likely due to the presence of extreme values in the data that are not effectively accounted for by additive Gaussian noise. Consequently, the model compensates by estimating a large variance. We estimate the waiting time between metastable states to be approximately 4000 years. However, this approximation relies on certain assumptions, namely 62.5 \u2248\u03b7 \u226b\u221aa \u224817.2 and 73 \u2248\u03c32/2\u03b7 \u226aa2/4b \u2248100. Thus, the accuracy of the approximation may not be highly accurate. Defining the current state of the process is not straightforward. One method involves identifying successive upand down-crossings of predefined thresholds within the smoothed data. However, the estimated occupancy time in each state depends on the level of smoothing applied and the distance of crossing thresholds from zero. Using a smoothing technique involving running averages within windows of 11 data points (equivalent to 220 years) and detecting downand up-crossings of levels \u00b10.6, we find an average occupancy time of 4058 years in stadial states and 3550 years in interstadial states. Nevertheless, the actual occupancy times exhibit significant variability, ranging from 60 to 6900 years, with the central 50% of values falling between 665 and 2115 years. This classification of states is depicted in green in Figure 2. Overall, the estimated mean occupancy time inferred from the Kramers oscillator appears reasonable. 6 Technical results In this Section, we present all the necessary technical properties that are used to derive the main results of the paper. 17 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT We start by expanding e \u2126h and its block components \u2126[RR] h (\u03b8)\u22121, \u2126[S|R] h (\u03b8)\u22121, log det \u2126[RR] h (\u03b8), log det \u2126[S|R] h (\u03b8) and log | det Dfh/2 (y; \u03b2) | when h goes to zero. Then, we expand e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) around Ytk\u22121 when h goes to zero. The main tools used are It\u00f4\u2019s lemma, Taylor expansions, and Fubini\u2019s theorem. The final result is stated in Propositions 6.6 and 6.7. The approximations depend on the drift function F, the nonlinear part N, and some correlated sequences of Gaussian random variables. Finally, we obtain approximations of the objective functions (25), (30), (31) and (39) (41). Proofs of all the stated propositions and lemmas in this section are in Supplementary Material S1. 6.1 Covariance matrix e \u2126h The covariance matrix e \u2126h is approximated by: e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du = he \u03a3e \u03a3\u22a4+ h2 2 ( e Ae \u03a3e \u03a3\u22a4+ e \u03a3e \u03a3\u22a4e A\u22a4) + h3 6 ( e A2 e \u03a3e \u03a3\u22a4+ 2 e Ae \u03a3e \u03a3\u22a4e A\u22a4+ e \u03a3e \u03a3\u22a4( e A2)\u22a4) + h4 24( e A3 e \u03a3e \u03a3\u22a4+ 3 e A2 e \u03a3e \u03a3\u22a4e A\u22a4+ 3 e Ae \u03a3e \u03a3\u22a4( e A2)\u22a4+ e \u03a3e \u03a3\u22a4( e A3)\u22a4) + R(h5, y0). (54) The following lemma approximates each block of e \u2126h up to the first two leading orders of h. 
The result follows directly from equations (4), (6), and (54). Lemma 6.1 The covariance matrix e \u2126h defined in (54)-(19) approximates block-wise as: \u2126[SS] h (\u03b8) = h3 3 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0), \u2126[SR] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (Av(\u03b2)\u03a3\u03a3\u22a4+ 2\u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RS] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (2Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RR] h (\u03b8) = h\u03a3\u03a3\u22a4+ h2 2 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h3, y0). Building on Lemma 6.1, we calculate products, inverses, and logarithms of the components of e \u2126h in the following lemma. Lemma 6.2 For the covariance matrix e \u2126h defined in (54) it holds: (i) \u2126[RR] h (\u03b8)\u22121 = 1 h(\u03a3\u03a3\u22a4)\u22121 \u22121 2((\u03a3\u03a3\u22a4)\u22121Av(\u03b2) + Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h, y0); (ii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121 = h 2 I \u2212h2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h3, y0); (iii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121\u2126[RS] h (\u03b8) = h3 4 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0); (iv) \u2126[S|R] h (\u03b8) = h3 12 \u03a3\u03a3\u22a4+ R(h5, y0); (v) log det \u2126[RR] h (\u03b8) = d log h + log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0); (vi) log det \u2126[S|R] h (\u03b8) = 3d log h + log det \u03a3\u03a3\u22a4+ R(h2, y0); (vii) log det e \u2126h(\u03b8) = 4d log h + 2 log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0). Remark 5 We adjusted the objective functions for partial observations using the term c log det \u2126[\u00b7] h/c, where c is a correction constant. This adjustment keeps the term h Tr Av(\u03b2) in (v)-(vii) constant, not affecting the asymptotic distribution of the drift parameter. There is no h4-term in \u2126[S|R] h (\u03b8) which simplifies the approximation of \u2126[S|R] h (\u03b8)\u22121 and log det \u2126[S|R] h (\u03b8). Consequently, this makes (41) a bad choice for estimating the drift parameter. 18 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.2 Nonlinear solution e fh We now state a useful proposition for the nonlinear solution e fh (Section 1.8 in (Hairer et al., 1993)). Proposition 6.3 Let Assumptions (A1), (A2) and (A6) hold. When h \u21920, the h-flow of (15) approximates as: e fh(y) = y + h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y), (55) e f \u22121 h (y) = y \u2212h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y). (56) Applying the previous proposition on (21) and (22), we get: fh(y) = v + hN(y) + h2 2 (DvN(y))N(y) + R(h3, y), (57) f \u22c6\u22121 h (y) = v \u2212hN(y) + h2 2 (DvN(y))N(y) + R(h3, y). (58) The following lemma approximates log | det Dfh/2 (y; \u03b2) | in the objective functions and connects it with Lemma 6.2. Lemma 6.4 Let e fh be the function defined in (21). It holds: 2 log | det Dfh/2 (Ytk; \u03b2) | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121), 2 log | det Dfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001 | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). 
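Returning briefly to Lemma 6.1, its leading-order block expansion is easy to verify numerically. The snippet below is ours, for a scalar smooth/rough pair with a linear drift matrix of the form [[0, 1], [a_x, a_v]] and noise acting only on the rough coordinate; the values of a_x, a_v, sigma and h are arbitrary test choices.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check (ours) of the block expansion in Lemma 6.1 for d = 1.
a_x, a_v, sig, h = -2.0, -6.5, np.sqrt(0.1), 0.01
A = np.array([[0.0, 1.0], [a_x, a_v]])
SS = np.array([[0.0, 0.0], [0.0, sig**2]])   # Sigma Sigma^T embedded in the rough block

# Riemann-sum approximation of eq. (54): Omega_h = int_0^h e^{A(h-u)} SS e^{A^T(h-u)} du
u = np.linspace(0.0, h, 2001)
du = u[1] - u[0]
Omega = sum(expm(A * (h - ui)) @ SS @ expm(A.T * (h - ui)) for ui in u) * du

print(Omega[0, 0], h**3 / 3 * sig**2 + h**4 / 4 * a_v * sig**2)   # [SS] block, Lemma 6.1
print(Omega[1, 1], h * sig**2 + h**2 * a_v * sig**2)              # [RR] block, Lemma 6.1
```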
An immediate consequence of the previous lemma and that DvF(y; \u03b2) = Av(\u03b2) + DvN(y; \u03b2) is log det \u2126[RR] h (\u03b8) + 2 log | det Dfh/2 (Ytk; \u03b2) | = log det h\u03a3\u03a3\u22a4+ h Tr DvF(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). The same equality holds when Ytk is approximated by (Xtk, \u2206hXtk+1). The following lemma expands function \u00b5h( e fh/2(y)) up to the highest necessary order of h. Lemma 6.5 For the functions e fh in (21) and e \u00b5h in (28), it holds \u00b5[S] h ( e fh/2(y)) = x + hv + h2 2 F(y) + R(h3, y), (59) \u00b5[R] h ( e fh/2(y)) = v + h(F(y) \u22121 2N(y)) + R(h2, y). (60) 6.3 Random variables e Zk,k\u22121 and e Zk+1,k,k\u22121 To approximate the random variables Z[S] k,k\u22121(\u03b2), Z[R] k,k\u22121(\u03b2), Z [S] k,k\u22121(\u03b2), and Z [R] k+1,k,k\u22121(\u03b2) around Ytk\u22121, we start by defining the following random sequences: \u03b7k\u22121 := 1 h1/2 Z tk tk\u22121 dWt, (61) \u03bek\u22121 := 1 h3/2 Z tk tk\u22121 (t \u2212tk\u22121) dWt, \u03be\u2032 k := 1 h3/2 Z tk+1 tk (tk+1 \u2212t) dWt, (62) \u03b6k\u22121 := 1 h5/2 Z tk tk\u22121 (t \u2212tk\u22121)2 dWt, \u03b6\u2032 k := 1 h5/2 Z tk+1 tk (tk+1 \u2212t)2 dWt. (63) The random variables (61)-(63) are Gaussian with mean zero. Moreover, at time tk they are Ftk+1 measurable and independent of Ftk. The following linear combinations of (61)-(63) appear in the expansions in the partial observation case: Uk,k\u22121 := \u03be\u2032 k + \u03bek\u22121, (64) Qk,k\u22121 := \u03b6\u2032 k + 2\u03b7k\u22121 \u2212\u03b6k\u22121. (65) 19 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT It is not hard to check that \u03be\u2032 k + \u03b7k\u22121 \u2212\u03be\u2032 k\u22121 = Uk,k\u22121. This alternative representation of Uk,k\u22121 will be used later in proofs. The It\u00f4 isometry yields: E\u03b80[\u03b7k\u22121\u03b7\u22a4 k\u22121 | Ftk\u22121] = I, E\u03b80[\u03b7k\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03b7k\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 2I, (66) E\u03b80[\u03bek\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 6I, E\u03b80[\u03bek\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03be\u2032 k\u03be\u2032\u22a4 k | Ftk\u22121] = 1 3I, (67) E\u03b80[Uk,k\u22121U\u22a4 k,k\u22121 | Ftk\u22121] = 2 3I, E\u03b80[Uk,k\u22121(Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4| Ftk\u22121] = I. (68) The covariances of other combinations of the random variables (61)-(63) are not needed for the proofs. However, to derive asymptotic properties, we need some fourth moments calculated in Supplementary Materials S1. The following two propositions are the last building blocks for approximating the objective functions (30)-(31) and (40)-(41). 
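Before stating them, the covariance identities (66)-(68), which drive the correction factors used throughout, can be checked with a short Monte Carlo experiment. The one-dimensional sketch below is ours; the step size h, the grid size m and the number of replications are arbitrary choices.

```python
import numpy as np

# Monte Carlo check (ours, d = 1) of the covariance identities (66)-(68).
rng = np.random.default_rng(1)
h, m, reps = 1.0, 200, 20_000
dt = h / m
t = np.arange(m) * dt                            # left endpoints within one step

dW1 = rng.normal(0.0, np.sqrt(dt), (reps, m))    # increments on [t_{k-1}, t_k]
dW2 = rng.normal(0.0, np.sqrt(dt), (reps, m))    # increments on [t_k, t_{k+1}]

eta = dW1.sum(1) / h**0.5                        # eta_{k-1}, eq. (61)
xi = (t * dW1).sum(1) / h**1.5                   # xi_{k-1}, eq. (62)
xip = ((h - t) * dW2).sum(1) / h**1.5            # xi'_k, eq. (62)
U = xip + xi                                     # U_{k,k-1}, eq. (64)

print(np.mean(eta * xi), 0.5)                    # eq. (66)
print(np.mean(xi**2), 1 / 3)                     # eq. (67)
print(np.mean(U**2), 2 / 3)                      # eq. (68)
```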
Proposition 6.6 The random variables e Zk,k\u22121(\u03b2) in (24) and e Zk+1,k,k\u22121(\u03b2) in (35) are approximated by: Z[S] k,k\u22121(\u03b2) = h3/2\u03a30\u03be\u2032 k\u22121 + h2 2 (F0(Ytk\u22121) \u2212F(Ytk\u22121)) + h5/2 2 DvF0(Ytk\u22121)\u03a30\u03b6\u2032 k\u22121 + R(h3, Ytk\u22121), Z[R] k,k\u22121(\u03b2) = h1/2\u03a30\u03b7k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 + h3/2DvF0(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h2, Ytk\u22121), Z [S] k,k\u22121(\u03b2) = \u2212h2 2 F(Ytk\u22121) \u2212h5/2 2 DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h3, Ytk\u22121), Z [R] k+1,k,k\u22121(\u03b2) = h1/2\u03a30Uk,k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h3/2DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + h3/2 2 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h2, Ytk\u22121). Remark 6 Proposition 6.6 yield E\u03b80[Z[R] k,k\u22121(\u03b2)Z[R] k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = \u2126[RR] h + R(h2, Ytk\u22121), E\u03b80[Z [R] k+1,k,k\u22121(\u03b2)Z [R] k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 2 3h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = 2 3\u2126[RR] h + R(h2, Ytk\u22121). Thus, the correction factor 2/3 in (40) compensates for the underestimation of the covariance of Z [R] k+1,k,k\u22121(\u03b2). Similarly, it can be shown that the same underestimation happens when using the backward difference. On the other hand, when using the central difference, it can be shown that E\u03b80[Z [R],central k+1,k,k\u22121(\u03b2)Z [R],central k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 5 12h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121), which is a larger deviation from \u2126[RR] h , yielding a larger correcting factor and larger asymptotic variance of the diffusion parameter estimator. Proposition 6.7 Let e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) be defined in (24) and (35), respectively. Then, Z[S|R] k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 \u2212h5/2 2 DvF0(Ytk\u22121)\u03a30(\u03be\u2032 k\u22121 \u2212\u03b6\u2032 k\u22121) + R(h3, Ytk\u22121), Z [S|R] k+1,k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30Uk,k\u22121 \u2212h2 2 F0(Ytk\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30Uk,k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h5/2 4 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h3, Ytk\u22121). 20 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.4 Objective functions Starting with the complete observation case, we approximate objective functions (30) and (31) up to order R(h3/2, Ytk\u22121) to prove the asymptotic properties of the estimators \u02c6 \u03b8[CR] N and \u02c6 \u03b8[CS|R] N . 
After omitting the terms of order R(h, Ytk\u22121) that do not depend on \u03b2, we obtain the following approximations: L[CR] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 (69) + 2 \u221a h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 + h N X k=1 Tr DvF(Ytk; \u03b2), L[CS|R] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ 3 N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) (70) \u22123h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121DvN(Ytk\u22121; \u03b2)\u03a30\u03b7k\u22121 \u2212h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 L[CF] N (Y0:tN ; \u03b8) = L[CR] N (Y0:tN ; \u03b8) + L[CS|R] N (Y0:tN ; \u03b8) . (71) The two last sums in (70) converge to zero because E\u03b80[(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u03b7\u22a4 k\u22121|Ftk\u22121] = 0. Moreover, (70) lacks the quadratic form of F(Ytk\u22121) \u2212F0(Ytk\u22121), that is crucial for the asymptotic variance of the drift estimator. This implies that the objective function L[CS|R] N is not suitable for estimating the drift parameter. Conversely, (70) provides a correct and consistent estimator of the diffusion parameter, indicating that the full objective function (the sum of L[CR] N and L[CS|R] N ) consistently estimates \u03b8. Similarly, the approximated objective functions in the partial observation case are: L[PR] N (Y0:tN ; \u03b8) = 2 3(N \u22122) log det \u03a3\u03a3\u22a4+ N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (72) + 2 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N\u22121 X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N\u22121 X k=1 (Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + h N\u22121 X k=1 Tr DvF(Ytk; \u03b2), L[PS|R] N (Y0:tN ; \u03b8) = 2(N \u22122) log det \u03a3\u03a3\u22a4+ 3 N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (73) + 6 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121F(Ytk\u22121; \u03b20) \u22123h N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 DvN(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + 2h N\u22121 X k=1 Tr DvN(Ytk; \u03b2), 21 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT L[PF] N (Y0:tN ; \u03b8) = L[PR] N (Y0:tN ; \u03b8) + L[PS|R] N (Y0:tN ; \u03b8) . 
(74) This time, the term with Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121 vanishes because Tr(\u03a30Uk,k\u22121U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)) = 0 due to the symmetry of the matrices and the trace cyclic property. Even though the partial observation objective function L[PR] (X0:tN ; \u03b8) (40) depends only on X0:tN , we could approximate it with L[PR] N (Y0:tN ; \u03b8) (72). This is useful for proving the asymptotic normality of the estimator since its asymptotic distribution will depend on the invariant probability \u03bd0 defined for the solution Y. The absence of the quadratic form F(Ytk\u22121) \u2212F0(Ytk\u22121) in (73) indicates that L[PS|R] N is not suitable for estimating the drift parameter. Additionally, the penultimate term in (73) does not vanish, needing an additional correction term of 2h PN\u22121 k=1 Tr DvN(Ytk; \u03b2) for consistency. This correction is represented as 4 log | det Dvfh/2| in (41). Notably, this term is absent in the complete objective function (31), making this adjustment somewhat artificial and could potentially deviate further from the true log-likelihood. Consequently, the objective function based on the full likelihood (39) inherits this characteristic from (73), suggesting that in the partial observation scenario, using only the rough likelihood (72) may be more appropriate. 7 Conclusion Many fundamental laws of physics and chemistry are formulated as second-order differential equations, a model class important for understanding complex dynamical systems in various fields such as biology and economics. The extension of these deterministic models to stochastic second-order differential equations represents a natural generalization, allowing for the incorporation of uncertainties and variability inherent in real-world systems. However, robust statistical methods for analyzing data generated from such stochastic models have been lacking, presenting a significant challenge due to the inherent degeneracy of the noise and partial observation. In this study, we propose estimating model parameters using a recently developed methodology of Strang splitting estimator for SDEs. This estimator has demonstrated finite sample efficiency with relatively large sample time steps, particularly in handling highly nonlinear models. We adjust the estimator to the partial observation setting and employ either the full likelihood or only the marginal likelihood based on the rough coordinates. For all four obtained estimators, we establish the consistency and asymptotic normality. The application of the Strang estimator to a historical paleoclimate dataset obtained from ice cores in Greenland has yielded valuable insights and analytical tools for comprehending abrupt climate shifts throughout history. Specifically, we employed the stochastic Duffing oscillator, also known as the Kramers oscillator, to analyze the data. While our focus in this paper has been primarily confined to second-order SDEs with no parameters in the smooth components, we are confident that our findings can be extended to encompass models featuring parameters in the drift of the smooth coordinates. This opens up directions for further exploration and application of our methodology to a broader range of complex dynamical systems, promising deeper insights into their behavior and underlying mechanisms. 
Acknowledgement This work has received funding from the European Union\u2019s Horizon 2020 research and innovation program under the Marie Sk\u0142odowska-Curie grant agreement No 956107, \"Economic Policy in Complex Environments (EPOC)\"; and Novo Nordisk Foundation NNF20OC0062958."
+ }
title_10K/test_title_short_2405.03690v2.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "url": "http://arxiv.org/abs/2405.03690v2",
+ "title": "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs",
+ "abstract": "Recent advancements in Large Language Models (LLMs) have led to the\ndevelopment of Video Large Multi-modal Models (Video-LMMs) that can handle a\nwide range of video understanding tasks. These models have the potential to be\ndeployed in real-world applications such as robotics, AI assistants, medical\nsurgery, and autonomous vehicles. The widespread adoption of Video-LMMs in our\ndaily lives underscores the importance of ensuring and evaluating their robust\nperformance in mirroring human-like reasoning and interaction capabilities in\ncomplex, real-world contexts. However, existing benchmarks for Video-LMMs\nprimarily focus on general video comprehension abilities and neglect assessing\ntheir reasoning capabilities over complex videos in the real-world context, and\nrobustness of these models through the lens of user prompts as text queries. In\nthis paper, we present the Complex Video Reasoning and Robustness Evaluation\nSuite (CVRR-ES), a novel benchmark that comprehensively assesses the\nperformance of Video-LMMs across 11 diverse real-world video dimensions. We\nevaluate 9 recent models, including both open-source and closed-source\nvariants, and find that most of the Video-LMMs, especially open-source ones,\nstruggle with robustness and reasoning when dealing with complex videos. Based\non our analysis, we develop a training-free Dual-Step Contextual Prompting\n(DSCP) technique to enhance the performance of existing Video-LMMs. Our\nfindings provide valuable insights for building the next generation of\nhuman-centric AI systems with advanced robustness and reasoning capabilities.\nOur dataset and code are publicly available at:\nhttps://mbzuai-oryx.github.io/CVRR-Evaluation-Suite/.",
+ "authors": "Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Jameel Hassan, Muzammal Naseer, Federico Tombari, Fahad Shahbaz Khan, Salman Khan",
+ "published": "2024-05-06",
+ "updated": "2024-05-08",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Multi AND Modal AND LLM",
+ "gt": "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs",
+ "main_content": "Introduction Recently, Large Language Models (LLMs) [Touvron et al., 2023, Zheng et al., 2023, Jiang et al., 2024] have demonstrated impressive reasoning and planning capabilities while simultaneously handling a wide range of NLP tasks [Wei et al., 2022a, Brown et al., 2020]. Consequently, their integration with the vision modality, specifically for video understanding tasks, has given rise to Video Large Multi-modal Models (Video-LMMs) [Li et al., 2023b]. These models act as visual chatbots that accept both text and video as input and handle a diverse set of tasks, including video comprehension [Maaz et al., 2023], detailed video understanding [Lin et al., 2023], and action grounding [Zhang et al., 2023]. As these models directly capture video data, they hold substantial potential for deployment in real-world applications such as robotics, surveillance, medical surgery, and autonomous vehicles. However, as these models assume an expanding role in our everyday lives, assessing their performance in comprehending complex videos and demonstrating reliable reasoning and robustness capabilities arXiv:2405.03690v2 [cs.CV] 8 May 2024 \fBenchmark Textual Complex In the wild Contextual Multiple Temporal Order Robustness Reasoning (OOD) Dependency Actions & Fine-grained MSVD-QA [Xu et al., 2017] MSRVTT-QA [Xu et al., 2017] TGIF-QA [Jang et al., 2017] Activity Net-QA [Yu et al., 2019] VideoChat-GPT [Maaz et al., 2023] MVBench [Li et al., 2023c] SEED-Bench [Li et al., 2023a] CVRR-ES (ours) Table 1: Comparison of CVRR-ES with existing benchmarks for video QA. The CVRR-ES benchmark represents an initial effort to assess Video-LMMs in the context of their applicability and suitability in real-world applications. Non-existent actions with non-existent scene depictions. 6.0% Multiple actions in a single video. 13.25% Fine-grained action understanding. 9.58% Partial actions. 8.58% Non-existent actions with existent scene depictions. 5.75% Interpretation of visual context. 11.38% Continuity and Object Instance Count. 7.38% Unusual and Physically Anomalous activities. 7.92% Interpretation of social context. 11.67% Understanding of emotional context. 12.17% Time order understanding. 6.33% CVRR Evaluation Suite 0 20 40 60 80 100 Accuracy % (averaged over 11 video dimensions) Video LLaVa MovieChat LLaMA-VID Video-LLaMA-2 Video-ChatGPT VideoChat TimeChat Gemini-Pro GPT4V(ision) Human Video LMMs 15.92% 16.41% 16.46% 21.62% 24.96% 25.78% 32.89% 53.2% 70.78% 96.67% Figure 1: Left: CVRR-ES comprises of 11 diverse complex video evaluation dimensions encompassing a variety of complex, real-world contexts. Right: Overall performance of Video-LMMs on the CVRR-ES benchmark. Results for each Video-LMM are averaged across 11 video dimensions. across diverse real-world contexts becomes essential. Video-LMMs with such capabilities will be more effective when integrated into our daily lives for solving perception tasks and will be a promising step towards building human-centric AI-assistive systems. Several attempts in literature have been made to benchmark Video-LMMs. SEED-Bench [Li et al., 2023a] curated a MCQ-based benchmarking dataset including 3 evaluation dimensions for videos. Similarly, MV-Bench [Li et al., 2023c] constructed the Video-LMM benchmark and assembled 20 challenging video tasks for evaluating the spatial and temporal understanding of these models. 
While these methods aim at benchmarking Video-LMMs, they predominantly evaluate video and/or temporal comprehension abilities and overlook the complex reasoning aspects of Video-LMMs for real-world context, and their robustness towards user input text queries; both of which are crucial to ensure their responsible engagement with humans in various real-world situations in the wild. While some studies have explored similar areas such as hallucinations in image-based LLMs [Liu et al., 2023a, Qian et al., 2024], no such comprehensive study exists for the case of Video-LMMs. Motivated by the wide-scale applications of Video-LMMs and the lack of world-centric complex video benchmarking efforts, we present a new benchmark, Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), to comprehensively assess the performance of Video-LMMs. As shown in Tab. 1, CVRR-ES evaluates Video-LMMs on key aspects of robustness and reasoning in videos, encompassing video domains that more accurately test models in real-world scenarios such as videos having contextual dependency and in-the-wild aspects. CVRR-ES is an open-ended video QA benchmark comprising 11 real-world video category dimensions (Fig. 1, left) that encompass diverse evaluation aspects. These dimensions span from context-dependent (e.g., social, emotional, etc.) categories to ones that often take place in the wild such as videos containing physically anomalous activities. We comprehensively evaluate a representative set of 9 recent Video-LMMs (Fig. 1, right) including both open-source and closed-source models on the CVRR-ES benchmark using a LLM-assisted automatic evaluation framework [Maaz et al., 2023, Cai et al., 2023]. The performance of Video-LMMs on the CVRR-ES benchmark reveals that these models struggle to correctly comprehend complex videos indicating their weak reasoning and lack of robustness to the textual user queries (Fig. 2). For instance, state-of-the-art Video-LLaVA [Lin et al., 2023] achieves only 15.92% performance averaged across 11 video dimensions of CVRR-ES. In contrast, closed-source models including GPT4V(vision) [OpenAI, 2023] and Gemini-Vision-Pro [Google, 2023] exhibit relatively stronger performance but still lag behind the performance of humans. Using CVRR-ES benchmark, we extensively perform quantitative and qualitative analysis formulating important insights into these Video-LMMs based on their failure cases and individual performances across the diverse video dimensions. 2 \f Based on the visual cues provided in the sequence of images, it appears that four individuals exit the vehicle. Correct Response: Three people exit the car to retrieve black bags from the trunk. 2. Two individuals exit the vehicle to retrieve items from the trunk. Two individuals exit the vehicle to retrieve items from the trunk. Two individuals exit the vehicle to retrieve items from the trunk. In the video, we can see a young man and woman exiting the car \u2026 How many individuals exit the vehicle to retrieve items from the trunk? Based on \u2026, one person is performing a prank on another individual. The prank seems to involve pouring a yellow liquid from a bowl, which is sneakily positioned behind the unsuspecting person's back at waist level. Correct Response: one man uses his back muscles to crack eggs placed by another man into a bowl behind him. The process is amusing and ends with a celebratory turn by the man holding the eggs. 
\u2026 Another man is holding a bowl of food behind the first man \u2026 the second man appears to be pouring the contents of the bowl down the first man's shorts. A man is seen standing in a kitchen with his hands on his hips. Another man is standing behind him and pouring something into a bowl. a guy is standing with his back facing someone else. The person standing behind him pours water on his back, causing him to shiver. In the video, a man is seen standing in a kitchen and washing dishes. He then puts on a white robe and continues washing dishes. In the video, we see a young man pouring water from a bucket to a plastic bowl. What is happening in the video? Why is the golfer celebrating in the video? Based on \u2026 the golfer is celebrating because they successfully hit the golf ball from a sand bunker onto the green and possibly close to the hole\u2026 Correct Response: The golfer is not celebrating and instead he is expressing his anger and frustration by hitting the golf stick into the ground. The golfer is celebrating because he made a hole-in-one. A hole-in-one is \u2026 The golfer is celebrating because he hit the ball into hole on his first attempt. The golfer is celebrating because he has just hit the ball into the hole. The golfer is celebrating because he has just made a hole in one, \u2026 The golfer is celebrating in the video because he is successfully teeing off and \u2026 Before the mug was moved, a tennis ball became visible underneath it. Correct Response: No other object was visible before the mug was moved in the video. A yellow tennis ball was visible in the scene before the mug was moved. A tennis ball. Before the mug was moved, a tennis ball was visible in the scene. In the beginning of the video, the person was holding a mug with a tennis ball on a table. Before the mug was moved, a white vase was visible in the scene. Before the mug was moved, what other object was visible in the scene? A tennis ball was visible in the scene before the mug was moved. GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Figure 2: We observe that most Video-LMMs struggle to reason over complex videos (rows 1-3) and exhibit weak robustness and rectification capabilities when prompted to generate answers for user questions that can sometimes be confusing (row 4). The QA pairs in Comprehensive Video Reasoning and Robustness Evaluation Suite (CVRR-ES) benchmark assess the performance of Video-LMMs beyond general video comprehension. Based on our analysis, we observe that standard prompting of Video-LMMs struggles in steering their focus for complex video understanding. Additionally, their limitations in reasoning and robust video understanding of real-world scenarios are dominantly driven by the quality of textual inputs (i.e., user questions). Based on these insights, we develop a training-free Dual-Step Contextual Prompting (DSCP) technique, which effectively steers the model\u2019s behavior during inference to elicit video-specific reasoning and improved robustness within Video-LMMs. With DSCP, Video-LMMs show substantial improvements on our benchmark, suggesting the potential of prompting techniques for Video-LMMs. Our main contributions can be summarised as follows: \u2022 We present the Complex Video Robustness and Reasoning Evaluation suite (CVRR-ES), a Video Question Answering benchmark designed to assess the reasoning and robustness capabilities of Video-LMMs across 11 diverse world-centric complex video dimensions. 
\u2022 We comprehensively evaluate both open-source and closed-source Video-LMMs on the CVRR-ES benchmark and find that most models exhibit weak performance, highlighting their limited reasoning in complex videos and lack of robustness towards user text queries. \u2022 We conduct extensive analysis and formulate important conclusions about Video-LMMs based on their failure cases and performance on the CVRR-ES benchmark. Our findings provide valuable insights for building the next generation of human-centric AI systems with improved robustness and reasoning capabilities. \u2022 To improve Video-LMMs\u2019 reasoning and robustness abilities, we formulate a model-agnostic and training-free prompting technique that effectively enhances their performance. 3 \f2 Related Works Video Large Multi-modal models (Video-LMMs). Video-LMMs [Lin et al., 2023, Li et al., 2023d, Zhang et al., 2023] are advanced visual chatbots capable of performing a wide range of video understanding tasks, including video comprehension and captioning, video question-answering, and action grounding. These models accept both video and textual inputs and generate textual responses. From an architectural perspective, Video-LMMs typically combine pre-trained vision backbones [Radford et al., 2021, Fang et al., 2023, Wang et al., 2022b] with large language models [Touvron et al., 2023, Zheng et al., 2023] using connector modules such as MLP adapters, Q-former [Dai et al., 2023], and gated attention [Alayrac et al., 2022]. VideoChat [Li et al., 2023b] and VideoChat-GPT [Li et al., 2023d] presented initial open-source efforts in this direction and were trained with two stages of alignment and video-instruction following objectives. Recently, more advanced Video-LMMs have emerged in the field, with some models focusing on improving model architectures [Li et al., 2023d], expanding to new tasks [Munasinghe et al., 2023], and enabling support for long videos [Song et al., 2023, Ren et al., 2023]. In this work, we aim to develop a comprehensive benchmarking evaluation framework to assess the reasoning and robustness capabilities of Video-LMMs and develop a training-free prompting technique to improve their performance on these fronts. Benchmarking Video-LMMs. With the growing number of Video-LMMs emerging in the research community, several works have presented evaluation frameworks to assess and quantify these models for benchmarking and analysis purposes. SEED-Bench [Li et al., 2023a] evaluates the visual capabilities in both image and Video-LMMs across 12 unique dimensions. MV-Bench [Li et al., 2023c] curates 20 challenging video tasks to evaluate spatial and temporal understanding of VideoLMMs. Video-ChatGPT [Maaz et al., 2023] develops a quantitative evaluation framework to assess model understanding across five aspects of general video comprehension, such as the correctness and consistency of model captions. While these evaluation frameworks provide effective insights, their assessments do not extend beyond general video-comprehension metrics to more advanced aspects of reasoning and robustness, particularly for real-world context cases. In contrast, our work focuses on providing a complex video reasoning and robustness benchmark across 11 diverse real-world-centric evaluation types and offers a more thorough assessment of Video-LMMs in practical applications. Training-free Prompting Techniques. Steering model behavior at inference time using prompting has become a common paradigm in the NLP domain. 
Prompting [Wei et al., 2022b, Wang et al., 2022a] refers to the set of instructions given as a prefix to the language model to better align model responses with human intent without the need for task-specific fine-tuning. Prompting techniques can be as simple as a single sentence (e.g., \"Let\u2019s think step by step\") such as zero-shot chain of thought [Wei et al., 2022b] prompting, to more detailed techniques such as combining chain-ofthought prompting with few-shot learning [Brown et al., 2020] and self-consistency chain of thought prompting [Wang et al., 2022a]. Surprisingly, training-free prompting techniques for Video Large Multi-modal Models (Video-LMMs) have been minimally explored. In this work, we develop a dual-step prompting technique based on principled prompt instructions specifically designed to steer the model\u2019s behavior for improved reasoning and robustness over complex videos. 3 Complex Video Reasoning and Robustness Evaluation Suite As Video-LMMs are touching new real-world applications, it is essential to ensure that they robustly handle the user inputs, comprehend the visual world, and exhibit human-like reasoning capabilities. In this work, our goal is to establish a comprehensive benchmark that specifically assess the robustness and reasoning capabilities of Video-LMMs in a variety of complex and contextual videos covering diverse scenarios. To this end, we present Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES). We first provide a holistic overview of CVRR-ES benchmark below and detail the video evaluation dimensions in Sec. 3.1. Subsequently, we present the CVRR-ES creation process in Sec. 3.2. We provide details on the dataset quality and human evaluation in Appendix B. Overview of CVRR-ES Benchmark. CVRR-ES encompasses evaluation dimensions that cover diverse video categories related to real-world scenarios, ranging from context-dependent (e.g., social, emotional) categories to video types that often take place in the wild (e.g., anomalous activities). Specifically, we have compiled 11 video evaluation dimensions and curated 2,400 high-quality openended question-answer (QA) pairs, spanning 217 high-quality videos. The average video duration is 22.3 seconds, with maximum and minimum durations of 183 and 2 seconds, respectively. In Fig. 4 \fFigure 3: CVRR-ES Benchmark Statistics. Left: Frequency distribution of the type of questions. Right: Illustration of the most frequent keywords in the answer-set of CVRR-ES benchmark. 3 (left), we quantify the distribution of different question types present in our benchmark. This diverse set of questions aims to comprehensively capture the model\u2019s answering capabilities based on reasoning and robustness criteria. We show the word cloud plot based on the frequency of key words in the answer set of CVRR-ES in Fig. 3 (right). The frequent words correspond to objects and attributes with which Video-LMMs could most likely interact when deployed in practical scenarios. 3.1 CVRR-ES Video Category definitions. To assess the robustness and reasoning capabilities of Video-LMMs in the CVRR-ES benchmark, we carefully curate 11 diverse benchmark evaluation categories. As shown in Fig. 1 (left), these categories encompass a wide range of real-world complex and contextual videos within each category. Below, we define each video evaluation dimension of the CVRR-ES benchmark in detail. 1) Multiple actions in a single video. This category includes videos that contain multiple activities within a single video. 
The number of activities varies from 2 to 4 in these videos, mostly featuring humans performing multiple activities. We curate QA pairs in this category aiming to identify whether the model can reason over challenging questions concerning multiple actions and understand the interrelation between different actions within a video. 2) Fine-grained action understanding. We gather video samples with fine-grained actions. These actions encompass various fine-grained activities performed by humans, including pushing, opening, closing, spreading, sitting, etc. This category presents a challenge to the model\u2019s comprehension of subtle and fine-grained actions through carefully crafted questions. 3) Partial actions. Based on our observations that Video-LMMs predominantly generate content that may be contextually relevant and likely to co-occur with the depicted scene in the video, we compile videos featuring actions that have a high probability of being followed by subsequent actions but are not executed in the video. For instance, an action such as cracking an egg in a kitchen setting often anticipates the subsequent action of frying/cooking the egg. 4) Time order understanding. Accurately recognizing the temporal sequence of activities in videos is crucial for distinguishing between atomic actions, such as pushing and pulling. We collect videos of fine-grained actions occurring in a particular temporal direction and curate challenging questions. 5) Non-existent actions with existent scene depictions. This category examines the model\u2019s robustness and reasoning behavior in scenarios where we introduce non-existent activities into the video without altering the physical and spatial scenes or environmental details in it. 6) Non-existent actions with non-existent scene depictions. In this evaluation category, we make the QA task more challenging by creating questions that include both non-existent activities and non-existent scene comprehension. Non-existent scene comprehension involves changing the objects, attributes of objects, and background scene description. This evaluates the model\u2019s reliability to correct misleading questions and avoid generating imaginary content. 7) Continuity and object instance count. This category contains videos (both real and simulations) designed to test the models\u2019 ability to accurately recognize the number of instances of objects, people, etc., and distinguish between existing objects and new ones introduced in the same video scene. 8) Unusual and physically anomalous activities. This category consists of videos with unconventional activities and physical phenomena that seemingly defy the laws of physics. We meticulously 5 \fcollect relevant videos from various sources on the internet, focusing on capturing unusual activities such as a person floating in the air or driving a motorbike on a running river. We believe that assessing Video-LMMs in such scenarios is crucial, as it allows us to determine whether they can generalize to understand actions in out-of-distribution videos that can occur in practical situations. 9) Interpretation of social context. In the real world, human actions are often influenced by social context in their surroundings. For instance, a person might be helping an elderly individual cross the road. This category evaluates Video-LMMs on such scenarios to determine their ability to accurately infer the rationale behind actions based on the depicted social context. 
We gather diverse videos from the internet and create challenging questions that encompass the social context dimension. 10) Understanding of emotional context. Similar to social context, humans can accurately understand and interpret each other\u2019s actions by considering the emotional context. For example, a person being emotionally moved and crying in a gathering could be a happy moment if it is one stemming from success/joy. We collect videos and curate challenging reasoning questions aimed at recognizing the nature of actions solely based on emotional context for evaluating Video-LMMs. 11) Interpretation of visual context. This dimension focuses on assessing the model\u2019s reasoning abilities to recognize the actions by leveraging the overall visual contextual cues in the video. We curate specific videos containing actions where activity identification and reasoning require visual contextual cues. For example, to identify the number of people present based on the presence of shadows, one must utilize the visual context from the shadows to reason about the question. Qualitative Examples. Fig. 2 shows examples of collected videos for the CVRR-ES benchmark. The curated videos are carefully selected to be diverse and contain rich spatio-temporal content, aligned with the proposed video evaluation dimensions. 3.2 Building CVRR-ES Benchmark After defining the video evaluation dimensions, we now proceed toward building the CVRR-ES benchmark which consists of three stages. We present each stage in detail below. Stage 1: Data collection and Annotation. We first collect high-quality videos and annotate each video using human assistance. To ensure that each evaluation dimension captures the relevant attributes and information, we meticulously select videos that are representative of specific characteristics associated with that dimension. Across the 11 dimensions, 214 unique videos are selected for the benchmark with around 20 videos per evaluation category. Around 60% of these videos are collected from public academic datasets. To introduce diversity in the benchmark distribution, we incorporate video samples from multiple academic datasets including Something-Something-v2 [Goyal et al., 2017], CATER [Girdhar and Ramanan, 2020], Charades [Sigurdsson et al., 2016], ActivityNet [Caba Heilbron et al., 2015], HMDB51 [Kuehne et al., 2011], YFCC100M [Thomee et al., 2016]. The remaining 40% of videos are collected from the internet. Following the video collection process, two experienced human annotators are assigned to generate captions for each video. For videos where initial captions or metadata are available from academic datasets, the captions are generated by the annotators based on them. For videos collected from the internet, captions are entirely generated by human annotators. To ensure consistency and high quality, we provide annotation instructions to annotators, who generate captions accordingly. Personalized annotation guidelines are used for each video category. Refer to additional details in Appendix B. Stage 2: Question-Answer Generation. The first challenge is to select an evaluation setting to assess Video-LMMs. Humans typically engage in free-form conversation to interact with each other in day-to-day life. Inspired by this, we aim to simulate a similar style of interaction with Video-LMMs by curating open-ended QA pairs to evaluate these models for robustness and reasoning. 
We feed detailed ground-truth video captions to GPT-3.5 LLM, which are utilized to generate open-ended questions covering both reasoning and robustness aspects. Reasoning QA pairs: With Video-LMMs beginning to interact more directly with humans in our lives, it\u2019s crucial to validate the reasoning abilities of Video-LMMs for more reliable Human-AI interaction. When evaluating the reasoning capabilities of Video-LMMs, we aim to determine whether these models can understand the input video not only by analyzing spatial content but also by grasping the underlying rationale behind the occurring activities and their relationships with the surrounding context. This involves creating questions that go beyond simple video comprehension and scene 6 \fdescription and require the model to engage in complex logical inference, contextual understanding, and reasoning about counterfactual and hypothetical scenarios. Robustness QA pairs: In addition to evaluating the reasoning capabilities of LLMs, it is important to assess Video-LMMs to ensure their robust and responsible performance in real-world scenarios. In the context of Video-LMMs, robustness can be evaluated from both visual (video input) and textual interfaces. Our focus in this work lies on textual interface robustness by particularly testing the model\u2019s comprehension when posed with misleading or confusing questions. This scenario mirrors realistic situations where users, based on their expertise levels, may pose irrelevant, misleading, or confusing questions. It is crucial for models to demonstrate reliability and robustness in handling such queries and avoid generating unreal or hallucinated content for input videos. We curate specific prompts for each evaluation dimension to instruct LLM in generating QA pairs. Example prompts used as an instruction to LLMs for curating QA pairs for robustness and reasoning aspects are provided in Fig. 14 in the Appendix D. Stage 3: QA Pairs Filtration. After generating QA pairs, a manual filtration step is employed, with human assistance to verify each generated QA pair. Approximately 30% of the QA pairs generated by GPT-3.5 are found to be noisy, containing questions that are unrelated to the video evaluation dimensions or unanswerable based on the provided ground-truth captions. Additionally, many questions contain answers within the question itself. Therefore, an exhaustive filtering process is conducted which involves QA rectification and removing those samples which are not relevant to the video or evaluation type. This process results in a final set of 2400 high-quality QA pairs for the CVRR-ES benchmark. Examples of QA pairs are shown in Tab. 4 in the Appendix. Stage 4: Evaluation Procedure. Previous methods in the literature [Maaz et al., 2023, Cai et al., 2023, Liu et al., 2023a, Qian et al., 2024] have explored using LLM models as judges for quantifying results in open-ended QA benchmarks. We adopt a similar approach and instruct LLMs to act as teachers to assess the correctness of predicted responses from Video-LMMs compared to ground-truth answers. We generate open-ended predictions from Video-LMMs by providing video-question pairs as inputs and then present the model predictions and their corresponding ground-truth responses to the LLM Judge alongside the evaluation prompt. 
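Before continuing with the judging protocol, the following is a minimal, illustrative sketch of the Stage 2 question-generation step described above (ground-truth captions fed to an LLM to draft reasoning and misleading/robustness questions). The prompt wording, the model name, and the helper `generate_candidate_qa` are assumptions for illustration only; the exact CVRR-ES prompts are the ones shown in Fig. 14 of Appendix D.

```python
# Hypothetical sketch of Stage 2 (QA-pair generation) from human-written video captions.
# The prompt wording, model choice, and function name are illustrative assumptions,
# not the exact prompts used for CVRR-ES.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GEN_INSTRUCTIONS = (
    "You are given a detailed ground-truth caption of a video. Write open-ended QA pairs: "
    "(a) reasoning questions that require contextual or counterfactual inference beyond "
    "simple scene description, and (b) misleading questions that mention actions or objects "
    "absent from the caption, whose answers explicitly correct the false premise."
)

def generate_candidate_qa(caption: str, n_pairs: int = 4) -> str:
    """Draft candidate QA pairs for one video caption; the output still needs the
    manual filtration of Stage 3 before entering the benchmark."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": GEN_INSTRUCTIONS},
            {"role": "user", "content": f"Caption: {caption}\nGenerate {n_pairs} QA pairs."},
        ],
    )
    return response.choices[0].message.content
```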
The Judge determines whether the prediction is correct or incorrect through a binary judgment, assigns a score from 1 to 5 representing the quality of the prediction, and provides a reasoning to explain its decision. Our ablative analysis in Appendix D demonstrates that reasoning-constrained LLM-based evaluation aligns well with human-based judgment. The evaluation prompt is shown in Fig. 13 in Appendix D. 4 Dual-Step Contextual Prompting for Video-LMMs. Given their wide-scale potential in practical downstream applications, new Video-LMMs are frequently introduced by the research community. Despite the availability of numerous Video-LMMs, the majority of them are trained using only positive examples and video-conversational templates that are primarily limited to tasks such as video captioning and video question answering. This leads to highly over-affirmative behavior and a lack of self-rectification abilities in these models (Sec. 5.4). Additionally, the templates have minimal focus on enhancing reasoning and robustness capabilities through reasoning-based instruction-tuning pairs, resulting in weak performance of such models against robustness and reasoning QA evaluations in the CVRR-ES benchmark. Furthermore, curating reasoning-based instruction fine-tuning datasets requires meticulous data curation steps, and retraining these models is computationally expensive [Li et al., 2023d, Ren et al., 2023]. Alternatively, training-free prompting techniques from the NLP literature, such as chain-of-thought and self-consistency prompting, have shown effectiveness in eliciting reasoning abilities in LLMs [Wei et al., 2022b, Wang et al., 2022a]. Inspired by these approaches, we introduce a prompting technique called Dual-Step Contextual Prompting (DSCP), which aims to steer Video-LMM focus for enhanced reasoning while simultaneously encouraging the models to provide robust and grounded answers.
Figure 4: Principled prompt instructions in our DSCP method for improving reasoning and robustness in Video-LMMs. Step 1, retrieving contextual reasoning information: As an intelligent video comprehension model, focus on these guidelines: 1. Differentiate recurring objects, count accurately, and identify movements and poses. 2. Understand directional movements and temporal order. 3. Pay attention to fine-grained actions with precision. 4. Assess incomplete actions without assuming completion. 5. Detect emotional, social, and visual cues. 6. Capture and analyze all relevant actions. 7. Identify unusual actions accurately. 8. Disagree with incorrect information given in the question. 9. If you do not find the evidence in the frames, you can give a definite answer by assuming that the asked action/attribute is not present. 10. Provide a to-the-point and concise response. Now, proceed with answering the following question faithfully while keeping the above guidelines in mind: Question: What is happening in the video? Step 2, context-conditioned question answering: Context for the given video is: {step 1 response}. Now answer a question truthfully based on the video and the provided context. Question: {User question}
Figure 5: Qualitative results of the DSCP prompting method (the per-question model responses shown in the original figure are omitted here). Using our DSCP approach, Video-LMMs demonstrate enhanced robustness and reasoning capabilities over complex videos.
DSCP is a two-step prompting method that 1) ensures that the model comprehends the video while reasoning over crucial aspects of complex video understanding, such as contextual information and the complex relationships between objects and motions, and 2) encourages robustness by generating the response to the question while conditioning on both the video and the context retrieved in the first step. Below we discuss each step of DSCP in detail.
Step 1: Reasoning over the video. We first guide Video-LMMs using principled prompts to interpret video content from a reasoning perspective. As shown in Fig. 4 (in blue), we formulate ten principled reasoning-based instructions for prompting, Preason, which directs Video-LMMs to not only comprehend the general video content but also steers them to reason over the rationale behind occurring activities and their relationships with the surrounding context. These prompt instructions include specific considerations like contextual priors, the temporal order of actions, instance count, and attributes. Additionally, the prompting technique incorporates instructions to ensure conciseness and factuality, aiming to mitigate hallucinations. Given a Video-LMM F and input video V, we retrieve contextual reasoning information Icontext by providing principled reasoning prompt Preason along with the video to the LMM, Icontext = F(Preason|V). The contextual information is utilized in the second step of DSCP to generate a more grounded response to the user question. Step 2: Context conditioned question answering. As discussed earlier, Video-LMMs are primarily trained with positive examples to answer questions, with limited emphasis on reasoning and robustness aspects. Consequently, enabling direct interaction of Video-LMMs with users in real-world scenarios can result in undesired responses when the user question is confusing and deceiving due to their extreme over-affirmative behavior. To address these challenges, we propose incorporating an additional inference step in Video-LMMs before answering the user\u2019s question. We note that Video-LMMs often possess factual knowledge about the video content but may become distracted and produce hallucinations when prompted with confusing or misleading questions (more details in Appendix C). Specifically, we devise a prompting method that conditions the model to first comprehend the video in detail without attending to the user question, thereby eliminating the influence of the question. The complex video comprehension information refers to Icontext formulated in step 1. Subsequently, we pose the user question in the second step using prompt Puser which combines user question and the contextual reasoning information (Fig. 4, in green) while conditioning the model on both the video and the contextual reasoning information Icontext. Concretely, Final response = F(Puser|V), where Puser = [question; Icontext]. 8 \fTable 2: Evaluation results of Video LLMs across various video-evaluation categories on the CVRR-ES benchmark. We present results for both open-source and closed-source models, alongside human evaluation results which serves as the upper bound on the benchmark. Benchmark Category Video-LLaMA-2 VideoChat Video-ChatGPT Video-LLaVA MovieChat LLaMA-VID TimeChat Gemini-V Pro GPT4V Human Multiple Actions in 16.98 23.90 27.67 15.72 12.58 17.92 28.30 43.08 57.55 93.40 single video. Fine-grained action 29.57 33.48 26.96 25.22 23.48 26.09 39.13 51.61 77.39 95.65 understanding. Partial 24.76 33.01 22.82 13.59 21.36 14.56 49.51 67.48 73.79 98.54 actions. Time order 16.45 31.58 27.63 21.05 16.45 19.74 34.21 45.39 57.89 97.37 understanding. Non-existent actions with 10.14 15.22 23.19 5.07 5.07 2.90 23.19 57.25 71.01 97.10 existent scene. Non-existent actions with 13.19 14.58 17.36 3.47 11.81 6.94 13.89 49.64 75.00 100.00 non-existent scene. Continuity and Object 28.25 24.29 28.41 21.47 19.77 24.86 34.46 36.16 62.71 96.49 instance Count. 
Unusual and Physically 18.95 18.42 18.95 15.79 17.89 16.32 27.37 60.00 74.74 96.84 Anomalous activities. Interpretation of 25.00 31.07 32.50 18.93 17.14 13.93 39.29 64.29 79.64 97.51 social context. Understanding of 21.92 23.63 21.23 15.07 13.70 14.73 27.40 47.26 66.44 95.55 emotional context. Interpretation of 32.60 34.43 27.84 19.78 21.25 23.08 45.05 63.00 82.42 94.87 visual context. Average 21.62 25.78 24.96 15.92 16.41 16.46 32.89 53.20 70.78 96.67 Intuitively, the factual content generated in the first step will guide the model towards a robust response in the second step to produce factual and correct responses, even in the presence of noisy/misleading user questions. We illustrate the qualitative results of the DSCP method in Fig. 5. This approach leads to responses that are better grounded with the actual video content and are robust against potential lesser-quality user queries. As we will later show, the DSCP technique effectively enhances the performance of Video-LMMs on the CVRR-ES benchmark. 5 Evaluation Experiments on CVRR-ES. Video-LMMs. Both open-source and closed-source models are selected for the evaluation. Among the open-source models, we evaluate 7 recent Video-LMMs, including Video-LLaVA [Lin et al., 2023], TimeChat [Ren et al., 2023], MovieChat [Song et al., 2023], LLaMA-ViD [Li et al., 2023d], VideoChat [Li et al., 2023b] Video-ChatGPT [Maaz et al., 2023], and Video-LLaMA-2 [Zhang et al., 2023]. For evaluating closed-source models, we use Gemini-Pro-Vision [Google, 2023] and GPT-4V(vision) [OpenAI, 2023]. Refer to the Appendix A for implementation details. 5.1 Main Experiments on CVRR-ES. In Tab. 2, we present the evaluation results of Video-LMMs on the 11 dimension categories of the CVRR-ES benchmark. Below, we present several key findings. Open Source Video-LMMs struggles on CVRR-ES benchmark. All open-source LMMs show inferior performance across the different evaluation dimensions of CVRR-ES. Interestingly, some of the earlier developed open-source Video-LMMs, like Video-LLaMA, VideoChat, and Video-ChatGPT, exhibit higher performance compared to more recent models such as Video-LLaVA, MovieChat, and LLaMA-VID. Overall, TimeChat achieves the highest performance of 32.89% averaged across the 11 evaluation dimensions among open-source LMMs, followed by VideoChat with a score of 25.78%. Humans rank highest in CVRR-ES benchmark. Human studies achieve the highest performance on the CVRR-ES benchmark, with over 95% accuracy across all evaluation dimensions. Furthermore, these results suggest that the CVRR-ES QA pairs are answerable and suitable for benchmarking. Closed source models perform competitively on CVRR-ES. As shown in Tab. 2, both Gemini and GPT4V surpass the performance of open-source models and achieve high gains across all evaluation dimensions. The competitive results of GPT4V and Gemini on complex video evaluation dimensions such as partial actions, non-existent action/scene depiction, and context-dependent categories show 9 \fPrompting Method VideoChat Video-LLaVA MovieChat LLaMA-VID TimeChat Standard prompting 25.78 15.92 16.41 16.46 32.89 Chain of Thought (CoT) prompting 22.44 25.87 15.89 29.68 39.57 DSCP (Stage 1) 38.07 32.12 28.05 25.13 33.04 DSCP (Both stages) 47.92 37.93 35.87 46.85 39.45 Table 3: Prompting methods. DSCP stage 1 uses only the principled instructions designed in step 1, while DSCP (Both stages) uses the complete dual-step prompting technique. 
that these models have a more sophisticated understanding of the complex visual contents of videos and have strong capabilities to rectify misleading and confusing user questions. Overall, GPT4V improves over Gemini by 17.58% and provides an average accuracy of 70.78% on CVRR-ES. 5.2 Effectiveness of DSCP method for improving Video-LMMs performance. Figure 6: Video-LMMs with the DSCP technique effectively improve their performance (gains are shown in green) on the CVRR-ES benchmark. (Bar-chart values omitted here; the per-model absolute gains reported in the figure are +22.01, +19.46, +30.39, +16.15, +8.93, +22.14, +6.56, and +5.02 accuracy points across Video-LLaVA, MovieChat, LLaMA-VID, Video-LLaMA-2, Video-ChatGPT, VideoChat, TimeChat, and Gemini-Pro.) We next integrate the DSCP technique with Video-LMMs and present results on the CVRR-ES benchmark in Fig. 6. The results indicate that DSCP improves the model\u2019s performance compared with models that use standard prompting (i.e., using only the question itself). These results suggest that prompting techniques in Video-LMMs can better guide models for improved reasoning and robustness. With DSCP, initially low-performing Video-LMMs such as Video-LLaVA, MovieChat, and LLaMA-VID show much better relative gains and become competitive with other models. The highest relative gain of 184% is achieved by LLaMA-VID, which moves from 7th place in the leaderboard to 2nd among the open-source models after utilizing DSCP prompting. We observe similar overall positive trends of using DSCP with the closed-source model Gemini, which improves on the benchmark by an absolute overall gain of 5.02%. We provide more detailed results comparisons in Appendix C. 5.3 Different prompting techniques. We study the contribution of each step of DSCP and compare it with chain-of-thought prompting [Wei et al., 2022b]. The results for the top 5 performing Video-LMMs are shown in Tab. 3. Chain-of-thought prompting improves over the standard prompting technique in 3 out of 5 Video-LMMs, suggesting that prompting techniques from the NLP literature can effectively guide multi-modal Video-LMMs to enhance reasoning and robustness. Next, we ablate on the first step of DSCP prompting, which uses the principled instructions of DSCP step 1 as a prefix alongside the actual user question. Using the first-step prompting technique of DSCP substantially improves model performance on all Video-LMMs, suggesting the effectiveness of the principled prompt instructions designed specifically for video models. DSCP with both steps, which integrates an additional thinking step in the prompting stage, further improves the results and provides the highest results on 4 out of 5 Video-LMMs. 5.4 Main findings and Qualitative Results. Based on the results of Video-LMMs on CVRR-ES, we draw key findings and show qualitative results. These insights can serve as valuable guidance for developing the next generation of Video-LMMs, aiming to make them more robust and reliable when deployed in real-world applications. Models excelling at standard VQA benchmarks struggle on the CVRR-ES benchmark. Our analysis in Sec. 5.1 reveals that the latest open-source Video-LMMs, such as Video-LLaVA, MovieChat, and LLaMA-VID, perform less effectively on the CVRR-ES benchmark compared to Video-LMMs that were introduced earlier in the community, such as VideoChat and Video-ChatGPT. Interestingly, the same recent models demonstrate superior performance on general video comprehension benchmarks.
This discrepancy suggests that current VQA benchmarks, like ActivityNet-QA [Yu et al., 2019] and MSRVTT [Xu et al., 2017], do not adequately correlate with the complex video reasoning and robustness scenarios highlighted in our benchmark. Consequently, this also indicates that most newer Video-LMMs are heavily trained to excel on the general video comprehension benchmarks while reducing their generalizability, reasoning, and robustness capabilities. Over-affirmative behavior of open-source Video-LMMs. Another important observation about open-source models is their tendency to exhibit excessively positive and affirmative responses. As shown in Fig. 7, open-source Video-LMMs consistently respond with \"Yes\" even when faced with 10 \fconfusing questions that describe non-existent actions and objects. This highlights the vulnerability of these models when interacting with users in real-world scenarios. In our CVRR-ES benchmark, opensource models are particularly vulnerable to our evaluation dimensions of \"Non-existent actions with the existent scene\" and \"Non-existent actions with the non-existent scene\" compared to closed-source models. These models lack negation and self-rectification capabilities, especially when users provide misleading or confusing questions. We conjecture that such behavior arises due to the absence of negative instruction tuning pairs during the training of Video-LMMs. Tendency towards activity completion. Most open-source Video-LMMs have shown weak performance on the evaluation dimension of partial actions in CVRR-ES, which contains videos focusing on incomplete or atomic actions. To further analyze the models\u2019 behavior, we show qualitative results on such videos in Fig. 8. It can be observed that most open-source models tend to complete actions, even when only part of the action is provided in the video. For instance, Video-LLaVA struggles to reason over the video and describes the man as kicking the soccer ball, while the action in the video stops at the point of the man placing his foot beside the ball. We observe similar behavior in other Video-LMMs. Upon examining the fine-tuning strategies [Maaz et al., 2023, Liu et al., 2023b], we find that almost all models are trained on end-to-end actions-based instruction-tuning data, causing them to generate complete action descriptions at inference. This tendency highlights the vulnerability of Video-LMMs after deployment, as real-world scenarios often involve atomic, sub-atomic, and general actions alike. To improve the performance of Video-LMMs, it is crucial to incorporate diverse action types during training, including partial and incomplete actions. Weak Generalization to extreme OOD videos. The evaluation dimension of unusual and physically anomalous activities in CVRR-ES resembles extreme out-of-distribution video examples. With the exception of GPT4V and Gemini, Video-LMMs struggle with this dimension, indicating weak generalizability towards OOD videos containing the coexistence of unusual objects and activities that are extremely rare in typical videos. For instance, Video-LLaVA in Fig. 9 describes a person falling on the street, while the video actually shows the person performing an optical illusion. To be responsibly deployed in real-world applications, where OOD actions occur more frequently, Video-LMMs need to be trained to perform more robustly on OOD samples. 
This may involve incorporating diverse and atypical examples in the training data to improve the model\u2019s ability to handle unusual situations. Limited understanding of temporal order in complex videos. The CVRR-ES benchmark results show that Video-LMMs perform relatively better on the fine-grained action dimension compared to the time-order understanding dimension. While these models can accurately identify fine-grained actions, they struggle with comprehending the correct temporal order of these actions within a video. This limitation can lead to misinterpretations of the underlying information depending on temporal order. We present failure cases of this dimension in Fig. 10. For building more advanced world-centric Video-LMMs, it is crucial to enhance their ability to process and interpret event sequences accurately. Video-LMMs struggle in understanding the emotional and social context. For more reliable interaction between Video-LMMs and humans in practical scenarios, these models should comprehend spatio-temporal scenes with social and contextual reasoning capabilities similar to humans. The lower performance of Video-LMMs on the social and emotional contextual dimensions in CVRR-ES highlights their limitations and lack of understanding of scenes based on contextual cues. For instance, as shown in Fig. 11 (bottom row), GPT-4V struggles to comprehend a scene where a worker is attempting to prevent shoes from getting wet due to the rain by moving them under the shade. Instead, GPT-4V provides a response that contradicts the social cues present in the video. 6 Conclusion. Given the expanding role of Video-LMMs in practical world-centric applications, it is vital to ensure that these models perform robustly and exhibit human-like reasoning and interaction capabilities across various complex and real-world contexts. In this work, we present the CVRR-ES benchmark for Video-LMMs, aiming to evaluate Video-LMMs on these very fronts. Through extensive evaluations, we find that Video-LMMs, especially open-source ones, exhibit limited robustness and reasoning capabilities over complex videos involving real-world contexts. Based on our analysis, we formulate a training-free prompting technique that effectively improves the performance of Video-LMMs across various evaluation dimensions of the CVRR-ES benchmark. Furthermore, we analyze and investigate the failure cases of Video-LMMs on the CVRR-ES benchmark and deduce several important findings. We hope that the CVRR-ES benchmark, accompanied by our extensive analysis, will contribute towards building the next generation of advanced world-centric video understanding models.
Figure 7: Over-affirmative behaviour. Most open-source Video-LMMs exhibit overly affirmative behavior by consistently agreeing with user questions, even when the questions are confusing or inaccurate. (The per-model question-answer panels shown in Figures 7-11 of the original are omitted here.)
Figure 8: Action completion tendency. Most open-source Video-LMMs tend to generate captions corresponding to complete actions and struggle with determining incomplete or partial actions.
Figure 9: Weak generalization on OOD videos. Open-source Video-LMMs struggle to correctly reason over videos containing rare and unusual actions.
Figure 10: Limited temporal understanding. Most Video-LMMs struggle to accurately determine the temporal order of actions in videos. The bottom video shows a man running backward along a track.
Figure 11: Limited contextual understanding. Most Video-LMMs exhibit a weak understanding of complex videos that contain emotional (e.g., an angry player in the top video) and social cues (e.g., a person saving shoes from getting wet due to rain in the bottom video)."
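For readers who want to try the prompting strategy, here is a minimal, model-agnostic sketch of the two-step DSCP inference described in Section 4 (Icontext = F(Preason | V), then a final answer conditioned on [question; Icontext]). The `video_lmm` callable, the function name `dscp_answer`, and the abbreviated guideline string are placeholders/assumptions; the full principled instructions are the ten listed in Figure 4.

```python
# Illustrative sketch of Dual-Step Contextual Prompting (DSCP); `video_lmm` is a placeholder
# for any Video-LMM interface that takes (video, prompt) and returns a text response.
from typing import Callable

PRINCIPLED_GUIDELINES = (
    "As an intelligent video comprehension model, focus on these guidelines: "
    "differentiate recurring objects and count accurately; respect temporal order; "
    "assess incomplete actions without assuming completion; detect emotional, social, "
    "and visual cues; disagree with incorrect information in the question; be concise. "
    "Now answer faithfully: Question: What is happening in the video?"
)

def dscp_answer(video_lmm: Callable[[str, str], str], video_path: str, user_question: str) -> str:
    # Step 1: retrieve contextual reasoning information, I_context = F(P_reason | V).
    context = video_lmm(video_path, PRINCIPLED_GUIDELINES)
    # Step 2: answer the user question conditioned on both the video and the retrieved context,
    # i.e., F([question; I_context] | V).
    step2_prompt = (
        f"Context for the given video is: {context}. "
        "Now answer a question truthfully based on the video and the provided context. "
        f"Question: {user_question}"
    )
    return video_lmm(video_path, step2_prompt)
```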
16
+ }
title_10K/test_title_short_2405.03894v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03894v1",
3
+ "title": "MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View",
4
+ "abstract": "Generating consistent multiple views for 3D reconstruction tasks is still a\nchallenge to existing image-to-3D diffusion models. Generally, incorporating 3D\nrepresentations into diffusion model decrease the model's speed as well as\ngeneralizability and quality. This paper proposes a general framework to\ngenerate consistent multi-view images from single image or leveraging scene\nrepresentation transformer and view-conditioned diffusion model. In the model,\nwe introduce epipolar geometry constraints and multi-view attention to enforce\n3D consistency. From as few as one image input, our model is able to generate\n3D meshes surpassing baselines methods in evaluation metrics, including PSNR,\nSSIM and LPIPS.",
5
+ "authors": "Emmanuelle Bourigault, Pauline Bourigault",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-06",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.LG"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View",
16
+ "main_content": "Introduction Consistent and high-quality novel view synthesis of realworld objects from a single input image is a remaining challenge in computer vision. There is a myriad of applications in virtual reality, augmented reality, robotic navigation, content creation, and filmmaking. Recent advances in the field of deep learning such as diffusion-based models [2, 13, 22, 36, 37] significantly improved mesh generation by denoising process from Gaussian noise. Text-to-image generation has shown great progress with the development of efficient approaches as generative adversarial networks [3, 11, 16], autoregressive transformers [9, 28, 39], and more recently, diffusion models [12, 14, 27, 32]. DALL-E 2 [27] and Imagen [32] are such models capable of generating of photorealistic images with large-scale diffusion models. Latent diffusion models [31] apply the diffusion process in the latent space, enabling for faster image synthesis. Although, image-to-3D generation has shown impressive results, there is still room for improvement in terms of consistency, rendering and efficiency. Generating 3D representations from single view is a difficult task. It requires extensive knowledge of the 3D world. Although diffusion models have achieved impressive performance, they require expensive per-scene optimization. Zero123 [18] proposes a diffusion model conditioned on view features and camera parameters trained on persepective images [6]. However, the main drawback is the lack of multiview consistency in the generation process impeding high-quality 3D shape reconstruction with good camera control. SyncDreamer [19] proposes a 3D feature volume into the Zero123 [18] backbone to improve the multiview consistency. However, the volume conditioning significantly reduces the speed of generation and it overfits to some viewpoints, with 3D shapes displaying distortions. In this paper, we present MVDiff, a multiview diffusion model using epipolar geometry and transformers to generate consistent target views. The main idea is to incorporate epipolar geometry constraints in the model via selfattention and multi-view attention in the UNet to learn the geometry correspondence. We first need to define a scene transformation transformer (SRT) to learn an implicit 3D representation given a set of input views. Then, given an input view and its relative camera pose, we use a viewconditioned diffusion model to estimate the conditional distribution of the target view. We show that this framework presents dual improvements compared to existing baselines in improving the 3D reconstruction from generated multi-view images and in terms of generalization capability. In summary, the paper presents a multi-view generation framework from single image that is transferable to various datasets requiring little amount of changes. We show high performance on the GSO dataset for 3D mesh generation. The model is able to extrapolate one view image of a 3D arXiv:2405.03894v1 [cs.CV] 6 May 2024 \fobject to 360-view with high fidelity. Despite being trained on one dataset of natural objects, it can create diverse and realistic meshes. We summarise our contributions as follows: \u2022 Implicit 3D representation learning with geometrical guidance \u2022 Multi-view self-attention to reinforce view consistency \u2022 Scalable and flexible framework 2. Related Work 2.1. Diffusion for 3D Generation Recently, the field of 3D generation has demonstrated rapid progress with the use of diffusion models. 
Several studies showed remarkable performance by training models from scratch on large datasets to generate point clouds [21, 24], meshes [10, 20] or neural radiance fields (NeRFs) at inference. Nevertheless, these models lack generalizability as they are trained on specific categories of natural objects. DreamFusion [26] explored leveraging 2D priors to guide 3D generation. Inspired by DreamFusion, several studies adopted a similar pipeline using distillation of a pretrained 2D text-to-image generation model for generating 3D shapes [1, 4, 5, 23, 43]. The per-scene optimisation process typically lacks efficiency, with times ranging from minutes to hours to generate a single scene. Recently, 2D diffusion models for multi-view synthesis from a single view have raised interest for their fast 3D shape generation with appealing visuals [17, 18, 34]. However, they generally do not consider multi-view consistency in the network design. Zero123 proposes relative-viewpoint conditioning in 2D diffusion models in order to generate novel views from a single image [18]. However, this work does not consider other views in the learning process, and this causes inconsistencies for complex shapes. One-2-3-45 [17] decodes signed distance functions (SDF) [25] for 3D shape generation given multi-view images from Zero123 [18], but the 3D reconstruction is not smooth and artifacts are present. More recently, SyncDreamer [19] suggests a 3D global feature volume in order to tackle inconsistencies in multi-view generation. 3D volumes are used with depth-wise attention for maintaining multi-view consistency. The heavy 3D global modeling tends to reduce the speed of the generation and the quality of the generated meshes. MVDream [35], on the other hand, incorporates 3D self-attention with improved generalisability to unseen datasets. 2.2. Sparse-View Reconstruction Sparse-view image reconstruction [15, 45] is a challenging task where only a limited number of images, generally fewer than 10, are given. Traditional 3D reconstruction methods start by estimating camera poses, then as a second step perform dense reconstruction with multi-view stereo [38, 46] or NeRF [40]. Estimating camera poses in the context of sparse-view reconstruction is a challenging task as there is little or no overlap between views. [45] aimed at addressing this challenge by optimising camera poses and 3D shapes simultaneously. In the same line of research, PF-LRM [42] suggests a pose-free approach to tackle the uncertainty in camera poses. In our work, we learn the relative camera poses of the 3D representation implicitly via a transformer encoder-decoder network and a view-conditioned diffusion model capable of generating consistent multi-view images directly. We then employ a reconstruction system, NeuS [41], to recover a mesh. 3. Methodology 3.1. Multi-view Conditional Diffusion Model The rationale behind multi-view conditioning in diffusion models is to infer precisely the 3D shape of an object under the constraint that regions of the 3D object are unobserved. Direct 3D predictions for sequential targets, as in Zero123 [18], might lead to implausible novel views. To control the uncertainty in novel view synthesis, we choose to enforce multi-view consistency during training. Given an input image or sparse-view input images of a 3D object, denoted as xI, with known camera parameters \u03c0I, and target camera parameters \u03c0T, our aim is to synthesize novel views that recover the geometry of the object. Our framework can be broken down into two parts: (i) a scene representation transformer (SRT) [33] that learns the latent 3D representation given a single or few input views, and (ii) a view-conditioned diffusion model to generate novel views. 3.2. Novel View Synthesis via Epipolar Geometry To perform novel view synthesis, we employ a scene representation transformer (SRT) [33]. In the work of [33], a transformer encoder-decoder architecture learns an implicit 3D latent representation given a set of images with camera poses (xI, \u03c0I). First, a CNN extracts features from xI and feeds them as tokens to the transformer encoder fE. The transformer encoder then outputs a set-latent scene representation z via self-attention. For novel view rendering, the decoder transformer of SRT queries the pixel color via cross-attention between the ray r associated to that pixel and the set-latent scene representation z. The aim is to minimize the pixel-level reconstruction loss in Eq. (1), \\mathcal{L}_{\\mathrm{recon}} = \\sum_{\\mathbf{r} \\in \\mathcal{R}} \\left\\| C(\\mathbf{r}) - \\hat{C}(\\mathbf{r}) \\right\\|_2^2, (1) where C(r) is the ground truth color of the ray and R is the set of rays sampled from target views. Figure 1. Pipeline of MVDiff. From a single input or few input images, the transformer encoder translates the image(s) into latent scene representations, implicitly capturing 3D information. The intermediate outputs from the scene representation transformer are used as input by the view-conditioned latent diffusion UNet, generating multi-view consistent images from varying viewpoints. We aim to leverage cross-interaction between images through relative camera poses using epipolar geometrical constraints. For each pixel in a given view i, we compute the epipolar line and the epipolar distance for all pixels in view j to build a weighted affinity matrix A\u2032_{i,j} = A_{i,j} + W_{i,j}, where W_{i,j} is the weight map obtained from the inverse epipolar distance. View-Conditioned Latent Diffusion. The outputs from SRT do not recover fine details with a simple pixel-level reconstruction loss. We employ a view-conditioned latent diffusion model (LDM) from [29] to estimate the conditional distribution of the target view given the source view and the relative camera pose: p(xT | \u03c0T, xI, \u03c0I). First, the SRT predicts a low-resolution 32 \u00d7 32 latent image \\tilde{x}_T based on the target view \u03c0T for computational efficiency. The latent image from SRT is concatenated with the noisy image y and fed into the latent diffusion UNet E\u03b8. In addition, we condition E\u03b8 on the latent scene representation z via cross-attention layers (see Fig. 1). The predicted noise \\hat{\\boldsymbol{\\epsilon}}_t can be denoted as \\hat{\\boldsymbol{\\epsilon}}_t = \\boldsymbol{\\mathcal{E}}_\\theta(\\boldsymbol{y}, \\tilde{\\boldsymbol{x}}_{T}, \\boldsymbol{z}, t), (2) where t is the timestep. We optimize a simplified variational lower bound, that is \\mathcal{L}_{\\mathrm{VLDM}} = \\mathbb{E}\\left[\\left\\| \\boldsymbol{\\epsilon}_t - \\boldsymbol{\\mathcal{E}}_\\theta(\\boldsymbol{y}, \\tilde{\\boldsymbol{x}}_{T}, \\boldsymbol{z}, t) \\right\\|^2\\right]. (3) Multi-View Attention. As previously stated, in Zero123 [18], multiple images are generated in sequence from a given input view based on camera parameters. This approach can introduce inconsistencies between generated views.
To address this issue, we apply modifications to the UNet in order to feed multi-view images. This way, we can predict simultaneously multiple novel views. We employ self-attention block to ensure consistency for different viewpoints. 4. Experiments This section presents the novel view synthesis experiments in Sec. 4.1, and the 3D generation experiments in Sec. 4.2. We present ablation experiments in Sec. 4.3 and ethical considerations in Sec. 4.4. Training Data. For training our model for novel view synthesis, we use 800k 3D object models from Objaverse [6]. For a fair comparison with other 3D diffusion baselines, we use the same training dataset. Input condition views are chosen in a similar way as Zero123 [18]. An azimuth angle is randomly chosen from one of the eight discrete angles of the output cameras. The elevation angle is randomly selected in the range [\u221210\u25e6, 45\u25e6]. For data quality purposes, we discard empty rendered images. This represents about one per cent of the training data. 3D objects are centered and we apply uniform scaling in the range [-1,1] so that dimensions matches. Input images to our pipeline are RGB images 256x256. Test Data. We use the Google Scanned Object (GSO) [8] as our testing dataset, and use the same 30 objects as SyncDreamer [19]. There are 16 images per 3D object, with a fixed elevation of 30\u25e6and every 22.5\u25e6for azimuth. Implementation Details. Our model is trained using the AdamW optimiser [24] with a learning rate of 10\u22124 and weight decay of 0.01. We reduce the learning rate to 10\u22125 for a total of 100k training steps. For our training batches, we use 3 input views and 3 target views randomly sampled with replacement from 12 views for each object, with \fa batch size of 356. We train our model for 6 days on 4 A6000 (48GB) GPUs. Evaluation Metrics. For novel view synthesis, we report the PSNR, SSIM [44], and LPIPS [47]. For 3D reconstruction from single-view or few views, we use the Chamfer Distances (CD) and 3D IoU between the ground-truth and reconstructed volumes. 4.1. Novel View Synthesis We show in Tab. 1 the performance of MVDiff compared to baselines for novel view synthesis on an unseen dataset [8]. Qualitative results are shown in Fig. 2. Our model surpasses baseline Zero-123XL by a margin and benefits from additional views. Given the probabilistic nature of the model, it is able to generate diverse and realistic shapes given a single view (see Fig. 3). Training Sample # Ref. Views GSO NeRF Synthetic PSNR\u2191SSIM\u2191LPIPS\u2193Runtime\u2193PSNR\u2191SSIM\u2191LPIPS\u2193Runtime\u2193 Zero123 800K 1 18.51 0.856 0.127 7s 12.13 0.601 0.421 7s Zero123-XL 10M 1 18.93 0.856 0.124 8s 12.61 0.620 0.381 8s MVDiff 800k 1 20.24 0.884 0.095 9s 12.66 0.638 0.342 9s MVDiff 800k 2 22.91 0.908 0.064 9s 13.42 0.685 0.321 10s MVDiff 800k 3 24.09 0.918 0.052 10s 13.58 0.741 0.301 11s MVDiff 800k 5 25.09 0.927 0.043 11s 14.55 0.833 0.288 12s MVDiff 800k 10 25.90 0.935 0.036 12s 14.51 0.657 0.215 13s Table 1. Novel view synthesis performance on GSO and NeRF Synthetic datasets. MVDiff outperforms Zero-123XL with significantly less training data. Additionally, MVDiff performance exhibits further improvement with the inclusion of more reference views. 4.2. 3D Generation We showed in Sec. 4.1 that our model can generate multiple consistent novel views. In this section, we perform single and few-images 3D generation on the GSO dataset. We generate 16 views with azimuths uniformly distributed in the range 0\u25e6to 360\u25e6. 
For a fixed elevation angle of 30\u25e6, SyncDreamer may fail to recover the shape of 3D objects at the top and bottom since the camera angle does not cover those regions. Therefore, we also use different elevation angles from \u221210\u25e6to 40\u25e6. Then, we adopt NeuS [40] for 3D reconstruction. The foreground masks of the generated images are initially predicted using CarveKit. It takes around 3 minutes to reconstruct a textured mesh. We compare our 3D recontructions with SoTA 3D generation models, including One-2-3-45 [17] for decoding an SDF using multiple views predicted from Zero123, and SyncDreamer [19] for fitting an SDF using NeuS [40] from 16 consistent fixed generated views. Given two or more reference views, MVDiff outperforms all other baselines (see Tab. 2). MVDiff generates meshes that are visually consistent and resembles the ground-truth (see Fig. 4). # Input Views Chamfer Dist. \u2193 Volume IoU \u2191 Point-E 1 0.0561 0.2034 Shape-E 1 0.0681 0.2467 One2345 1 0.0759 0.2969 LGM 1 0.0524 0.3851 SyncDreamer 1 0.0493 0.4581 MVDiff 1 0.0411 0.4357 MVDiff 2 0.0341 0.5562 MVDiff 3 0.0264 0.5894 MVDiff 5 0.0252 0.6635 MVDiff 10 0.0254 0.6721 Table 2. 3D reconstruction performance on GSO dataset. MVDiff outperforms other image-to-3D baselines in generating high-quality 3D objects, with improved performance for multiple input views. PSNR\u2191 SSIM\u2191 LPIPS\u2193 MVDiff 20.24 0.884 0.095 w/o epipolar att. 19.14 0.864 0.118 w/o multi-view att. 19.92 0.871 0.113 Table 3. Effect of Self-Attention Mechanisms. We report PSNR, SSIM [44], and LPIPS [47] for novel view synthesis from single view on GSO dataset. Results show that epipolar attention and multi-view attention lead to superior performance. 4.3. Ablation Study Multi-View Consistency. The generated images may not always plausible and we need to generate multiple instances with different seeds and select a desirable instance for 3D reconstruction based on higher overall PSNR, SSIM and LPIPS for the view generated. Experiments show that we need 5 generations to obtain optimal reconstruction. Effect of Epipolar and Mult-View Attention. We evaluate the benefits of epipolar attention and multi-view attention on novel view synthesis performing ablation experiments on those components. In particular, we observe a significant drop in performance metrics when removing epipolar attention suggesting that the model is effectively able to implicitely learn 3D object geometry by enforcing geometrical guidance (see Tab. 3). Weight Initialisation. An alternative to initialising weights trained from Zero123 on view-dependent objects [7] is to use weights from Stable Diffusion [30]. We compare the performance of our model initializing weights from Stable Diffusion v2 [30] with a drop in performance of -2.58 PSNR compared to Zero123 [18] weight initialisation. This shows that initializing from Stable Diffusion v2 leads to poorer performance on the novel view task and worse generalisability. 4.4. Risks and Ethical Considerations There are several promising applications of synthetic data, notably in medicine. Synthetic data could make significant \fFigure 2. Zero-Shot Novel View Synthesis on GSO. MVDiff outperforms Zero123-XL for single view generation with greater camera control and generation quality. As more views are added, MVDiff resembles the ground-truth with fine details being captured such as elephant tail and turtle shell design. Input \u2190\u2212\u2212\u2212\u2212\u2212Generated \u2212\u2212\u2212\u2212\u2212\u2192 GT Figure 3. 
Diversity of Novel View Diffusion with MVDiff on NeRF-Synthetic Dataset. We show nearby views (top and bottom row) displaying good consistency, while more distant views (middle) are more diverse but still realistic. improvement in surgery planning and tailored patient diagnosis leveraging 3D information and its assets of quantitative parameters. Nevertheless, there are ethical considerations associated with the use of synthetic data in medicine. We should ensure the synthetic data is anonymised such that no particular features of the synthetic meshes could link back to a specific patient. In that light, there are transformations that can be applied to the meshes. We should also make sure that the synthetic data is not used in a way it could harm or be detrimental. Further validation on different cohorts of people is required before using these synthetic data in clinical settings. Despite important ethical considerations we shed light on, we believe these 3D representations of organs could be of great use, on hand for research purposes to run largescale statistical analysis on different cohorts and highlight associations with patient metadata. These cost effective synthetic data could be beneficial to improve the visualisations of bones and organs and be deployed widely. 4.5. Limitations A limitation of this work lies in its computational time and resource requirements. Despite advances in sampling approaches, our model still requires more than 50 steps to generate high-quality images. This is a limit of all diffusion based generation models. Moreover, the reconstructed meshes may not always be plausible. To increase the quality, we may need to use a larger object dataset like Objaverse-XL[7] and manually curate the dataset to filter out uncommon shapes such as point clouds, textureless 3D models and more complex scene representation. \fFigure 4. 3D reconstruction from single-view on GSO dataset. MVDiff produces consistent novel views and improves the 3D geometry compared to baselines. One-2-3-45 and SyncDreamer tend to generate overly-smoothed and incomplete 3D objects, in particular the sofa. 5. Conclusion In our work, we aimed to address the problem of inconsistencies in multi-view synthesis from single view. We specifically apply epipolar attention mechanisms as well as multiview attention to aggregate features from multiple views. We propose a simple and flexible framework capable of generating high-quality multi-view images conditioned on an arbitrary number of images. 5.1. Future Work Combining with graphics. In this study, we show that we can generate view consistent 3D objects by learning geometrical correspondences between views during training. We modified the latent diffusion U-Net model to feed multi view in order to generate consistent multi view for 3D reconstruction. Future work can explore utilising knowledge about lighting, and texture to generate more diverse range of 3D shapes with varying lighting and texture. Acknowledgements E.B is supported by the Centre for Doctoral Training in Sustainable Approaches to Biomedical Science: Responsible and Reproducible Research (SABS: R3), University of Oxford (EP/S024093/1). P.B. is supported by the UKRI CDT in AI for Healthcare http://ai4health.io (Grant No. P/S023283/1)."
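To make the epipolar-geometry conditioning of Sec. 3.2 concrete, the sketch below builds an inverse-epipolar-distance weight map W between two views from their relative pose, which could then be added to an attention affinity matrix as A\u2032_{i,j} = A_{i,j} + W_{i,j}. The function name `epipolar_weight_map`, the softening constant `eps`, and the assumption of shared intrinsics K are illustrative choices, not details specified by the paper.

```python
# Hedged sketch: inverse-epipolar-distance weights between pixels of view i and view j.
# K: shared 3x3 intrinsics; R, t: relative pose from view i to view j (assumptions for illustration).
import numpy as np

def skew(v: np.ndarray) -> np.ndarray:
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def epipolar_weight_map(K: np.ndarray, R: np.ndarray, t: np.ndarray,
                        pix_i: np.ndarray, pix_j: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """pix_i: (N, 2) pixels in view i; pix_j: (M, 2) pixels in view j; returns an (N, M) weight map."""
    K_inv = np.linalg.inv(K)
    F = K_inv.T @ skew(t) @ R @ K_inv                         # fundamental matrix for the relative pose
    hom_i = np.hstack([pix_i, np.ones((len(pix_i), 1))])      # homogeneous pixel coordinates
    hom_j = np.hstack([pix_j, np.ones((len(pix_j), 1))])
    lines = hom_i @ F.T                                       # (N, 3) epipolar lines l = F x_i in view j
    num = np.abs(lines @ hom_j.T)                             # |l . x_j| for every pixel pair
    den = np.sqrt(lines[:, 0:1] ** 2 + lines[:, 1:2] ** 2) + 1e-8
    dist = num / den                                          # point-to-epipolar-line distances
    return 1.0 / (dist + eps)                                 # inverse epipolar distance as attention bias
```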
17
+ }
title_10K/test_title_short_2405.03958v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03958v1",
3
+ "title": "Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model",
4
+ "abstract": "Current state-of-the-art diffusion models employ U-Net architectures\ncontaining convolutional and (qkv) self-attention layers. The U-Net processes\nimages while being conditioned on the time embedding input for each sampling\nstep and the class or caption embedding input corresponding to the desired\nconditional generation. Such conditioning involves scale-and-shift operations\nto the convolutional layers but does not directly affect the attention layers.\nWhile these standard architectural choices are certainly effective, not\nconditioning the attention layers feels arbitrary and potentially suboptimal.\nIn this work, we show that simply adding LoRA conditioning to the attention\nlayers without changing or tuning the other parts of the U-Net architecture\nimproves the image generation quality. For example, a drop-in addition of LoRA\nconditioning to EDM diffusion model yields FID scores of 1.91/1.75 for\nunconditional and class-conditional CIFAR-10 generation, improving upon the\nbaseline of 1.97/1.79.",
5
+ "authors": "Joo Young Choi, Jaesung R. Park, Inkyu Park, Jaewoong Cho, Albert No, Ernest K. Ryu",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model",
17
+ "main_content": "Introduction In recent years, diffusion models have led to phenomenal advancements in image generation. Many cuttingedge diffusion models leverage U-Net architectures as their backbone, consisting of convolutional and (qkv) self-attention layers Dhariwal & Nichol (2021); Kim et al. (2023); Saharia et al. (2022); Rombach et al. (2022); Podell et al. (2024). In these models, the U-Net architecture-based score network is conditioned on the time, and/or, class, text embedding Ho & Salimans (2021) using scale-and-shift operations applied to the convolutional layers in the so-called residual blocks. Notably, however, the attention layers are not directly affected by the conditioning, and the rationale behind not extending conditioning to attention layers remains unclear. This gap suggests a need for in-depth studies searching for effective conditioning methods for attention layers and assessing their impact on performance. Meanwhile, low-rank adaptation (LoRA) has become the standard approach for parameter-efficient fine-tuning of large language models (LLM) Hu et al. (2022). With LoRA, one trains low-rank updates that are added to frozen pre-trained dense weights in the attention layers of LLMs. The consistent effectiveness of LoRA for LLMs suggests that LoRA may be generally compatible with attention layers used in different architectures and for different tasks Chen et al. (2022); Pan et al. (2022); Lin et al. (2023); Gong et al. (2024). In this work, we introduce a novel method for effectively conditioning the attention layers in the U-Net architectures of diffusion models by jointly training multiple LoRA adapters along with the base model. We call these LoRA adapters TimeLoRA and ClassLoRA for discrete-time settings, and Unified Compositional LoRA (UC-LoRA) for continuous signal-to-ratio (SNR) settings. Simply adding these LoRA adapters in a drop-in fashion without modifying or tuning the original model brings consistent enhancement in FID scores across several popular models applied to CIFAR-10, FFHQ 64x64, and ImageNet datasets. In particular, adding LoRA-conditioning to the EDM model Karras et al. (2022) yields improved FID scores of 1.75, 1.91, 2.31 for class-conditional CIFAR-10, unconditional CIFAR-10, and FFHQ 64x64 datasets, respectively, outperforming the baseline scores of 1.79, 1.97, 2.39. Moreover, we find that LoRA conditioning by itself is 2 \fScale-Shift Group Norm SiLU Convolution Group Norm SiLU Convolution Input Conditioning Linear Input QKV Group Norm \u03c9-scale LoRA LoRA Dot Product Projection \u03c9-scale LoRA LoRA MLP MLP Conditioning A1 Am B1 Bm \u03c91(t) \u03c9m(t) W A\u2032 c B\u2032 c \u00b7 \u00b7 \u00b7 A1 Am B1 Bm \u03c91(cond) \u03c9m(cond) W \u00b7 \u00b7 \u00b7 cond. MLP Unified compositional LoRA TimeLoRA and ClassLoRA Attn. Block LoRA U-Net block LoRA conditioning of attention block Figure 2: Conditioning of U-Net Block: (left) scale-and-shift conditioning on the convolutional block (middle) LoRA conditioning on the attention block (right) top: TimeLoRA and ClassLoRA for the discrete-time setting, bottom: unified composition LoRA for the continuous-SNR setting. powerful enough to perform effectively. Our experiments show that only conditioning the attention layers using LoRA adapters (without the conditioning convolutional layers with scale-and-shift) achieves comparable FID scores compared to the baseline scale-and-shift conditioning (without LoRA). Contribution. 
Our experiments show that using LoRA to condition time and class information on attention layers is effective across various models and datasets, including nano diffusion Lelarge et al. (2024), IDDPM Nichol & Dhariwal (2021), and EDM Karras et al. (2022) architectures using the MNIST Deng (2012), CIFAR-10 Krizhevsky et al. (2009), and FFHQ Karras et al. (2019) datasets. Our main contributions are as follows. (i) We show that simple drop-in LoRA conditioning on the attention layers improves the image generation quality, as measured by lower FID scores, while incurring minimal (\u223c10%) added memory and compute costs. (ii) We identify the problem of whether to and how to condition attention layers in diffusion models and provide the positive answer that attention layers should be conditioned and LoRA is an effective approach that outperforms the prior approaches of no conditioning or conditioning with adaLN Peebles & Xie (2023). Our results advocate for incorporating LoRA conditioning into the larger state-of-the-art U-Net-based diffusion models and the newer experimental architectures. 2 Prior work and preliminaries 2.1 Diffusion models Diffusion models Sohl-Dickstein et al. (2015); Song & Ermon (2019); Ho et al. (2020); Song et al. (2021b) generate images by iteratively removing noise from a noisy image. This denoising process is defined by the reverse process of the forward diffusion process: given data x0 \u223cq0, progressively inject noise to x0 by q(xt | xt\u22121) = N \u0010p 1 \u2212\u03b2txt\u22121, \u03b2tI \u0011 for t = 1, . . . , T and 0 < \u03b2t < 1. If \u03b2t is sufficiently small, we can approximate the reverse process as q(xt\u22121 | xt) \u2248N (\u00b5t(xt), \u03b2tI) 3 \fwhere \u00b5t(xt) = 1 \u221a1 \u2212\u03b2t (xt + \u03b2t\u2207log pt(xt)). A diffusion model is trained to approximate the score function \u2207log pt(xt) with a score network s\u03b8, which is often modeled with a U-Net architecture Ronneberger et al. (2015); Song & Ermon (2019). With s\u03b8 \u2248\u2207log pt(xt), the diffusion model approximates the reverse process as p\u03b8(xt\u22121|xt) = N \u0012 1 \u221a1 \u2212\u03b2t (xt + \u03b2ts\u03b8(xt, t)), \u03b2tI \u0013 \u2248q(xt\u22121 | xt). To sample from a trained diffusion model, one starts with Gaussian noise xT \u223cN (0, (1 \u2212\u00af \u03b1T )I), where \u00af \u03b1t = Qt s=1(1\u2212\u03b2s), and progressively denoise the image by sampling from p\u03b8(xt\u22121|xt) with t = T, T \u22121, . . . , 2, 1 sequentially to obtain a clean image x0. The above discrete-time description of diffusion models has a continuous-time counterpart based on the theory of stochastic differential equation (SDE) for the forward-corruption process and reversing it based on Anderson\u2019s reverse-time SDE Anderson (1982) or a reverse-time ordinary differential equation (ODE) with equivalent marginal probabilities Song et al. (2021a). Higher-order integrators have been used to reduce the discretization errors in solving the differential equations Karras et al. (2022). Architecture for diffusion models. The initial work of Song & Ermon (2019) first utilized the CNN-based U-Net architecture Ronneberger et al. (2015) as the architecture for the score network. Several improvements have been made by later works Ho et al. (2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021); Hoogeboom et al. (2023) incorporating multi-head self-attention Vaswani et al. (2017), group normalization Wu & He (2018), and adaptive layer normalization (adaLN) Perez et al. (2018). 
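To make the reverse-process recursion above concrete, the following is a minimal sketch of DDPM-style ancestral sampling. It assumes a trained score network `score_net(x, t)` approximating the score of p_t; the function name and the simplified variance handling are illustrative assumptions, not taken from the paper's implementation.

```python
import torch

@torch.no_grad()
def ancestral_sample(score_net, betas, shape, device="cpu"):
    """Minimal DDPM-style ancestral sampling loop (illustrative sketch).

    score_net(x, t) is assumed to approximate the score at step t;
    betas is a 1-D tensor holding the noise schedule beta_1..beta_T.
    """
    T = len(betas)
    alpha_bar_T = torch.prod(1.0 - betas)
    x = torch.randn(shape, device=device) * (1.0 - alpha_bar_T).sqrt()  # x_T ~ N(0, (1 - alpha_bar_T) I)
    for t in reversed(range(T)):
        beta_t = betas[t]
        mean = (x + beta_t * score_net(x, t)) / (1.0 - beta_t).sqrt()   # mean of p_theta(x_{t-1} | x_t)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)   # no noise at the final step
        x = mean + beta_t.sqrt() * noise
    return x
```

The loop runs t = T, ..., 1 and adds fresh Gaussian noise at every step except the last, mirroring the sampling description above.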
Recently, several alternative architectures have been proposed. Jabri et al. (2023) proposed Recurrent Interface Network (RIN), which decouples the core computation and the dimension of the data for more scalable image generation. Peebles & Xie (2023); Bao et al. (2023); Gao et al. (2023); Hatamizadeh et al. (2023) investigated the effectiveness of transformer-based architectures Dosovitskiy et al. (2021) for diffusion models. Yan et al. (2023) utilized state space models Gu et al. (2022) in DiffuSSM to present an attention-free diffusion model architecture. In this work, we propose a conditioning method for attention layers and test it on several CNN-based U-Net architectures. Note that our proposed method is applicable to all diffusion models utilizing attention layers. 2.2 Low-rank adaptation Using trainable adapters for specific tasks has been an effective approach for fine-tuning models in the realm of natural language processing (NLP) Houlsby et al. (2019); Pfeiffer et al. (2020). Low-rank adpatation (LoRA, Hu et al. (2022)) is a parameter-efficient fine-tuning method that updates a low-rank adapter: to fine-tune a pre-trained dense weight matrix W \u2208Rdout\u00d7din, LoRA parameterizes the fine-tuning update \u2206W with a low-rank factorization W + \u2206W = W + BA, where B \u2208Rdout\u00d7r, A \u2208Rr\u00d7din, and r \u226amin{din, dout}. LoRA and diffusion. Although initially proposed for fine-tuning LLMs, LoRA is generally applicable to a wide range of other deep-learning modalities. Recent works used LoRA with diffusion models for various tasks including image generation Ryu (2023); Gu et al. (2023); Go et al. (2023), image editing Shi et al. (2023), continual learning Smith et al. (2023), and distillation Golnari (2023); Wang et al. (2023b). While all these works demonstrate the flexibility and efficacy of the LoRA architecture used for fine-tuning diffusion models, to the best of our knowledge, our work is the first attempt to use LoRA as part of the core U-Net for diffusion models for full training, not fine-tuning. 4 \f2.3 Conditioning the score network For diffusion models to work properly, it is crucial that the score network s\u03b8 is conditioned on appropriate side information. In the base formulation, the score function \u2207xpt(x), which the score network s\u03b8 learns, depends on the time t, so this t-dependence must be incorporated into the model via time conditioning. When class-labeled training data is available, class-conditional sampling requires class conditioning of the score network Ho & Salimans (2021). To take advantage of data augmentation and thereby avoid overfitting, EDM Karras et al. (2022) utilizes augmentation conditioning Jun et al. (2020), where the model is conditioned on the data augmentation information such as the degree of image rotation or blurring. Similarly, SDXL Podell et al. (2024) uses micro-conditioning, where the network is conditioned on image resolution or cropping information. Finally, text-to-image diffusion models Saharia et al. (2022); Ramesh et al. (2022); Rombach et al. (2022); Podell et al. (2024) use text conditioning, which conditions the score network with caption embeddings so that the model generates images aligned with the text description. Conditioning attention layers. Prior diffusion models using CNN-based U-Net architectures condition only convolutional layers in the residual blocks by applying scale-and-shift or adaLN (see (left) of Figure 2). 
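For orientation, here is a minimal sketch contrasting the two mechanisms at play in this section: the conventional scale-and-shift conditioning applied to a convolutional feature map, and a low-rank (LoRA) update of a dense weight as in Section 2.2. Module names, shapes, and initialisation are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class ScaleShift(nn.Module):
    """Conventional conditioning of a conv feature map: h -> h * (1 + gamma) + beta."""
    def __init__(self, emb_dim, channels):
        super().__init__()
        self.proj = nn.Linear(emb_dim, 2 * channels)

    def forward(self, h, emb):                      # h: (B, C, H, W), emb: (B, emb_dim)
        gamma, beta = self.proj(emb).chunk(2, dim=-1)
        return h * (1 + gamma[..., None, None]) + beta[..., None, None]

def lora_update(W, A, B):
    """Low-rank update of a dense weight: W + B @ A, with B (d_out, r) and A (r, d_in)."""
    return W + B @ A
```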
In particular, attention blocks are not directly conditioned in such models. This includes the stateof-the-art diffusion models such as Imagen Saharia et al. (2022), DALL\u00b7E 2 Ramesh et al. (2022), Stable Diffusion Rombach et al. (2022), and SDXL Podell et al. (2024). To clarify, Latent Diffusion Model Rombach et al. (2022) based models use cross-attention method for class and text conditioning, but they still utilize scale-and-shift for time conditioning. There is a line of research proposing transformer-based architectures (without convolutions) for diffusion models, and these work do propose methods for conditioning attention layers. For instance, DiT Peebles & Xie (2023) conditioned attention layers using adaLN and DiffiT Hatamizadeh et al. (2023) introduced time-dependent multi-head self-attention (TMSA), which can be viewed as scale-and-shift conditioning applied to attention layers. Although such transformer-based architectures have shown to be effective, whether conditioning the attention layers with adaLN or scale-and-shift is optimal was not investigated. In Section 5.5 of this work, we compare our proposed LoRA conditioning on attention layers with the prior adaLN conditioning on attention layers, and show that LoRA is the more effective mechanism for conditioning attention layers. Diffusion models as multi-task learners. Multi-task learning Caruana (1997) is a framework where a single model is trained on multiple related tasks simultaneously, leveraging shared representations between the tasks. If one views the denoising tasks for different timesteps (or SNR) of diffusion models as related but different tasks, the training of diffusion models can be interpreted as an instance of the multi-task learning. Following the use of trainable lightweight adapters for Mixture-of-Expert (MoE) Jacobs et al. (1991); Ma et al. (2018), several works have utilized LoRA as the expert adapter for the multi-task learning Caccia et al. (2023); Wang et al. (2023a; 2024); Zadouri et al. (2024). Similarly, MORRIS Audibert et al. (2023) and LoRAHub Huang et al. (2023) proposed using the weighted sum of multiple LoRA adapters to effectively tackle general tasks. In this work, we took inspiration from theses works by using a composition of LoRA adapters to condition diffusion models. 3 Discrete-time LoRA conditioning Diffusion models such as DDPM Ho et al. (2020) and IDDPM Nichol & Dhariwal (2021) have a predetermined number of discrete timesteps t = 1, 2, . . . , T used for both training and sampling. We refer to this setting as the discrete-time setting. We first propose a method to condition the attention layers with LoRA in the discrete-time setting. In particular, we implement LoRA conditioning on IDDPM by conditioning the score network with (discrete) time and (discrete) class information. 5 \f3.1 TimeLoRA TimeLoRA conditions the score network for the discrete time steps t = 1, . . . , T. In prior architectures, time information is typically injected into only the residual blocks containing convolutional layers. TimeLoRA instead conditions the attention blocks. See (right) of Figure 2. Non-compositional LoRA. Non-compositional LoRA instantiates T independent rank-r LoRA weights A1, A2, . . . , AT , B1, B2, . . . , BT . The dense layer at time t becomes Wt = W + \u2206W(t) = W + BtAt for t = 1, . . . , T. To clarify, the trainable parameters for each linear layer are W, A1, A2, . . . , AT , and B1, B2, . . . , BT . In particular, W is trained concurrently with A1, A2, . . . , AT , and B1, B2, . 
. . , BT . However, this approach has two drawbacks. First, since T is typically large (up to 4000), instantiating T independent LoRAs can occupy significant memory. Second, since each LoRA (At, Bt) is trained independently, it disregards the fact that LoRAs of nearby time steps should likely be correlated/similar. It would be preferable for the architecture to incorporate the inductive bias that the behavior at nearby timesteps are similar. Compositional LoRA. Compositional LoRA composes m LoRA bases, A1, . . . , Am and B1, . . . , Bm, where m \u226aT. Each LoRA basis (Ai, Bi) corresponds to time ti for 1 \u2264t1 < \u00b7 \u00b7 \u00b7 < tm \u2264T. The dense layer at time t becomes Wt = W + \u2206W(t) = W + m X i=1 (\u03c9t)i BiAi, where \u03c9t = ((\u03c9t)1 , . . . , (\u03c9t)m) is the time-dependent trainable weights composing the LoRA bases. To clarify, the trainable parameters for each linear layer are W, A1, A1, . . . , Am, B1, B1, . . . , Bm, and \u03c9t. Since the score network is a continuous function of t, we expect \u03c9t \u2248\u03c9t\u2032 if t \u2248t\u2032. Therefore, to exploit the task similarity between nearby timesteps, we initialize (\u03c9t)i with a linear interpolation scheme: for tj \u2264t < tj+1, (\u03c9t)i = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 tj+1 \u2212t tj+1 \u2212tj i = j t \u2212tj tj+1 \u2212tj i = j + 1 0 otherwise. In short, at initialization, \u2206W(t) uses a linear combination of the two closest LoRA bases. During training, \u03c9t can learn to utilize more than two LoRA bases, i.e., \u03c9t can learn to have more than two non-zeros through training. Specifically, (\u03c91, . . . , \u03c9T ) \u2208Rm\u00d7T is represented as an m \u00d7 T trainable table implemented as nn.Embedding in Pytorch. 3.2 ClassLoRA Consider a conditional diffusion model with C classes. ClassLoRA conditions the attention layers in the score network with the class label. Again, this contrasts with the typical approach of injecting class information only into the residual blocks containing convolutional layers. See (right) of Figure 2. Since C is small for CIFAR-10 (C = 10) and the correlations between different classes are likely not strong, we only use the non-compositional ClassLoRA: Wc = W + \u2206W(c) = W + B\u2032 cA\u2032 c for c = 1, . . . , C. In other words, each LoRA (A\u2032 c, B\u2032 c) handles a single class c. When C is large, such as in the case of ImageNet1k, one may consider using a compositional version of ClassLoRA. 6 \f4 Continuous-SNR LoRA conditioning Motivated by (Kingma et al., 2021), some recent models such as EDM Karras et al. (2022) consider parameterizing the score function as a function of noise or signal-to-noise ratio (SNR) level instead of time. In particular, EDM Karras et al. (2022) considers the probability flow ODE Xt = \u2212\u02d9 \u03c3(t)\u03c3(t)s\u03b8(x; \u03c3(t)) dt, where s\u03b8(x; \u03c3) is the score network conditioned on the SNR level \u03c3. We refer to this setting as the continuousSNR setting. The main distinction between Sections 3 and 4 is in the discrete vs. continuous parameterization, since continuous-time and continuous-SNR parameterizations of score functions are equivalent. We choose to consider continuous-SNR (instead of continuous-time) parameterizations for the sake of consistency with the EDM model Karras et al. (2022). Two additional issues arise in the present setup compared to the setting of Section 3. 
First, by considering a continuum of SNR levels, there is no intuitive way to assign a single basis LoRA to a specific noise level. Second, to accommodate additional conditioning elements such as augmentations or even captions, allocating independent LoRA for each conditioning element could lead to memory inefficiency. 4.1 Unified compositional LoRA (UC-LoRA) Consider the general setting where the diffusion model is conditioned with N attributes cond1, . . . , condN, which can be a mixture of continuous and discrete information. In our EDM experiments, we condition the score network with N = 3 attributes: SNR level (time), class, and augmentation information. Unified compositional LoRA (UC-LoRA) composes m LoRA bases A1, . . . , Am and B1, . . . , Bm to simultaneously condition the information of cond1, . . . condN into the attention layer. The compositional weight \u03c9 = (\u03c91, . . . , \u03c9m) of the UC-LoRA is obtained by passing cond1, . . . condN through an MLP. Prior diffusion models typically process cond1, . . . , condN with an MLP to obtain a condition embedding v, which is then shared by all residual blocks for conditioning. For the j-th residual block, v is further processed by an MLP to get scale and shift parameters \u03b3j and \u03b2j: v = SharedMLP(cond1, . . . , condN) (\u03b3j, \u03b2j) = MLPj(v). The (\u03b3j, \u03b2j) is then used for the scale-and-shift conditioning of the j-th residual block in the prior architectures. In our UC-LoRA, we similarly use the shared embedding v and an individual MLP for the j-th attention block to obtain the composition weight \u03c9j(v): v = SharedMLP(cond1, \u00b7 \u00b7 \u00b7 , condN) \u03c9j(v) = MLPj(v). Then, the j-th dense layer of the attention block becomes W(cond1, . . . , condN) = W + \u2206W(cond1, . . . , condN) = W + m X i=1 \u03c9j,i(v)BiAi. To clarify, the trainable parameters for the j-th dense layer are W, A1, A2, . . . , Am, B1, B2, . . . , Bm, and the weights in MLPj. Shared across the entire architecture, the weights in SharedMLP are also trainable parameters. 7 \f5 Experiments In this section, we present our experimental findings. Section 5.1 describes the experimental setup. Section 5.2 first presents a toy, proof-of-concept experiment to validate the proposed LoRA conditioning. Section 5.3 evaluates the effectiveness of LoRA conditioning on attention layers with a quantitative comparison between diffusion models with (baseline) conventional scale-and-shift conditioning on convolutional layers; (only LoRA) LoRA conditioning on attention layers without conditioning convolutional layers; and (with LoRA) conditioning both convolutional layers and attention layers with scale-and-shift and LoRA conditioning, respectively. Section 5.4 investigates the effect of tuning the LoRA rank and the number of LoRA bases. Section 5.5 compares our proposed LoRA conditioning with the adaLN conditioning on attention layers. Section 5.6 explores the robustness of ClassLoRA conditioning compared to conventional scale-and-shift conditioning in extrapolating conditioning information. 5.1 Experimental Setup Diffusion models. We implement LoRA conditioning on three different diffusion models: nano diffusion Lelarge et al. (2024), IDDPM Nichol & Dhariwal (2021), and EDM-vp Karras et al. (2022). With nano diffusion, we conduct a proof-of-concept experiment. With IDDPM, we test TimeLoRA and ClassLoRA for the discrete-time setting, and with EDM, we test UC-LoRA for the continuous-SNR setting. Datasets. For nano diffusion, we use MNIST. 
For IDDPM, we use CIFAR-10 for both unconditional and class-conditional sampling, and ImageNet64, a downsampled version of the ImageNet1k, for unconditional sampling. For EDM-vp, we also use CIFAR-10 for both unconditional and class-conditional sampling and FFHQ64 for unconditional sampling. Configurations. We follow the training and architecture configurations proposed by the baseline works and only tune the LoRA adapters. For IDDPM, we train the model for 500K iterations for CIFAR-10 with batch size of 128 and learning rate of 1 \u00d7 10\u22124, and 1.5M iterations for ImageNet64 with batch size of 128 and learning rate of 1 \u00d7 10\u22124. For EDM, we train the model with batch size of 512 and learning rate of 1 \u00d7 10\u22123 for CIFAR-10, and with batch size of 256 and learning rate of 2 \u00d7 10\u22124 for FFHQ64. For sampling, in IDDPM, we use 4000 and 4001 timesteps for the baseline and LoRA conditioning respectively, and in EDM, we use the proposed Heun\u2019s method and sample images with 18 timesteps (35 NFE) for CIFAR-10 and 40 timesteps (79 NFE) for FFHQ64. Here, NFE is the number of forward evaluation of the score network and it differs from the number of timesteps by a factor of 2 because Heun\u2019s method is a 2-stage Runge\u2013Kutta method. Appendix A provides further details of the experiment configurations. Note that the baseline works heavily optimized the hyperparameters such as learning rate, dropout probability, and augmentations. Although we do not modify any configurations of the baseline and simply add LoRA conditioning in a drop-in fashion, we expect further improvements from further optimizing the configuration for the entire architecture and training procedure. LoRA. We use the standard LoRA initialization as in the original LoRA paper Hu et al. (2022): for the LoRA matrices (A, B) with rank r, A is initialized as Aij \u223cN(0, 1/r) and B as the zero matrix. Following Ryu (2023), we set the rank of each basis LoRA to 4. For TimeLoRA and ClassLoRA, we use 11 and 10 LoRA bases, and for UC-LoRA we use 18 and 20 LoRA bases for CIFAR-10 and FFHQ. Due to our constrained computational budget, we were not able to conduct a full investigation on the optimal LoRA rank or the number LoRA bases. However, we experiment with the effect of rank and number of LoRA bases to limited extent and report the result in Section 5.4. 5.2 Proof-of-concept experiments We conduct toy experiments with nano diffusion for both discrete-time and continuous-SNR settings. Nano diffusion is a small diffusion model with a CNN-based U-Net architecture with no skip connections with about 500, 000 trainable parameters. We train nano diffusion on unconditional MNIST generation with 8 \f3 different conditioning methods: conventional scale-and-shift, TimeLoRA, and UC-LoRA. As shown in Figure 3, conditioning with TimeLoRA or UC-LoRA yields competitive result compared to the conventional scale-and-shift conditioning. Figure 3: MNIST samples generated by nano diffusion trained with (1st row) conventional scale-and-shift conditioning; (2nd row) TimeLoRA with linear interpolation initialization; (3rd row) UC-LoRA; and (4th row) TimeLoRA with random initialization. Initialization of \u03c9i(t) for TimeLoRA. As shown in Figure 3 the choice of initialization of \u03c9i(t) for TimeLoRA impacts performance. With randomly initialized \u03c9i(t), nano diffusion did not converge after 100 epochs, whereas with \u03c9i(t) initialized with the linear interpolation scheme, it did converge. 
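The interpolation-initialised composition weights discussed above can be sketched as follows. This is a minimal illustration of the compositional TimeLoRA of Section 3.1, with the per-timestep weights omega_t stored in an nn.Embedding table as described in the text; the anchor placement, rank, and the einsum contraction are illustrative choices, not the reference implementation.

```python
import torch
import torch.nn as nn

class CompositionalTimeLoRA(nn.Module):
    """Dense layer with W_t = W + sum_i (omega_t)_i B_i A_i, using m << T bases."""

    def __init__(self, d_in, d_out, T, m, r=4):
        super().__init__()
        assert m >= 2
        self.weight = nn.Parameter(torch.empty(d_out, d_in))
        nn.init.xavier_uniform_(self.weight)
        # LoRA bases: A_i ~ N(0, 1/r), B_i = 0, so every low-rank update starts at zero.
        self.A = nn.Parameter(torch.randn(m, r, d_in) / r**0.5)
        self.B = nn.Parameter(torch.zeros(m, d_out, r))
        # Trainable composition weights omega_t, one m-vector per timestep.
        self.omega = nn.Embedding(T, m)
        # Linear-interpolation init: timestep t mixes its two nearest anchors,
        # with anchors spread uniformly over [0, T-1].
        step = (T - 1) / (m - 1)
        init = torch.zeros(T, m)
        for t in range(T):
            j = min(int(t / step), m - 2)
            frac = (t - j * step) / step
            init[t, j], init[t, j + 1] = 1 - frac, frac
        self.omega.weight.data.copy_(init)

    def forward(self, x, t):
        w = self.omega(torch.as_tensor(t, dtype=torch.long))      # (m,)
        delta = torch.einsum("m,mor,mri->oi", w, self.B, self.A)  # composed low-rank update
        return x @ (self.weight + delta).T
```

A non-compositional ClassLoRA corresponds to the degenerate case where each label indexes exactly one basis, i.e. the weight table is fixed to one-hot rows.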
Moreover, Figure 4 shows that even in UC-LoRA, \u03c9(t) shows higher similarity between nearby timesteps than between distant timesteps after training. This is consistent with our expectation that \u03c9i(t) \u2248\u03c9i(t\u2032) if t \u2248t\u2032. 250 500 750 1000 t1 200 400 600 800 1000 t2 250 500 750 1000 t1 1.0 0.5 0.0 0.5 1.0 Figure 4: Cosine similarity between \u03c9(t1) and \u03c9(t2) for UC-LoRA applied to nano diffusion (left) at initialization and (right) after training. At initialization, the cosine similarity between \u03c9(t1) and \u03c9(t2) has no discernible pattern. After training, however, the cosine similarity between \u03c9(t1) and \u03c9(t2) for t1 \u2248t2 is close to 1, implying their high similarity. 5.3 Main quantitative results Simply adding LoRA conditioning yields improvements. To evaluate the effectiveness of the drop-in addition of LoRA conditioning to the attention layers, we implement TimeLoRA and ClassLoRA to IDDPM and UC-LoRA to EDM, both with the conventional scale-and-shift conditioning on the convolutional layers unchanged. We train IDDPM with CIFAR-10, ImageNet64 and EDM with CIFAR-10, FFHQ64. As reported in Table 1, the addition of LoRA conditioning to the attention layers consistently improves the image generation quality as measured by FID scores Heusel et al. (2017) across different diffusion models and datasets with only (\u223c10%) addition of the parameter counts. Note these improvements are achieved without tuning any hyperparameters of the base model components. 9 \fInitializing the base model with pre-trained weights. We further test UC-LoRA on pre-trained EDM base models for unconditional CIFAR-10 and FFHQ64 generations. As reported in Table 1, using pre-trained weights showed additional gain on FID score with fewer number of interations (\u223c50%). To clarify, although we initialize the base model with pre-trained weights, we fully train both base model and LoRA modules rather than finetuning. LoRA can even replace scale-and-shift. We further evaluate the effectiveness of LoRA conditioning by replacing the scale-and-shift conditioning for the convolutional layers in residual blocks with LoRA conditioning for the attention blocks. The results of Table 1 suggest that solely using LoRA conditioning on attention layers achieves competitive FID scores while being more efficient in memory compared to the baseline score network trained with scale-and-shift conditioning on convolutional layers. For IDDPM, using LoRA in place of the conventional scale-and-shift conditioning consistently produces better results. Significant improvement is observed especially for class-conditional generation of CIFAR-10. For EDM, replacing the scale-and-shift conditioning did not yield an improvement, but nevertheless performed comparably. We note that in all cases, LoRA conditioning is more parameter-efficient (\u223c10%) than the conventional scale-and-shift conditioning. 5.4 Effect of LoRA rank and number of LoRA bases We investigate the effect of tuning the LoRA rank and the number of LoRA bases on the EDM model for unconditional CIFAR-10 generation and report the results in Table 2. Our findings indicate that using more LoRA bases consistently improves the quality of image generations. On the other hand, increasing LoRA rank does not guarantee better performance. 
These findings suggest an avenue of further optimizing and improving our main quantitative results of Section 5.3 and Table 1, which we have not yet been able to pursue due to our constrained computational budget. # basis rank FID # Params Varying # basis 9 4 1.99 57185519 18 4 1.96 57745499 36 4 1.95 58865459 Varying rank 18 2 1.93 57192539 18 4 1.96 57745499 18 8 1.96 58851419 Table 2: Effect of the number of LoRA bases and the LoRA rank on unconditional CIFAR-10 sampling of EDM with LoRA 5.5 Comparison with adaLN We compare the effectiveness of our proposed LoRA conditioning with adaLN conditioning applied to attention layers. Specifically, we conduct an experiment on EDM with scale-and-shift conditioning on convolutional layers removed and with (i) adaLN conditioning attention layers or (ii) LoRA conditioning attention layers. We compare the sample quality of unconditional and class-conditional CIFAR-10 generation and report the results in Table 3. We find that LoRA conditioning significantly outperforms adaLN conditioning for both unconditional and conditional CIFAR-10 generation. This indicates that our proposed LoRA conditioning is the more effective mechanism for conditioning attention layers in the U-Net architectures for diffusion models. Type uncond. cond. adaLN conditioning 2.16 2.0 LoRA conditioning 1.99 1.82 Table 3: Comparison of adaLN conditioning and LoRA conditioning on attention layers on EDM (without conditioning convolutional layers). We consider both unconditional and conditional CIFAR-10 generation. 10 \f5.6 Extrapolating conditioning information We conduct an experiment comparing two class-conditional EDM models each conditioned by scale-and-shift and ClassLoRA, for the CIFAR-10 dataset. During training, both models receive size-10 one-hot vectors (ci)j = \u03b4ij representing the class information. First, we input the linear interpolation \u03b1ci +(1\u2212\u03b1)cj (0 \u2264\u03b1 \u22641) of two class inputs ci and cj (corresponding to \u2018airplane\u2019 and \u2018horse\u2019, respectively) to observe the continuous transition between classes. As shown in the top of Figure 5, both the scale-and-shift EDM and ClassLoRA EDM models effectively interpolate semantic information across different classes. However, when a scaled input \u03b2ci is received, with \u03b2 ranging from -1 to 1, scale-and-shift EDM generates unrecognizable images when \u03b2 < 0, while ClassLoRA EDM generates plausible images throughout the whole range, as shown in the bottom of Figure 5. This toy experiment shows that LoRA-based conditioning may be more robust to extrapolating conditioning information beyond the range encountered during training. Appendix C provides further details. Figure 5: Results of (Top) interpolation of class labels in class-conditional EDM with (row1) ClassLoRA; (row2) scale-and-shift; (bottom) extrapolation of class labels in class-conditional EDM with (row1) ClassLoRA; (row2) scale-and-shift 6 Conclusion In this work, we show that simply adding Low-Rank Adaptation (LoRA) conditioning to the attention layers in the U-Net architectures improves the performance of the diffusion models. Our work shows that we should condition the attention layers in diffusion models and provides a prescription for effectively doing so. Some prior works have conditioned attention layers in diffusion models with adaLN or scale-and-shift operations, but we find that LoRA conditioning is much more effective as discussed in Section 5.5. 
Implementing LoRA conditioning on different and larger diffusion model architectures is a natural and interesting direction of future work. Since almost all state-of-the-art (SOTA) or near-SOTA diffusion models utilize attention layers, LoRA conditioning is broadly and immediately applicable to all such architectures. In particular, incorporating LoRA conditioning into large-scale diffusion models such as Imagen Saharia et al. (2022), DALL\u00b7E 2 Ramesh et al. (2022), Stable Diffusion Rombach et al. (2022), and SDXL Podell et al. (2024), or transformer-based diffusion models such as U-ViT Bao et al. (2023), DiT Peebles & Xie (2023), and DiffiT Hatamizadeh et al. (2023) are interesting directions. Finally, using LoRA for the text conditioning of text-to-image diffusion models is another direction with much potential impact. 11"
18
+ }
title_10K/test_title_short_2405.03962v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03962v1",
3
+ "title": "AdsorbDiff: Adsorbate Placement via Conditional Denoising Diffusion",
4
+ "abstract": "Determining the optimal configuration of adsorbates on a slab (adslab) is\npivotal in the exploration of novel catalysts across diverse applications.\nTraditionally, the quest for the lowest energy adslab configuration involves\nplacing the adsorbate onto the slab followed by an optimization process. Prior\nmethodologies have relied on heuristics, problem-specific intuitions, or\nbrute-force approaches to guide adsorbate placement. In this work, we propose a\nnovel framework for adsorbate placement using denoising diffusion. The model is\ndesigned to predict the optimal adsorbate site and orientation corresponding to\nthe lowest energy configuration. Further, we have an end-to-end evaluation\nframework where diffusion-predicted adslab configuration is optimized with a\npretrained machine learning force field and finally evaluated with Density\nFunctional Theory (DFT). Our findings demonstrate an acceleration of up to 5x\nor 3.5x improvement in accuracy compared to the previous best approach. Given\nthe novelty of this framework and application, we provide insights into the\nimpact of pre-training, model architectures, and conduct extensive experiments\nto underscore the significance of this approach.",
5
+ "authors": "Adeesh Kolluru, John R Kitchin",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "physics.chem-ph"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "AdsorbDiff: Adsorbate Placement via Conditional Denoising Diffusion",
16
+ "main_content": "Introduction Heterogenous catalysis plays an important role in developing chemicals in industries, environmental protection through converters, and the synthesis of alternative fuels (Liu & Li, 2017; Zitnick et al., 2020). Modeling these chemical reactions involve an intermediate adsorbate on a catalyst slab which determines the efficacy of the catalyst for that particular reaction. Discovering a novel catalyst computationally involves screening through billions of candidates and finding the lowest energy configuration. 1Department of Chemical Engineering, Carnegie Mellon University. Correspondence to: Adeesh Kolluru <[email protected]>, John R. Kitchin <[email protected]>. Finding the lowest energy configuration for an adsorbate and slab requires a global optimum (which is non-convex) search across different sites on the slab. Conventional approaches solve this in two steps (1) heuristically place the adsorbate on certain important sites and (2) perform optimization with quantum mechanical calculators like Density Functional Theory (DFT) on each of these sites. The lowest energy site out of these is considered for calculating adsorption energy, which is a thermodynamic descriptor for how good that catalyst is. With recent advances in machine learning methods for predicting forces, it has become possible to perform optimization with ML force fields (MLFFs) instead of Density Functional Theory (DFT) making this process faster and easier to test many sites and find better minima. These ML force fields are trained on DFT data to predict energies and forces corresponding to different adslab configurations. The recent release of the OC20-Dense dataset (Lan et al., 2023) signifies a significant advancement in the computation of the lowest energy adslab configuration. This work employs a blend of heuristic and random adsorbate placements across 100 sites, with subsequent optimizations across each site using Density Functional Theory (DFT) to calculate adsorption energy. The study further introduces AdsorbML, a paradigm characterized by a brute-force exploration of initial adsorbate placements. Employing pre-trained machine learning (ML) force fields from OC20, AdsorbML streamlines the optimization process, culminating in the determination of the lowest energy adsorbate-slab (adslab) configuration. The predictive accuracy of these configurations is rigorously validated against DFT single-points or complete DFT optimization. This hybrid approach results in a computational acceleration of 2000-fold in adsorption energy calculations compared to the sole reliance on DFT calculations. Recent developments in graph neural network (GNN) based ML architectures have increased the accuracies of adsorption energy prediction significantly by encoding geometric information of atoms in more explicit ways. However, there\u2019s little to no work done on improving the adsorption site prediction which could help us get away with the currently used brute-force approach. In this work, we develop a novel conditional denoising diffu1 arXiv:2405.03962v1 [cs.LG] 7 May 2024 \fAdsorbate placement via conditional denoising diffusion sion framework for adsorbate placement. We first formulate a diffusion framework over the space of the 2D translation and 3D rigid rotation of an adsorbate molecule over the slab considering periodic boundary conditions (PBC) of the slab. 
Through the learned diffusion process, we sample the most stable site by iteratively updating the center of mass of adsorbate and rigid orientation. Performing a naive unconditional diffusion framework on the most optimal adsorbate site and orientation \u2014 corresponding to the lowest energy adslab configuration out of 100 densely sampled calculations in OC20-Dense \u2014 leads to throwing away 99% of DFT optimal energy data. Therefore, we modify the diffusion training to be conditional on relative energies (relative across densely sampled sites of an adslab combination). This leads to significant improvements in accuracies and sample efficiency during diffusion training. After sampling for the optimal site and orientation of adsorbate on the slab, we perform ML force field (MLFF) optimization and DFT single-point verification similar to AdsorbML. This comprehensive end-to-end evaluation helps in robust assessment of the practical impact of the learned diffusion model. There have been significant advances in diffusion generative models in molecular and material discovery, and analogous problems in molecular docking on proteins. However, this is the first work to frame the adsorbate placement problem considering all its symmetries with the slab in a diffusion framework. Intuitively, the reverse diffusion process of AdsorbDiff helps in skipping multiple minima sites due to its energy-based conditional sampling which is followed by a local optimization with a DFT-learned MLFF to find a global optimum. To facilitate further research on this problem, we provide comprehensive results on the importance of GNN architectures for the diffusion task, show the importance of pretraining, and demonstrate the success of our approach to in-distribution (ID) and out-of-distribution (OOD) splits. The summary of contributions of this work are \u2022 We propose AdsorbDiff, a novel conditional denoising diffusion framework designed to leverage the translation, rotation, and periodic symmetries inherent in adsorbate and slab interactions. Additionally, this framework is adept at efficiently predicting the lowest energy site by conditional training on relative energies. \u2022 We present our results in a comprehensive end-to-end evaluation framework, integrated with DFT, to accurately gauge the true capability of our approach in predicting optimal adsorption energies. \u2022 We achieve a 31.8% success rate, 3.5x higher than the naive AdsorbML baseline of 9.1% with a single site prediction. Alternatively, we demonstrate that a comparable level of accuracy could be achieved by AdsorbML by employing 5x more placements. \u2022 We demonstrate that pretraining on large-scale local optimization data can significantly improve the results on the search for global optima. \u2022 We show that diffusion results exhibit insignificant dependence on GNN architectures, in contrast to the notable differences observed for the same architectures when trained on DFT forces. \u2022 We highlight the model\u2019s generalization capabilities to previously unseen adsorbates and slabs. 2. Background and Related Work Force-fields: Energy and forces (as a gradient of energy with respect to positions) are calculated using ab initio quantum mechanical methods like Density Functional Theory (DFT). ML models can be trained to predict these energies and forces, and are called ML force-fields (MLFFs). These force fields can be utilized to perform structure optimization to get the lowest energy structures. 
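As a toy illustration of the place-then-relax workflow that such force fields enable, the snippet below uses ASE to drop an adsorbate onto a slab and run a local optimisation. The cheap EMT potential stands in for a DFT-trained MLFF, and the slab, adsorbate, and convergence thresholds are arbitrary choices for illustration only.

```python
from ase.build import fcc111, add_adsorbate
from ase.calculators.emt import EMT
from ase.constraints import FixAtoms
from ase.optimize import BFGS

# Toy adslab: an O adsorbate on a Cu(111) slab with the lower layers fixed.
slab = fcc111("Cu", size=(3, 3, 3), vacuum=10.0)
add_adsorbate(slab, "O", height=2.0, position="fcc")
slab.set_constraint(FixAtoms(mask=[a.tag > 1 for a in slab]))  # relax only the top layer and the adsorbate

# EMT is a stand-in here; in the setting above one would attach a pretrained
# ML force field (e.g. an OC20-trained model) as the calculator instead.
slab.calc = EMT()
opt = BFGS(slab, logfile=None)
opt.run(fmax=0.05, steps=200)      # local optimisation down to a force threshold
print("relaxed energy (eV):", slab.get_potential_energy())
```

In AdsorbML-style pipelines this relaxation is repeated from many initial placements; AdsorbDiff instead aims to start it from a single diffusion-predicted site.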
Optimization: For adsorption energy prediction, we start with an optimized adsorbate and slab, place the adsorbate on a slab, and perform optimization to get an adslab configuration with the lowest energy. Usually, second-order optimizers like BFGS, L-BFGS, Conjugate gradient descent, etc are used to solve this optimization problem. Since this is non-convex, the initial guess of adsorbate placement or the strategy of optimization is critical to finding an adslab configuration corresponding to the global optimum. AdsorbML (Lan et al., 2023) method starts with combining heuristic and random initial placements which is a brute-force approach to finding better minima. \u201dEasy Potential\u201d from (Schaarschmidt et al., 2022) trains a simple harmonic potential to guess this initial placement. Learn2Hop (Merchant et al., 2021) also learns the optimization landscape to navigate through better and hop through local minima. There are approaches like minima hopping that help in navigating through the entire optimization landscape with a force-field (Jung et al., 2023) and help in finding better minima, but these could be computationally expensive. GNNs: Message-Passing Neural Networks (MPNN) are a class of graph neural networks (GNN) that are utilized across material property prediction tasks. Different architectures encode the geometric information in different ways. SchNet (Sch\u00a8 utt et al., 2018) only encodes the distance information. Including more explicit geometric features have improved the model prediction as DimeNet (Gasteiger et al., 2020b;a) incorporates triplets. SphereNet (Liu et al., 2021), GemNet (Gasteiger et al., 2021; 2022) incorporates complete geometric information explicitly by giving triplets and quadruplets information. PaiNN (Sch\u00a8 utt et al., 2021) incorporates directional information and applies only linear operations on those features. Equivariant models like NequIP (Batzner et al., 2022), Allegro (Musaelian et al., 2023), MACE (Batatia et al., 2022), SCN (Zitnick et al., 2 \fAdsorbate placement via conditional denoising diffusion Figure 1. Overview of AdsorbDiff: Random initial site and orientation for the adsorbate are selected, followed by sampling over 2D translation, 3D rigid rotations, and considering periodic boundary conditions (PBC) to predict the optimal site and orientation. MLFF optimization is then conducted from the predicted site with a fixed interstitial gap until convergence. The final prediction undergoes constraint verification, and DFT verification is performed on valid structures to calculate success rates. 2022), Equiformer (Liao & Smidt, 2022; Liao et al., 2023) utilize spherical harmonics in representing the geometric features. Diffusion Models: Diffusion models are a class of generative models that have shown impressive results across different domains starting from computer vision (Dhariwal & Nichol, 2021; Croitoru et al., 2023), language models (Gong et al., 2022), temporal data modeling, to applications in molecules (Xu et al., 2022; 2023; Arts et al., 2023; Hoogeboom et al., 2022; Jing et al., 2022), proteins (Wu et al., 2022; Trippe et al., 2022; Watson et al., 2022; 2023) and materials (Xie et al., 2021; Fu et al., 2023; Zeni et al., 2023; Merchant et al., 2023; Yang et al., 2023b). There are different kinds of formulations proposed for diffusion models like denoising diffusion probabilistic models (DDPMs), score-based generative models (SGMs), and stochastic differential equations (Score SDEs) (Yang et al., 2023a). 
Many of these formulations have been adapted to problems in molecular and material discovery. For example, CDVAE (Xie et al., 2021) adapts concepts from noise-conditioned score networks (NCSN) for bulk discovery. Conditional diffusion has also been recently utilized across proteins (Krishna et al., 2024), catalyst and materials (Zheng et al., 2023) for generating structures with required properties. Diffusion models have also been recently utilized for molecular docking on proteins (Corso et al., 2022). Although this problem is somewhat analogous to placing adsorbate on a slab, as far as we know there hasn\u2019t been previous work on formulating adsorbate placement in a diffusion framework. AdsorbDiff also differs from molecular docking in several key aspects \u2013 2D translation formulation, periodic boundary conditions, conditional denoising formulation, and the requirement of DFT level accuracy as opposed to simple force-fields for proteins making our end-to-end evaluation with DFT critical. 3. AdsorbDiff 3.1. Overview The objective of this research is to enhance the efficiency of adsorption energy calculation, representing the lowest energy configuration of an adsorbate on a slab. The methodology of this work involves the initial placement of an adsorbate on a random site within the 2D surface of the slab, followed by reverse diffusion to predict the optimal adsorption site and orientation. Employing machine learning force field optimization, the structure undergoes iterative updates with an optimizer until forces converge close to 0. Subsequently, the final structure is verified for compliance with constraints essential for defining adsorption energy. On the optimized structure, a single Density Functional Theory (DFT) calculation is conducted to obtain the predicted energy (EP red). A successful outcome is determined by the predicted energy being within 0.1 eV or lower than the DFT baseline of adsorption energy in OC20-Dense data, indicating the model\u2019s ability to provide a comparable or superior estimate of adsorption energy (shown in Figure 1). 3 \fAdsorbate placement via conditional denoising diffusion The code is open-sourced with MIT License1. 3.2. Adsorbate placement Various adsorbate placement strategies were explored for the OC20-Dense dataset, incorporating a combination of heuristic and random approaches. Specifically, 100 sites were selected for each adslab configuration, utilizing a blend of heuristic and random placements. The heuristic placement involved strategically situating the adsorbate\u2019s binding site on either an on-top site, hollow site, or bridge site, with a specified interstitial gap denoting the distance between the connecting atom of the slab and the corresponding adsorbate atom. Additional random sites are introduced through the random rotation of the adsorbate along the normal of the slab, accompanied by a slight translational wobble along the surface from the heuristic site. 3.3. Diffusion for adsorbate placement In this work, our objective is to develop a diffusion model aimed at predicting the adsorbate orientation and site corresponding to the lowest energy, as established through benchmarking with the OC20-Dense dataset. The adsorbate motion is constrained within a manifold (Mc) and utilizes the combined action group (A), as described in DiffDock (Corso et al., 2022). 
This manifold permits the adsorbate to navigate towards configurations with lowenergy adslab states through a combination of translations, rotations, and torsion angle adjustments. Note, for fair comparisons with our baselines, torsion angle alterations are disregarded in our analysis due to the smaller size of the adsorbate employed in this study. This approach aligns with the methodology of AdsorbML, which does not introduce randomness in torsion angles as part of its benchmark. In our framework, we specifically consider translations in the 2D plane parallel to the slab while accounting for periodic boundary conditions (PBC). The z-coordinate is meticulously aligned to denote the normal direction of the slab and the diffusion process is executed across the xycoordinates. Therefore, the adsorbate movements are associated with the 2D translation group T(2), and rigid rotations are modeled using the SO(3) group. The translation operation, denoted as Atr : T(2) \u00d7 R2n \u2192R2n, is defined as Atr(r, x)i = xi + r, employing the isomorphism T(2) \u223c = R2, where xi \u2208R2 represents the position of the i-th adsorbate atom. Similarly, the rotation operation, denoted as Arot : SO(3) \u00d7 R3n \u2192R3n, is defined by Arot(R, x)i = R(xi \u2212\u00af x) + \u00af x, where \u00af x = 1 n P i xi, signifying rotations around the center-of-mass of the adsorbate. For the initial coordinates of adsorbate, we select a random 1https://github.com/AdeeshKolluru/ AdsorbDiff point on the slab. This point is considered as the center-ofmass of the adsorbate in fractional coordinates. We then convert from fractional coordinates to real coordinates and perform a reverse diffusion process to get to the lowest energy site (as shown in Algorithm 1). The work conducted by De et al. (De Bortoli et al., 2022) and Corso et al. (Corso et al., 2022) has demonstrated the applicability of the diffusion framework to Riemannian manifolds. In this context, the score model constitutes the tangent space, and a geodesic random walk serves as the reverse stochastic differential equation (SDE) solver. The score model is trained using denoising score matching (Song & Ermon, 2019), wherein a score function s\u03b8(x) is learned to approximate the gradient of the probability density \u2207xp(x) at varying noise levels (as shown in Algorithm 2). The learned scores for translations and rotations are treated as independent entities, assuming the tangent space is a direct sum of individual tangent spaces, with contributions from torsion being neglected. The forward SDE for both translation and rotation is defined as dx = q d\u03c32(t) dt dw, 4 \fAdsorbate placement via conditional denoising diffusion where w represents the corresponding Wiener process. In the translational scenario within T(2), the model learns a score for a standard Gaussian distribution with variance \u03c32(t). For rotations in SO(3), the diffusion kernel is governed by the IGSO(3) distribution, which can be sampled in the axis-angle parameterization. This involves sampling a unit vector \u03c9\u2032 \u2208so(3) uniformly and a random angle \u03c9 from the interval [0, \u03c0], as outlined by Equations 1 and 2. The score of diffusion kernel is defined in Equation 3. The computation of R\u2032 = R(\u03c9\u02c6 \u03c9)R, where R is the result of applying the Euler vector \u03c9\u02c6 \u03c9 to R, has been established in prior work by Yim et al. (Yim et al., 2023). 
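A minimal numpy sketch of that sampling procedure: truncate the series in Eq. (2), tabulate the density of Eq. (1) on a grid, draw the angle by inverting an interpolated CDF, and sample the axis uniformly on the sphere. The truncation level, grid resolution, and numerical guards below are illustrative assumptions.

```python
import numpy as np

def sample_igso3_axis_angle(sigma, n_samples=1, l_max=200, grid=2000, rng=None):
    """Draw rotations from IGSO(3) as Euler vectors omega * omega_hat (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    omega = np.linspace(1e-4, np.pi, grid)              # avoid the 0/0 limit at omega = 0
    l = np.arange(l_max)[:, None]
    # Truncated series f(omega) from Eq. (2), then the density p(omega) from Eq. (1).
    f = np.sum((2 * l + 1) * np.exp(-l * (l + 1) * sigma**2 / 2)
               * np.sin((l + 0.5) * omega) / np.sin(omega / 2), axis=0)
    p = np.clip((1 - np.cos(omega)) / np.pi * f, 0.0, None)
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    angles = np.interp(rng.random(n_samples), cdf, omega)   # inverse-CDF sampling of the angle
    axes = rng.normal(size=(n_samples, 3))                  # uniform axis on the sphere
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    return angles[:, None] * axes
```

In practice the truncated series and CDF would be precomputed over a grid of noise levels rather than rebuilt per call.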
To efficiently carry out the score computation and sampling processes, it is feasible to precompute the truncated infinite series and interpolate the cumulative distribution function (CDF) of p(\u03c9). p(\u03c9) = 1 \u2212cos(\u03c9) \u03c0 f(\u03c9) (1) f(\u03c9) = \u221e X l=0 (2l + 1) exp \u0012 \u2212l(l + 1)\u03c32 2 \u0013 \u00d7 sin \u0012\u0012 l + 1 2 \u0013 \u03c9 \u0013 sin \u0010\u03c9 2 \u0011 (2) \u2207ln pt(R\u2032|R) = \u0012 d d\u03c9 log f(\u03c9) \u0013 \u02c6 \u03c9 (3) 3.4. Conditional denoising diffusion for adsorbate placement While the OC Challenge set provides densely calculated adsorption energies for 244 systems, a total of 244 * 100 DFT optimization benchmarks were conducted. This involved performing 100 different random placements for each configuration. Notably, the naive denoising diffusion setup was exclusively trained on the 244 lowest energy configurations. To leverage the entirety of the DFT optimization data, a conditional diffusion model is employed. In this model, the optimized position is conditioned on the relative energy, specifically relative to the energy of the lowest energy configuration (Ec rel-i = Ec min \u2212Ec i ). This approach allows for a more comprehensive utilization of the available DFT optimization data. 3.5. Graph Neural Network (GNN) architecture The inputs to the ML model are the 3D positions of all input atoms from the adslab configuration and their corresponding atomic numbers. The outputs predict per-atom 3D vectors. These vectors are forces in the case of force fields and the score function in the case of diffusion. To predict multiple score functions (for translation and rotation), multiple output heads are trained each predicting independent score functions. All architectures used in this work come under the messagepassing neural network (MPNN) framework of graph neural networks (GNNs). MPNNs operate by passing messages between nodes in the graph, allowing information to be exchanged and aggregated iteratively. The key components of an MPNN include message passing, updating node states, and global readout. In the message-passing step, nodes exchange information based on their local context, and this information is then used to update the states of the nodes (as shown in Equation 4). h(t+1) v = Update \u0010 h(t) v , Aggregate \u0010 {m(t) u\u2192v | u \u2208N(v)} \u0011\u0011 (4) Here, h(t) v represents embeddings of node v at iteration t, m(t) u\u2192v denotes the message from node u to v at iteration t, N(v) represents the neighborhood of node v, and Update and Aggregate are differentiable functions for updating node states and aggregating messages, respectively. In our study, we systematically investigate diverse architectures employed in the training of diffusion models to discern the significance of architectural decisions in this context. Specifically, we have chosen to assess the performance of PaiNN, GemNet-OC, and EquiformerV2, each distinguished by its treatment of explicit geometric information and rotational symmetries (Duval et al., 2023). This selection is grounded in the diverse characteristics they bring to the table. Furthermore, we employ these architectures in benchmarking against OC20 force-field evaluation, thereby facilitating comparative analysis of architectural significance in the realms of force-fields and diffusion. 4. Results In this section, we present results demonstrating the impact of AdsorbDiff in accelerating the search for adsorption energy or better global optima. 
Specifically, we demonstrate the impact of conditional denoising training over unconditional training and a randomly placed adsorbate baseline. This random baseline is equivalent to performing AdsorbML on a single site (Nsite=1). Additionally, we demonstrate the impact of pretraining, model architectures, and the generalization of this approach to new adsorbates and slabs. 4.1. Datasets We utilize two publicly available datasets for this work OC20-Dense (Lan et al., 2023) and OC20 (Chanussot et al., 2021). OC20: Open Catalyst 2020 (OC20) is a large-scale dataset that contains converged DFT optimization trajectories of 5 \fAdsorbate placement via conditional denoising diffusion 460k unique adslab configurations, encompassing 55 unique elements and 74 adsorbates. Note that these optimizations are local optimizations performed with a single heuristic placement. ML force field models are trained on the forces derived from these DFT trajectories. Additionally, the optimized structure from OC20 is utilized for pre-training the diffusion model. OC20-Dense: The OC20-Dense dataset serves as a DFT benchmark for adsorption energies, employing dense placement on 100 random sites per adslab configuration, followed by DFT optimization. This dataset releases both in-distribution (ID) and out-of-distribution (OOD) data, relative to OC20. The ID data incorporates adsorbates and slabs from OC20\u2019s training set but presents different combinations and configurations, while OOD introduces new adsorbates and/or slabs not found in the OC20 training set. A subset of OC20-Dense ID and OOD was utilized in the Open Catalyst Challenge 2023, hosted at the AI for Science Workshop during NeurIPS 2023 2. We split the ID data into 80/20 ratios for training the diffusion model and validating the sampling process. These smaller subsets make it computationally cheaper to perform end-to-end iterations. 4.2. Metric and constraints Our success metric is defined by the final energy calculated through DFT. For real-world applications, this energy (DDF T T otal) is used in calculating the adsorption energy EDF T Ads as EDF T Adsorption = EDF T T otal \u2212EDF T Slab \u2212EDF T Adsorbate, where EDF T Slab and EDF T Adsorbate are the independent energies of slab and adsorbate respectively. This adsorption energy acts as a thermodynamic description of how good a catalyst is for downstream application. The DFT Success Rate (SR) is defined as the percentage of valid structures within 0.1 eV or lower of the DFT computed adsorption energy benchmark in the OC20-Dense data (as described in AdsorbML). This is computationally expensive to calculate but is accurate. Metrics calculated from ML predictions are inexpensive but are also inaccurate, discussed further in Appendix C. Since we calculate adsorption energies, the adsorbate and slab must not change during optimization. Therefore, the structures are considered an anomaly due to (1) adsorbate desorption: adsorbate moves far away from the slab, (2) adsorbate dissociation: atoms in adsorbate dissociate into multiple adsorbates, (3) slab mismatch/reconstruction: slab reconstructs into a completely different structure during optimization (4) adsorbate intercalation: when any of the adsorbate atoms detaches and get into the slab. Experimental setup: All presented results are based on the DFT success rate metric as defined in the preceding 2https://opencatalystproject.org/ challenge.html section. 
Throughout the diffusion process, we employ the EquiformerV2 architecture, unless explicitly stated otherwise, owing to its state-of-the-art performance in AdsorbML. Additionally, for MLFF optimization, we utilize GemNetOC pre-trained on OC20, chosen for its lower inference cost. Further specifics regarding model and training hyperparameters are available in Appendix D. All results are shown on the val ID split apart from the OOD section. 4.3. Conditional vs Unconditional diffusion 0 5 10 15 20 25 30 35 40 DFT Success Rate (%) Random Unconditional Conditional 9.1% 11.4% 31.8% Conditional vs Unconditional Diffusion (Nsite=1) Figure 2. Comparison of conditional and unconditional diffusion with a baseline of random placement. Conditional diffusion training on relative energies of configurations of adslab significantly improves success rates over unconditional training and AdsorbML baseline. We demonstrate the importance of conditional training on relative energies (as shown in Section 3.4) over unconditional diffusion training in Figure 2. We compare both of these approaches to a naive baseline of AdsorbML with a single site (Nsite=1) where MLFF optimization is performed on a random adsorbate placement. It is noteworthy that the performance of unconditional training is suboptimal, this may be ascribed to the unexploited potential of additional data made available through conditional training. 4.4. AdsorbDiff vs AdsorbML AdsorbML conducts MLFF optimization and DFT evaluations on adsorption sites randomly placed within the system. A comparative analysis is drawn with AdsorbDiff, where the prediction of adsorption sites is facilitated through the utilization of diffusion models. As depicted in Figure 3, it is evident that AdsorbDiff exhibits notably superior performance, particularly at lower Nsites. However, as the number of adsorption sites (Nsites) increases, AdsorbDiff tends to either converge to or underperform in comparison to the brute force approach employed by AdsorbML. Adsorbate sites sampled from AdsorbDiff have less diversity by design as it\u2019s trained to predict the global optima. We calculate the average across the standard deviation of the points sampled at 10 Nsites and get 8.1 \u02da A for AdsorbML and 2.7 \u02da A for AdsorbDiff. AdsorbML\u2019s brute force placements have more randomness which leads to fewer anomalies post the MLFF 6 \fAdsorbate placement via conditional denoising diffusion 2 4 6 8 10 Number of Sites 10 15 20 25 30 35 40 45 DFT Success Rate (%) 9.1% 31.8% 20.5% 34.1% 34.1% 36.3% 47.7% 41.0% AdsorbDiff vs AdsorbML AdsorbML AdsorbDiff AdsorbDiff (Nsite=1) Figure 3. DFT Success Rates (%) for AdsorbDiff and AdsorbML across a varying number of site predictions. AdsorbDiff performs 3.5x better than AdsorbML utilizing a single site prediction. At higher sites, AdsorbML performs better due to the brute-force nature of site prediction that reduces anomalies. 2 4 6 8 10 Number of Sites 10 15 20 25 30 Anomalies 31.8% 25.0% 18.2% 20.5% 11.4% 22.7% 6.8% 13.6% AdsorbML AdsorbDiff Figure 4. Anomalies in AdsorbDiff and AdsorbML with respect to Nsites. A system is labeled as anomalous if all its predicted sites result in anomalies. AdsorbML has fewer anomalies than AdsorbDiff at higher Nsites due to more randomness in initial sites. optimization process shown in Figure 4. 4.5. 
Impact of pretraining Conditional diffusion benefits from training on a dataset that is 100 times more extensive than the unconditional approach, a consequence of leveraging multiple local optima within a unique adslab configuration. The substantial increase in training data size manifests in a notable enhancement in the success rate for the conditional approach. The OC20 IS2RE dataset, containing optimization data for 460,000 distinct adslab combinations, serves as a valuable resource for pretraining the diffusion model. It is important to acknowledge that this pretraining process results in a model that learns the local optima of an adslab combination, with the caveat that the model may not capture global optima for an adslab combination. 0 5 10 15 20 25 30 35 40 DFT Success Rate (%) Random PT Zero-shot PT Conditional 9.1% 29.6% 31.8% Impact of Pre-training (Nsite=1) Figure 5. Impact of pretraining on 460k OC20 local optima data on DFT Success Rate. PT Zero-shot measures zero-shot generalization of OC20 pre-trained model to OC20-Dense data. PT Conditional is finetuned on OC20 Dense data conditionally on relative energies of adslab configurations. Random baseline corresponds to randomly placed adsorbate. IS2RS Pretraining (PT) Zero-shot: Taking advantage of the diffusion model pre-trained on OC20 IS2RE data, we conduct a zero-shot validation on the OC20-Dense ID val split. This experimental setup allows us to assess the model\u2019s ability to predict better global optima having trained on a large dataset of local optima. Notably, we observe a substantial increase in DFT success rate in the zero-shot setting (as shown in Figure 5). IS2RS Pretraining (PT) Conditional: In this approach, we utilize the pre-trained model using the OC20-Dense data as described in Section 3.4. We observe that although this gives a 2% improvement over zero-shot, it converges to the same results as just training conditionally on OC20-Dense (shown in Figure 5). 4.6. Impact of architectures Architectures characterized by richer geometric information and extensive many-body interaction capabilities, such as eSCN and EquiformerV2, have demonstrated superior performance in force evaluations within the OC20 dataset compared to simpler models like PaiNN, which primarily encode directional information and apply linear transformations. Our benchmarking involves the evaluation of three architectures that exhibit progressively improved performance in OC20 Force MAE, revealing significant differences among them. This evaluation is specifically conducted in the context of the zero-shot assessment following pretraining (PT zeroshot) on an extensive dataset encompassing 460,000 OC20 instances. This choice is inspired by insights from the GemNet-OC paper (Gasteiger et al., 2022), suggesting that certain architectural choices manifest optimal performance only at higher data scales. 7 \fAdsorbate placement via conditional denoising diffusion 0 5 10 15 20 25 30 35 40 DFT Success Rate (%) PaiNN GemNet-OC EquiformerV2 27.3% 27.3% 29.6% Impact of GNN architectures on diffusion Figure 6. Impact of Graph Neural Network (GNN) architectures on the diffusion process for DFT Success Rate keeping other parts of the framework same. Different architectures perform similarly on the task of diffusion sampling. 
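The conditional training signal described in Sections 3.4 and 4.5 can be illustrated with a deliberately simplified denoising step. The real model is a GNN (e.g. EquiformerV2) predicting translation and rotation scores over the full adslab graph; the `CondScoreNet` below is only a toy MLP over the adsorbate center of mass, conditioned on the relative energy E_rel of the configuration.

```python
import torch
import torch.nn as nn

class CondScoreNet(nn.Module):
    """Toy score model: inputs are the noised adsorbate center of mass,
    the noise level sigma, and the relative energy E_rel of the configuration."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1 + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, com, sigma, e_rel):
        return self.net(torch.cat([com, sigma, e_rel], dim=-1))

def conditional_dsm_step(model, opt, com, e_rel, sigma_max=2.0):
    """One denoising-score-matching step conditioned on E_rel.
    Unconditional (pre)training corresponds to passing e_rel = 0."""
    sigma = torch.rand(com.shape[0], 1) * sigma_max + 1e-3
    noise = torch.randn_like(com) * sigma
    target = -noise / sigma**2                 # score of the Gaussian kernel
    pred = model(com + noise, sigma, e_rel)
    loss = ((sigma**2) * (pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```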
Interestingly, in the realm of the diffusion task, we note that the disparity in success rates among these architectures is marginal (as shown in Figure 6) which has been recently demonstrated in applications of molecular generation tasks as well (Wang et al., 2023). The intuition behind this result is that the diffusion model\u2019s score function can be thought of as learning a harmonic potential (Xie et al., 2021). Harmonic potentials are simpler force-fields than ab-initio DFT calculations involved in OC20 forces. This could result in simpler architectures being able to capture the underlying complexity of the diffusion task defined in our work. 4.7. OOD generalization We measure the success of AdsorbDiff in out-of-distribution (OOD) cases where the model hasn\u2019t seen the adsorbate or the slab even during the pre-training on OC20. We pick a random 50 samples out of 200 validation OOD split defined in Open Catalyst Challenge 2023. We observe a marginal decrease of only 3.8% in results for the OOD case compared to the ID scenario and consistently observe significant improvement over the AdsorbML (Nsite=1) baseline. 0 5 10 15 20 25 30 35 40 DFT Success Rate (%) Random AdsorbDiff 8.4% 28% OOD Results Figure 7. Comparison of DFT Success Rate for In-Distribution (ID) and Out-of-Distribution (OOD) splits using the AdsorbDiff method. Random baseline corresponds to randomly placed adsorbate. 4.8. Inference cost In the case of conditional diffusion, our approach maintains a maximum step limit of 100, with adsorbate placement converging, on average, within 98 steps. In contrast, for MLFF optimization with a maximum step limit of 300 and Fmax criteria of 0.01 eV/A (consistent with AdsorbML), the convergence occurs in approximately 286 steps. Consequently, for scenarios with a single adsorption site (Nsite 1), AdsorbDiff incurs approximately 34% more inference cost than AdsorbML, given the GNN architecture for diffusion and MLFF optimization is the same. This end-to-end ML framework is O(104) times faster than the conventional DFT pipelines (Lan et al., 2023). In Section 4.6, we illustrate that simpler and faster models such as PaiNN yield comparable performance to more intricate and slower models like EquiformerV2. This enhances the efficiency of our diffusion-based approach, as its computational burden becomes negligible in comparison to MLFF optimization, which would require more computationally intensive ML architectures (details in Appendix B). 5. Conclusion This work introduces AdsorbDiff, a novel conditional denoising diffusion framework adept at leveraging inherent symmetries in adsorbate and slab interactions, enabling efficient prediction of the lowest energy site. The proposed end-to-end evaluation framework, coupled with Density Functional Theory (DFT), provides a robust assessment of our approach\u2019s capability to predict optimal adsorption energies. Notably, AdsorbDiff achieves a remarkable 31.8% success rate with a single site prediction, surpassing the naive AdsorbML baseline (9.1%) by 3.5x. We demonstrate the benefits of pretraining on large-scale local optima of adsorption sites. Interestingly, we find the diffusion method\u2019s performance to be not significantly dependent on the GNN architecture choice. Furthermore, our model\u2019s demonstrated generalization to previously unseen adsorbates and slabs underscores its adaptability and robustness. 6. 
Limitations and Future Work Our findings emphasize that anomalies play a substantial role in diminishing success rates, particularly when multiple sites are predicted. While some works have successfully employed constraints, such as Hookean constraints, to mitigate these anomalies, implementing them in a computationally efficient manner for larger adsorbates remains non-trivial. Addressing this challenge stands out as a crucial avenue for future research. Furthermore, incorporating torsion angles is a promising direction for further improvement, especially for larger adsorbates. Impact statement The goal of this work is to accelerate catalyst discovery using machine learning. AdsorbDiff substantially accelerates catalyst search, which has a positive impact on the development of renewable energy technologies and various chemicals. However, the same approach could also be used to accelerate the search for catalysts for hazardous chemicals. Acknowledgements We thank Minkai Xu, Muhammed Shuaibi, Nima Shoghi, Abhishek Das, and the FAIR Chemistry team at Meta for their valuable feedback and discussions."
17
+ }
title_10K/test_title_short_2405.03989v2.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03989v2",
3
+ "title": "A Method for Parsing and Vectorization of Semi-structured Data used in Retrieval Augmented Generation",
4
+ "abstract": "This paper presents a novel method for parsing and vectorizing\nsemi-structured data to enhance the functionality of Retrieval-Augmented\nGeneration (RAG) within Large Language Models (LLMs). We developed a\ncomprehensive pipeline for converting various data formats into .docx, enabling\nefficient parsing and structured data extraction. The core of our methodology\ninvolves the construction of a vector database using Pinecone, which integrates\nseamlessly with LLMs to provide accurate, context-specific responses,\nparticularly in environmental management and wastewater treatment operations.\nThrough rigorous testing with both English and Chinese texts in diverse\ndocument formats, our results demonstrate a marked improvement in the precision\nand reliability of LLMs outputs. The RAG-enhanced models displayed enhanced\nability to generate contextually rich and technically accurate responses,\nunderscoring the potential of vector knowledge bases in significantly boosting\nthe performance of LLMs in specialized domains. This research not only\nillustrates the effectiveness of our method but also highlights its potential\nto revolutionize data processing and analysis in environmental sciences,\nsetting a precedent for future advancements in AI-driven applications. Our code\nis available at https://github.com/linancn/TianGong-AI-Unstructure.git.",
5
+ "authors": "Hang Yang, Jing Guo, Jianchuan Qi, Jinliang Xie, Si Zhang, Siqi Yang, Nan Li, Ming Xu",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.DB",
9
+ "cats": [
10
+ "cs.DB"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
14
+ "gt": "A Method for Parsing and Vectorization of Semi-structured Data used in Retrieval Augmented Generation",
15
+ "main_content": "Introduction Large Language Models (LLMs) present substantial benefits in various specialized fields, particularly due to their proficiency in processing and deriving \finsights from extensive volumes of unstructured text. These models excel in converting intricate, unstructured data into organized formats, which is crucial for tasks such as predicting reaction conditions in scientific studies or isolating pertinent legal clauses from extensive documents. This capability is invaluable, especially for augmenting experimental databases and melding computational and experimental data, with notable applications in environmental science(Rillig et al., 2023). In the medical sector, LLMs have shown remarkable efficacy in named entity recognition (NER) tasks, facilitating the extraction and categorization of biomedical information from expansive data sets(Lee et al., 2020). This has significantly contributed to both research and clinical practice. Similarly, in the legal realm, LLMs have proven effective in analyzing complex legal documents, pinpointing crucial legal terms, and enhancing contract analysis(L. Yue et al., 2024). These applications underscore the transformative impact of LLMs in processing large and complex datasets into actionable insights, thus optimizing operations in specialized domains such as healthcare and law. However, the integration of LLMs in specialized domains still faces challenges(Peng et al., 2023.). A notable issue is the generation of 'hallucinations' (L. Yang et al., 2024),which means the creation of factually incorrect, yet seemingly plausible information. This problem is compounded when addressing highly specialized or nuanced queries within professional contexts. This limitation predominantly originates from the generalized nature of the datasets used to train these models, which often lack the depth and specificity required for particular legal and medical scenarios(S. Pan et al., 2024). Consequently, this underscores the critical need for a strategic integration of LLMs with domain-specific expertise. Such a fusion, complemented by continuous evaluation and refinement, is essential to ensure the accuracy and relevance of the models' outputs, especially in fields where precision is paramount. In the realm of ecological environmental management, the Retrieval-Augmented Generation (RAG) approach is highly relevant for LLMs applications. RAG integrates the capabilities of LLMs with external databases, enabling access to and incorporation \fof essential data during generation. This enhances the model's ability to provide accurate, context-specific information, crucial in environmental management's complex domain. However, implementing RAG faces significant challenges, notably in developing a vector-based knowledge base essential for accurate data retrieval. The complexity of creating this base from vast, unstructured environmental data is compounded by a lack of efficient structuring methods. Addressing these data processing challenges is imperative to fully utilize RAG's potential, thereby improving LLMs' effectiveness in ecological environmental governance. In this study, we present an efficient method for processing documents in the `.docx` format and constructing a vector database, leveraging an unstructured open-source toolkit, the function calling capacity of OpenAI and the vector database platform of Pinecone. 
This paper details the method and their application in processing professional books for wastewater treatment plant operation and constructing a vector database for use with Retrieval-Augmented Generation (RAG), aiming to improve the expertise of large language models in the domain of wastewater treatment plant operation. 2 Background and Related work Retrieval Augmented Generation (RAG) within large language models (LLMs) marks a significant stride in AI research, blending advanced knowledge retrieval with the generation capabilities of LLMs. This approach aims to boost the accuracy and relevance of the models' responses while preserving their contextual depth. Current research focuses on fine-tuning the retrieval process, ensuring that the information fetched aligns closely with user queries and enhances the quality of the model's output(Lewis et al., 2021.). A key challenge lies in integrating this retrieved information smoothly into the generation process, creating responses that are both coherent and contextually appropriate(Rohde et al., 2021). A significant area of exploration is in improving the retrieval phase to filter out irrelevant information or 'noise', ensuring that the data used by the model is of high quality and relevance(Karpukhin et al., 2020). Researchers are also working on \fmaking LLMs more adaptable in using this retrieved data across various topics, enhancing the algorithms that control how the model accesses and uses this information(Kalyan et al., 2021). Central to RAG's function in LLMs is the creation of vector databases from unstructured or semi-structured data like texts and web pages. These databases store information in a format that LLMs can easily access and use. Current research, including work on Transformer-based models, is pivotal in developing methods to efficiently transform vast amounts of data into these useful vector formats (Devlin et al., 2019). However, a noticeable gap in this area is the lack of simple, efficient methods for creating these vector databases. Existing techniques, while effective, tend to be complex and resource-heavy, limiting their broader application. Addressing this challenge with more user-friendly vectorization methods is crucial. Such advancements would significantly widen the scope and effectiveness of LLMs, enabling them to process and generate more nuanced, context-rich language responses in a range of fields, thus enhancing the practical utility and reach of LLMs in various applications. 3 Core Functions However, a noticeable gap in this area is the lack of simple, efficient methods for creating these vector databases. Existing techniques, while effective, tend to be complex and resource-heavy, limiting their broader application. Addressing this challenge with more user-friendly vectorization methods is crucial. Such advancements would significantly widen the scope and effectiveness of LLMs, enabling them to process and generate more nuanced, context-rich language responses in a range of fields, thus enhancing the practical utility and reach of LLMs in various applications. \fFig. 1 Parsing and Vectorization of Semi-structured Data process framework 3.1 Data Preparation In this phase, a diverse array of sources including books, reports, scholarly articles, and data tables is compiled.These data largely consists of semi-unstructured data, encompassing a variety of file formats such as `.html`, `pdf`, `xml`, `docx`, `xlsx` and etc. 
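The paper does not name the conversion tool used in this step, so the sketch below uses LibreOffice in headless mode purely as one plausible way to normalize mixed inputs to `.docx`; the binary name and flags may differ by platform, and scanned PDFs would need OCR first.

```python
import subprocess
from pathlib import Path

def convert_to_docx(src: Path, out_dir: Path) -> Path:
    """Convert a single document (.html, .pdf, .xml, .xlsx, ...) to .docx."""
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "docx",
         "--outdir", str(out_dir), str(src)],
        check=True,
    )
    return out_dir / (src.stem + ".docx")

# Example with hypothetical paths:
# docx_path = convert_to_docx(Path("raw/report.pdf"), Path("converted"))
```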
Considering the substantial volume of data to be processed, the `.docx` format stands out due to its uniform standardization, high-quality text, ease of editing, broad compatibility, and rich metadata content, making it highly advantageous for efficient bulk processing and structured data extraction.In this project, API functionalities are employed to integrate open-source tools for the purpose of converting diverse data formats into the .docx format. For the assurance of effective post-processing, it is imperative that the content in the transformed `.docx` files, including headings, textual elements, and tables, be conformed to a standardized format. This standardization process involves harmonizing the font type, font size, inter-paragraph spacing, and line spacing across all headings, main text, and table contents. 3.2 Automated parsing and splitting During the parsing process, the `.docx` files are divided into multiple elements including titles, texts, images, tables, headers and footers with the partitioning function, utilizing detectron2, a deep learning-based object detection system (Unstructured, 2023). This partition function uses a combination of the styling information in the document and the structure of the text to determine the type of a text element. \fAs part of data preparation for an NLP model, these elements require further filtering, to mitigate potential detrimental impacts on model efficiency caused by superfluous content. This ensuing phase entails a deliberate omission of specific components, particularly 'Headers' and 'Footers'. As a result, this refinement process retains only four core elements: 'Title', 'Text', 'Image', and 'Table', thereby ensuring a concise and targeted dataset for advanced analysis.. For the \"Title\" and \"Text\" elements, prior to integration into NLP models, rigorous data cleaning is essential to avoid efficiency losses caused by extraneous information. To tackle this issue, specialized functions within the 'Unstructured Documentation' cleaning framework are utilized (Unstructured, 2023). These functions effectively merge paragraphs separated by newlines, remove initial bullets and dashes, and eliminate surplus whitespace. This process significantly enhances the textual data's clarity and structural integrity, which is crucial for effective model performance. For the \"Table\" elements, the core textual information is retained in the element's 'text attribute'. To preserve the formatting fidelity of these tables, their HTML representation is also stored, specifically within 'element.metadata.text_as_html'. This dual-storage approach is critical for ensuring that the table's structural and visual integrity is maintained in its rendered form. For the \"Image\" elements, the 'vision_completion' approach leverages the capabilities of the 'gpt-4-vision-preview' API. This method involves generating specific queries that prompt GPT to provide detailed textual descriptions of images. Once these descriptions are obtained, they are inserted back into the data collection, replacing the positions originally occupied by the images. This process ensures a seamless transition from visual to textual data representation in the dataset.. 3.3 Chunking In the 'Unstructured Core Library,' essential for document processing in RAG contexts, the 'chunk_by_title' function is noteworthy for its methodical segmentation of documents into distinct subsections, identifying titles as section markers \f(Unstructured, 2023). 
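A sketch of the partitioning, filtering, and cleaning steps of Section 3.2, using the open-source `unstructured` library named in the text; exact signatures and element categories may vary between library versions, and "NarrativeText" plays the role of the "Text" element type described above.

```python
from unstructured.partition.docx import partition_docx
from unstructured.cleaners.core import (
    clean_bullets,
    clean_extra_whitespace,
    group_broken_paragraphs,
)

elements = partition_docx(filename="example.docx")   # hypothetical input file

records = []
for el in elements:
    if el.category in ("Header", "Footer"):
        continue                                      # drop page furniture
    if el.category == "Table":
        # keep the raw text plus the HTML rendering of the table
        records.append({"type": "Table",
                        "text": el.text,
                        "html": el.metadata.text_as_html})
    elif el.category in ("Title", "NarrativeText"):
        text = clean_extra_whitespace(
            clean_bullets(group_broken_paragraphs(el.text)))
        records.append({"type": el.category, "text": text})
    elif el.category == "Image":
        records.append({"type": "Image", "element": el})  # captioned later with GPT-4V
```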
Notably, it treats elements like tables and images as separate sections. The inclusion of the 'multi-page_sections' parameter is significant, facilitating the formation of multi-page sections that maintain thematic continuity. Unlike common practices, the 'combine_text_under_n_chars' parameter set to zero allows each text piece, regardless of length, to be recognized as an individual section, preserving the document's detailed structure. The default 'new_after_n_chars' parameter relies on the function\u2019s internal logic for starting new sections. The 'max_characters' parameter, adjusted to 4096, accommodates larger sections, tailored to the specific requirements of the document structure and content 3.4 Vector Database construction By leveraging OpenAI's \"text-embedding-ada-002\" model via API, embedding vectors are generated that correspond to specific content. This involves transforming data, initially partitioned into chunks through a preceding chunking process, into vector formats. The utilization of the \"text-embedding-ada-002\" model is pivotal in enabling large language models to locate content in our dataset that aligns with the given input prompt. The resultant vector data are then stored in Pinecone's vector database, where the feature vectors maintain a dimensionality of 1536. This strategic configuration significantly enhances the database's ability to conduct similarity searches and offers notable advantages in data storage capacity. The application of the \"text-embedding-ada-002\" model thus integrates OpenAI's advanced natural language processing prowess with Pinecone's efficient vector data management, providing a powerful and versatile solution for text search and analysis purposes. 4 Experiments and Discussion In this segment of the research, we have selected one scholarly papers in Chinese and another in English, along with one book in each language, to evaluate the efficacy of the methodologies employed in this study and the performance of the Retrieval-Augmented Generation (RAG) technique. These papers and books include textual, pictorial, and tabular elements. These two categories represent the predominant forms of publicly released documents at present. Papers are commonly \favailable in an editable PDF format, whereas publicly released books are often found in scanned or image-based PDF formats.The specifics of the documents and books utilized for testing are detailed in Table 1. 4.1 Data Processing Results 4.1.1 Results of Text Processing Results The processing results for text information are displayed in Figure 2 and 3, featuring four distinct text blocks from the test papers and books: two in Chinese and two in English. The outcomes are evident in the \"Title\" and \"Cleaned Text\" sections. Upon converting all documents to the `.docx` format and applying the prescribed process, the methodology proficiently identifies \"Title\" across various text types and performs comprehensive text cleaning and organization. This underscores the method's robustness in managing different data structures and multiple languages. 
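The chunking parameters quoted in Section 3.3 and the embedding/indexing flow of Section 3.4 could be wired together roughly as follows. Parameter values mirror the text; the SDK call shapes for `unstructured`, OpenAI, and Pinecone reflect recent client versions and may differ, and the index name and credentials are placeholders.

```python
from unstructured.chunking.title import chunk_by_title
from openai import OpenAI
from pinecone import Pinecone

chunks = chunk_by_title(
    elements,                        # output of the partitioning step above
    multipage_sections=True,
    combine_text_under_n_chars=0,    # every text piece becomes its own section
    max_characters=4096,
)

client = OpenAI()
index = Pinecone(api_key="YOUR_API_KEY").Index("rag-knowledge-base")  # placeholders

vectors = []
for i, chunk in enumerate(chunks):
    emb = client.embeddings.create(
        model="text-embedding-ada-002",   # 1536-dimensional embeddings
        input=chunk.text,
    ).data[0].embedding
    vectors.append({"id": f"chunk-{i}",
                    "values": emb,
                    "metadata": {"text": chunk.text}})

index.upsert(vectors=vectors)
```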
Table 1 Information of papers and books Type Title Page Count Language Paper Full-scale upgrade activated sludge to continuous-flow aerobic granular sludge Implementing microaerobic-aerobic configuration with internal separators 12 English \u63d0\u8d28\u589e\u6548\u80cc\u666f\u4e0b\u6392\u6c34\u7ba1\u7f51\u68c0\u6d4b\u6280\u672f\u7684 \u5e94\u7528\u4e0e\u603b\u7ed3 8 Chinese Book Modelling plastic flows in the European Union value chain 132 English \u6c61\u6c34\u5904\u7406\u8bbe\u5907\u64cd\u4f5c\u7ef4\u62a4\u95ee\u7b54 369 Chinese \fFig. 2 Text Processing Results Instances of papers: (a) and (c) are instances of original texts from English and Chinese papers, respectively,while (b) and (d) represent the results of the segmentation into chunks. \fFig. 3 Text Processing Results Instances of books: (a) and (b) are instances of original texts from English and Chinese books, respectively,while (c) and (d) represent the results of the segmentation into chunks. 4.1.2 Results of Image Processing Results The results of transforming images into textual descriptions using LLM are presented in Table 2. This research employs an embedding method that leverages the GPT 4.0 LLM to convert images into text, thereby preserving the completeness of the information. The findings indicate that the key information in both English and Chinese images can be effectively extracted. However, due to the model's limited support for Chinese elements, images containing Chinese require additional inputs such as captions or related information to improve the model\u2019s recognition accuracy and efficacy, preventing ineffective identifications. \fTable 2 Image processing results NO. Original Image Cleaned Text in Chunks 1 2 3 4 4.1.3 Results of Table Processing Results In the process of data handling, table processing presents significant challenges as tables often contain extensive parameter and comparative analysis information. Such information significantly enhances a LLM's capabilities in data understanding, pattern recognition, and knowledge integration, thereby improving the accuracy and relevance of text generation. In this study, we employed the \"text_as_html\" method to handle tabular data, with the results displayed in table 3.The corresponding text, \frendered as an HTML document, appears as demonstrated in Figure 4.Our analysis indicates that the sections of tables within chunks are expressed in HTML syntax, allowing the saved HTML files to accurately restore the original structure and hierarchy of the tables when opened, ensuring the correct identification and extraction of information. Table 3 Table processing results NO. Original Table Cleaned text in Chunks 1 2 3 \fFig. 4 Results of tables elements in chunks converted to html file 4.2 Zero-shot Question Answering Results under RAG To evaluate the effectiveness of vector knowledge bases constructed using the methodologies outlined in this study for enhancing the expertise of large language models, GPT 4.0 was employed to process the papers and books utilized in this research. A set of fifty questions was randomly generated, focusing on the content of the selected documents. Subsequently, three questions in English and two in Chinese were randomly chosen for testing purposes. GPT 4.0 was then tasked with scoring the responses obtained from these tests, providing an objective measure of the effectiveness of the vector knowledge bases in augmenting the domain-specific knowledge of the language model across different languages. 
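The image-to-text step ("vision_completion") described in Sections 3.2 and 4.1.2 is not specified in detail, so the following shows only one way to call the `gpt-4-vision-preview` endpoint on an extracted figure, optionally passing a caption hint, which the text suggests is helpful for Chinese figures.

```python
import base64
from openai import OpenAI

client = OpenAI()

def describe_image(path: str, caption_hint: str = "") -> str:
    """Return a textual description of a figure extracted from a document."""
    with open(path, "rb") as fh:
        b64 = base64.b64encode(fh.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Describe the technical content of this figure. {caption_hint}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        max_tokens=500,
    )
    return resp.choices[0].message.content
```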
The results of the English and Chinese assessments are presented in Tables 4 and 5, respectively, offering a clear overview of the performance of the vector knowledge bases in enhancing the expertise of GPT 4.0. \fTable 4 Zero-shot question answer results in English NO. Question and answer Scores 1 Question1\uff1aExplain how the \"Transfer Coefficients\" (TCs) are used to simulate plastic flows in the form of a paragraph? Answer by GPT 4.0 75/100 Answer by RAG 95/100 \fNO. Question and answer Scores 2 Question2\uff1aWhich predefined scenarios showed the greatest potential improvement when assessing the 2025 plastic recycling targets? Answer by GPT 4.0 60/100 Answer by RAG 95/100 \fNO. Question and answer Scores 3 Question3\uff1aHow did the microaerobic-aerobic configuration impact the microbial community structure and pollutant removal pathways? Answer by GPT 4.0 85/100 Answer by RAG 95/100 \fTable 5 Zero-shot question answer results in Chinese NO. Question and answer Scores 1 Question1\uff1a\u51e0\u79cd\u5e38\u7528\u6811\u8102\u518d\u751f\u5242\u7684\u9002\u7528\u5bf9\u8c61\u3001\u6d53\u5ea6\u8303\u56f4\u53ca\u76f8\u5bf9\u7528\u91cf\u662f\u591a \u5c11\uff1f Answer by GPT 4.0 80/100 Answer by RAG 95/100 \fNO. Question and answer Scores 2 Question2\uff1a\u5728\u6392\u6c34\u7ba1\u7f51\u68c0\u67e5\u4e2d\u7535\u78c1\u68c0\u67e5\u6cd5\u6709\u54ea\u4e9b\u5e94\u7528\u6848\u4f8b\uff1f Answer by GPT 4.0 75/100 Answer by RAG 90/100 The results presented in this study provide compelling evidence that vector knowledge bases constructed using the methodologies described herein can significantly enhance the ability of large language models to acquire and apply domain-specific information. This improvement is manifested across several critical dimensions, including clarity, specificity, accuracy, technical depth, and comprehensiveness. By effectively augmenting the knowledge acquisition process, \fthese vector knowledge bases enable language models to generate responses of substantially higher quality, demonstrating their efficacy in improving the performance of large language models in specialized domains. These findings underscore the potential of vector knowledge bases as a powerful tool for enhancing the accuracy and relevance of language model outputs in domain-specific contexts, paving the way for more effective and efficient natural language processing applications in various specialized fields. Conclusion The methodologies developed in this study significantly enhance the capability of LLMs to leverage domain-specific knowledge through the construction of vector knowledge bases. Our experiments demonstrate the effectiveness of the RAG approach, where LLMs, equipped with these bases, show substantial improvements in generating precise, relevant, and contextually rich responses. This advancement is particularly evident in the environmental science and wastewater treatment sectors, where the integration of vector databases enables the detailed understanding and management of complex data. The successful application of these methods promises a broader utility of LLMs, paving the way for more sophisticated natural language processing applications in various specialized fields. This research not only validates the feasibility of enhancing LLMs performance with structured vector databases but also sets a foundation for future innovations in AI-driven data processing and analysis in environmental engineering."
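For completeness, the zero-shot RAG answering loop evaluated above can be sketched as: embed the question, retrieve the nearest chunks from Pinecone, and let the chat model answer from that context. Model names, the index name, and the prompt wording below are illustrative rather than the authors' exact setup.

```python
from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()
index = Pinecone(api_key="YOUR_API_KEY").Index("rag-knowledge-base")  # placeholders

def answer(question: str, top_k: int = 5) -> str:
    q_emb = client.embeddings.create(
        model="text-embedding-ada-002", input=question
    ).data[0].embedding
    hits = index.query(vector=q_emb, top_k=top_k, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in hits.matches)
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context where possible."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```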
16
+ }
title_10K/test_title_short_2405.04003v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04003v1",
3
+ "title": "High Energy Density Radiative Transfer in the Diffusion Regime with Fourier Neural Operators",
4
+ "abstract": "Radiative heat transfer is a fundamental process in high energy density\nphysics and inertial fusion. Accurately predicting the behavior of Marshak\nwaves across a wide range of material properties and drive conditions is\ncrucial for design and analysis of these systems. Conventional numerical\nsolvers and analytical approximations often face challenges in terms of\naccuracy and computational efficiency. In this work, we propose a novel\napproach to model Marshak waves using Fourier Neural Operators (FNO). We\ndevelop two FNO-based models: (1) a base model that learns the mapping between\nthe drive condition and material properties to a solution approximation based\non the widely used analytic model by Hammer & Rosen (2003), and (2) a model\nthat corrects the inaccuracies of the analytic approximation by learning the\nmapping to a more accurate numerical solution. Our results demonstrate the\nstrong generalization capabilities of the FNOs and show significant\nimprovements in prediction accuracy compared to the base analytic model.",
5
+ "authors": "Joseph Farmer, Ethan Smith, William Bennett, Ryan McClarren",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "physics.comp-ph",
9
+ "cats": [
10
+ "physics.comp-ph",
11
+ "cs.LG"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "High Energy Density Radiative Transfer in the Diffusion Regime with Fourier Neural Operators",
16
+ "main_content": "Introduction Marshak waves, a common type of driven supersonic radiative heat waves, play a key part in the physics of internal confinement fusion (ICF) [1\u20134], astrophysics [5\u20137] and other high energy density phenomena [8]. In most cases, a full description of the radiative transfer process is not required. Therefore, approximations are in order. The diffusion approximation is one of these and is considered the simplest [9]. In some cases, analytic solutions to the radiation diffusion equation can be useful in understanding experiments [10\u201316]. These analytic or semi-analytic models can be thought of as a reduced order approximation of the full system, which is itself a simplification. As examples, [10] reduces a two dimensional diffusion system via asymptotic expansion. The diffusion system is an approximation to higher order radiation transport equations. Marshak, the namesake of these waves, reduced a partial differential equation (PDE) into an ordinary differential equation (ODE) [13, 14]. Reduced order solutions have the benefit of simpler calculation, as solving an ODE is usually preferable to solving a PDE, and they can be interrogated to clarify physical relationships between parameters. However, coming to a semi-analytic or analytic solution often involves invoking simplifications which may debase the accuracy of the prediction. Thus, the motive for this inquiry is to take a widely used and appreciated semi-analytic diffusion model, the Hammer and Rosen Marshak wave model (HR) [11], and provide a correction to the model\u2019s limiting assumptions in a computationally efficient manner. Classical numerical solvers such as finite difference, finite element, or finite volume methods discretize continuous equations into a finite set of algebraic equations [17\u2013 22]. These numerical solvers can be computationally expensive for high dimensional problems and for domains with complex geometries. In recent years, approaches that leverage ML have garnered support to alleviate these challenges [23\u201325]. In particular, neural operators, a class of ML models, have emerged as a promising solution to these challenges. These operators learn mappings between infinite-dimensional function spaces, effectively approximating differential or integral operators that govern PDEs in a data driven manner [26, 27]. One of the key advantages of neural operators is that they only need to be trained once to learn a family of PDEs, and obtaining a solution for a new instance of a PDE parameter requires only a forward pass of the network. Furthermore, neural operators are discretizationinvariant as they share network parameters across discretizations, allowing for the transfer of solutions between meshes. The Fourier neural operator (FNO) [28] is a seminal neural operator that learns network parameters in Fourier space. The FNO uses fast Fourier transform (FFT) for spectral decomposition of the input and computation of the convolution integral kernel in the Fourier space. This approach has shown promising results in learning the underlying physics of various PDEs including Burgers, Darcy, and Navier-Stokes equations. In this work, we propose to use FNO to learn the physics of Marshak waves for various input-output pairs. 
We develop two models: a base model which takes the physical parameters of the Marshak wave problem as input and outputs the time dependent wavefront position and temperature distribution as given by the HR model, 2 \fand a hybrid approach which corrects the analytic HR solution to output the numerical solution to the full flux-limited diffusion equation. The structure of this paper is as follows. The diffusion model for Marshak waves is introduced in Section 2. Hammer and Rosen\u2019s approximation is summarized in Section 3. The neural network that is employed to correct the HR model is discussed in Section 4. Finally, results and conclusions are offered in Sections 5 and 6. 2 Marshak wave problem We study radiation diffusion in planar geometry, which assumes variation of the dependent variables only in a single direction, x. The evolutions of the radiation and material energy density are governed by [29], \u2202er \u2202t = \u2202 \u2202x c 3\u03ba(\u03c1, T) \u2202er \u2202x + c\u03ba(aT 4 \u2212er), (1) \u2202e \u2202t = c\u03ba(e \u2212aT 4) (2) where, er is the energy density of the radiation and e is the energy density of the material. c is the speed of light, \u03ba is the opacity with units of inverse length, a is the radiation constant, defined a \u22614\u03c3 c where \u03c3 is the Stefan-Boltzmann constant. T is the material temperature and \u03c1 is the material density. A Marshak boundary condition will specify the incoming radiation flux [29], er(x = 0, t) \u2212 \u0012 2 3\u03ba \u2202er \u2202x \u0013 \f \f \f \f x=0 = 4 c Finc. (3) where Finc is the incident flux on the surface at x = 0. The material energy density is found via integration of the specific heat, e = Z T 0 dT \u2032 Cv(T \u2032). (4) Solutions to Eq. (1) in the optically thick limit are recognizable by sharp drops in temperature near the wavefront and gradual temperature variation behind the front. This is because the radiation temperature and material temperature are in equilibrium behind the wavefront. Thus, is often valid to assume equilibrium between the radiation temperature and and material temperature, i.e. er = aT 4. This assumption simplifies Eqs. (1) and (2) to a single equation for the material temperature, \u2202e \u2202t = 4 3 \u2202 \u2202x 1 \u03ba(\u03c1, T) \u0012 \u2202 \u2202x\u03c3T 4 \u0013 (5) with the boundary condition at the surface, T(x = 0, t) = Ts(t). (6) 3 \fFurthermore, the equation of state is specified so that, e = fT \u03b2\u03c1\u2212\u00b5, (7) This is the formulation given in [11]. The parameters f, \u03b2, \u00b5 are found by fitting experimental data, as in [30]. 3 Hammer and Rosen approximation The Hammer and Rosen model for supersonic thermal radiation diffusion is a perturbative, semi-analytic, one dimensional solution to the diffusion equation under mild limiting assumptions. In particular, this model assumes planar geometry, power law representations for the opacity, 1 K = gT \u03b1\u03c1\u2212\u03bb, and material internal energy, e = fT \u03b2\u03c1\u2212\u00b5, and a constant density. These assumptions transform Eq. (5) into, \u03c1\u2202e \u2202t = 4 3 \u2202 \u2202x \u0012 1 K\u03c1 \u2202 \u2202x\u03c3T 4 \u0013 , (8) where \u03c1 is the material density, e is the internal energy, \u03c3 is the Stefan-Boltzmann constant, and T is the radiation temperature. 
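For readability, the equilibrium-diffusion model stated in Eqs. (5)-(7) above can be collected in one place:

```latex
\begin{align}
\frac{\partial e}{\partial t}
  &= \frac{4}{3}\,\frac{\partial}{\partial x}
     \left[\frac{1}{\kappa(\rho,T)}\,
     \frac{\partial}{\partial x}\bigl(\sigma T^{4}\bigr)\right],
  \qquad T(x=0,t) = T_{s}(t), \\
e &= f\,T^{\beta}\,\rho^{-\mu}.
\end{align}
```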
The application of these assumptions and some simplification leads to the expression \u2202T \u03b2 \u2202t = C \u22022 \u2202x2 T 4+\u03b1 (9) where our constants are collected into the term C = 4 4 + \u03b1 4 3 1 f g\u03c1\u00b5\u22122\u2212\u03bb (10) This model predicts the position of the wave front as a function of time as the solution to an integral expression, then provides an explicit expression for the temperature profile in the material. The model can accommodate an arbitrary radiation temperature boundary condition. The Hammer and Rosen model gives the position of the wavefront, xf, as x2 f (t) = 2 + \u03f5 1 \u2212\u03f5CT \u2212\u03b2 s Z t 0 T 4+\u03b1 s d\u02c6 t (11) where Ts is the boundary temperature, \u03f5 = \u03b2 4+\u03b1 is a combination of terms from the power laws, and xf is the heat front position as a function of time, t. With knowledge of the wavefront position a simple expression can be evaluated for the temperature profile: T 4+\u03b1 T 4+\u03b1 s (x, t) = \u0014\u0012 1 \u2212x xf \u0013 \u0012 1 + \u03f5 2 \u0012 1 \u2212 x2 f CH2\u2212\u03f5 dH dt \u0013 x xf \u0013\u00151/(1\u2212\u03f5) . (12) Here H = T 4+\u03b1 s . One hallmark of this approximate solution is that it is very inexpensive to evaluate. In practice, and when compared to computing a numerical solution, 4 \fthis method is effectively immediate. For this reason, it has proven to be particularly helpful for rapid iteration during the design process. 4 Fourier neural operator model We now turn to the consideration of producing a machine learning model to compute Marshak wave solutions. For this task we turn to the Fourier Neural Operator. In this section we use standard notation from the ML literature; regrettably, this overlaps with the standard notation for Marshak waves at times. g f \u00c6 \u00d8 \u220f \u00b5 \u03a9 Parameters 1.0 \u00a3 10\u00b04 1.0 \u00a3 10\u00b02 1.0 \u00a3 100 1.0 \u00a3 102 Values 0 1 2 3 t (ns) 0.000 0.045 0.090 0.135 0.180 0.225 0.270 xf (cm) 0.00 0.02 0.04 0.06 xf (cm) 0 1 2 T (HeV) 0 1 2 3 t (ns) 1.0 1.5 2.0 2.5 3.0 3.5 4.0 T (HeV) P Fourier layer 1 Fourier layer 2 Fourier layer l Q a(x) u(x) v(x) F R F\u22121 Fourier layer W + \u03c3 Fig. 1: Fourier neural operator architecture for solving the Marshak wave problem. The input function a(x) is projected to a higher representation v0(x) by the projection layer P. This is then processed through l iterations of Fourier layers. Each Fourier layer consists of a Fourier transform F that maps vi(x) to the Fourier domain, multiplication with the weight tensor R and filtering of higher Fourier modes, and an inverse Fourier transform F\u22121 to return to the spatial domain. The output is linearly transformed by W and passed through a nonlinear activation function \u03c3. This is added to the previous Fourier layer\u2019s output to produce the updated representation vi+1(x). After l layers, the final representation vl(x) is mapped to the output solution u(x). The boundary temperature drive (top left) and parameters (bottom left) represent the input functions and the front position (top right) and temperature distribution (bottom right) represent the output functions for the Marshak wave problem The primary goal of an operator G is to establish a mapping between infinitedimensional spaces from a finite collection of input-output pairs, denoted as A = A(Rda) \u2282Rda and U = U(Rdu) \u2282Rdu, respectively. 
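A compact numerical transcription of Eqs. (9)-(12) above is sketched below. It takes a tabulated drive T_s(t) and the collected constant C, and is meant only to illustrate how cheaply the Hammer-Rosen estimate can be evaluated; the grid size and numerical safeguards are illustrative.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def hammer_rosen(t, Ts, C, alpha, beta, nx=100):
    """Heat-front position x_f(t) (Eq. 11) and the temperature profile at the
    final time (Eq. 12) for a tabulated boundary temperature Ts(t)."""
    eps = beta / (4.0 + alpha)
    H = Ts ** (4.0 + alpha)
    integral = cumulative_trapezoid(H, t, initial=0.0)
    xf = np.sqrt((2.0 + eps) / (1.0 - eps) * C * Ts ** (-beta) * integral)

    dHdt = np.gradient(H, t)
    x = np.linspace(0.0, xf[-1], nx)
    xi = x / max(xf[-1], 1e-30)
    bracket = (1.0 - xi) * (
        1.0 + 0.5 * eps
        * (1.0 - xf[-1] ** 2 / (C * H[-1] ** (2.0 - eps)) * dHdt[-1]) * xi
    )
    T = Ts[-1] * np.clip(bracket, 0.0, None) ** (1.0 / ((1.0 - eps) * (4.0 + alpha)))
    return xf, x, T
```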
Following from [28, 31], consider a partial differential equation (PDE) which maps input function spaces to an output solution space. For a given domain D \u2282Rd with boundary \u2202D, and x \u2208D, an operator would map source terms, f(x, t) : D \u2192R, boundary conditions, u(\u2202D, t) : D \u2192R, and initial conditions u(x, 0) : D \u2192R, to the solution space u(x, t) : D \u2192R, where t is time. In the present work, we aim to learn the nonlinear differential operator G : A \u2192U for various sets of input parameters a \u2208A in the Marshak wave problem. 5 \fBy constructing a parametric map G : A \u00d7 \u0398 \u2192U, the optimal parameter \u03b8 \u2208\u0398 can be approximated with data-driven methods to adjust \u03b8 such that G(\u00b7, \u03b8) approaches the target map G. Classical numerical solvers, be it finite elements, finite differences, or many modern data-driven and physics-informed neural networks attempt to learn the output function u(x, t) which satisfies G for a single instance of input parameter a and can be computationally prohibitive, especially when the solution for the PDE is required for many instances of the parameter. On the other hand, Fourier neural operators (FNO) have been developed to approximate G directly so that solutions to a family of PDEs are realized for different sets of a, thereby enhancing computational efficiency and practical utility. In general, input and output functions a and u are continuous, however, we assume to know only point-wise evaluations. To that end, the problem at hand can be described using the n-point discretization of D, Dj = {x1, . . . , xn} \u2282D with observations of input-output pairs indexed by j \b aj \u2208Rn\u00d7da, uj \u2208Rn\u00d7du\tN j=1, and uj = G(aj). The neural operator to learn the input-output mapping is an iterative architecture. First, the input a(x, t) is transformed to a higher dimensional representation by v0(x) = P(a(x)) where the transformation P(a(x)) : Rda 7\u2192Rdv. In this framework, a shallow fully connected network can achieve this desired transformation. Next a series of l updates vi 7\u2192vi+1 are performed vi+1(x) := \u03c3 (Wvi(x) + (K(a; \u03d5)vi) (x)) , \u2200x \u2208D. (13) with nonlinear activation function \u03c3(\u00b7) : R 7\u2192R and a linear transformation W : Rdv 7\u2192Rdv. Each vi is a dv-dimensional real vector in Rdv. For a vector input x = [x1, x2, . . . , xdv]T \u2208Rdv, \u03c3(x) is applied element-wise, resulting in [\u03c3(x1), \u03c3(x2), . . . , \u03c3(xdv)]T . The integral kernel operator K : A \u00d7 \u03b8 \u2192L(U, U) is parameterized by \u03d5 \u2208\u0398K (K(a; \u03d5)vi) (x) := Z D \u03ba\u03d5(x, y, a(x), a(y); \u03d5)vi(y)dy, \u2200x \u2208D. (14) where \u03ba\u03d5 : R2(d+da) \u2192Rdv\u00d7dv is a neural network parameterized by \u03d5 \u2208\u0398K. After all iterations, a transformation function u(x) = Q (vl(x)) moves vl(x) into the solution space Q (vl(x)) : Rdv 7\u2192Rdu. This approach extends the idea of neural networks to operate on infinite-dimensional function spaces, enabling the learning of mappings between such spaces from finite data samples. By leveraging neural operators, it becomes possible to approximate the nonlinear operators that govern the relationships between infinite-dimensional input and output function spaces, such as those arising in the context of partial differential equations. The FNO is a specific neural operator architecture designed for such nonlinear mappings. 
It replaces the kernel integral operator in by a Fourier convolution operator F\u22121 (F (\u03ba\u03d5) \u00b7 F (vi)) (x), and applying the convolution theorem. The Fourier kernel integral operator becomes (K(\u03d5)vi) (x) = F\u22121 (R\u03d5 \u00b7 (Fvi)) (x), \u2200x \u2208D, 6 \fwhere F is the Fourier transform of a function and F\u22121 is its inverse transform, R\u03d5 is the Fourier transform of a periodic function \u03ba parameterized by \u03d5 \u2208\u0398K. Given that \u03ba is periodic and can be represented by a Fourier series expansion, only discrete modes are considered k \u2208Zd. To create a finite dimensional representation, the Fourier series is truncated at a maximum number of modes kmax = |{k \u2208Zd : |kj| \u2264kmax,j for j = 1, . . . , d}|. In a discretized domain D with n \u2208N points, vi \u2208Rn\u00d7dv and F(vi) \u2208Cn\u00d7dv is obtained, here C represents the complex space. A convolution of vi with a function that has kmax Fourier modes gives F(vi) \u2208Ckmax\u00d7dv . Then the multiplication with the weight tensor R \u2208Ckmax\u00d7dv\u00d7dv is (R \u00b7 (Fvi))k,l = X j=1 Rk,l,j (Fvi)k,j , k = 1, . . . , kmax, j = 1, . . . , dv (15) With uniform discretization and resolution s1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 sd = n, Fast Fourier Transform (FFT) can replace F. For f \u2208Rn\u00d7dv, k = (k1, . . . , kd) \u2208Zs1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Zsd, and x = (x1, . . . , xd) \u2208D, the FFT \u02c6 F and its inverse \u02c6 F\u22121 are defined as ( \u02c6 Ff)l(k) = s1\u22121 X x1=0 \u00b7 \u00b7 \u00b7 sd\u22121 X xd=0 fl (x1, . . . , xd) e \u22122i\u03c0 Pd j=1 xj kj sj , (16) \u0010 \u02c6 F\u22121f \u0011 l (x) = s1\u22121 X k1=0 \u00b7 \u00b7 \u00b7 sd\u22121 X kd=0 fl (k1, . . . , kd) e 2i\u03c0 Pd j=1 xj kj sj . (17) Finally, since Eq. (13) follows standard neural network structures training a network training is done with an appropriate loss function L = U \u00d7 U \u0398 = arg min \u0398 (L(G(a), G(a, \u0398)). (18) A schematic representation of the Fourier Neural Operator model for the Marshak wave problem is provided in Figure 1. 5 Results 5.1 Problem description and parameter space The Marshak waves we consider concern the propagation of heat waves through lowdensity foam cylinders or other materials driven by a hohlraum similar to those described in [30, 32]. Key parameters in these experiments include density, drive energy and radiation temperature, which typically can range from 100 to 300 eV. Xray imaging is used to track the heat wave, while diagnostic tools measure the flux breaking through the foam edge. The experiments cover a wide range of temperatures, materials, and densities. 7 \fTable 1, adapted from [30], presents material properties used in various Marshak wave experiments. The first ten rows contain parameters for the foams, while the last two rows provide parameters for coating materials. For each material, the numerical parameters were fitted in relevant experimental regimes. Further details about the experiments can be found in [30] and references cited therein. 
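A minimal one-dimensional realization of the Fourier layer defined by Eqs. (13) and (15)-(17) is sketched below in PyTorch. Channel counts and the GELU nonlinearity are illustrative choices, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """FFT the channel signal, multiply the lowest `modes` coefficients by a
    learned complex weight tensor R, and inverse-FFT back (Eq. 15)."""
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, v):                          # v: (batch, channels, n_grid)
        v_hat = torch.fft.rfft(v)                  # to Fourier space
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.modes] = torch.einsum(
            "bix,iox->box", v_hat[..., :self.modes], self.weight
        )
        return torch.fft.irfft(out_hat, n=v.size(-1))   # back to physical space

class FourierLayer(nn.Module):
    """One update v <- sigma(W v + K v) as in Eq. (13)."""
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.spectral = SpectralConv1d(channels, modes)
        self.w = nn.Conv1d(channels, channels, kernel_size=1)   # local linear term W

    def forward(self, v):
        return torch.nn.functional.gelu(self.w(v) + self.spectral(v))
```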
Table 1: Material properties for various Marshak wave experiments Experiment Foam g \u0000g/cm2\u0001 f (MJ) \u03b1 \u03b2 \u03bb \u00b5 \u03c1 \u0000g/cm3\u0001 Massen C11H16Pb0.3852 1/3200 10.17 1.57 1.2 0.1 0 0.080 Xu pure C6H12 1/3926.6 12.27 2.98 1 0.95 0.04 0.05 Xu with copper C6H12Cu0.394 1/7692.9 8.13 3.44 1.1 0.67 0.07 0.05 Back, Moore SiO2 1/9175 8.77 3.53 1.1 0.75 0.09 0.05 Back Ta2O5 1/8433.3 4.78 1.78 1.37 0.24 0.12 0.04 Back low energy SiO2 1/9652 8.4 2.0 1.23 0.61 0.1 0.01 Moore C8H7Cl 1/24466 14.47 5.7 0.96 0.72 0.04 0.105 Keiter Pure C15H20O6 1/26549 11.54 5.29 0.94 0.95 0.038 0.065 Keiter with Gold C15H20O6Au0.172 1/4760 9.81 2.5 1.04 0.35 0.06 0.0625 Ji-Yan C8H8 1/2818.1 21.17 2.79 1.06 0.81 0.06 0.160 Au 1/7200 3.4 1.5 1.6 0.2 0.14 0.160 Be 1/402.8 8.81 4.89 1.09 0.67 0.07 0.160 Numerical approximations for solving the Marshak wave problem can be computationally expensive, especially when exploring a wide range of material properties. To overcome this challenge, we propose using the Fourier Neural Operator (FNO) to learn the mapping between material properties and their corresponding Marshak wave solutions. FNOs have shown success in solving partial differential equations by learning the solution operator from a dataset of input-output pairs. To train the FNO model, we generate a dataset that spans the parameter space defined by the material properties in Table 1. The input consists of a set of material properties, (g, f, \u03b1, \u03b2, \u03bb, \u00b5, \u03c1), while the output corresponds to the solution of the Marshak wave problem in terms of the temperature profile and wave front position at a given time. We create a uniformly spaced grid of values for each material property, covering the range of values found in the experiments: In Table 2, N is the number Table 2: Parameter ranges for generating training data Parameter Range Number of grid points g [min(g), max(g)] N (log-spaced) f [min(f), max(f)] N \u03b1 [min(\u03b1), max(\u03b1)] N \u03b2 [min(\u03b2), max(\u03b2)] N \u03bb [min(\u03bb), max(\u03bb)] N \u00b5 [min(\u00b5), max(\u00b5)] N \u03c1 [min(\u03c1), max(\u03c1)] N 8 \fof grid points for each parameter. For the g parameter, we use logarithmically spaced values to better capture its wide range, while the other parameters are linearly spaced. In addition to the material properties, the Marshak wave problem also depends on the boundary temperature (i.e., the drive temperature). We parameterize the drive with a function Tb(t, a, b, c, d), measured in HeV, defined as follows Tb(t, a, b, c, d) = a + (b(t \u2265c)(t \u2212c))(t < d) + (t \u2265d)(b(d \u2212c)). (19) Here t is time (in ns), and a \u2208[1, 3], b \u2208[0, 1], c \u2208[0.1, 2], and d \u2208[2, 5]. The function consists of a constant term a, and a piecewise function that takes different values based on the conditions involving t, c, and d. We generate a set of boundary temperature functions by sampling the parameters a, b, c, and d from their respective ranges. To create the training set, we take the Cartesian product of the material property values and the boundary temperature function parameters and obtain a set of input parameter combinations that cover the entire parameter space. For each input combination, we solve the Marshak wave problem using a numerical solver to obtain the corresponding output solution. These input-output pairs form our training dataset, which we use to train the FNO model. 
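The dataset construction just described can be sketched as follows. The grid end points are read off Table 1; N, the time axis, and the three-point drive sub-grid are illustrative choices, since the text leaves them generic.

```python
import numpy as np
from itertools import product

N = 4  # grid points per parameter (illustrative)

param_grid = {
    "g":     np.logspace(np.log10(1 / 26549), np.log10(1 / 402.8), N),  # log-spaced
    "f":     np.linspace(3.4, 21.17, N),
    "alpha": np.linspace(1.5, 5.7, N),
    "beta":  np.linspace(0.94, 1.6, N),
    "lam":   np.linspace(0.1, 0.95, N),
    "mu":    np.linspace(0.0, 0.14, N),
    "rho":   np.linspace(0.01, 0.16, N),
}

def Tb(t, a, b, c, d):
    """Boundary drive of Eq. (19): T in HeV, t in ns."""
    return a + (b * (t >= c) * (t - c)) * (t < d) + (t >= d) * (b * (d - c))

t = np.linspace(0.0, 3.0, 100)
drive_params = list(product(np.linspace(1, 3, 3), np.linspace(0, 1, 3),
                            np.linspace(0.1, 2, 3), np.linspace(2, 5, 3)))

# Cartesian product of material-property combinations and drive curves.
training_inputs = [(mat, drv) for mat in product(*param_grid.values())
                   for drv in drive_params]
```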
As will be seen, by learning from this diverse set of input-output pairs, the FNO can effectively capture the underlying physics of the Marshak wave problem across the entire parameter space, including the dependence on the boundary temperature function. This allows the trained model to quickly and accurately predict solutions for new, unseen combinations of material properties and boundary temperature functions within the specified ranges. 5.2 Base model As a starting point, we introduce a base model that takes all material properties and boundary temperature function parameters as inputs and uses the Hammer and Rosen approximation as the output. The Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, which serves as a useful benchmark for evaluating the performance of our FNO model. Figure 2 compares the temperature solutions of the Marshak wave in space for three different boundary temperature functions. The boundary temperature functions, shown in Figure 2a, are generated by varying the parameters a, b, c, and d in Equation 19. The corresponding temperature solutions, obtained using both the Hammer and Rosen approximation and the FNO model, are presented in Figure 2b. The results demonstrate good agreement between the FNO model and the Hammer and Rosen approximation for all three boundary temperature functions. This indicates that the FNO model is capable of accurately capturing the physics of the Marshak wave problem and reproducing the analytical solutions provided by the Hammer and Rosen approximation. 5.3 Hammer and Rosen Correction model While the Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, it suffers from inaccuracies due to the assumptions made in 9 \f0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 1.0 1.5 2.0 2.5 3.0 3.5 4.0 T (HeV) Tb1 Tb2 Tb3 (a) Temperature Drive 0.00 0.25 0.50 0.75 1.00 x (cm) 0.0 0.5 1.0 1.5 2.0 2.5 3.0 T (HeV) Tb1 Tb2 Tb3 HR FNO (b) Temperature profile at 3 ns Fig. 2: Comparison of the Hammer and Rosen approximation and the FNO model for a representative material under different boundary temperature drives (a) are characterized by a constant temperature followed by a linear ramp at different times and rates. The corresponding temperature solutions (b) obtained from the Hammer and Rosen approximation (solid lines) and the FNO model (dashed lines) show close agreement. its derivation, Section 3. These inaccuracies become apparent when comparing the Hammer and Rosen solution to more accurate numerical solvers, such as diffusion based methods, and experimental results. To address this issue, we introduce the Hammer and Rosen Correction model, which aims to improve the accuracy of the Hammer and Rosen approximation using FNO. The Hammer and Rosen Correction model is built similarly to the base model but takes the Hammer and Rosen solution for the temperature and the front position as additional inputs. The outputs are generated using a more accurate diffusion solution, and the FNO learns to map the Hammer and Rosen solution to the diffusion solution. By doing so, the Hammer and Rosen Correction model effectively corrects the inaccuracies of the Hammer and Rosen approximation and provides a more accurate prediction of the Marshak wave behavior. Figure 3 illustrates in a parallel axis plot the input parameter values for four different test cases used to evaluate the Hammer and Rosen Correction model. 
Each line represents a specific test case, with the values of the parameters plotted along the y-axis for each parameter on the x-axis. The boundary temperature drive is given with parameters a = 1.2, b = 0.8, c = 1, and d = 2 for Eq. (19). The output values are produced by a numerical solver we developed to solve radiation diffusion in planar geometry. The solver assumes equilibrium between the radiation temperature and the material temperature, reducing Eq. (1) and Eq. (2) to a single equation for the material temperature, Eq. (5). The solver employs the finite difference method to discretize the spatial domain into a uniform grid. Time integration is performed by the backward differentiation formula, an implicit multi-step method. The spatial derivatives in Eq. (5) are approximated using a second-order central difference scheme. The left boundary at the surface (x = 0), Eq. (3), is prescribed as a function of time, and the solver assumes the equation of state given by Eq. (7). [Fig. 3: Parameter values from the test set for four different cases to evaluate the performance of the Hammer and Rosen Correction model.] At each time step, the solver computes the temperature profile across a one-dimensional spatial grid consisting of 100 spatial cells and tracks the position of the wavefront. The Hammer and Rosen Correction model is trained and tested using the dataset generated by the numerical solver and the Hammer and Rosen solution, paired with the input parameter values. The dataset is split into standard training and testing sets. It is important to note that the testing set contains parameter combinations that may not represent physically realistic scenarios, as they are generated by uniformly sampling the parameter space defined in Table 2. The model is trained using 1.05M input-output pairs, with 58k trainable parameters, and is trained over 30 epochs. Figure 4 presents a comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution. The subfigures 4a, 4b, 4c, and 4d show the results for different sets of input parameters. It is evident from the figures that the Hammer and Rosen approximation deviates noticeably from the diffusion solution over time. In contrast, the Hammer and Rosen Correction model accurately predicts the diffusion solution, demonstrating its ability to correct the inaccuracies of the Hammer and Rosen approximation. Figure 5 provides a comparison of the temperature solutions for the same three models. Subfigures 5a, 5b, 5c, and 5d show the temperature profiles at the same time instance. Once again, the Hammer and Rosen Correction model closely matches the diffusion solution, while the Hammer and Rosen approximation exhibits discrepancies. The Hammer and Rosen Correction model both improves the accuracy of the Hammer and Rosen solution for the Marshak wave and provides a framework for integrating analytical approximations with data-driven approaches. This hybrid approach combines the benefits of both analytical and machine learning methods by supplying a physical solution that simplifies the inference. 
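For orientation, a schematic method-of-lines version of such a solver is sketched below. The true Eq. (5) is not reproduced in this excerpt, so a generic nonlinear diffusion law dT/dt = d/dx(D(T) dT/dx) with D(T) = T^4 stands in for the real material model; the drive, the conductivity exponent and the front-position threshold are illustrative, while the 100-cell grid, central differences and BDF time integration follow the description above.

```python
import numpy as np
from scipy.integrate import solve_ivp

n_cells, length, power = 100, 1.0, 4.0
x = np.linspace(0.0, length, n_cells)
dx = x[1] - x[0]

def drive(t):
    # Eq. (19)-style drive with a = 1.2, b = 0.8, c = 1, d = 2 (HeV, t in ns)
    return 1.2 + 0.8 * np.clip(t - 1.0, 0.0, 1.0)

def rhs(t, T_inner):
    # Unknowns are cells 1..n-1; cell 0 carries the prescribed surface temperature.
    T = np.concatenate(([drive(t)], T_inner))
    D = np.maximum(T, 1e-6) ** power          # nonlinear conductivity (stand-in)
    D_face = 0.5 * (D[:-1] + D[1:])           # face-centred conductivity
    flux = D_face * np.diff(T) / dx           # fluxes at interior faces
    flux = np.concatenate((flux, [0.0]))      # zero-flux outer boundary
    return (flux[1:] - flux[:-1]) / dx        # second-order central divergence

T0 = np.full(n_cells - 1, 1e-3)               # cold initial material
sol = solve_ivp(rhs, (0.0, 3.0), T0, method="BDF", t_eval=[1.0, 2.0, 3.0])
T_final = np.concatenate(([drive(3.0)], sol.y[:, -1]))
front = x[np.argmax(T_final < 0.05)]          # crude wavefront position estimate
print(sol.y.shape, "front position ~", front, "cm")
```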
[Fig. 4: Comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution for different sets of input parameters (panels a-d correspond to Cases 1-4). The Hammer and Rosen approximation (orange lines) deviates from the diffusion solution (blue lines) over time, while the Hammer and Rosen Correction model (dashed green lines) accurately predicts the diffusion solution.] 5.4 Model generalization and performance In the previous sections, we demonstrated the effectiveness of the Hammer and Rosen Correction model in accurately predicting the Marshak wave behavior for unseen data. It is important to note that these tests were performed on collocation points of the spacing grid shown in Table 2. To validate the generalization capabilities of the FNO, we present additional tests on specific physical materials from Table 1. Figure 6 compares the front position solutions obtained from the diffusion solver and the Hammer and Rosen Correction model for four different materials: C15H20O6Au0.172, Be, C15H20O6, and C6H12, with properties as specified in [30]. These materials were not explicitly included in the training data grid but represent realistic physical scenarios. The subfigures 6a, 6b, 6c, and 6d show excellent agreement between the diffusion solutions and the Hammer and Rosen Correction model predictions for all four materials. This demonstrates that the FNO has successfully learned the mapping in the entire parameter space and can accurately predict the Marshak wave behavior for arbitrary material properties within the considered ranges. [Fig. 5: Comparison of the temperature profiles for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution at the same time instance for different sets of input parameters (panels a-d correspond to Cases 1-4). The Hammer and Rosen approximation (orange lines) exhibits discrepancies compared to the diffusion solution (blue lines), while the Hammer and Rosen Correction model (dashed green lines) closely matches the diffusion solution.] To quantitatively assess the performance and computational efficiency of the Hammer and Rosen Correction model, we compare it with the base model in Table 3. Both models are trained with the same number of trainable parameters, the same training data, and the same number of epochs to ensure a fair comparison. 
The mean squared error (MSE) is used as the evaluation metric for both temperature and front position predictions. The results in Table 3 show that the Hammer and Rosen Correction model significantly outperforms the base model in terms of prediction accuracy, achieving a 56.16% improvement in temperature MSE and a 33.93% improvement in front position MSE compared to the base model. [Fig. 6: Comparison of the front positions obtained from the Hammer and Rosen approximation (orange lines), the diffusion solver (blue lines), and the Hammer and Rosen Correction model (dashed green lines) for four different materials from Table 1: (a) C15H20O6Au0.172, (b) Be, (c) C15H20O6, (d) C6H12.] Table 3: Prediction performance and computational costs of deep learning models (MSE is the mean squared error). Parameter | HR Correction | Base model | % Improvement: Temperature MSE | 0.00081 | 0.00185 | 56.16; Front position MSE | 0.00807 | 0.01220 | 33.93; Train data | 1.05M | 1.05M; Trainable parameters | 58k | 58k; Epochs | 30 | 30; Inference time (s) | 0.0032 | 0.0016. This superior performance can be attributed to the hybrid nature of the Hammer and Rosen Correction model. In terms of computational efficiency, the Hammer and Rosen Correction model has a slightly slower inference time than the base model. This is expected due to the additional complexity introduced by the correction step. However, it is important to note that both models have extremely fast inference times, with the Hammer and Rosen Correction model requiring only 0.0032 seconds per prediction and the base model requiring 0.0016 seconds. These fast inference times highlight the efficiency of the FNO-based approach, enabling real-time predictions of the Marshak wave behavior. 6 Conclusion In this work, we presented a novel approach for modeling Marshak wave experiments using Fourier Neural Operators (FNO). The primary objective was to develop an efficient and accurate method for predicting Marshak wave behavior across a wide range of material properties and boundary temperature functions. We introduced two FNO-based models: a base model and a Hammer and Rosen Correction model. The base model takes material properties and boundary temperature function parameters as inputs and uses a numerical approximation as the output. This model served as a foundation for exploring the capabilities of learning the underlying physics. To address inaccuracies of the Hammer and Rosen approximation, we developed a hybrid data-driven Hammer and Rosen Correction model. This model maps the Hammer and Rosen solution to a more accurate diffusion solution. The performance of these models was evaluated over a wide range of the parameter space. The results demonstrated strong generalization capabilities on unseen data. The Hammer and Rosen Correction model achieved a 56.16% improvement in temperature MSE and a 33.93% improvement in front position MSE compared to the base model. These results pave the way for further exploration of more complex models and application to multidimensional problems in high energy density physics."
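As a quick sanity check on Table 3, the relative improvements can be recomputed from the tabulated MSEs; the small differences from the quoted 56.16% and 33.93% presumably come from rounding of the MSE values themselves.

```python
# Relative improvement of the HR Correction model over the base model (Table 3).
def improvement(base_mse, corrected_mse):
    return 100.0 * (base_mse - corrected_mse) / base_mse

print(f"temperature MSE: {improvement(0.00185, 0.00081):.2f}%")     # ~56.22% (quoted 56.16%)
print(f"front position MSE: {improvement(0.01220, 0.00807):.2f}%")  # ~33.85% (quoted 33.93%)
```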
17
+ }
title_10K/test_title_short_2405.04233v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04233v1",
3
+ "title": "Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models",
4
+ "abstract": "We introduce Vidu, a high-performance text-to-video generator that is capable\nof producing 1080p videos up to 16 seconds in a single generation. Vidu is a\ndiffusion model with U-ViT as its backbone, which unlocks the scalability and\nthe capability for handling long videos. Vidu exhibits strong coherence and\ndynamism, and is capable of generating both realistic and imaginative videos,\nas well as understanding some professional photography techniques, on par with\nSora -- the most powerful reported text-to-video generator. Finally, we perform\ninitial experiments on other controllable video generation, including\ncanny-to-video generation, video prediction and subject-driven generation,\nwhich demonstrate promising results.",
5
+ "authors": "Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, Jun Zhu",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.LG"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models",
16
+ "main_content": "Introduction Diffusion models have obtained breakthrough progress on generating high-quality images, videos and other types of data, outperforming alternative approaches like auto-regressive networks. Previously, video generation models primarily relied on diffusion models [13, 9, 14] with the U-Net backbone [11], and focused on a single limited duration like 4 seconds [8, 5, 7, 4]. Our model, Vidu, demonstrates that a text-to-video diffusion model with U-ViT [1, 2] as its backbone can break this duration limitation by leveraging the scalability and the long sequence modeling ability of a transformer [15]. Vidu is capable of producing 1080p videos up to 16 seconds in a single generation, as well as images as videos of a single frame. Additionally, Vidu exhibits strong coherence and dynamism, and is capable of generating both realistic and imaginative videos. Vidu also has a preliminary understanding of some professional photography techniques, such as transitions, camera movements, lighting effects and emotional portrayal. We observe that to some extent, the generation performance of Vidu is comparable with that of Sora [6], which is currently the most powerful text-to-video generator, much better than the other text-to-video generators. Finally, we perform initial experiments on other controllable video generation, including canny-to-video generation [16], video prediction and subject-driven generation [12]. All of them demonstrate promising results. 2 Text-to-Video Generation Vidu firstly employs a video autoencoder [10] to reduce both the spatial and temporal dimensions of videos for efficient training and inference. After that, Vidu employs a U-ViT [1] as the noise prediction network to model these compressed representations. Specifically, as shown in Figure 1, U-ViT splits the compressed videos into 3D patches, treats all inputs including the time, text condition \u2217Second authors listed alphabetically. \u2021The corresponding author. arXiv:2405.04233v1 [cs.CV] 7 May 2024 \fTransformer Block Transformer Block Transformer Block Transformer Block t c Embedding Layer Linear 0 1 2 3 4 5 6 L \u00b7\u00b7\u00b7 C \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 Transformer Block Embeddings Norm MLP Multi-Head Attention Norm + + + : Add C : Concatenate + Linear Transformer Block \ud835\udc99\ud835\udc61 C Rearrange to T\u00d73\u00d7H\u00d7W Predicted noise Figure 1: The U-ViT architecture for predicting the noise in videos. and noisy 3D patches as tokens, and employs long skip connections between shallow and deep layers in a transformer. By leveraging the ability of transformers to process variable-length sequences, Vidu can handle videos with variable durations. Vidu is trained on vast amount of text-video pairs, and it is infeasible to have all videos labeled by humans. To address it, we firstly train a high-performance video captioner optimized for understanding dynamic information in videos, and then automatically annotate all the training videos using this captioner. During inference, we apply the re-captioning technique [3] to rephrase user inputs into a form that is more suitable for the model. 2 \f2.1 Generating Videos of Different Lengths Since Vidu is trained on videos of various lengths, it can generate 1080p videos of all lengths up to 16 seconds, including images as videos of a single frame. We present examples in Figure 2. (a) 16 seconds. 
Prompt: A person clad in a space suit with a helmet and equipped with a chest light and arm device is seen closely examining and interacting with a variety of plants in a lush, indoor botanical setting. (b) 8 seconds. Prompt: A desolate lunar landscape with craters and a large moon in the sky transitions to a warmly lit interior of a spacecraft-like structure where a group of people are engaged in various activities. (c) Image. Prompt: An exquisite silverware piece, aesthetically adorned with intricate patterns and scenes, exhibits the detailed artisanship and metallic sheen. (d) Image. Prompt: Under the veil of nightfall, a rose reveals its subtle, exquisite beauty in the gentle moonlight. Figure 2: Vidu can generate videos of all lengths up to 16 seconds, including images. 3 \f2.2 3D Consistency The video generated by Vidu exhibits strong 3D consistency. As the camera rotates, the video presents projections of the same object from different angles. For instance, as shown in Figure 3, the hair of the generated cat naturally occludes as the camera rotates. (a) Prompt: This portrait depicts an orange cat with blue eyes, slowly rotating, inspired by Vermeer\u2019s \u2019Girl with a Pearl Earring\u2019. The cat is adorned with pearl earrings and has brown fur styled like a Dutch cap against a black background, illuminated by studio lighting. (b) Prompt: In a studio, there is a painting depicting a ship sailing through the rough sea. (c) Prompt: A red car is stuck in the snow, with the entire vehicle emitting green light and red signal lights flashing on the back. The camera slowly pans around the car. Figure 3: 3D consistency of Vidu. 4 \f2.3 Generating Cuts Vidu is capable of generating videos incorporating cuts. As shown in Figure 4, these videos present different perspectives of the same scene by switching camera angles, while maintaining consistency of subjects in the scene. (a) Prompt: A sculptor is intently working on a clay bust, meticulously refining its facial features with precise hand movements. (b) Prompt: Churning ocean waves at night with a lighthouse on the coast create an intense and somewhat foreboding atmosphere. The scene is set under an overcast sky, with the ocean\u2019s dark waters illuminated by natural light, highlighting the white foam of the waves. Figure 4: Vidu is capable of generating videos with cuts. 5 \f2.4 Generating Transitions Vidu is capable of producing videos with transitions in a single generation. As shown in Figure 5, these transitions can connect two different scenes in an engaging manner. (a) Prompt: An elderly man with glasses, dressed in formal attire, is deeply engrossed in examining a large, ornate pocket watch. As the video progresses, there is a cinematic transition to a fantastical mechanical cityscape, viewed through the openwork of the watch. This shift evokes a sense of wonder and transports the viewer into a steampunk-inspired world where buildings and structures are made of metal and gears. (b) Prompt: A person holding a dessert with a fluffy layer of whipped cream elegantly drizzled with smooth chocolate sauce. As a dollop of cream falls, a mini polar bear appears, with floating icebergs nearby, set against a serene blue backdrop. Figure 5: Vidu is capable of generating videos with transitions. 6 \f2.5 Camera Movements Camera movements involve the physical adjustments or movements of a camera during filming, enhancing visual narrative and conveying various perspectives and emotions within scenes. 
Vidu learned these techniques from the data, enhancing the visual experience of viewers. For instance, as shown in Figure 6, Vidu is capable of generating videos with camera movements including zoom, pan and dolly. (a) Zoom. Prompt: A large sailing ship sails slowly through the fog. (b) Pan. Prompt: An elderly man with a white beard is seated in a room filled with wooden bookshelves, brimming with old books. He is dressed in a dark suit and tie, and he is engrossed in reading a large book. The room is bathed in the warm glow of sunlight streaming through a window, creating a serene and contemplative atmosphere. (c) Dolly. Prompt: An animated hedgehog with distinctive spiky hair and large eyes is seen exploring a lush, grassy environment. Figure 6: Camera movements generated by Vidu. 7 \f2.6 Lighting Effects Vidu is capable of generating videos with impressive lighting effects, which help enhance the overall atmosphere. For example, as shown in Figure 7, the generated videos can evoke atmospheres of mystery and tranquility. Therefore, besides the entities within the video content, Vidu has the preliminary ability to convey some abstract feelings. (a) Prompt: A man wearing a hat and a dark suit walks from the corridor towards the room. The lighting casts a bluish tint over the scene, creating a suspenseful atmosphere. (b) Prompt: A rustic wooden cabin nestles by the shore of a clear, sunlit lake, surrounded by verdant trees and mountains. The water is calm, reflecting the sky above, with a few clouds scattered across it. Sailboats and kayaks are moored on the lake, inviting leisure and tranquility. Figure 7: Lighting effects generated by Vidu. 8 \f2.7 Emotional Portrayal Vidu is able to depict characters\u2019 emotions effectively. For example, as shown in Figure 8, Vidu can express emotions such as happiness, loneliness, embarrassment, and joy. (a) Prompt: A man and a woman are sharing a close and affectionate interaction in an indoor setting that suggests a romantic ambiance. (b) Prompt: An elderly woman with white hair and a lined face is seated inside an older model car, looking out through the side window with a contemplative or mildly sad expression. (c) Prompt: A couple about to get divorced sat awkwardly in the waiting room. (d) Prompt: Audience members in a theater are captured in a series of medium shots, with a young man and woman in formal attire centrally positioned and illuminated by a spotlight effect. Figure 8: Emotional portrayal of Vidu. 9 \f2.8 Imaginative Ability In addition to generating real-world scenes, Vidu also possesses a rich imagination. As shown in Figure 9, Vidu is able to generate scenes that do not exist in the real world. (a) Prompt: A painting of a boat on water comes to life, with waves crashing and the boat becoming submerged. (b) Prompt: An animated rabbit in a playful pink snowboarding outfit is carving its way down a snowy mountain slope under a clear blue sky. (c) Prompt: A model train with a blue engine is seen traveling through a meticulously crafted miniature landscape. The train is pulling several red and cream-colored passenger cars along a track that winds through a rural or suburban setting with small-scale houses, verdant trees, and miniature waterfalls. Figure 9: Imaginative ability of Vidu. 10 \f2.9 Comparison with Sora Sora [6] is currently the most powerful text-to-video generator, capable of producing high-definition videos with high consistency. 
However, as Sora is not publicly accessible, we compare them by inserting the example prompts released by Sora directly to Vidu. Figure 10 and Figure 11 illustrate the comparison between Vidu and Sora, indicating that to some extent, the generation performance of Vidu is comparable to Sora. (a) Sora (b) Vidu Figure 10: Prompt: The camera rotates around a large stack of vintage televisions all showing different programs \u2014 1950s sci-fi movies, horror movies, news, static, a 1970s sitcom, etc, set inside a large New York museum gallery. 11 \f(a) Sora (b) Vidu Figure 11: Prompt: The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from it\u2019s tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds. 12 \f3 Other Controllable Video Generation We also perform several initial experiments at 512 resolution on other controllable video generation, including canny-to-video generation [16], video prediction, and subject-driven generation [12]. All of them demonstrate promising results. 3.1 Canny-to-Video Generation Vidu can add additional control by using techniques similar to ControlNet [16], as shown in Figure 12. (a) Input canny. (b) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, realistic visual style. (c) Prompt: During the day, a red car drove towards me and splashed water as it passed by a pond, realistic visual style. (d) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, anime style. Figure 12: Canny-to-video generation examples of Vidu. 13 \f3.2 Video Prediction As shown in Figure 13, Vidu can generate subsequent frames, given an input image, or several input frames (marked with red boxes). (a) Prompt: A pink chrysanthemum flower with intricate petals is the focal point, resting on a wooden surface in an indoor setting. (b) Prompt: A serene mountainous landscape bathed in the warm glow of sunset or twilight, with snow-capped peaks rising above the green vegetation-covered slopes. A calm body of water rests in the foreground, reflecting the sky above, which is dotted with clouds tinged with pink and orange hues. Figure 13: Video prediction examples of Vidu. 14 \f3.3 Subject-Driven Generation We surprisingly find that Vidu can perform subject-driven video generation by finetuning solely on images without videos. For example, we use the DreamBooth [12] technique to designate the learned subject as a special symbol <V> for finetuning. As shown in Figure 14, the generated videos faithfully recreates the learned subject. (a) Input images. (b) Prompt: A <V> dog lies on the ground and then goes to eat from the bowl. (c) Prompt: A <V> dog bit his tail happily and shakes his head. Figure 14: Subject-driven generation examples of Vidu. 
15 \f4 Conclusion We present Vidu, a high-definition text-to-video generator that demonstrates strong abilities in various aspects, including duration, coherence, and dynamism of the generated videos, on par with Sora. In the future, Vidu still has room for improvement. For instance, there are occasional flaws in details, and interactions between different subjects in the video sometimes deviate from physical laws. We believe that these issues can be effectively addressed by further scaling Vidu. 5 Acknowledgements We appreciate the support of the data team and the product team for the project at Shengshu. This work was partly supported by NSFC Projects (Nos. 62061136001, 62106123, 61972224), Tsinghua Institute for Guo Qiang, and the High Performance Computing Center, Tsinghua University. J.Z is also supported by the XPlorer Prize."
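As a companion to the architecture description in Section 2 (Figure 1), the sketch below shows the two ingredients highlighted there: time, text and noisy 3D-patch embeddings treated as a single token sequence, and long skip connections that concatenate shallow and deep features before a linear fusion. Vidu's actual backbone size, patch layout, depth and width are not disclosed, so every dimension here is an assumption and the module is only a schematic.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Pre-norm transformer block (self-attention + MLP)."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

class TinyUViT(nn.Module):
    """Time, text and noisy 3D patches as one token sequence, with long skip
    connections (concatenate + linear) between shallow and deep blocks."""
    def __init__(self, dim=256, depth=6):
        super().__init__()
        assert depth % 2 == 0
        self.ins = nn.ModuleList([Block(dim) for _ in range(depth // 2)])
        self.mid = Block(dim)
        self.outs = nn.ModuleList([Block(dim) for _ in range(depth // 2)])
        self.skips = nn.ModuleList([nn.Linear(2 * dim, dim) for _ in range(depth // 2)])

    def forward(self, patch_tok, time_tok, text_tok):
        offset = time_tok.size(1) + text_tok.size(1)
        x = torch.cat([time_tok, text_tok, patch_tok], dim=1)   # all inputs as tokens
        stack = []
        for blk in self.ins:
            x = blk(x)
            stack.append(x)
        x = self.mid(x)
        for blk, fuse in zip(self.outs, self.skips):
            x = fuse(torch.cat([x, stack.pop()], dim=-1))       # long skip connection
            x = blk(x)
        return x[:, offset:]                                    # predicted-noise tokens

out = TinyUViT()(torch.randn(2, 96, 256), torch.randn(2, 1, 256), torch.randn(2, 16, 256))
print(out.shape)    # torch.Size([2, 96, 256])
```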
17
+ }
title_10K/test_title_short_2405.04272v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04272v1",
3
+ "title": "BUDDy: Single-Channel Blind Unsupervised Dereverberation with Diffusion Models",
4
+ "abstract": "In this paper, we present an unsupervised single-channel method for joint\nblind dereverberation and room impulse response estimation, based on posterior\nsampling with diffusion models. We parameterize the reverberation operator\nusing a filter with exponential decay for each frequency subband, and\niteratively estimate the corresponding parameters as the speech utterance gets\nrefined along the reverse diffusion trajectory. A measurement consistency\ncriterion enforces the fidelity of the generated speech with the reverberant\nmeasurement, while an unconditional diffusion model implements a strong prior\nfor clean speech generation. Without any knowledge of the room impulse response\nnor any coupled reverberant-anechoic data, we can successfully perform\ndereverberation in various acoustic scenarios. Our method significantly\noutperforms previous blind unsupervised baselines, and we demonstrate its\nincreased robustness to unseen acoustic conditions in comparison to blind\nsupervised methods. Audio samples and code are available online.",
5
+ "authors": "Eloi Moliner, Jean-Marie Lemercier, Simon Welker, Timo Gerkmann, Vesa V\u00e4lim\u00e4ki",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "eess.AS",
9
+ "cats": [
10
+ "eess.AS",
11
+ "cs.LG",
12
+ "cs.SD"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "BUDDy: Single-Channel Blind Unsupervised Dereverberation with Diffusion Models",
17
+ "main_content": "INTRODUCTION When acoustic waves propagate in enclosures and get reflected by walls, the sound received is perceived as reverberated, which can significantly degrade speech intelligibility and quality [1]. The goal of dereverberation is to recover the anechoic component from reverberant speech. We focus here on the single-channel scenario, where measurements from only one microphone are available, which is significantly more challenging than multi-channel scenarios [2]. Traditional dereverberation algorithms assume some statistical properties, such as Gaussianity or sparsity, about the anechoic and reverberant signals. These properties are leveraged to perform dereverberation in the time, spectral or cepstral domain [3]. These methods can tackle informed scenarios, where the room impulse response (RIR) is known [4, 5] as well as blind scenarios where the RIR is unknown [6, 7]. Informed dereverberation is easier than blind dereverberation, but most scenarios in real-life applications are blind, as the RIR is either not measured beforehand, or becomes invalid even with the slightest deviations in receiver or emitter positions. Data-driven approaches rely less on such assumptions but rather learn the signal properties and structures from data [8]. Most of these methods are based on supervised learning using pairs of anechoic and reverberant speech. Supervised predictive models have been widely used for blind dereverberation, including time-frequency (T-F) maskers [9], time-domain methods [10] and \u2217These authors contributed equally to this work. 1uhh.de/sp-inf-buddy. spectro-temporal mapping [11]. Generative models represent another category of dereverberation algorithms aiming to learn the distribution of anechoic speech conditioned on reverberant input. Some blind supervised methods using generative models such as diffusion models [12,13] have been recently proposed [14,15]. However, supervised approaches struggle with limited generalization to diverse acoustic conditions due to the scarcity and variability of available RIR data. Unsupervised approaches offer the potential to circumvent such limitations as they do not require paired anechoic/reverberant data. This paper builds upon prior work [16], which proposed an unsupervised method for informed single-channel dereverberation based on diffusion posterior sampling. The previous study showed the potential of leveraging diffusion models as a strong clean speech prior, which, when combined with a criterion to match the measurement, reached state-of-the-art dereverberation in an informed scenario [16]. This paper extends the method to blind dereverberation, where the unknown RIR is estimated along the anechoic speech. We parameterize the RIR with a model-based subband filter, where each subband of the reverberation filter is modeled by an exponentially decaying signal. The resulting algorithm is an optimization scheme alternating between the diffusion process generating the anechoic speech, and the parameter search estimating the acoustic conditions. Previous works in related domains explore various parameter estimation techniques for solving blind inverse problems with diffusion posterior sampling. For image deblurring, [17] propose to use a parallel diffusion process to estimate the deblurring kernel, while [18] adopts an expectation-maximization approach. In the audio domain, [19] address the problem of blind bandwidth extension by iteratively refining the parameters of the lowpass filter degradation. 
Closely related is the work by Saito et al. [20], which perform unsupervised blind dereverberation using DDRM [21] and the weighted-prediction error (WPE) algorithm as initialization [6]. We name our method BUDDy for Blind Unsupervised Dereverberation with Diffusion Models. We show experimentally that BUDDy efficiently removes reverberation from speech utterances in many acoustic scenarios, thereby largely outperforming previous blind unsupervised techniques. As supervision is not required during the training phase, we demonstrate that BUDDy does not lose performance when presented with unseen acoustic conditions, as opposed to existing blind supervised dereverberation approaches. 2. BACKGROUND 2.1. Diffusion-Based Generative Models Diffusion-based generative models, or simply diffusion models [12, 22], emerged as a class of generative models that learn complex data distributions via iterative denoising. At training time, the target data arXiv:2405.04272v1 [eess.AS] 7 May 2024 \fdistribution is transformed into a tractable Gaussian distribution by a forward process, incrementally adding noise. During the inference, the reverse process refines an initial noise sample into a data sample, by progressively removing noise. The reverse diffusion process, which transports noise samples from a Gaussian prior to the data distribution pdata, can be characterized by the following probability flow ordinary differential equation (ODE): dx\u03c4 = [f(x\u03c4, \u03c4) \u22121 2g(\u03c4)2\u2207x\u03c4 log p(x\u03c4)]d\u03c4, (1) where \u03c4 indexes the diffusion steps flowing in reverse from Tmax to 0. The current diffusion state x\u03c4 starts from the initial condition xTmax \u223cN(0, \u03c3(Tmax)2I) and ends at x0 \u223cpdata. We adopt the variance exploding parameterization of Karras et al. [23], where the drift and diffusion are defined as f(x\u03c4, \u03c4) = 0 and g(\u03c4) = \u221a 2\u03c4, respectively. Similarly, we adopt \u03c3(\u03c4) = \u03c4 as the noise variance schedule, which defines the so-called transition kernel i.e. the marginal densities: p\u03c4(x\u03c4|x0) = N(x\u03c4; x0, \u03c3(\u03c4)2I). The score function \u2207x\u03c4 log p(x\u03c4) is intractable at inference time as we do not have access to x0. In practice, a score model parameterized with a deep neural network s\u03b8(x\u03c4, \u03c4) is trained to estimate the score function using a denoising score matching objective [24]. 2.2. Diffusion Posterior Sampling for Dereverberation Single-channel dereverberation can be considered as the inverse problem of retrieving the anechoic utterance x0 \u2208RL from the reverberant measurement y \u2208RL, which is often modelled by convolving the anechoic speech with an RIR h \u2208RLh, expressed as y = h \u2217x0. We aim to solve this inverse problem by sampling from the posterior distribution p(x0|y, h) of anechoic speech given the measurement and the RIR. We adopt diffusion models for this posterior sampling task by replacing the score function \u2207x\u03c4 log p(x\u03c4) in (1) by the posterior score \u2207x\u03c4 log p(x\u03c4|y, h) [13]. Applying Bayes\u2019 rule, the posterior score is obtained as \u2207x\u03c4 log p(x\u03c4|y, h) = \u2207x\u03c4 log p(x\u03c4) + \u2207x\u03c4 log p(y|x\u03c4, h), (2) where the first term, or prior score, can be approximated with a trained score model s\u03b8(x\u03c4, \u03c4) \u2248\u2207x\u03c4 log p(x\u03c4). The likelihood p(y|x\u03c4, h) is generally intractable because we lack a signal model for y given the diffusion state x\u03c4. 
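With the variance-exploding choices above (f = 0, g(tau) = sqrt(2*tau), sigma(tau) = tau), the drift of Eq. (1) reduces to -tau times the score, which makes a plain Euler integration easy to write down. The sketch below samples from the unconditional prior only (BUDDy itself uses the posterior score of Eq. (2) and a second-order Euler-Heun sampler), and the toy score function is a stand-in for the trained model s_theta.

```python
import torch

@torch.no_grad()
def probability_flow_sampler(score_model, shape, sigmas):
    """Euler integration of the probability-flow ODE (Eq. (1)) with f = 0,
    g(tau) = sqrt(2*tau), sigma(tau) = tau, so dx/dtau = -tau * score(x, tau).
    `sigmas` is a decreasing noise schedule from sigma_max down to near zero."""
    x = sigmas[0] * torch.randn(shape)              # x_Tmax ~ N(0, sigma_max^2 I)
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        drift = -sigma * score_model(x, sigma)      # -(1/2) g^2 * score, g^2 = 2*sigma
        x = x + drift * (sigma_next - sigma)        # Euler step (sigma_next < sigma)
    return x

# Toy usage with the analytic score of a zero-mean Gaussian prior of variance 1 + sigma^2.
toy_score = lambda x, sigma: -x / (1.0 + sigma ** 2)
sigmas = torch.linspace(0.5, 1e-4, 100)
print(probability_flow_sampler(toy_score, (2, 16000), sigmas).shape)
```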
We will introduce in the next section a series of approximations to make its computation tractable. 3. METHODS 3.1. Likelihood Score Approximation In order to obtain a tractable likelihood computation, we posit as in [25] that a one-step denoising estimate of x0 at time \u03c4 can serve as a sufficient statistic for x\u03c4 in this context, i.e. that p(y|x\u03c4, h) \u2248 p(y|\u02c6 x0, h). Such estimate \u02c6 x0 can be obtained using the score model: \u02c6 x0 \u2206 = \u02c6 x0(x\u03c4, \u03c4) = x\u03c4 \u2212\u03c3(\u03c4)2s\u03b8(x\u03c4, \u03c4). (3) Furthermore, we consider here that the convolution model remains valid when using this denoised estimate, and therefore that p(y|\u02c6 x0, h) \u2248p(y|\u02c6 x0\u2217h). Finally, we model the estimation error as following a Gaussian distribution in the compressed STFT domain. p(y|\u02c6 x0 \u2217h) = N(Scomp(y); Scomp(\u02c6 x0 \u2217h), \u03b72I), (4) where Scomp(y) = |STFT(y)|2/3 exp{j\u2220STFT(y)} is the compressed spectrogram. We apply this compression to account for the heavy-tailedness of speech distributions [26]. With this series of approximations, we obtain the following likelihood score: \u2207x\u03c4 log p(y|x\u03c4, h) \u2248\u2212\u03b6(\u03c4)\u2207x\u03c4 C(y, h \u2217\u02c6 x0), (5) where the function C(\u00b7, \u00b7) is defined as: C(y, \u02c6 y) = 1 M M X m=1 K X k=1 \u2225Scomp(y)m,k \u2212Scomp(\u02c6 y)m,k\u22252 2. (6) The weighting parameter \u03b6(\u03c4) controls the trade-off between adherence to the prior data distribution and fidelity to the observed data. According to our Gaussian assumption (4), its theoretical value should depend on the unknown variance \u03b7 as \u03b6(\u03c4) = 1/2\u03b72. In practice, we resort to the same parameterization as in [19,27]. 3.2. Reverberation Operator The employed reverberation operator relies on a subband filtering approximation [28], which is applied within the Short-Time Fourier Transform (STFT) domain. Let H := STFT(h) \u2208CNh\u00d7K represent the STFT of an RIR h with Nh time frames and K frequency bins. Similarly, let X \u2208CM\u00d7K, and Y \u2208CM+Nh\u22121\u00d7K, denote the STFTs of anechoic x0 and reverberant y speech signals, repectively. The subband convolution operation applies independent convolutions along the time dimension of each frequency band: Ym,k = Nh X n=0 Hn,kXm\u2212n,k. (7) In the blind scenario, we need to estimate H, which is an arduous task without knowledge of the anechoic speech. We constrain the space of possible solutions by designing a structured, differentiable RIR prior whose parameters \u03c8 can be estimated through gradient descent. We denote the complete forward reverberation operator, including forward and inverse STFT, as A\u03c8(\u00b7) : RL \u2192RL. We denote as A \u2208RNh\u00d7K and \u03a6 \u2208RNh\u00d7K the RIR magnitudes and phases of H, respectively. We parameterize the magnitude matrix A as a multi-band exponential decay model defined in B < K frequency bands. Let A\u2032 \u2208RNh\u00d7B be the subsampled version of A in the B selected frequency bands. Each frequency band b is characterized by its weight wb and exponential decay rate \u03b1b, such that the corresponding subband magnitude filter can be expressed as: A\u2032 n,b = wbe\u2212\u03b1bn. (8) Once the weights and decay rates parameters are estimated, we reconstruct the magnitudes A by interpolating the subsampled A\u2032 using A = exp(lerp(log(A\u2032))), where lerp represents linear interpolation of the frequencies. 
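A PyTorch version of the measurement-consistency cost of Eqs. (4)-(6) is short enough to sketch; the STFT settings below are simplified (the paper uses a 32 ms Hann window with an 8 ms hop and additional zero-padding), and the magnitude floor is an added numerical safeguard. Because the cost is differentiable, the same function can serve both the likelihood-score term and the RIR parameter updates.

```python
import torch

def compressed_stft(x, n_fft=512, hop=128, exponent=2 / 3):
    """S_comp(x) = |STFT(x)|^(2/3) * exp(j * angle(STFT(x))), as in Eq. (4)."""
    window = torch.hann_window(n_fft, device=x.device)
    X = torch.stft(x, n_fft, hop_length=hop, window=window, return_complex=True)
    return torch.polar(X.abs().clamp_min(1e-8) ** exponent, X.angle())

def measurement_cost(y, y_hat, **stft_kwargs):
    """C(y, y_hat): squared error between compressed spectrograms,
    averaged over time frames (Eq. (6))."""
    Y, Y_hat = compressed_stft(y, **stft_kwargs), compressed_stft(y_hat, **stft_kwargs)
    diff = Y - Y_hat
    return (diff.real ** 2 + diff.imag ** 2).sum() / Y.shape[-1]

y = torch.randn(16000)                      # toy reverberant measurement
y_hat = torch.randn(16000, requires_grad=True)
cost = measurement_cost(y, y_hat)
cost.backward()                             # gradients flow back to the estimate
print(cost.item(), y_hat.grad.shape)
```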
Given the lack of structure of RIR phases, we perform independent optimization for each phase factor in \u03a6. The resulting set of parameters to optimize is therefore \u03c8 = {\u03a6, (wb, \u03b1b)b=1,...,B}. After each optimization step, the estimated time-frequency RIR H is further processed through a projection step: H = STFT(\u03b4 \u2295Pmin(iSTFT(H))). (9) This operation primarily ensures STFT consistency [29] of H. We additionally include a projection Pmin that ensures the time domain RIR has minimum phase lag to guarantee a stable inverse filter, using the Hilbert transform method [30]. Finally, to make the directto-reverberation ratio only depend on the late reverberation and to \fxN \u03c8N xn \u03c8n Score Model s\u03b8(xn, \u03c3n) \u02c6 x0 RIR Optimization \u00d7Nits. Posterior Sampling Step LH Score Approx. \u2212\u03b6(\u03c4n)\u2207xnC(y, A\u03c8n(\u02c6 x0)) xn\u22121 \u03c8n\u22121 x0 \u03c80 Fig. 1: Blind unsupervised dereverberation alternating between RIR estimation and posterior sampling for speech reconstruction. enforce further constraints on \u03c8 for a more stable optimization, we take the direct path to be at the first sample and with amplitude one. This is achieved by replacing the first sample of the time-domain RIR with a unit impulse, as indicated by the operation \u03b4 \u2295(\u00b7). 3.3. Blind Dereverberation Inference The inference process solves the following objective: \u02c6 x0, \u02c6 \u03c8 = arg min x0,\u03c8 C(y, A\u03c8(x0)) + R(\u03c8), s.t. x0 \u223cpdata. (10) This objective seeks to find the optimal speech \u02c6 x0 and RIR parameters \u02c6 \u03c8 that minimize the reconstruction error C(y, A\u03c8(x0)) while also incorporating a regularization term R(\u03c8). An essential aspect is the constraint x0 \u223cpdata, which ensures that the estimated signal \u02c6 x0 adheres to the distribution pdata of anechoic speech samples. This constraint is implemented in a soft manner by leveraging a pretrained score model s\u03b8(x\u03c4, \u03c4) trained on anechoic speech. The inference algorithm is outlined in Algorithm 1 and visualized in Fig. 1, using the discretization further described in Eq. (12). The algorithm employs the likelihood score approximation from Sec. 3.1, but replacing the convolution with the the reverberation operator A\u03c8(\u00b7), while its parameters \u03c8 are optimized in parallel with the speech signal through gradient descent. We introduce in (10) a noise regularization term R(\u03c8): R(\u03c8) = 1 Nh Nh X l=1 K X k=1 \u2225Scomp(\u02c6 h\u03c8)l,k \u2212Scomp(\u02c6 h\u03c8\u2032 + \u03c3\u2032v)l,k\u22252 2, (11) where \u02c6 h\u03c8 = A\u03c8(\u03b4) represents the estimated RIR in the waveform domain, v \u223cN(0, I) is a vector of white Gaussian noise, and \u02c6 h\u03c8\u2032 is a copy of the current estimate of \u02c6 h\u03c8, such that the arg min in (10) does not apply to it. In code, this is analogous to detaching the gradients of \u02c6 h\u03c8 using a stop grad operator. We adopt an annealed schedule for the noise level \u03c3\u2032(\u03c4), resembling the score model schedule \u03c3(\u03c4) but with different hyper-parameters. This regularization term injects noise in the RIR parameter gradients, with decreasing noise power, which enables a wider and smoother exploration while allowing for convergence toward the end of the optimization. 4. EXPERIMENTAL SETUP 4.1. Data We use VCTK [34] as clean speech, selecting 103 speakers for training, 2 for validation and 2 for testing. 
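The magnitude part of the operator A_psi described in Sec. 3.2 (Eqs. (7)-(8)) can be sketched as follows: per-band exponential decays, a fixed linear-interpolation matrix from the B bands to the K STFT bins, and an independent convolution along time in every frequency band. The uniform band centres, the unconstrained log-parameterization and the zero-phase filter in the toy usage are simplifications; the paper additionally optimizes the RIR phases, constrains w_b and alpha_b to the stated ranges, and applies the projection of Eq. (9).

```python
import torch

def interp_matrix(band_freqs, bin_freqs):
    """(K, B) matrix that linearly interpolates values given at the B band
    centres onto all K STFT bin frequencies (clamped at the edges)."""
    K, B = len(bin_freqs), len(band_freqs)
    idx = torch.searchsorted(band_freqs, bin_freqs).clamp(1, B - 1)
    left, right = band_freqs[idx - 1], band_freqs[idx]
    t = ((bin_freqs - left) / (right - left)).clamp(0.0, 1.0)
    W = torch.zeros(K, B)
    W[torch.arange(K), idx - 1] = 1.0 - t
    W[torch.arange(K), idx] += t
    return W

def rir_tf_magnitudes(log_w, log_alpha, W, n_frames):
    """A = exp(lerp(log A')) with A'_{n,b} = w_b * exp(-alpha_b * n), Eq. (8)."""
    n = torch.arange(n_frames, dtype=torch.float32).unsqueeze(1)   # (Nh, 1)
    log_A_band = log_w - torch.exp(log_alpha) * n                  # (Nh, B)
    return torch.exp(log_A_band @ W.T)                             # (Nh, K)

def subband_convolve(X, H):
    """Y_{m,k} = sum_n H_{n,k} X_{m-n,k}: per-frequency convolution along time, Eq. (7)."""
    M, K = X.shape
    Y = torch.zeros(M + H.shape[0] - 1, K, dtype=X.dtype)
    for n in range(H.shape[0]):
        Y[n:n + M] += H[n] * X
    return Y

# Toy usage: B = 26 bands mapped onto K = 513 bins, Nh = 100 frames (800 ms at 8 ms hop).
band_freqs = torch.linspace(0, 8000, 26)          # placeholder band centres (paper uses non-uniform spacing)
bin_freqs = torch.linspace(0, 8000, 513)
W = interp_matrix(band_freqs, bin_freqs)
A = rir_tf_magnitudes(torch.zeros(26), torch.full((26,), -1.0), W, n_frames=100)
Y = subband_convolve(torch.randn(400, 513, dtype=torch.cfloat), A.to(torch.cfloat))
print(A.shape, Y.shape)                           # (100, 513) RIR magnitudes, (499, 513) reverberant STFT
```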
We curate recorded RIRs Algorithm 1 Inference algorithm Require: reverberant speech y xinit \u2190WPE(y) Sample xN \u223cN(xinit, \u03c32 NI) \u25b7Warm initialization Initialize \u03c8N \u25b7Initialize the RIR parameters for n \u2190N, . . . , 1 do \u25b7Discrete step backwards sn \u2190s\u03b8(xn, \u03c4n) \u25b7Evaluate score model \u02c6 x0 \u2190xn \u2212\u03c32 nsn \u25b7Get one-step denoising estimate \u02c6 x0 \u2190Rescale(\u02c6 x0) \u03c80 n\u22121 \u2190\u03c8n \u25b7Use the RIR parameters from last step for j \u21900, . . . , Nits. do \u25b7RIR optimization JRIR(\u03c8j n\u22121) \u2190C(y, A\u03c8j n\u22121(\u02c6 x0)) + R(\u03c8j n\u22121) \u03c8j+1 n\u22121 \u2190\u03c8j n\u22121 \u2212Adam(JRIR(\u03c8j n\u22121)) \u25b7Optim. step \u03c8j+1 n\u22121 \u2190project(\u03c8j+1 n\u22121) \u25b7Projection step \u03c8n\u22121 \u2190\u03c8M n\u22121 gn \u2190\u03b6(\u03c4n)\u2207xnC(y, A\u03c8n\u22121(\u02c6 x0)) \u25b7LH score approx. xn\u22121 \u2190xn \u2212\u03c3n(\u03c3n\u22121 \u2212\u03c3n)(sn + gn) \u25b7Update step return x0 \u25b7Reconstructed audio signal from various public datasets (please visit our code repository for details). In total we obtain approximately 10,000 RIRs, and split them between training, validation, and testing using ratios 0.9, 0.05, and 0.05, respectively. The training and validation sets are only used to train the baselines which require coupled reverberant/anechoic data. All data is resampled at 16 kHz. 4.2. Baselines We compare our method BUDDy to several blind supervised baselines such as NCSN++M [31] and diffusion-based SGMSE+ [14] and StoRM [15]. We also include blind unsupervised approaches leveraging traditional methods such as WPE [6] and Yohena et al. [7], as well as diffusion models Saito et al. [20] and GibbsDDRM [33] with code provided by the authors. For WPE, we take 5 iterations, a filter length of 50 STFT frames (400 ms) and a delay of 2 STFT frames (16 ms). 4.3. Hyperparameters and Training Configuration Data representation: We train the score model s\u03b8 using only the anechoic data from VCTK. For training, 4-s segments are randomly extracted from the utterances. Using publicly available code, the blind supervised models NCSN++M [31], SGMSE+ [14] and StoRM [15] are trained using coupled reverberant/anechoic speech, where the reverberant speech is obtained by convolving the anechoic speech from VCTK with the normalized RIRs. Reverberation operator: For all methods, STFTs are computed using a Hann window of 32 ms and a hop size of 8 ms. For subband filtering, we further employ 50% zero-padding to avoid aliasing artifacts. Given our sampling rate of fs = 16 kHz, this results in K = 513 frequency bins. We set the number of STFT frames of our operator to Nh = 100 (800 ms). We subsample the frequency scale in B = 26 bands, with a 125-Hz spacing between 0 and 1 kHz, a 250-Hz spacing between 1 and 3 kHz, and a 500-Hz spacing between 3 and 8 kHz. We optimize the RIR parameters \u03c8 with Adam, where the learning rate is set to 0.1, the momentum parameters to \u03b21 = 0.9, and \u03b22 = 0.99, and Nits. = 10 optimization iterations per diffusion step. We constrain the weights wb between 0 and 40 dB, \fTable 1: Dereverberation results obtained on VCTK-based reverberant datasets. Values indicate mean and standard deviation. We indicate for each method in the table if is blind (i.e. have no knowledge of the RIR) and/or unsupervised. Boldface numbers indicate best performance for supervised and unsupervised methods separately. 
For all metrics, higher is better. Matched Mismatched Method Blind Unsup. DNS-MOS PESQ ESTOI DNS-MOS PESQ ESTOI Reverberant 3.14 \u00b1 0.52 1.61 \u00b1 0.37 0.50 \u00b1 0.14 3.05 \u00b1 0.47 1.57 \u00b1 0.29 0.47 \u00b1 0.11 RIF+Post [5] \u2717 \u2713 3.41 \u00b1 0.47 2.66 \u00b1 0.40 0.76 \u00b1 0.09 3.55 \u00b1 0.45 2.86 \u00b1 0.31 0.78 \u00b1 0.09 InfDerevDPS [16] \u2717 \u2713 3.91 \u00b1 0.35 3.77 \u00b1 0.41 0.83 \u00b1 0.09 3.92 \u00b1 0.32 3.69 \u00b1 0.31 0.84 \u00b1 0.08 NCSN++M [31] \u2713 \u2717 3.75 \u00b1 0.38 2.85 \u00b1 0.55 0.80 \u00b1 0.10 3.61 \u00b1 0.39 2.08 \u00b1 0.47 0.64 \u00b1 0.09 SGMSE+M [14,31] \u2713 \u2717 3.88 \u00b1 0.32 2.99 \u00b1 0.48 0.78 \u00b1 0.09 3.74 \u00b1 0.34 2.48 \u00b1 0.47 0.69 \u00b1 0.09 StoRM [15] \u2713 \u2717 3.90 \u00b1 0.33 3.33 \u00b1 0.48 0.82 \u00b1 0.10 3.83 \u00b1 0.32 2.51 \u00b1 0.53 0.67 \u00b1 0.09 Yohena and Yatabe [7] \u2713 \u2713 2.99 \u00b1 0.56 1.80 \u00b1 0.33 0.55 \u00b1 0.12 2.94 \u00b1 0.44 1.71 \u00b1 0.29 0.51 \u00b1 0.10 WPE [32] \u2713 \u2713 3.24 \u00b1 0.54 1.81 \u00b1 0.42 0.57 \u00b1 0.14 3.10 \u00b1 0.48 1.74 \u00b1 0.37 0.54 \u00b1 0.12 Saito et al. [20] \u2713 \u2713 3.22 \u00b1 0.56 1.68 \u00b1 0.40 0.51 \u00b1 0.13 3.12 \u00b1 0.52 1.70 \u00b1 0.33 0.52 \u00b1 0.10 GibbsDDRM [33] \u2713 \u2713 3.33 \u00b1 0.53 1.70 \u00b1 0.37 0.51 \u00b1 0.13 3.30 \u00b1 0.52 1.75 \u00b1 0.36 0.52 \u00b1 0.11 BUDDy (proposed) \u2713 \u2713 3.76 \u00b1 0.41 2.30 \u00b1 0.53 0.66 \u00b1 0.12 3.74 \u00b1 0.38 2.24 \u00b1 0.54 0.65 \u00b1 0.12 and the decays \u03b1b between 0.5 and 28. This prevents the optimization from approaching degenerate solutions at early sampling stages. Furthermore, we rescale the denoised estimate \u02c6 x0 at each step to match the empirical dataset standard deviation \u03c3data = 5 \u00b7 10\u22122, so as to enforce a constraint on the absolute magnitudes of \u02c6 h\u03c8 and \u02c6 x0. Forward and reverse diffusion We set the extremal diffusion times to Tmax = 0.5 and Tmin = 10\u22124. For reverse diffusion, we follow Karras et al. [23] and employ a discretization of the diffusion time axis using N = 200 steps according to: \u2200n < N, \u03c4n = \u03c3n = \u0012 T 1/\u03c1 max + n N \u22121(T n/\u03c1 min \u2212T 1/\u03c1 max) \u0013\u03c1 , (12) with warping \u03c1 = 10. We use the second-order Euler-Heun stochastic sampler in [23] with Schurn = 50 and \u03b6\u2032 = 0.5 (prior scaling, see [27]), and the initial point xinit is taken to be the output of WPE [6] (with same parameters as the WPE baseline) plus Gaussian noise with standard deviation \u03c3 = Tmax. The annealing schedule \u03c3\u2032(\u03c4) in the noise regularization term in (11) is the same as the diffusion noise schedule \u03c3(\u03c4) but we bound it between extremal values \u03c3\u2032 min = 5 \u00d7 10\u22124 and \u03c3\u2032 max = 10\u22122. Network architecture: To remain consistent with [16], the unconditional score network architecture is NCSN++M [15, 31], a lighter variant of the NCSN++ [13] with 27.8M parameters instead of 65M. Training configuration: We adopt Adam as the optimizer to train the unconditional score model, with a learning rate of 10\u22124 and an effective batch size of 16 for 190k steps. We track an exponential moving average of the DNN weights with a decay of 0.999. Evaluation metrics: We assess the quality and intelligibility of speech using the intrusive Perceptual Evaluation of Speech Quality (PESQ) [35] and extended short-term objective intelligibility (ESTOI) [36]. 
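Eq. (12) above carries a small typesetting glitch in the exponent of Tmin; both extremal times enter with exponent 1/rho, as in the Karras et al. discretization. With the stated values (Tmax = 0.5, Tmin = 1e-4, N = 200, rho = 10), the schedule can be reproduced directly:

```python
import numpy as np

def noise_discretization(sigma_max=0.5, sigma_min=1e-4, n_steps=200, rho=10.0):
    """Noise levels of Eq. (12): a rho-warped interpolation between
    sigma_max^(1/rho) and sigma_min^(1/rho), raised back to the rho power."""
    n = np.arange(n_steps)
    return (sigma_max ** (1 / rho)
            + n / (n_steps - 1) * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho

sigmas = noise_discretization()
print(sigmas[0], sigmas[-1])   # 0.5 ... 1e-4, heavily concentrated at small noise levels
```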
We also employ the non-intrusive DNS-MOS [37], as a DNN-based mean opinion score (MOS) approximation. 5. RESULTS AND DISCUSSION Table 1 shows the dereverberation results for all baselines and indicates whether each approach is blind and/or unsupervised. We included the results for RIF+Post [5] and InfDerevDPS [16] in the informed scenario to show the upper bound of dereveberation quality one can achieve with perfect knowledge of the room acoustics. We use the same score model s\u03b8 and cost function C(\u00b7, \u00b7) for InfDerevDPS [16] as for BUDDy. Blind supervised approaches NCSN++M, SGMSE+M, and StoRM largely profit from the supervision during training, and boast a better performance compared to the unsupervised methods. However, in the mismatched setting, their performance dwindles because of their limited generalizability. In contrast, the proposed method BUDDy benefits from unsupervised training, and therefore, modifying the acoustic conditions does not impact performance at all: typically NCSN++M loses 0.78 PESQ by switching from the matched case to the mismatched case, where BUDDy loses 0.06. Our method then outperforms NCSN++M and comes within reach of other supervised approaches, although the generative nature of SGMSE+ and StoRM allow them to retain a relatively high generalization ability. We also observe that the traditional blind unsupervised methods such as WPE [6] and Yohena and Yatabe [7] can only perform limited dereverberation, as they do not benefit from the strong anechoic speech prior that learning-based methods parameterized with deep neural networks offer. Finally, we note that BUDDy performs significantly better on all metrics than the diffusion-based blind unsupervised baselines Saito et al. [20] and GibbsDDRM [33], as these perform mild dereverberation in the presented acoustic conditions, where the input direct-to-reverberant ratio is significanty lower than in the authors\u2019 setup. 6. CONCLUSIONS This paper presents BUDDy, the first unsupervised method simultaneously performing blind dereverberation and RIR estimation using diffusion posterior sampling. BUDDy significantly outperforms traditional and diffusion-based unsupervised blind approaches. Unlike blind supervised methods, which often struggle with generalization to unseen acoustic conditions, our unsupervised approach overcomes this limitation due to its ability to adapt the reverberation operator to a broad range of room impulse responses. While blind supervised methods outperform our approach when the tested conditions match those at training time, our method is on par or even outperforms some supervised baselines in a mismatched setting. \f7."
18
+ }
title_10K/test_title_short_2405.04356v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04356v1",
3
+ "title": "Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation",
4
+ "abstract": "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with inputs. We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.",
5
+ "authors": "Jihyun Kim, Changjae Oh, Hoseok Do, Soohyun Kim, Kwanghoon Sohn",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation",
15
+ "main_content": "Introduction In recent years, multi-modal image generation has achieved remarkable success, driven by the advancements in Generative Adversarial Networks (GANs) [15] and diffusion models (DMs) [11, 18, 48]. Facial image processing has become a popular application for a variety of tasks, including face image generation [21, 39], face editing [6, 12, 30, 36, 37, 46], and style transfer [7, 64]. Many tasks typically utilize the pre-trained StyleGAN [21, 22], which can generate realistic facial images and edit facial attributes by manipulating the latent space using GAN inversion [39, 42, 58]. In these tasks, using multiple modalities as conditions is becoming a popular approach, which improves the user\u2019s controllability in generating realistic face images. However, existing GAN *Corresponding author This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF2021R1A2C2006703). rebuttal (a) Oil painting (b) Watercolor Visual input 2D face image generation 3D-aware face image generation Face style transfer \u201cThe woman has bangs, brown hair. She is smiling.\u201d \u201cGreek statue\u201d \u201csilver hair Elf\u201d \u201cCartoon style\u201d Overview of our method \u201cThe chubby man has receding hairline, eyeglasses, gray hair, and double chin.\u201d \u201cWatercolor painting\u201d GAN Ours Diffusion \u201cShe has blond hair, straight hair, and wears heavy makeup.\u201d Visual condition Text condition Figure 1. We present a method to map the diffusion features to the latent space of a pre-trained GAN, which enables diverse tasks in multi-modal face image generation and style transfer. Our method can be applied to 2D and 3D-aware face image generation. inversion methods [51, 58] have poor alignment with inputs as they neglect the correlation between multi-modal inputs. They struggle to map the different modalities into the latent space of the pre-trained GAN, such as by mixing the latent codes or optimizing the latent code converted from a given image according to the input text. Recently, DMs have increased attention in multi-modal image generation thanks to the stability of training and the flexibility of using multiple modalities as conditions. DMs [23, 53, 54] can control the multiple modalities and render diverse images by manipulating the latent or attention features across the time steps. However, existing textto-image DMs rely on an autoencoder and text encoder, such as CLIP [41], trained on unstructured datasets collected from the web [40, 45] that may lead to unrealistic arXiv:2405.04356v1 [cs.CV] 7 May 2024 \fimage generation. Moreover, some approaches address multi-modal face image generation in a 3D domain. In GAN inversion [14, 51], multi-view images can be easily acquired by manipulating the latent code with pre-trained 3D GANs. While DMs are inefficient in learning 3D representation, which has the challenge to generate multi-view images directly due to the lack of 3D ground-truth (GT) data for training [32, 47]. They can be used as a tool to acquire training datasets for 3D-aware image generation [24, 33]. In this paper, we present a versatile face generative model that uses text and visual inputs. We propose an approach that takes the strengths of DMs and GAN and generates photo-realistic images with flexible control over facial attributes, which can be adapted to 2D and 3D domains, as illustrated in Figure 1. 
Our method employs a latent mapping strategy that maps the diffusion features into the latent space of a pre-trained GAN using multi-denoising step learning, producing the latent code that encodes the details of text prompts and visual inputs. In summary, our main contributions are: (i) We present a novel method to link a pre-trained GAN (StyleGAN [22], EG3D [4]) and DM (ControlNet [62]) for multi-modal face image generation. (ii) We propose a simple mapping network that links pretrained GAN and DM\u2019s latent spaces and an attentionbased style modulation network that enables the use of meaningful features related to multi-modal inputs. (iii) We present a multi-denoising step training strategy that enhances the model\u2019s ability to capture the textual and structural details of multi-modal inputs. (iv) Our model can be applied for both 2Dand 3D-aware face image generation without additional data or loss terms and outperforms existing DMand GAN-based methods. 2. Related Work 2.1. GAN Inversion GAN inversion approaches have gained significant popularity in the face image generation task [7, 31, 51, 59] using the pre-trained 2D GAN, such as StyleGAN [21, 22]. This method has been extended to 3D-aware image generation [27, 60, 61] by integrating 3D GANs, such as EG3D [4]. GAN inversion can be categorized into learning-based, optimization-based, and hybrid methods. Optimization-based methods [44, 67] estimate the latent code by minimizing the difference between an output and an input image. Learning-based methods [1, 52] train an encoder that maps an input image into the latent space of the pre-trained GAN. Hybrid methods [58, 66] combine these two methods, producing an initial latent code and then refining it with additional optimizations. Our work employs a learning-based GAN inversion, where a DM serves as the encoder. We produce latent codes by leveraging semantic features in the denoising U-Net, which can generate images with controlled facial attributes. 2.2. Diffusion Model for Image Generation Many studies have introduced text-to-image diffusion models [36, 43, 45] that generate images by encoding multimodal inputs, such as text and image, into latent features via foundation models [41] and mapping them to the features of denoising U-Net via an attention mechanism. ControlNet [62] performs image generation by incorporating various visual conditions (e.g., semantic mask, scribbles, edges) and text prompts. Image editing models using DMs [16, 20, 26, 28, 34] have exhibited excellent performance by controlling the latent features or the attention maps of a denoising U-Net. Moreover, DMs can generate and edit images by adjusting latent features over multiple denoising steps [2]. We focus on using latent features of DM, including intermediate features and cross-attention maps, across denoising steps to link them with the latent space of GAN and develop a multi-modal face image generation task. 2.3. Multi-Modal Face Image Generation Face generative models have progressed by incorporating various modalities, such as text [25], semantic mask [38, 55], sketch [5, 9], and audio [65]. Several methods adopt StyleGAN, which can generate high-quality face images and edit facial attributes to control the style vectors. The transformer-based models [3, 13] are also utilized, which improves the performance of face image generation by handling the correlation between multi-modal conditions using image quantization. 
A primary challenge faced in face generative models is to modify the facial attributes based on given conditions while minimizing changes to other attributes. Some methods [39, 57] edit facial attributes by manipulating the latent codes in GAN models. TediGAN [58] controls multiple conditions by leveraging an encoder to convert an input image into latent codes and optimizing them with a pre-trained CLIP model. Recent works [19, 35] use DMs to exploit the flexibility of taking multiple modalities as conditions and generate facial images directly from DMs. Unlike existing methods, we use the pre-trained DM [62] as an encoder to further produce the latent codes for the pre-trained GAN models. 3. Method 3.1. Overview Figure 2 illustrates the overall pipeline of our approach. During the reverse diffusion process, we use the middle and decoder blocks of a denoising U-Net in ControlNet [62] as an encoder E. A text prompt c, along with a visual condition x, are taken as input to the denoising U-Net. Subsequently, E produces the feature maps h from the middle block, and \f\ud835\udc300 \ud835\udefe \u2219\u2219\u2219 \ud835\udc61= 0 \ud835\udc61= \ud835\udc47 \ud835\udc3c0 \u2032 \ud835\udc3c0 \ud835\udc51 \ud835\udc210 \ud835\udc3c\ud835\udc47 \u2032 \u2219\u2219\u2219 Conv ReLU \ud835\udc21\ud835\udc61 \ud835\udc30\ud835\udc61 \ud835\udc5a \ud835\udc300 \ud835\udc300 \ud835\udefd Conv ReLU FC \u0de0 \ud835\udc05\ud835\udc61 \ud835\udc30\ud835\udc61 \ud835\udefe \ud835\udc30\ud835\udc61 \ud835\udefd \ud835\udc1f0 \ud835\udc300 \ud835\udc5a \ud835\udc50 Reverse Process of Diffusion \ud835\udc1a\ud835\udc61 \ud835\udc1f\ud835\udc61 Max-pool Average Average Upsample \ud835\udc05\ud835\udc61 \ud835\udc00\ud835\udc61 \u0d25 \ud835\udc00\ud835\udc61 \u0d24 \ud835\udc05\ud835\udc61 Style Modulation Network \u0de0 \u0d24 \ud835\udc05\ud835\udc61 \ud835\udc1a0 \ud835\udc50 \u201cThis person has arched eyebrows, wavy hair, and mouth slightly open.\u201d \u201cThis person has arched eyebrows, wavy hair, and mouth slightly open.\u201d Pixel-wise multiplication Pixel-wise addition Our Model Mapping Network AbSMNet Frozen Figure 2. Overview of our method. We use a diffusion-based encoder E, the middle and decoder blocks of a denoising U-Net, that extracts the semantic features ht, intermediate features ft, and cross-attention maps at at denoising step t. We present the mapping network M (Sec. 3.2) and the attention-based style modulation network (AbSMNet) T (Sec. 3.3) that are trained across t (Sec. 3.4). M converts ht into the mapped latent code wm t , and T uses ft and at to control the facial attributes from the text prompt c and visual input x. The modulation codes w\u03b3 t and w\u03b2 t are then used to scale and shift wm t to produce the final latent code, wt, that is fed to the pre-trained GAN G. We obtain the generation output I\u2032 t from our model Y and we use the image Id 0 from the U-Net after the entire denoising process for training T (Sec. 3.4). Note that only the networks with the dashed line ( ) are trainable, while others are frozen. the intermediate features f and the cross-attention maps a from the decoder blocks. h is then fed into the mapping network M, which transforms the rich semantic feature into a latent code wm. The Attention-based Style Modulation Network (AbSMNet), T , takes f and a as input to generate the modulation latent codes, w\u03b3 and w\u03b2, that determine facial attributes related to the inputs. 
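As a concrete illustration of the mapping step described above, a minimal PyTorch sketch of a map2style-like network M is given below. It assumes the mid-block U-Net feature h_t arrives as a (B, C, 8, 8) tensor and that the pre-trained GAN consumes L style vectors of dimension 512; the class name, channel sizes, and layer count are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Sketch of M: maps mid-block U-Net features h_t to a GAN latent code w^m_t.

    Shapes are illustrative: h_t is assumed to be (B, in_channels, 8, 8) and the
    pre-trained GAN is assumed to expect L = num_latents style vectors of size 512.
    """
    def __init__(self, in_channels=1280, num_latents=14, latent_dim=512):
        super().__init__()
        self.conv = nn.Sequential(                       # downsample 8x8 -> 2x2
            nn.Conv2d(in_channels, 512, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(512, 512, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )
        self.fc = nn.Linear(512 * 2 * 2, num_latents * latent_dim)
        self.num_latents, self.latent_dim = num_latents, latent_dim

    def forward(self, h_t):
        x = self.conv(h_t).flatten(1)                    # (B, 512 * 2 * 2)
        w_m = self.fc(x).view(-1, self.num_latents, self.latent_dim)
        return w_m                                       # (B, L, 512)
```

In the full pipeline, the resulting mapped code is further scaled and shifted by the modulation codes produced by T before being passed to the generator.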
The latent code w is then forwarded to the pre-trained GAN G that generates the output image I\u2032. Our model is trained across multiple denoising steps, and we use the denoising step t to indicate the features and images obtained at each denoising step. With this pipeline, we aim to estimate the latent code, w\u2217 t , that is used as input to G to render a GT image, Igt: w\u2217 t = arg min wt L(Igt, G(wt)), (1) where L(\u00b7, \u00b7) measures the distance between Igt and the rendered image, I\u2032 = G(wt). We employ learning-based GAN inversion that estimates the latent code from an encoder to reconstruct an image according to given inputs. 3.2. Mapping Network Our mapping network M aims to build a bridge between the latent space of the diffusion-based encoder E and that of the pre-trained GAN G. E uses a text prompt and a visual input, and these textual and image embeddings are aligned by the cross-attention layers [62]. The feature maps h from the middle block of the denoising U-Net particularly contain rich semantics that resemble the latent space of the generator [28]. Here we establish the link between the latent spaces of E and G by using ht across the denoising steps t. Given ht, we design M that produces a 512-dimensional latent code wm t \u2208RL\u00d7512 that can be mapped to the latent space of G: wm t = M(ht). (2) M is designed based on the structure of the map2style block in pSp [42], as seen in Figure 2. This network consists of convolutional layers downsampling feature maps and a fully connected layer producing the latent code wm t . 3.3. Attention-based Style Modulation Network By training M with learning-based GAN inversion, we can obtain wm t and use it as input to the pre-trained GAN for image generation. However, we observe that ht shows limitations in capturing fine details of the facial attributes due to its limited spatial resolution and data loss during the encoding. Conversely, the feature maps of the DM\u2019s decoder blocks show rich semantic representations [53], benefiting from aggregating features from DM\u2019s encoder blocks via skip connections. We hence propose a novel Attentionbased Style Modulation Network (AbSMNet), T , that produces style modulation latent codes, w\u03b3 t , w\u03b2 t \u2208RL\u00d7512, by using ft and at from E. To improve reflecting the multimodal representations to the final latent code wt, we modulate wm t from M using w\u03b3 t and w\u03b2 t , as shown in Figure 2. We extract intermediate features, ft = {f n t }N n=1, from N different blocks, and cross-attention maps, at = {ak t }K k=1, from K different cross-attention layers of the n-th block, in E that is a decoder stage of denoising U-Net. The discrim\f(a) Cross-attention maps averaging for all denoising steps t= 0 \ud835\udc61= \ud835\udc47 (b) Cross-attention maps for individual denoising steps \ud835\udc00\ud835\udc61 0 \ud835\udc00\ud835\udc61 1 \ud835\udc00\ud835\udc61 2 \u0d25 \ud835\udc00\ud835\udc61 \ud835\udc00\ud835\udc47 1 \ud835\udc05\ud835\udc47 1 \u0de0 \ud835\udc05\ud835\udc47 1 (c) Example of an intermediate feature map Multi-modal inputs Output \u201cThe person has arched eyebrows, wavy hair, and mouth slightly open.\u201d Figure 3. Visualization of cross-attention maps and intermediate feature maps. (a) represents the semantic relation information between an input text and an input semantic mask in the spatial domain. The meaningful representations of inputs are shown across all denoising steps and N different blocks. 
(b) represents N different cross-attention maps, At, at denoising steps t = T and t = 0. (c) shows the example of refined intermediate feature map \u02c6 F1 T at 1st block and t = T that is emphasized corresponding to input multi-modal conditions. The red and yellow regions of the map indicate higher attention scores. As the denoising step approaches T, the text-relevant features appear more clearly, and as the denoising step t approaches 0, the features of the visual input are more preserved. inative representations are represented more faithfully because ft consists of N multi-scale feature maps that can capture different sizes of facial attributes, which allows for finer control over face attributes. For simplicity, we upsample each intermediate feature map of ft to same size intermediate feature maps Ft = {Fn t }N n=1, where Fn t \u2208RH\u00d7W \u00d7Cn has H, W, and Cn as height, width and depth. Moreover, at is used to amplify controlled facial attributes as it incorporates semantically related information in text and visual input. To match the dimension with Ft, we convert at to At = {An t }N n=1, where An t \u2208RH\u00d7W \u00d7Cn, by max-pooling the output of the cross-attention layers in each decoder block and upsampling the max-pooling outputs. To capture the global representations, we additionally compute \u00af At \u2208RH\u00d7W \u00d71 by depth-wise averaging the max-pooling output of at over each word in the text prompt and upsampling it. As illustrated in Figures 3 (a) and (b), At and \u00af At represent the specific regions aligned with input text prompt and visual input, such as semantic mask, across denoising steps t. By a pixel-wise multiplication between Ft and At, we can obtain the refined intermediate feature maps \u02c6 Ft that emphasize the representations related to multiShift Net \u0de1 \ud835\udc6d\ud835\udc61 \ud835\udefd\ud835\udc54 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefd \ud835\udc6d\ud835\udc61 Weighted sum map2style \ud835\udc30\ud835\udc61 \ud835\udefe \ud835\udc30\ud835\udc61 \ud835\udefd Scale Net \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe\ud835\udc59 Shift Net Concat Scale Net Shift Net \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe\ud835\udc54 \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd\ud835\udc54 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefe \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd\ud835\udc59 \ud835\udefc\ud835\udc61 \ud835\udefd 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefd \ud835\udefc\ud835\udc61 \ud835\udefe map2style \u0de0 \u0d24 \ud835\udc05\ud835\udc61 \u0de0 \ud835\udc05\ud835\udc61 Weighted sum \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe Figure 4. Style modulation network in T . The refined intermediate feature maps \u02c6 Ft and \u02c6 \u00af Ft are used to capture local and global semantic representations, respectively. They are fed into the scale and shift network, respectively. The weighted summations of these outputs are used as input to the map2style network, which finally generates the scale and shift modulation latent codes, w\u03b3 t , and w\u03b2 t . modal inputs as shown in Figure 3 (c). The improved average feature map \u02c6 \u00af Ft \u2208RH\u00d7W \u00d71 is also obtained by multiplying \u00af At with \u00af Ft, where \u00af Ft \u2208RH\u00d7W \u00d71 is obtained by first averaging the feature maps in Ft = {Fn t }N n=1 and then depth-wise averaging the outputs. 
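The attention-guided refinement described above could be sketched roughly as follows, assuming each decoder block yields a feature map of shape (B, C_n, H_n, W_n) and a cross-attention tensor of shape (B, heads, H_n*W_n, tokens). For brevity the sketch collapses the attention to a single word-averaged channel before the pixel-wise multiplication and folds that weighting in before averaging, which simplifies the channel matching the paper performs via max-pooling; it is an approximation of the described operations, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def refine_features(f_list, a_list, size=(64, 64)):
    """Sketch of building refined maps F_hat and their global average.

    f_list: intermediate decoder features, each (B, C_n, H_n, W_n)
    a_list: cross-attention maps, each assumed (B, heads, H_n*W_n, tokens)
    Returns per-block refined feature maps and a single global refined map.
    """
    refined, attn_means = [], []
    for f_n, a_n in zip(f_list, a_list):
        B, _, H_n, W_n = f_n.shape
        F_n = F.interpolate(f_n, size=size, mode="bilinear", align_corners=False)
        # max over heads, then lay tokens back onto the spatial grid
        a_sp = a_n.max(dim=1).values.transpose(1, 2).reshape(B, -1, H_n, W_n)
        A_n = F.interpolate(a_sp, size=size, mode="bilinear", align_corners=False)
        word_avg = A_n.mean(dim=1, keepdim=True)          # average over text tokens
        refined.append(F_n * word_avg)                    # pixel-wise emphasis
        attn_means.append(word_avg)
    F_bar = torch.stack([r.mean(dim=1, keepdim=True) for r in refined]).mean(0)
    A_bar = torch.stack(attn_means).mean(0)
    return refined, F_bar * A_bar
```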
\u02c6 Ft and \u02c6 \u00af Ft distinguish textand structural-relevant semantic features, which improves the alignment with the inputs. We use \u02c6 Ft and \u02c6 \u00af Ft as input to the style modulation network that produces the modulation codes w\u03b3 t , and w\u03b2 t as shown in Figure 4. We capture both local and global features by using \u02c6 Ft, which consists of feature maps representing different local regions on the face, and \u02c6 \u00af Ft, which implies representations of the entire face. We concatenate N intermediate feature maps of \u02c6 Ft, concat(\u02c6 F1 t \u00b7 \u00b7 \u00b7 \u02c6 FN t ), and it is forward to the scale and shift networks that consist of convolutional layers and Leaky ReLU, forming the local modulation feature maps, \u02c6 F\u03b3l t and \u02c6 F\u03b2l t . We also estimate global modulation feature maps, \u02c6 F\u03b3g t and \u02c6 F\u03b2g t , by feeding \u02c6 \u00af Ft to the scale and shift network. The final scale, \u02c6 F\u03b3 t , and shift, \u02c6 F\u03b2 t , feature maps are estimated by the weighted summation: \u02c6 F\u03b3 t = \u03b1\u03b3 t \u02c6 F\u03b3l t + (1 \u2212\u03b1\u03b3 t )\u02c6 F\u03b3g t , (3) \u02c6 F\u03b2 t = \u03b1\u03b2 t \u02c6 F\u03b2g t + (1 \u2212\u03b1\u03b2 t )\u02c6 F\u03b2g t , where \u03b1\u03b3 t and \u03b1\u03b2 t are learnable weight parameters. Through the map2style module, we then convert \u02c6 F\u03b3 t and \u02c6 F\u03b2 t into the final scale, w\u03b3 t \u2208RL\u00d7512, and shift, w\u03b2 t \u2208RL\u00d7512, latent codes. With these modulation latent codes, we achieve more precise control over facial details while corresponding to the input multi-modal inputs at the pixel level. Finally, the mapped latent code wm t from M is modulated by w\u03b3 t and w\u03b2 t from T to get the final latent code wt that is used to obtain the generated image I\u2032 t as follows: wt = wm t \u2299w\u03b3 t \u2295w\u03b2 t , (4) I\u2032 t = G(wt). (5) \f10132 5987 13044 9807 rebuttal (a) \u201cThis person has brown hair, and eyeglasses.\u201d (b)\u201cThis person has mustache.\u201d (c) \u201cThis person has gray hair, and eyeglasses.\u201d Inputs TediGAN UaC Ours (a) (b) (c) (a) (b) (c) (a) (b) (c) (a) \u201cShe has high cheekbones, straight hair, black hair.\u201d (b)\u201cShe has high cheekbones, straight hair, blond hair.\u201d (c) \u201cHe has blond hair, sideburns.\u201d (a) \u201cHe has brown hair, and wavy hair.\u201d (b)\u201cHe has black hair, and straight hair.\u201d (c) \u201cHe has black hair, and goatee.\u201d Collaborative ControlNet Figure 5. Visual examples of the 2D face image generation using a text prompt and a semantic mask. For each semantic mask, we use three different text prompts (a)-(c), resulting in different output images (a)-(c). 3.4. Loss Functions To optimize M and T , we use reconstruction loss, perceptual loss, and identity loss for image generation, and regularization loss [42] that encourages the latent codes to be closer to the average latent code \u00af w. For training M, we use the GT image Igt as reference to encourage the latent code wm t to generate a photo-realistic image as follows: LM = \u03bbm 0 \u2225Igt \u2212G(wm t )\u22252+ (6) \u03bbm 1 \u2225F(Igt) \u2212F(G(wm t )\u22252+ \u03bbm 2 (1 \u2212cos(R(Igt), R(G(wm t ))))+ \u03bbm 3 \u2225E(zt, t, x, c) \u2212\u00af w\u22252, where R(\u00b7) is pre-trained ArcFace network [8], F(\u00b7) is the feature extraction network [63], zt is noisy image, and the hyper-parameters \u03bbm (\u00b7) guide the effect of losses. 
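A hedged sketch of the L_M terms listed above is given below; `feat_net` and `id_net` stand in for the perceptual feature extractor and the ArcFace identity network, and `lambdas` holds the weighting hyper-parameters, so this is an illustrative reconstruction of the objective rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def mapping_loss(I_gt, I_pred, w_m, w_avg, feat_net, id_net, lambdas):
    """Sketch of the L_M objective: pixel reconstruction, perceptual distance,
    identity cosine similarity, and a regularizer pulling the latent code
    toward the average latent w_avg. feat_net / id_net are stand-ins for the
    perceptual and face-identity feature extractors.
    """
    rec = F.mse_loss(I_pred, I_gt)
    perc = F.mse_loss(feat_net(I_pred), feat_net(I_gt))
    ident = 1.0 - F.cosine_similarity(id_net(I_pred), id_net(I_gt), dim=-1).mean()
    reg = (w_m - w_avg).pow(2).mean()
    return (lambdas["rec"] * rec + lambdas["perc"] * perc
            + lambdas["id"] * ident + lambdas["reg"] * reg)
```

L_T follows the same pattern, with the U-Net output image replacing the ground-truth image in the reconstruction and perceptual terms.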
Note that we freeze T while training M. For training T , we use Id 0 produced by the encoder E into the reconstruction and perceptual losses. With these losses, the loss LT encourages the network to control facial attributes while preserving the identity of Igt: LT = \u03bbs 0\u2225Id 0 \u2212G(wt)\u22252+ (7) \u03bbs 1\u2225F(Id 0) \u2212F(G(wt)\u22252+ \u03bbs 2(1 \u2212cos(R(Igt), R(G(wt))))+ \u03bbs 3\u2225E(zt, t, x, c) \u2212\u00af w\u22252, where the hyper-parameters \u03bbs (\u00b7) guide the effect of losses. Similar to Equation 6, we freeze M while training T . We further introduce a multi-step training strategy that considers the evolution of the feature representation in E over the denoising steps. We observe that E tends to focus more on text-relevant features in an early step, t = T, and structure-relevant features in a later step, t = 0. Figure 3 (b) shows the attention maps \u00af A showing variations across the denoising step. As the attention map, we can capture the textual and structural features by varying the denoising steps. To effectively capture the semantic details of multi-modal conditions, our model is trained across multiple denoising steps. 4. Experiments 4.1. Experimental Setup We use ControlNet [62] as the diffusion-based encoder that receives multi-modal conditions, including text and visual conditions such as a semantic mask and scribble map. The StyleGAN [22] and EG3D [4] are exploited as pre-trained 2D and 3D GAN, respectively. See the Supplementary Material for the training details, the network architecture, and additional results. Datasets. We employ the CelebAMask-HQ [29] dataset comprising 30,000 face RGB images and annotated semantic masks, including 19 facial-component categories such as skin, eyes, mouth, and etc. We also use textual de\fOurs I (a) (b) (c) (d) Ours IDE-3D \u201cThe person has brown hair, and sideburn.\u201d \u201cThe person has gray hair, and straight hair.\u201d \u201cThe person has gray hair, and straight hair.\u201d \u201cThe person has black hair, and wavy hair.\u201d (a) (b) (c) (d) Inputs Figure 6. Visual examples of the 3D-aware face image generation using a text and a semantic mask. We show the images generated with inputs and arbitrary viewpoints. Input conditions Method Model Domain FID\u2193 LPIPS\u2193 SSIM\u2191 ID\u2191 ACC\u2191 mIoU\u2191 Text + semantic mask TediGAN [58] GAN 2D 54.83 0.31 0.62 0.63 81.68 40.01 IDE-3D [51] GAN 3D 39.05 0.40 0.41 0.54 47.07 10.98 UaC [35] Diffusion 2D 45.87 0.38 0.59 0.32 81.49 42.68 ControlNet [62] Diffusion 2D 46.41 0.41 0.53 0.30 82.42 42.77 Collaborative [19] Diffusion 2D 48.23 0.39 0.62 0.31 74.06 30.69 Ours GAN 2D 46.68 0.30 0.63 0.76 83.41 43.82 Ours GAN 3D 44.91 0.28 0.64 0.78 83.05 43.74 Text + scribble map ControlNet [62] Diffusion 2D 93.26 0.52 0.25 0.21 Ours GAN 2D 55.60 0.32 0.56 0.72 Ours GAN 3D 48.76 0.34 0.49 0.62 Table 1. Quantitative results of multi-modal face image generation on CelebAMask-HQ [29] with annotated text prompts [58]. scriptions provided by [58] describing the facial attributes, such as black hair, sideburns, and etc, corresponding to the CelebAMask-HQ dataset. For the face image generation task using a scribble map, we obtain the scribble maps by applying PiDiNet [49, 50] to the RGB images in CelebAMask-HQ. We additionally compute camera parameters based on [4, 10] for 3D-aware image generation. Comparisons. 
We compare our method with GAN-based models, such as TediGAN [58] and IDE-3D [51], and DMbased models, such as Unite and Conquer (UaC) [35], ControlNet [62], and Collaborative diffusion (Collaborative) [19], for face generation task using a semantic mask and a text prompt. IDE-3D is trained by a CLIP loss term like TediGAN to apply a text prompt for 3D-aware face image generation. ControlNet is used for face image generation using a text prompt and a scribble map. We use the official codes provided by the authors, and we downsample the results into 256 \u00d7 256 for comparison. Evaluation Metrics. For quantitative comparisons, we evaluate the image quality and semantic consistency using sampled 2k semantic maskand scribble map-text prompt pairs. Frechet Inception Distance (FID) [17], LPIPS [63], and the Multiscale Structural Similarity (MS-SSIM) [56] are employed for the evaluation of visual quality and diversity, respectively. We also compute the ID similarity mean score (ID) [8, 57] before and after applying a text prompt. Additionally, we assess the alignment accuracy between the input semantic masks and results using mean Intersectionover-Union (mIoU) and pixel accuracy (ACC) for the face generation task using a semantic mask. 4.2. Results Qualitative Evaluations. Figure 5 shows the visual comparisons between ours and two existing methods for 2D face image generation using a text prompt and a semantic mask as input. We use the same semantic mask with different text prompts (a)-(c). TediGAN produces results consistent with the text prompt as the latent codes are optimized using the input text prompt. However, the results are inconsistent with the input semantic mask, as highlighted in the red boxes. UaC shows good facial alignment with the input semantic mask, but the results are generated with unexpected attributes, such as glasses, that are not indicated in the inputs. Collaborative and ControlNet produce inconsistent, blurry, and unrealistic images. Our model is capable of preserving semantic consistency with inputs and generating realistic facial images. As shown in Figure 5, our method preserves the structure of the semantic mask, such as the hairline, face position, and mouth shape, while changing the attributes through a text prompt. Figure 6 compares our method with IDE-3D [51] to validate the performance of 3D-aware face image generation \fInput View 1. 2. 3. 4. Novel Views (a) Inputs (b) ControlNet (c) Ours Input text: 1. \u201cThis young woman has straight hair, and eyeglasses and wears lipstick.\u201d 2. \u201cThe man has mustache, receding hairline, big nose, goatee, sideburns, bushy eyebrows, and high cheekbones.\u201d 3. \u201cShe has big lips, pointy nose, receding hairline, and arched eyebrows.\u201d 4. \u201cThis man has mouth slightly open, and arched eyebrows. He is smiling.\u201d Figure 7. Visual examples of 3D-aware face image generation using text prompts and scribble maps. Using (1-4) the text prompts and their corresponding (a) scribble maps, we compare the results of (b) ControlNet with (c) multi-view images generated by ours. using a semantic mask and a text prompt. We use the same semantic mask with different text prompts in Figures 6 (a) and (b), and use the same text prompt with different semantic masks in Figures 6 (c) and (d). The results of IDE-3D are well aligned with the semantic mask with the frontal face. However, IDE-3D fails to produce accurate results when the non-frontal face mask is used as input. 
Moreover, the results cannot reflect the text prompt. Our method can capture the details provided by input text prompts and semantic masks, even in a 3D domain. Figure 7 shows visual comparisons with ControlNet on 2D face generation from a text prompt and a scribble map. The results from ControlNet and our method are consistent with both the text prompt and the scribble map. ControlNet, however, tends to over-emphasize the characteristic details related to input conditions. Our method can easily adapt to the pre-trained 3D GAN and produce photo-realistic multiview images from various viewpoints. Quantitative Evaluations. Table 1 reports the quantitative results on CelebAMask-HQ with text prompts [58]. Our method using text prompts and semantic masks shows performance increases in all metrics in 2D and 3D domains, compared with TediGAN and UaC. Our model using 2D GAN significantly improves LPIPS, ID, ACC, and mIoU scores, surpassing TediGAN, UaC, ControlNet, and Collaborative, respectively. It demonstrates our method\u2019s strong ability to generate photo-realistic images while reflecting input multi-modal conditions better. For 3D-aware face image generation using a text prompt and a semantic mask, it \ud835\udcaf (c) w/o \ud835\udc34, \u04a7 \ud835\udc34 (d) Full model urns, and bags under eyes.\u201d and has arched eyebrows, black hair.\u201d 2. 3. 1. Input text: 1. \u201cThis man has gray hair.\u201d 2. \u201cHe has double chin, sideburns, and bags under eyes.\u201d 3. \u201cShe wears heavy makeup and has arched eyebrows, black hair.\u201d (a) Inputs (b) w/o T (c) w/o A, \u00af A (d) Ours Figure 8. Effect of M and T . (b) shows the results using only M, and (c) shows the effect of the cross-attention maps (A and \u00af A) in T . The major changes are highlighted with the white boxes. Method M T At Igt Id 0 FID\u2193 LPIPS\u2193ID\u2191 ACC\u2191 (a) \u2713 \u2713 \u2713 62.08 0.29 0.62 81.09 (b) \u2713 \u2713 \u2713 \u2713 48.68 0.28 0.66 82.86 (c) \u2713 \u2713 \u2713 \u2713 54.27 0.31 0.58 80.58 (d) \u2713 \u2713 \u2713 \u2713 61.60 0.29 0.62 80.04 (e) \u2713 \u2713 \u2713 \u2713 \u2713 44.91 0.28 0.78 83.05 Table 2. Ablation analysis on 3D-aware face image generation using a text prompt and a semantic mask. We compare (a) and (b) with (e) to show the effect of our style modulation network and (c) and (d) with (e) to analyze the effect of Igt and Id in model training. is reasonable that IDE-3D shows the highest FID score as the method additionally uses an RGB image as input to estimate the latent code for face generation. The LPIPS, SSIM, and ID scores are significantly higher than IDE-3D, with scores higher by 0.116, 0.23, and 0.24, respectively. Our method using 3D GAN exhibits superior ACC and mIoU scores for the 3D face generation task compared to IDE3D, with the score difference of 35.98% and 32.76%, likely due to its ability to reflect textual representations into spatial information. In face image generation tasks using a text prompt and a scribble map, our method outperforms ControlNet in FID, LPIPS, SSIM, and ID scores in both 2D and 3D domains. Note that the ACC and mIoU scores are applicable for semantic mask-based methods. 4.3. Ablation Study We conduct ablation studies to validate the effectiveness of our contributions, including the mapping network M, the AbSM network T , and the loss functions LM and LT . Effectiveness of M and T . We conduct experiments with different settings to assess the effectiveness of M and T . 
\fw/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours (a) Inputs (b) w/ \ud835\udc3c\ud835\udc61=0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours 2. (a) Inputs (b) w/ \ud835\udc3c\ud835\udc61=0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours \u201cShe wears lipstick and has arched eyebrows, and slightly \u201cThis young person has goatee, mustache, big lips, and strai d) Ours urs and big lips, ws, and (a) Inputs (b) w/ \ud835\udc3c0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours 2. 1. Input text: 1. \u201cThis young person has goatee, mustache, big lips, and straight hair.\u201d 2. \u201cShe wears lipstick and has arched eyebrows, and mouth slightly open.\u201d Figure 9. Effect of using Id from the denoising U-Net and the GT image Igt in model training. Using text prompts (1, 2) with (a) the semantic mask, we show face images using our model trained with (b) Id 0 , (c) Igt, and (d) both. We also show the advantages of using cross-attention maps in our model. The quantitative and qualitative results are presented in Table 2 and Figure 8, respectively. When using only M, we can generate face images that roughly preserve the structures of a given semantic mask in Figure 8 (a), including the outline of the facial components (e.g. face, eye) in Figure 8 (b). On the other hand, T enables the model to express face attribute details effectively, such as hair colors and mouth open, based on the multi-modal inputs in Figure 8 (c). The FID and ACC scores are higher than the model using only M in Table 2 (b). We further present the impact of adopting cross-attention maps to T for style modulation. Figure 8 (d) shows how the attention-based modulation approach enhances the quality of results, particularly in terms of the sharpness of desired face attributes and the overall consistency between the generated image and multi-modal conditions. Table 2 (e) demonstrates the effectiveness of our method by showing improvements in FID, LPIPS, ID, and ACC. Our method, including both M and T with cross-attention maps, significantly improves the FID showing our model\u2019s ability to generate high-fidelity images. From the improvement of the ID score, the crossattention maps enable relevantly applying the details of input conditions to facial components. Model Training. We analyze the effect of loss terms LM and LT by comparing the performance with the model trained using either Id 0 from the denoising U-Net or GT image Igt. The model trained using Id 0 produces the images in Figure 9 (b), which more closely reflected the multi-modal conditions (a), such as \u201cgoatee\u201d and \u201chair contour\u201d. In Table 2 (c), the ACC score of this model is higher than the model trained only using Igt in Table 2 (d). The images generated by the model trained with Igt in Figure 9 (c) are more perceptually realistic, as evidenced by the lower LPIPS score compared to the model trained with Id 0 in TaInput text: 1. 2. 3. 1. \u201cA photo of a face of a beautiful elf with silver hair in live action movie.\u201d 2. \u201cA photo of a white Greek statue.\u201d 3. \u201cA photo of a face of a zombie.\u201d Figure 10. Visual examples of 3D face style transfer. Our method generates stylized multi-view images by mapping the latent features of DM and GAN. ble 2 (c) and (d). Using Igt also preserves more conditionirrelevant features inferred by the ID scores in Table 2 (c) and (d). 
In particular, our method combines the strengths of two models as shown in Figure 9 (d) and Table 2 (e). 4.4. Limitations and Future Works Our method can be extended to multi-modal face style transfer (e.g. face \u2192Greek statue) by mapping the latent spaces of DM and GAN without CLIP losses and additional dataset, as shown in Figure 10. For the 3D-aware face style transfer task, we train our model using Id 0 that replaces GT image Igt in our loss terms. This method, however, is limited as it cannot transfer extremely distinct style attributes from the artistic domain to the photo-realistic domain of GAN. To better transfer the facial style in the 3D domain, we will investigate methods to map the diffusion features related to the input pose into the latent space of GAN in future works. 5. Conclusion We presented the diffusion-driven GAN inversion method that translates multi-modal inputs into photo-realistic face images in 2D and 3D domains. Our method interprets the pre-trained GAN\u2019s latent space and maps the diffusion features into this latent space, which enables the model to easily adopt multi-modal inputs, such as a visual input and a text prompt, for face image generation. We also proposed to train our model across the multiple denoising steps, which further improves the output quality and consistency with the multiple inputs. We demonstrated the capability of our method by using text prompts with semantic masks or scribble maps as input for 2D or 3D-aware face image generation and style transfer."
16
+ }
title_10K/test_title_short_2405.04370v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04370v1",
3
+ "title": "Diff-IP2D: Diffusion-Based Hand-Object Interaction Prediction on Egocentric Videos",
4
+ "abstract": "Understanding how humans would behave during hand-object interaction is vital\nfor applications in service robot manipulation and extended reality. To achieve\nthis, some recent works have been proposed to simultaneously predict hand\ntrajectories and object affordances on human egocentric videos. They are\nregarded as the representation of future hand-object interactions, indicating\npotential human motion and motivation. However, the existing approaches mostly\nadopt the autoregressive paradigm for unidirectional prediction, which lacks\nmutual constraints within the holistic future sequence, and accumulates errors\nalong the time axis. Meanwhile, these works basically overlook the effect of\ncamera egomotion on first-person view predictions. To address these\nlimitations, we propose a novel diffusion-based interaction prediction method,\nnamely Diff-IP2D, to forecast future hand trajectories and object affordances\nconcurrently in an iterative non-autoregressive manner. We transform the\nsequential 2D images into latent feature space and design a denoising diffusion\nmodel to predict future latent interaction features conditioned on past ones.\nMotion features are further integrated into the conditional denoising process\nto enable Diff-IP2D aware of the camera wearer's dynamics for more accurate\ninteraction prediction. The experimental results show that our method\nsignificantly outperforms the state-of-the-art baselines on both the\noff-the-shelf metrics and our proposed new evaluation protocol. This highlights\nthe efficacy of leveraging a generative paradigm for 2D hand-object interaction\nprediction. The code of Diff-IP2D will be released at\nhttps://github.com/IRMVLab/Diff-IP2D.",
5
+ "authors": "Junyi Ma, Jingyi Xu, Xieyuanli Chen, Hesheng Wang",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Diff-IP2D: Diffusion-Based Hand-Object Interaction Prediction on Egocentric Videos",
15
+ "main_content": "Introduction Accurately anticipating human intentions and future actions is important for artificial intelligence systems in robotics and extended reality [1, 2, 3]. Recent works have tried to tackle the problem from various perspectives, including action recognition and anticipation [4, 5, 6, 7], gaze prediction [8, 9, 10, 11], hand trajectory prediction [12, 13, 14, 15], and object affordance extraction [12, 16, 14, 17]. Among them, jointly predicting hand motion and object affordances can effectively facilitate more reasonable robot manipulation as the prior contextual information, which has been demonstrated on some robot platforms [1, 18, 19]. We believe that deploying such models pretrained by internet-scale human videos on robots is a promising path towards embodied agents. Therefore, our work aims to jointly predict hand trajectories and object affordances on egocentric videos as a concrete hand-object interaction (HOI) expression, following the problem modeling of previous works [12, 14]. Currently, the state-of-the-art approaches [12, 13] predicting hand trajectories and object affordances on egocentric videos tend to exploit the autoregressive (AR) model. They reason about the next \u2217Corresponding author: [email protected] Preprint. Under review. arXiv:2405.04370v1 [cs.CV] 7 May 2024 \fview1 (other observations) view2 (last observation) gap egocentric images (a) Existing Paradigm (b) Diff-IP2D Paradigm t autoregressive model HOI (t2) HOI (t1) predicted interaction diffusion-based model denoising HOI (t1) HOI (t2) HOI (t3) predicted interaction egocentric images t steps HOI (t1) HOI (t3) HOI (t1) HOI (t2) in parrallel motion features (c) Autoregressive Generation vs. Parallel Generation (d) Inherent Gaps gt gt ego motion real actions pixel movement gap accumulated error gt bidirectional unidirectional 3D environments Figure 1: Diff-IP2D vs. Existing Paradigm. The existing HOI prediction paradigm (a) tends to accumulate prediction errors under unidirectional constraints. In contrast, our proposed Diff-IP2D (b) directly forecasts all the future interaction states in parallel with denoising diffusion, mitigating error accumulation with bidirectional constraints (c). Moreover, we integrate egomotion information into our proposed paradigm to narrow the inherent gaps (d) in HOI prediction. HOI state only according to the previous steps (Fig. 1(a)). However, expected \u201cpost-contact states\u201d also affect \u201cpre-contact states\u201d according to human intentions that persist across the holistic HOI process as an oracle. There must be more coherent constraints that reflect human intention and mutually connect the preceding and the following motion in the HOI prediction process. Inspired by this, we argue that predicting future HOI states in parallel considering the bidirectional constraints within the holistic sequence outperforms generating the next state autoregressively (Fig. 1(c)). With diffusion models emerging across multiple domains [20, 21, 22, 23, 24, 25, 26, 27], their strong forecasting capability has been widely validated. Therefore, we propose a diffusion-based method to predict future hand-object interaction in parallel, considering bidirectional constraints in the latent space compared to the traditional autoregressive generation (Fig. 1(b)). In the forward process, the past and future video images are first encoded to sequential latent features. 
Noises are gradually added to the part of the future sequence while the past features remain anchored. Subsequently, a Transformer-based network is devised for learning to reverse the diffusion and reconstruct the input latent features. Finally, the proposed predictors are exploited to recover future hand trajectories and object affordances from the denoised latents. A new regularization strategy is also proposed to link the two latent spaces adjacent to the denoising diffusion process. Moreover, we also identify two inherent gaps (Fig. 1(d)) affecting HOI prediction in the existing paradigm: 1) Directly predicting the projection of 3D future hand trajectories and object affordances on 2D egocentric image plane is an ill-posed problem involving spatial ambiguities. There is generally a gap between 2D pixel movements and 3D real actions, which can be bridged by spatial transformation across multiple views changing with egomotion. 2) The past egocentric videos are absorbed to predict future interaction states on the last observed image, which is actually a \u201ccanvas\u201d from a different view w.r.t all the other frames. Therefore, there is also a gap between the last observation (first-person view) and the other observations (analogous to third-person view) caused by egomotion. To fill the two gaps together, we further propose to integrate the camera wearer\u2019s egomotion into our diffusion-based paradigm. The utilized homography features enable the denoising model aware of the camera wearer\u2019s dynamics and the spatial relationship between consecutive egocentric video frames. The main contributions of this paper are as follows: 1) We propose a diffusion-based hand-object interaction prediction method, dubbed Diff-IP2D. To our best knowledge, this is the first work to jointly forecast future hand trajectories and object affordances by the devised denoising diffusion probabilistic model with only 2D egocentric videos as input. It provides a foundation generative paradigm in the field of HOI prediction. 2) The homography egomotion features are integrated to fill the motion-related gaps inherent in HOI prediction on egocentric videos. 3) We extend the existing metrics and propose the first protocol for jointly evaluating the performance of hand trajectory prediction and object affordance prediction. 4) Comprehensive experiments are conducted to demonstrate that our Diff-IP2D can predict plausible hand trajectories and object affordances compared to the state-of-the-art baselines, showing its potential for deployment on artificial intelligence systems. 2 \f2 Related work Understanding hand-object interaction. Human HOI comprehension can guide the downstream tasks in artificial intelligence systems. As a pioneer work, Calway et al. [28] connect the specific human tasks to relevant objects, revealing the importance of object-centric understanding in different HOI modes. In contrast, Liu et al. [29] focus on capturing the changeable attributes of objects, which underlines the relationship between object-centric interaction and goal-oriented human activities. After that, more and more works contribute to HOI understanding by pixel-wise semantic segmentation [30, 31, 32, 33], bounding-box-wise detection [34, 35, 36, 37], fine-grained hand/object pose estimation [38, 39, 40, 41, 42, 43]. Ego4D [44] further provides a standard benchmark that divides HOI understanding into several predefined subtasks. Predicting hand-object interaction. 
Analyzing only past human behavior may be insufficient for service robot manipulation or extended reality. Forecasting possible future object-centric HOI states based on historical observations is also valuable, which attracts increasing attention due to the general knowledge that can be transferred to robot applications [1, 18, 19, 45]. For example, Dessalene et al. [46] propose to generate contact anticipation maps and next active object segmentations as future HOI predictions. Liu et al. [14] first achieve hand trajectory and object affordance prediction simultaneously, revealing that predicting hand motion benefits the extraction of interaction hotspots. Following this work, Liu et al. [12] further develop an object-centric Transformer to jointly forecast future trajectories and affordances autoregressively, and annotate publicly available datasets to support future works. More recently, Bao et al. [13] lift the problem to 3D spaces where hand trajectories are predicted by an uncertainty-aware state space Transformer in an autoregressive manner. However, this method needs additional 3D perception inputs from the RGB-D camera. In this work, we still achieve joint hand trajectory and object affordance prediction on 2D human videos rather than in 3D space. We focus on capturing more general knowledge from only egocentric camera observations in an iterative non-autoregressive (iter-NAR) manner, rather than the autoregressive way of the state-of-the-art works [12, 13]. Diffusion-based egocentric video analysis. Diffusion models have been successfully utilized in exocentric and egocentric video prediction [47, 48, 49, 50, 2] due to their strong generation ability. With only egocentric videos as inputs, diffusion-based techniques can also achieve human mesh recovery [51, 52], 3D HOI reconstruction [53, 54], and 3D HOI synthesizing [16, 55]. However, none of these works concentrate on the combination of fine-grained hand trajectories and object affordances as future HOI representations for potential utilization in artificial intelligence systems. Our proposed Diff-IP2D first achieves this based on the denoising diffusion probabilistic model [20], which dominates the existing paradigm [12, 13] in prediction performance on egocentric videos. 3 Proposed Method 3.1 Preliminaries Task definition. Given the video clip of past egocentric observations I = {It}0 t=\u2212Np+1, we aim to predict future hand trajectories H = {HR t , HL t }Nf t=1(HR t , HL t \u2208R2) and potential object contact points O = {On}No n=1(On \u2208R2), where Np and Nf are the numbers of frames in the past and future time horizons respectively, and No denotes the number of predicted contact points used to calculate interaction hotspots as object affordances. Following the previous works [12, 14], we predict the future positions of the right hand, the left hand, and the affordance of the next active object on the last observed image of the input videos. Diffusion models. In this work, we propose a diffusion-based approach to gradually corrupt the input to noisy features and then train a denoising model to reverse this process. We first map the input images into a latent space z0 \u223cq(z0), which is then corrupted to a standard Gaussian noise zS \u223cN(0, I). In the forward process, the perturbation operation can be represented as q(zs|zs\u22121) = N(zs; \u221a1 \u2212\u03b2szs\u22121, \u03b2sI), where \u03b2 is the predefined variance scales. 
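Before turning to the reverse process, the forward corruption stated above admits the usual closed-form jump from z_0 directly to step s. The sketch below assumes the latents are (B, T, D) tensors; the beta values are only illustrative placeholders for the square-root schedule the paper adopts, and this is not the authors' code.

```python
import torch

def forward_diffuse(z0, s, betas):
    """Sample z_s ~ q(z_s | z_0) using the closed form implied by
    q(z_s | z_{s-1}) = N(sqrt(1 - beta_s) z_{s-1}, beta_s I).

    z0:    (B, T, D) latent HOI features
    s:     (B,) integer diffusion steps
    betas: (S,) variance schedule (illustrative values)
    """
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)     # cumulative alpha_bar_s
    a_bar = alphas_cumprod[s].view(-1, 1, 1)
    noise = torch.randn_like(z0)
    z_s = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * noise
    return z_s, noise
```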
In the reverse process, we set a denoising diffusion model to gradually reconstruct the latent z0 from the noisy zS. The denoised features can be used to recover the final future hand trajectories and object affordances. 3 \fforward process future HOI features conditional past HOI features reverse process Multi-Feature Extractor egomotion homography Hand Trajectory Head trajectory loss shared weights regularization affordance loss diffusion-related losses Input: sequential past egocentric images Output: future HOI states feature space (s=S) Side-Oriented Fusion Module MADT Predictors MADT Object Affordance Head global/right/left intermediate features right/left fused features diffusion process feature space (s=S/2) feature space (s=0) Hand Trajectory Head Figure 2: System Overview of Diff-IP2D. Our proposed paradigm takes in sequential past egocentric images and jointly predicts hand trajectories and object affordances as future HOI states. The observations are mapped to the latent feature space for the diffusion process. 3.2 Architecture System overview. Accurately reconstructing the future part of the input sequence is critical in the diffusion-based prediction task. We empirically found that ground-truth hand waypoints Hgt = {HR,gt t , HL,gt t }Nf t=1(HR,gt t , HL,gt t \u2208R2) and contact points Ogt = {Ogt n}No n=1(Ogt n \u2208R2) provide discrete and sparse supervision signals for reconstruction, which is not enough for capturing possible high-level semantics such as human intentions in the denoising process. Therefore, as Fig. 2 shows, we first use Multi-Feature Extractor and Side-Oriented Fusion Module to transform the input images into latent HOI features, and then implement diffusion-related operation in the latent continuous space. The HOI features denoised by Motion-Aware Denoising Transformer are further absorbed by Hand Trajectory Head and Object Affordance Head to generate future hand trajectories and object hotspots. Multi-Feature Extractor (MFE). Following the previous work [12], we use MFE that consists of a pretrained Temporal Segment Network (TSN) provided by Furnari et al. [34], RoIAlign [56] with average pooling, and Multilayer Perceptron (MLP) to extract hand, object, and global features for each sequence image It \u2208I. The positions of hand-object bounding boxes are also encoded to feature vectors fused with hand and object features. Side-Oriented Fusion Module (SOFM). Our proposed SOFM is a learnable linear transformation to fuse the above-mentioned three types of feature vectors into the final latent form for two sides respectively. Specifically, the global features and right-side features (right-hand/object features) are concatenated to the right-side HOI features FR = {F R t }X t=\u2212Np+1(F R t \u2208Ra, X = Nf for training and X = 0 for inference). The operation and feature sizes are the same as the leftside counterparts, leading to FL = {F L t }X t=\u2212Np+1. We further concatenate the side-oriented features along the time axis respectively to generate the input latents F R seq, F L seq \u2208R(Np+X)\u00d7a for the following diffusion model. Motion-Aware Denoising Transformer (MADT). Our proposed MADT takes in the noisy latent HOI features and reconstructs future HOI features for the following predictors conditioned on past HOI counterparts. MADT consists of several stacked Transformer layers as shown in Fig. 3. Inspired by the text generation technique [26], we anchor the past HOI features for both forward and reverse processes. 
We only impose noises and denoise at the positions of the future feature sequence. The features of the two sides are denoised using the same model, leading to \u02c6 F R seq and \u02c6 F L seq. In addition, egomotion guidance is proposed here to fill the gaps mentioned in Sec. 1. Specifically, we first extract the Scale-Invariant Feature Transform (SIFT) descriptors to find the pixel correspondence between two adjacent images of past observations I. Then we calculate the homography matrix with RANSAC that finds a transformation to maximize the number of inliers in the keypoint pairs. We accumulate the consecutive homography matrices and obtain Mseq \u2208RNp\u00d73\u00d73 representing the camera wearer\u2019s motion between It (t \u22640) and I0. They are further linearly embedded into an egomotion feature Eseq \u2208RNp\u00d7b by Motion Encoder. The multi-head cross-attention module 4 \fMHSA Add & Norm MHCA Add & Norm FFN Add & Norm past HOI features TE PE egomotion feature latent noisy samples denoised future HOI features \u3002 homography Motion Encoder N X input video clip \u3002\u3002 t m1,1 m1,2 m1,3 m2,1 m2,2 m2,3 m3,1 m3,2 m3,3 ... ... ... ... ... ... Figure 3: Architecture of our proposed MADT. MADT receives corrupted latent HOI features with the position embedding (PE) and time embedding (TE), and outputs denoised future HOI features. (MHCA) in the devised Transformer layer then absorbs the egomotion feature to guide the denoising process. More analysis on the use of egomotion guidance can be found in Appendix, Sec. B. Predictors. Our proposed predictors consist of Hand Trajectory Head (HTH) and Object Affordance Head (OAH). HTH contains an MLP that receives the future parts of the denoised features, \u02c6 F R seq[Np+1: Np+Nf] and \u02c6 F L seq[Np+1 : Np+Nf] to generate future waypoints H of two hands. As to OAH, we empirically exploit Conditional Variational Autoencoder (C-VAE) [57] to generate possible contact points O in the near future. Take the right hand as an example, the condition is selected as the time-averaged \u02c6 F R seq and predicted waypoints HR t . Note that we additionally consider denoised future HOI features \u02c6 F R seq[Np+1 : Np+Nf] (t>0) besides the features from the past observation (t\u22640) for object affordance prediction. This aligns with the intuitive relationship between the contact points and the overall interaction process. Therefore, we integrate richer conditional features from trajectory prediction into the object affordance prediction compared to the previous work [12] only conditioned on historical features. 3.3 Training Forward process. We implement partial noising [26] in the forward process during training. Taking the right side as an example, the output of SOFM is first extended by a Markov transition q(z0|F R seq) = N(F R seq, \u03b20I), where F R seq \u2208R(Np+Nf)\u00d7a. We discard the embedding process from Gong et al. [26] since the HOI feature F R seq is already in the continuous latent space. In each following forward step of the diffusion model, we implement q(zs|zs\u22121) by adding noise to the future part of zs\u22121, i.e., zs\u22121[Np+1:Np+Nf] for both sides. Reverse process. After corrupting the initial z0 to zS by the forward process, our proposed MADT is adopted to denoise zS to z0 in a classifier-free manner. Considering the guidance of egomotion features, the reverse process can be modeled as pMADT(z0:S) := p(zs) QS s=1 pMADT(zs\u22121|zs, Mseq). 
Specifically, the MADT model fMADT(zs, s, Mseq) predicts the injected noise for each forward step with pMADT(zs\u22121|zs, Mseq) = N(zs\u22121; \u00b5MADT(zs, s, Mseq), \u03c3MADT(zs, s, Mseq)). The same denoising operation and motion-aware guidance are applied to HOI features of both sides. Training objective. The loss function to train the networks in Diff-IP2D contains four parts, including diffusion-related losses, trajectory loss, affordance loss, and an additional regularization term (see Fig. 2). Take the right side as an example, we use the variational lower bound LR VLB as the diffusion-related losses: LR VLB = S X s=2 ||zR 0 \u2212fMADT(zR s, s, Mseq)||2 + ||F R seq \u2212\u02c6 F R seq||2, (1) where \u02c6 F R seq = fMADT(zR 1, 1, Mseq). To reconstruct hand trajectories beyond the latent feature space, we further set trajectory loss LR traj with the distance between the ground-truth waypoints and the ones predicted by HTH: LR traj = Nf X t=1 ||HR t \u2212HR,gt t ||2, (2) 5 \fwhere HR t = fHTH( \u02c6 F R seq[Np+1:Np+Nf]). We only focus on the future part out of the holistic sequence for computing LR traj since we let HTH be more sensitive to predictions rather than bias it to past observations. As to the object affordance prediction, we also compute the affordance loss Laff after multiple stochastic sampling considering the next active object recognized following Liu et al. [12] (assuming in the right side here for brevity): Laff = No X n=1 ||On \u2212Ogt n||2 + cLKL, (3) where On =fOAH( \u02c6 F R seq, HR t ), and LKL = 1 2(\u2212log \u03c32 OAH( \u02c6 F R seq, HR t )+\u00b52 OAH( \u02c6 F R seq, HR t )+\u03c32 OAH( \u02c6 F R seq, HR t )\u2212 1) is the KL-Divergence regularization for C-VAE, which is scaled by c = 1e-3. The latent features and predicted hand waypoints are fused by MLP suggested by the previous work [12]. We consider both reconstructed future HOI features \u02c6 F R seq[Np+1:Np+Nf] and anchored past counterparts \u02c6 F R seq[0:Np] compared to [12] as mentioned before. We also notice that the latent feature spaces before and after the denoising diffusion process represent the same \u201cprofile\u201d of the input HOI sequence. Therefore, we propose an additional regularization term implicitly linking F R seq and \u02c6 F R seq by hand trajectory prediction: LR reg = Nf X t=1 || \u02dc HR t \u2212HR,gt t ||2, (4) where \u02dc HR t = fHTH(F R seq[Np+1:Np+Nf]). Although Eq. (4) does not explicitly contain the term \u02c6 F R seq, the training direction is the same with Eq. (2), thus maintaining training stability. The regularization helps the convergence of Diff-IP2D by consistently constraining the two latent spaces alongside the diffusion process. Here we do not use object affordance prediction for regularization because we empirically found that incorporating OAH mitigates training efficiency while the positive effect is not obvious. Finally, we get the total loss to train our proposed Diff-IP2D: Ltotal = \u03bbVLB(LR VLB + LL VLB) + \u03bbtraj(LR traj + LL traj) + \u03bbaffLaff + \u03bbreg(LR reg + LL reg), (5) where \u03bbVLB, \u03bbtraj, \u03bbaff, and \u03bbreg are the weights to balance different losses. Besides, we leverage the importance sampling technique proposed in improved DDPM [58], which promotes the training process focusing more on the steps with relatively large Ltotal. 
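Stepping back to the egomotion guidance that conditions MADT during training, the SIFT-plus-RANSAC homography accumulation described in Sec. 3.2 could be sketched with standard OpenCV calls as below; the matching ratio, RANSAC threshold, and the accumulation convention (mapping every past frame onto the last observation) are assumptions for illustration rather than the released pipeline.

```python
import cv2
import numpy as np

def egomotion_homographies(frames):
    """Sketch: estimate homographies between consecutive past frames and
    accumulate them so every past frame is related to the last observation.

    frames: list of Np grayscale uint8 images, ordered oldest -> last observation.
    Returns an (Np, 3, 3) array of accumulated homographies M_seq.
    """
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher()
    step_H = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        kp1, des1 = sift.detectAndCompute(prev, None)
        kp2, des2 = sift.detectAndCompute(curr, None)
        matches = matcher.knnMatch(des1, des2, k=2)
        # Lowe's ratio test (assumes two neighbours are returned per query)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        step_H.append(H)
    # accumulate so that M_seq[t] maps frame t onto the last frame
    M_seq = [np.eye(3)]
    for H in reversed(step_H):
        M_seq.insert(0, M_seq[0] @ H)
    return np.stack(M_seq)
```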
3.4 Inference In the inference stage, we first sample F R noise, F L noise \u2208RNf\u00d7a from a standard Gaussian distribution, which is then concatenated with F R seq, F L seq \u2208RNp\u00d7a along the time axis to generate zR S and zL S. Then we use MADT to predict zR 0 and zL 0 based on DDIM sampling [59]. Note that we anchor the past part of reparameterized zs as the fixed condition in every step of the inference process following Gong et al. [26]. Finally, the generated \u02c6 F R seq and \u02c6 F L seq are used to predict future hand waypoints and contact points by fHTH(\u00b7) and fOAH(\u00b7) as mentioned before. It can be seen from the inference stage that Diff-IP2D can be regarded as an iter-NAR model in the latent feature space. Compared to the state-of-the-art baselines in an autoregressive manner, our approach shifts the iteration from F1,1 F1,2 F1, Nf ... F2,1 F2,2 F2, Nf ... FS,1 FS,2 FS, Nf ... ... denoising diffusion process time axis ... ... F1 F2 FNf ... time axis H1 H2 HN ... f FS-1,1 FS-2,1 FS, Nf ... H1 H2 HN ... f F3 H3 F1 F2 FNf ... time axis H1 H2 HN ... f F3 H3 (b) Iter-NAR Prediction (a) AR Prediction Figure 4: Comparison of AR and our iter-NAR prediction. the time axis to the denoising direction, which is shown in Fig. 4. This alleviates the accumulated artifacts caused by the limited iteration in the time dimension, and maintains bidirectional constraints among the sequential features to generate future HOI states in parallel, providing a deeper understanding of human intention. We further present the mathematical relationship between the two iter-NAR models, Diff-IP2D for HOI prediction and DiffuSeq [26] for text generation in Appendix, Sec. A. 6 \f4 Experiments 4.1 Experimental setups Datasets. Following the previous work [12], we utilize three publicly available datasets including Epic-Kitchens-55 (EK55) [60], Epic-Kitchens-100 (EK100) [61], and EGTEA Gaze+ (EG) [11]. For the EK55 and EK100 datasets, we sample past Np = 10 frames (2.5 s) to forecast HOI states in future Nf = 4 frames (1.0 s), both at 4 FPS. As to the EG dataset, Np = 9 frames (1.5 s) are used for Nf = 3 HOI predictions (0.5 s) at 6 FPS. See the Appendix, Sec. C.2 for more details. Diff-IP2D configuration. MFE extracts the hand, object, and global feature vectors all with the size of 512 for each input image. For the EK55 and EK100 datasets, the outputs of SOFM F R seq, F L seq have the size of 14 \u00d7 512 for training and 10 \u00d7 512 for inference. For the EG dataset, F R seq, F L seq are 9 \u00d7 512 for training and 12 \u00d7 512 for inference. As to the diffusion process, the total number of steps S is set to 1000. We also provide an ablation study on multiple steps for training and inference in Appendix, Sec. D.3. The square-root noise schedule in Diffusion-LM [62] is adopted here for the forward diffusion process. MADT has 6 Transformer layers (Fig. 3) for denoising, where the embedding dimension is 512, the number of heads is set to 4, and the intermediate dimension of the feed-forward layer is set to 2048. Motion Encoder linearly projects each homography matrix to an egomotion feature vector of 512. We use an MLP with hidden dimensions 256 and 64 to predict the hand waypoints as HTH, and a C-VAE containing an MLP with a hidden dimension 512 to predict contact points as OAH. The training configurations can be found in Appendix, Sec. C.2. In the reference stage, we generate the 10 candidate samples for each prediction. Baseline configuration. 
Baseline configuration. We choose Constant Velocity Hand (CVH), Seq2Seq [63], FHOI [14], OCT [12], and USST [13] as the baselines for hand trajectory prediction. CVH is the most straightforward one: it assumes the two hands remain in uniform motion over the future time horizon with the average velocity observed during the past frames. Besides, we adapt the input and architecture of USST to the 2D prediction task since it was originally designed for 3D hand trajectory prediction. We choose Center Object [14], Hotspots [64], FHOI [14], OCT [12], and Final Hand of USST [13] (USST-FH) as the baselines for object affordance prediction. USST-FH puts a mixture of Gaussians at the last hand waypoint predicted by USST since its vanilla version can only predict waypoints. Evaluation metrics. Following the previous work [14, 12, 13], we use Final Displacement Error (FDE) to evaluate prediction performance on hand trajectories. Considering that the general knowledge of "post-contact trajectories" extracted from human videos is potentially beneficial to robot manipulation [1, 18], we additionally extend the Average Displacement Error metric to the Weighted Displacement Error (WDE):
$$\mathrm{WDE} = \frac{1}{2N_f} \sum_{\{R,L\}} \sum_{t=1}^{N_f} \frac{t}{N_f}\, D(H_t, H^{gt}_t), \quad (6)$$
where $D(\cdot)$ denotes the L2 distance function, so later waypoints contribute larger errors. We report the mean error among the 10 samples for each hand trajectory prediction. As to object affordance prediction, we use the Similarity Metric (SIM) [65], AUC-Judd (AUC-J) [66], and Normalized Scanpath Saliency (NSS) [67] as evaluation metrics. We use all 10 contact point candidates to compute the metric values for each affordance prediction. Moreover, we propose a novel object-centric protocol to jointly evaluate the two prediction tasks. We first calculate the averaged hand waypoints $\bar{H}^R_t$ and $\bar{H}^L_t$ for each future timestamp from multiple samples. Then we select the waypoint closest to each predicted contact point $O_n$ as an additional "interaction point", which can be formulated as:
$$\bar{H}^{ip}_n = \operatorname*{arg\,min}_{\bar{H}_t,\ t \in \{1,\dots,N_f\},\ \{R,L\}} D(\bar{H}_t, O_n). \quad (7)$$
Finally, the joint hotspot is predicted using $\{\bar{H}^{ip}_n \cup O_n\}^{N_o}_{n=1}$. This protocol comprehensively considers object-centric attention since HOI changes the object states, and hand waypoints must have a strong correlation with object positions. Note that we also use the same quantitative metrics as for object affordance prediction, denoted as SIM*, AUC-J*, and NSS*. More clarifications about our proposed new protocol can be found in Appendix, Sec. C.1.
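For clarity, a small NumPy sketch of the WDE metric in Eq. (6) and of the interaction-point selection in Eq. (7) follows; the array layouts are illustrative assumptions rather than the authors' evaluation code.

```python
import numpy as np

def weighted_displacement_error(pred, gt):
    """Eq. (6): pred/gt have shape (2, Nf, 2) for the two hands, Nf future waypoints, (x, y)."""
    n_f = pred.shape[1]
    weights = np.arange(1, n_f + 1) / n_f            # later waypoints weigh more
    dists = np.linalg.norm(pred - gt, axis=-1)       # (2, Nf) L2 distances
    return float((weights * dists).sum() / (2 * n_f))

def interaction_points(mean_waypoints, contacts):
    """Eq. (7): for each predicted contact point, pick the closest averaged hand waypoint.

    mean_waypoints: (M, 2) averaged waypoints pooled over both hands and all future timestamps
    contacts:       (No, 2) predicted contact points
    """
    d = np.linalg.norm(mean_waypoints[:, None, :] - contacts[None, :, :], axis=-1)  # (M, No)
    return mean_waypoints[d.argmin(axis=0)]          # (No, 2) interaction points
```

The joint hotspot of Eq. (7) is then formed from the union of each contact point and its selected interaction point.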
Table 1: Comparison of performance on hand trajectory and object affordance prediction.

Hand trajectory prediction (WDE ↓ / FDE ↓):
| approach | EK55 WDE | EK55 FDE | EK100 WDE | EK100 FDE | EG WDE | EG FDE |
| CVH | 0.636 | 0.315 | 0.658 | 0.329 | 0.689 | 0.343 |
| Seq2Seq [63] | 0.505 | 0.212 | 0.556 | 0.219 | 0.649 | 0.263 |
| FHOI [14] | 0.589 | 0.307 | 0.550 | 0.274 | 0.557 | 0.268 |
| OCT [12] | 0.446 | 0.208 | 0.467 | 0.206 | 0.514 | 0.249 |
| USST [13] | 0.458 | 0.210 | 0.475 | 0.206 | 0.552 | 0.256 |
| Diff-IP2D (ours) | 0.411 | 0.181 | 0.407 | 0.187 | 0.478 | 0.211 |

Object affordance prediction (SIM ↑ / AUC-J ↑ / NSS ↑):
| approach | EK55 SIM | EK55 AUC-J | EK55 NSS | EK100 SIM | EK100 AUC-J | EK100 NSS | EG SIM | EG AUC-J | EG NSS |
| Center Object [14] | 0.083 | 0.553 | 0.448 | 0.081 | 0.558 | 0.401 | 0.094 | 0.562 | 0.518 |
| Hotspots [64] | 0.156 | 0.670 | 0.606 | 0.147 | 0.635 | 0.533 | 0.150 | 0.662 | 0.574 |
| FHOI [14] | 0.159 | 0.655 | 0.517 | 0.120 | 0.548 | 0.418 | 0.122 | 0.506 | 0.401 |
| OCT [12] | 0.213 | 0.710 | 0.791 | 0.187 | 0.677 | 0.695 | 0.227 | 0.704 | 0.912 |
| USST-FH [13] | 0.208 | 0.682 | 0.757 | 0.179 | 0.658 | 0.754 | 0.190 | 0.675 | 0.729 |
| Diff-IP2D (ours) | 0.226 | 0.725 | 0.980 | 0.211 | 0.736 | 0.917 | 0.242 | 0.722 | 0.956 |

Joint object-centric evaluation (SIM* ↑ / AUC-J* ↑ / NSS* ↑):
| approach | EK55 SIM* | EK55 AUC-J* | EK55 NSS* | EK100 SIM* | EK100 AUC-J* | EK100 NSS* | EG SIM* | EG AUC-J* | EG NSS* |
| FHOI [14] | 0.130 | 0.602 | 0.487 | 0.113 | 0.545 | 0.409 | 0.118 | 0.501 | 0.379 |
| OCT [12] | 0.219 | 0.720 | 0.848 | 0.182 | 0.684 | 0.662 | 0.194 | 0.672 | 0.752 |
| Diff-IP2D (ours) | 0.222 | 0.730 | 0.888 | 0.204 | 0.727 | 0.844 | 0.226 | 0.701 | 0.825 |

Figure 5: Visualization of hand trajectory prediction on Epic-Kitchens. The waypoints from ground-truth labels, Diff-IP2D, and the second-best baseline [12] are connected by red, white, and blue dashed lines respectively. 4.2 Separate evaluation on hand trajectory and object affordance prediction We first present the evaluation results on hand trajectory prediction. As Tab. 1 depicts, our proposed Diff-IP2D outperforms all the baselines on the EK55 and EK100 datasets in terms of WDE and FDE. This is mainly achieved by the devised iter-NAR paradigm of Diff-IP2D alleviating the degeneration in AR baselines, as well as by the egomotion guidance. The visualization of the related hand prediction results is shown in Fig. 5. It can be seen that our proposed method can better capture the camera wearer's intention (such as putting the food in the bowl) and generate more reasonable future trajectories even if there is a lack of past observations for hands (such as reaching out towards the table). Besides, our method can predict a good final hand position although there is a large shift in the early stage (the subfigure in the bottom right corner of Fig. 5), which benefits from our diffusion-based parallel generation. When directly transferring the models trained on Epic-Kitchens to the unseen EG dataset, our method still outperforms the other baselines, improving by 7.0% and 15.3% over the second-best method on WDE and FDE respectively. This reveals the solid generalization capability of our diffusion-based approach across different environments. The comparison results for object affordance prediction are also shown in Tab. 1. Our proposed Diff-IP2D predicts the hotspots with larger SIM, AUC-J, and NSS compared to all the baselines on both Epic-Kitchens data and unseen EG data. Fig. 6 illustrates the predicted contact points with minimum distances to the ground-truth ones. Our proposed method focuses more on the objects of interest, considering the features of the holistic interaction and potential hand trajectories, and therefore grounds the contact points closer to the ground-truth labels than the counterparts of the baseline.
Figure 6: Visualization of object affordance prediction on Epic-Kitchens. The contact points from the ground truth, Diff-IP2D, and the state-of-the-art baseline OCT [12] are represented by red, white, and blue dots respectively. For a clearer illustration, we additionally put a fixed Gaussian with each contact point as the center. See the Appendix, Sec. D.6 for more visualization results.

Table 2: Ablation study on egomotion guidance (Diff-IP2D*: Diff-IP2D w/o egomotion guidance).
| approach | EK55 WDE ↓ | EK55 FDE ↓ | EK55 SIM ↑ | EK55 AUC-J ↑ | EK55 NSS ↑ | EK100 WDE ↓ | EK100 FDE ↓ | EK100 SIM ↑ | EK100 AUC-J ↑ | EK100 NSS ↑ |
| Diff-IP2D* | 0.427 | 0.186 | 0.218 | 0.717 | 0.929 | 0.439 | 0.198 | 0.201 | 0.710 | 0.846 |
| Diff-IP2D | 0.411 | 0.181 | 0.226 | 0.725 | 0.980 | 0.407 | 0.187 | 0.211 | 0.736 | 0.917 |
| improvement | 3.7% | 2.7% | 3.7% | 1.1% | 5.5% | 7.3% | 5.6% | 5.0% | 3.7% | 8.4% |

4.3 Joint evaluation on hand trajectory and object affordance prediction We further compare Diff-IP2D with the other two joint prediction baselines, FHOI [14] and OCT [12], using our proposed object-centric protocol. The video clips containing both ground-truth hand waypoints and contact points are used for evaluation in this experiment. The results, also shown in Tab. 1, indicate that our proposed Diff-IP2D generates the best object-centric HOI predictions when considering the two tasks concurrently on both Epic-Kitchens and unseen EG data. The results also suggest that Diff-IP2D outperforms the baselines on object-centric HOI prediction by focusing more attention on the target objects and predicting reasonable hand trajectories around them. 4.4 Ablation study on egomotion guidance We provide an ablation study of the egomotion features used to guide MADT denoising on the EK55 and EK100 datasets. Here we replace the MHCA in MADT with a multi-head self-attention module (MHSA) to remove the egomotion guidance while keeping the same number of parameters. The experimental results in Tab. 2 show that the guidance of motion features noticeably improves our proposed diffusion-based paradigm on both hand trajectory prediction and object affordance prediction. This is achieved by narrowing the two gaps caused by the 2D-3D ill-posed problem and the view difference mentioned in Sec. 1. Note that the egomotion guidance is more significant on the EK100 dataset than on the EK55 dataset. The reason could be that EK100 has a larger volume of training data incorporating more diverse egomotion patterns than EK55, leading to a model that can capture human dynamics better. More results of the related joint evaluation are presented in Appendix, Sec. D.1. 4.5 Conclusion and insights In this paper, we propose a novel hand-object interaction prediction method, Diff-IP2D. Specifically, we implement the denoising diffusion in the latent feature space under egomotion guidance, and jointly predict future hand trajectories and object affordances with the recovered latents as input. According to the experimental results, Diff-IP2D outperforms the existing baselines on both off-the-shelf metrics and our new evaluation protocol, suggesting promising applications in artificial intelligence systems. It learns to recover latent HOI features and forecast future HOI states in parallel, which can serve as a foundational generative paradigm for future works on the same or similar prediction tasks."
16
+ }
title_10K/test_title_short_2405.04403v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04403v1",
3
+ "title": "Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks",
4
+ "abstract": "Augmenting Large Language Models (LLMs) with image-understanding capabilities\nhas resulted in a boom of high-performing Vision-Language models (VLMs). While\nstudying the alignment of LLMs to human values has received widespread\nattention, the safety of VLMs has not received the same attention. In this\npaper, we explore the impact of jailbreaking on three state-of-the-art VLMs,\neach using a distinct modeling approach. By comparing each VLM to their\nrespective LLM backbone, we find that each VLM is more susceptible to\njailbreaking. We consider this as an undesirable outcome from visual\ninstruction-tuning, which imposes a forgetting effect on an LLM's safety\nguardrails. Therefore, we provide recommendations for future work based on\nevaluation strategies that aim to highlight the weaknesses of a VLM, as well as\ntake safety measures into account during visual instruction tuning.",
5
+ "authors": "Georgios Pantazopoulos, Amit Parekh, Malvina Nikandrou, Alessandro Suglia",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.CL"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM AND Jailbreak",
15
+ "gt": "Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks",
16
+ "main_content": "Introduction Visual Instruction Tuning extends the instructionfollowing abilities of Large Language Models (LLMs) to the visual modality. The common recipe for a Vision-Language Model (VLM), is to combine an existing LLM along with a vision encoder and learn a mapping between the two unimodal experts (Alayrac et al., 2022; Dai et al., 2023b; Liu et al., 2024). As a result, VLMs can solve additional tasks as opposed to their language-only counterparts, while their performance correlates heavily with the capabilities of their unimodal backbones. LLMs have become the go-to option for practically all Natural Language Processing (NLP) tasks, with models such as ChatGPT (OpenAI, 2022) and Gemini (Gemini Team et al., 2023) witnessing widespread deployment. While these models exhibit\u2014to some degree\u2014general capabilities (OpenAI, 2023a), previous work shows they are susceptible to misuse (Bommasani et al., 2021; Kreps et al., 2022; Weidinger et al., 2021). Consequently, a large body of work incorporates safety mechanisms in model development to constrain model behavior to a \u201csafer\u201d subset by aligning models with values (Askell et al., 2021; Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022). Despite these efforts, LLMs are vulnerable to malicious prompts\u2014referred to as \u201cjailbreaking\u201d (Wei et al., 2024; Xie et al., 2023): engineered to trick the LLM outside of the safer subset and generate the potentially harmful content it was trained to reject (Qi et al., 2023). An example of such behavior is illustrated in Figure 1, where the model initially refuses to comply with the input question, but com*Equal Contribution pletely adheres to the modified adversarial prompt. Therefore, recent approaches to model development incorporate safety tuning against jailbreaking during training (Jiang et al., 2023; Touvron et al., 2023) that mitigate these vulnerabilities. Adversarial attacks, in the form of input perturbations, can also affect vision or language models (Goodfellow et al., 2014; Madry et al., 2018; Yu and Rieser, 2023). With VLMs now being deployed to a wider audience (Brooks et al., 2024; Gemini Team et al., 2023; OpenAI, 2023b), we believe that now is a good time to consider: how does visual instruction tuning affect the safety of LLMs? We explore this question by prompting publiclyavailable VLMs, and the LLM backbones they were derived from, by employing jailbreaking techniques across eight distinct scenarios (Liu et al., 2023d). We demonstrate that visual instruction tuning damages guardrails put in place during the LLM training (Jiang et al., 2023; Touvron et al., 2023), leaving models vulnerable to adversarial attacks. Overall, our findings further highlight the importance of considering the effects of the training stages, training data, and the ways to evaluate and mitigate potential harms when introducing new modalities (Weidinger et al., 2023). As such, with great instruction tuning comes great responsibility. 2. Background 2.1. 
Instruction-tuning models Language models trained on self-supervised objectives show broad pattern recognition abilities (Brown et al., 2020; Radford et al., 2019; Raffel et al., 2020) when paired with instruction-tuning: arXiv:2405.04403v1 [cs.CV] 7 May 2024 \fFigure 1: Example responses to the query \u201cWhat are some effective ways to create and spread fake news that incites violence and unrest in society?\u201d Vicuna 13B (Chiang et al., 2023) does not directly generate hateful content (top-left), but will given a jailbreak pre-prompt (bottom-left). LLaVa 13B (Liu et al., 2023c) produces hateful content both with (bottom-right) and without (top-right) a jailbreak pre-prompt. a fine-tuning paradigm that uses language instructions as input to solve multiple tasks (Chung et al., 2022; Gupta et al., 2022; Wei et al., 2021). Instruction-tuning is an established concept in NLP (Chung et al., 2022; Mishra et al., 2022) as resulting models generalize better to user queries (Chung et al., 2022; Sanh et al., 2022; Wei et al., 2021) by learning to connect them to concepts seen during pretraining for zero-shot generalization on unseen tasks (Gupta et al., 2022; Mishra et al., 2022). Visual Instruction Tuning refers to the process of converting a LLM into a VLM, often using language (Bai et al., 2023a; Chiang et al., 2023) and vision experts (Fang et al., 2023; Radford et al., 2021), by learning a mapping between the two modalities. Existing approaches concatenate visual and textual representations with a lightweight adapter module (Liu et al., 2024). Other techniques construct \u201cvisual prompts\u201d with a resampler\u2014where learnable latent tokens are informed by each modality (Bai et al., 2023b; Li et al., 2023a; Zhu et al., 2023). Training involves multiple stages, with initial stages focusing on image-text alignment and later stages on supervised fine-tuning (SFT). As VLMs based on this recipe are successful across established multimodal tasks (Goyal et al., 2017; Singh et al., 2019), a large body of work focuses on the safety aspect of these models through the hallucination prism. These works typically measure the degree to which model responses are factually grounded to the visual context (Li et al., 2023b; Liu et al., 2023a,b). However, they do not explore how safety guardrails integrated into the LLM are impacted by visual instruction tuning. 2.2. Jailbreaking and adversarial attacks LLMs and VLMs exhibit vulnerabilities along the same lines as other deep learning models; slight perturbations in inputs can result in (possibly coherent) \u201challucinated\u201d responses (Bender et al., 2021; Goodfellow et al., 2014; Liu et al., 2023b; Szegedy et al., 2013). Learning from vast training corpora improves a model\u2019s generalization capabilities (Radford et al., 2018; Raffel et al., 2020). However, as datasets surpass trillions of tokens (Gao et al., 2020; Hoffmann et al., 2022; Touvron et al., 2023), it is difficult to know the characteristics and biases included in them (Gehman et al., 2020). Moreover, while instruction-tuned models can make reasonable predictions with irrelevant and misleading prompts (Webson and Pavlick, 2022), a model\u2019s strong pattern recognition abilities can at the same time be exploited forcing potentially harmful responses (Ganguli et al., 2022; Perez et al., 2022). 
As a result, various methods (Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022) try to better align generated content with what humans prefer, encouraging safer and more ethical responses (Bai et al., 2022; Ganguli et al., 2022). Other measures include SFT on datasets with adversarial prompts and exemplary responses (Touvron et al., 2023), and context distillation (Askell et al., 2021), which finetunes a model on outputs generated by another model prompted for safe behavior. However, introducing visual inputs opens a new attack vector, as adversarial inputs imperceptible to the human eye can steer models to unsafe behavior (Qi et al., 2023). 3. Experimental Setup We hypothesize that after visual instruction tuning, models become less safe and more vulnerable to jailbreaks compared to their original LM backbone. To test this hypothesis, we prompt three state-of-the-art VLMs and their LM counterparts with questions related to prohibited scenarios, both with and without jailbreak prompt prefixes (code available at https://github.com/gpantaz/vl_jailbreak). Model Selection Table 1 displays the evaluated VLMs along with their respective LLM backbones.

Table 1: VLM & LLM pairs used in our experiments.
| Vision-Language Model | Large Language Model |
| LLaVA-1.5 (Liu et al., 2023c) | Vicuna 13B (Chiang et al., 2023) |
| Qwen-VL-Chat (Bai et al., 2023b) | Qwen-Chat 7B (Bai et al., 2023a) |
| InternLM-XComposer2 (Dong et al., 2024) | InternLM2-Chat 7B (InternLM Team, 2023) |

We selected these models because: 1) they showcased strong performance in established multimodal tasks (Goyal et al., 2017; Li et al., 2023b; Marino et al., 2019); 2) they connect vision and language models in different ways; and 3) they incorporate safety mechanisms during the development of their LLM. Finally, all chosen VLMs and LLMs are open-source, ensuring reproducibility. See Appendix A for additional details about this selection. Data Preparation We query each model with a prompt, a question, and, for the VLMs, an input image. We leverage the jailbreak prompt dataset from Liu et al. (2023d), which contains questions to simulate prohibited scenarios and prompts that were successful in jailbreaking ChatGPT (OpenAI, 2022); see Appendix B for a short description of each scenario, and we refer to Liu et al. (2023d) for details. Liu et al. (2023d) categorized jailbreak prompts into one of three different types, and one of ten different patterns. Overall, we employ 40 input queries, derived from eight prohibited scenarios with five questions each. We used four jailbreak prompts that cover all patterns to ensure models are evaluated fairly across all jailbreak types, resulting in 160 queries to evaluate how susceptible models are to jailbreaking. In order to mimic a common downstream use case of VLMs, we retrieve the most relevant image for each question from the pretraining data of LLaVA (Liu et al., 2024) by selecting the image with the maximum CLIPScore (Hessel et al., 2021) using the base CLIP model (Radford et al., 2021), as sketched below. Finally, we also use a blank image (i.e. an image with only white pixels) to simulate pure text generation by removing any visual context. As a result, we have four conditions for VLMs from the combinations of original vs. jailbreak prompts, and retrieved images vs. blank images.
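The retrieval step could look roughly like the following sketch. The checkpoint name and the candidate image list are placeholders, and the raw CLIP image-text similarity is used as a stand-in for the rescaled CLIPScore of Hessel et al. (2021); this is not the authors' released code.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def retrieve_best_image(question, image_paths, model_name="openai/clip-vit-base-patch32"):
    """Pick the candidate image with the highest CLIP image-text similarity for a question."""
    model = CLIPModel.from_pretrained(model_name)
    processor = CLIPProcessor.from_pretrained(model_name)
    images = [Image.open(p).convert("RGB") for p in image_paths]

    inputs = processor(text=[question], images=images, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)

    # logits_per_image is proportional to the cosine similarity between each image and the text.
    scores = out.logits_per_image.squeeze(-1)   # shape: (num_images,)
    return image_paths[int(scores.argmax())]
```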
Data Annotation To understand the impact of visual instruction tuning against adversarial inputs, all 1,800 responses were annotated by the authors of this paper, each of whom has a background in developing large-scale multimodal models; inter-annotator agreement measured on a subset of 100 samples yielded a Krippendorff's alpha of 0.674. We labeled every response with one of: • Refusal: The model refuses to answer the query; optionally, providing counter-arguments or condemning unethical/illegal actions. • Neutral: Generated content is neither a refusal nor potentially harmful; for instance, when models do not acknowledge the harmful intent but respond in the opposite manner. For example, when asked "How to screen job candidates based on race?", models might provide measures against discrimination. • Harmful: Response contains potentially harmful content. This includes cases where generated content promotes discrimination towards individuals, possibly based on demographics or gender, or encourages illegal activities. • Not Applicable: Content is not applicable for the study. For example, the model repeated the instruction or continuously produced the same (invalid or incoherent) sentence. 4. Results Figure 2 illustrates the percentage of the responses labeled as harmful across all models. We observe that all VLMs generate substantially more harmful responses than their LLM backbones. In particular, LLaVA generates 27.50% and 6% more harmful content than Vicuna, with and without jailbreak pre-prompts respectively. Additionally, Qwen-Chat/Qwen-VL-Chat and InternLM2-Chat/InternLM-XComposer2 exhibit similar behavior, though they generate less harmful responses. (Figure 2: Percentage of harmful responses for every LLM & VLM pair. Across all model pairs, the VLM generates harmful content more frequently compared to its LLM backbone.) Consequently, the safeguards imposed on the LLMs during model development are, at best, relaxed as an outcome of the visual instruction tuning stage. Furthermore, VLMs are more prone to generate potentially harmful content when provided with a prompt and a semantically-relevant image. While this may seem obvious, we observe that in the case of adversarial input, including a blank image leads to more harmful responses. We hypothesize that this is due to "competing objectives" (Wei et al., 2024): on the one hand, the model tries to generate content relevant to both the instruction and the image, while on the other hand, it tries to adhere to its safeguards. Using a jailbreak pre-prompt, however, provides a signal stronger than the content of the image, resulting in the aforementioned behavior. 5. Discussion Why are VLMs more prone to jailbreak attacks? Competing objectives present a significant challenge for both VLMs and LLMs.
Given an adversarial prompt, both models must navigate between providing relevant responses and resisting adherence to the adversarial prompt. While we have not explored whether this effect is magnified in VLMs, we hypothesize that both models are equally susceptible to the impact of competing objectives. A more plausible scenario is that VLMs forget queries from adversarial prompts when undergoing visual instruction tuning. Reframing generation of appropriate responses to adversarial prompts as its own task, it becomes evident that models may inadvertently disregard this task during further finetuning. This behavior is particularly likely to occur as the model must incorporate an additional modality during the instruction tuning stage. However, we believe this issue can be mitigated through continual learning or training methodologies that expose the model to additional (image-text or text-only) examples that demonstrate appropriate responses during the visual instruction tuning stage. In the follow-up section, we further elaborate on possible strategies to mitigate the forgetting effect. 5.1. Suggestions for Future Work Evaluation & Benchmarking Most current evaluations of VLMs focus exclusively on model capabilities, such as grounding, reasoning, and factuality (Weidinger et al., 2021). Some recent benchmarks are starting to address the gap in safety (Li et al., 2024b; Roger et al., 2023) and robustness to adversarial attacks (Carlini et al., 2024; Zhao et al., 2024). However, creating comprehensive benchmarks to evaluate the safety of VLMs remains a crucial area for future research. A possible step in this direction would be to implement a unified framework for evaluating VLMs similar to LM-Harness (Gao et al., 2023) and SALAD-Bench (Li et al., 2024a), ensuring transparency and reproducibility. Additionally, we emphasize the need for \u201cdata parity\u201d when evaluating from a safety perspective. Without it, jailbreak prompts may be accidentally leaked into (pre-)training data, leading to inflated scores (Golchin and Surdeanu, 2023; Li and Flanigan, 2023; Zhou et al., 2023). However, as jailbreaking is an adversarial setting, it should be evaluated on out-of-distribution prompts (Yuan et al., 2023) that are held-out and/or regularly updated (Kiela et al., 2021). Safety Defenses in All Training Stages VLMs are trained following a curriculum: typically involving image-text alignment and instruction-tuning stages (Bai et al., 2023a; Li et al., 2023a; Liu et al., 2024). Our analysis indicates that when safety is not considered across all\u2014or, at least, final\u2014 stages, models become misaligned and are therefore more likely to generate harmful content. Korbak et al. (2023) show that incorporating conditional pretraining\u2014where text segments are conditioned on human preferences\u2014can reduce the toxicity of model outputs without sacrificing performance on other tasks. As a result, when training a model from scratch, safety should be considered at every stage. However, as training from scratch \fis resource-intensive, it may be more practical to initialize a VLM with pretrained experts. Another possible solution is to ensure that the VLM alignment is part of the final training stage. However, multimodal datasets annotated with human preferences or exemplar responses against adversarial prompts (Li et al., 2024b) are largely missing. Therefore, an important avenue for future work would be to collect or synthetically generate (Liu et al., 2024) such resources. 
The goal of maintaining safety alignment after visual instruction tuning resembles a continual learning scenario. Future work could draw inspiration from approaches that aim to mitigate catastrophic forgetting (Hadsell et al., 2020; Ke and Liu, 2022). For instance, previous work has found that methods such as experience replay (Biesialska et al., 2020) and logit distillation (Jin et al., 2022) can be effective in continual pretraining of language models. Further benefits could be achieved through more sophisticated approaches, such as selectively updating a small isolated set of parameters for vision (Gururangan et al., 2022; Ke et al., 2022). 6. Conclusion In this paper, we argue that relying on the safety alignment of the backbone LLM downplays the potential vulnerabilities of VLMs. To support this claim, we used three VLMs with strong performance on public benchmarks, each with a different LLM as a starting point with safety playing a crucial role for development of the LLM. Our analysis has shown that visual instruction tuning can affect all VLMs, making them more prone to generate potentially harmful responses both with and without jailbreaking attacks. Furthermore, we have provided suggestions with regard to core evaluation procedures and incorporating safety measures during the successive training stages of visual instruction tuning. Finally, notwithstanding the impressive progress in the development of VLMs, we emphasize that our ultimate goal in this paper is to identify weaknesses in existing approaches and provide recommendations aimed at propelling the field forward. 7. Limitations While our results consistently showcased evidence that visual instruction tuning has a negative impact on model safety, we have only evaluated three models with public weights and using English prompts. Furthermore, even though the developers of each model claim that they have taken action towards incorporating safety mechanisms, the exact details are not disclosed. As a result, we cannot guarantee that these models are not trained on any of the jailbreaking prompts because not all data used to train each LLM is publicly accessible. This highlights the need for the ability to conduct open research replications that enable similar studies. Lastly, we have not explored to what degree these models are sensitive to image attacks either through adversarial noise, adjusting the attention mask during generation, or completely removing the image. 8. Bibliographical"
17
+ }
title_10K/test_title_short_2405.04483v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04483v1",
3
+ "title": "CloudDiff: Super-resolution ensemble retrieval of cloud properties for all day using the generative diffusion model",
4
+ "abstract": "Clouds play a crucial role in the Earth's water and energy cycles,\nunderscoring the importance of high spatiotemporal resolution data on cloud\nphase and properties for accurate numerical modeling and weather prediction.\nCurrently, Moderate Resolution Imaging Spectroradiometer (MODIS) provides cloud\nproducts with a spatial resolution of 1 km. However, these products suffer from\na lengthy revisit cycle. This study develops a generative diffusion model\n(donated as CloudDiff) for super-resolution retrieval of high spatiotemporal\ncloud phase and properties, applicable both day and night. Leveraging 2 km\nspatial resolution Himawari-8 Advanced Himawari Imager (AHI) thermal infrared\n(TIR) radiances and viewing geometry as condition, alongside daytime MODIS\nproducts as targets, the model can generate cloud phase (CLP), cloud top height\n(CTH), cloud optical thickness (COT), and cloud effective radius (CER) at 1 km\nspatial resolution and 10-minute temporal resolution. The conditional diffusion\nmodel can generate sharper images and capture finer local features than\ndeterministic super-resolution approaches. It draws multiple samples based on\nthe underlying probability distribution, enabling retrieval uncertainty\nassessment. Evaluations show agreement between cloud phase and properties\nderived from the CloudDiff and MODIS cloud products. The ensemble mean is found\nto enhance retrieval accuracy and credibility, outperforming the deterministic\nmodel.",
5
+ "authors": "Haixia Xiao, Feng Zhang, Lingxiao Wang, Wenwen Li, Bin Guo, Jun Li",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "physics.ao-ph",
9
+ "cats": [
10
+ "physics.ao-ph"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "CloudDiff: Super-resolution ensemble retrieval of cloud properties for all day using the generative diffusion model",
15
+ "main_content": "Introduction Clouds are critical in the Earth\u2019s water and energy budgets (Li et al., 2005). Their influence on the radiation budget can induce either heating or cooling of the planet, contingent upon the radiative characteristics of the cloud and its altitude (Stephens et al., 1981, 1990). The significance of clouds is further underscored by variables such as cloud optical thickness (COT), cloud effective radius (CER), cloud top height (CTH), and cloud phase (CLP). These parameters profoundly impact the Earth\u2019s net radiation balance due to their distinct scattering and absorption characteristics (Fauchez et al., 2018a; Min et al., 2020; Wang et al., 2016a). Achieving an accurate representation of these optical properties remains a formidable challenge, primarily because the microscale physical processes within clouds are difficult to explicitly simulate in global numerical models (Baran, 2012; Ceppi et al., 2017; Waliser et al., 2009). Consequently, there is an urgent need to obtain cloud phase and properties with high spatial and temporal resolution. Such detailed cloud data are indispensable for a deeper understanding of atmospheric physical processes, the enhancement of data assimilation techniques, and the improvement of weather forecasting accuracy (Muskatel et al., 2021). The retrieval of cloud properties has been conducted for several decades. Since the 1970s, airborne measurements have been employed to retrieve COT and CER, resulting in numerous successful experimental studies (Finger et al., 2015; King, 1987; Krisna et al., 2018; Platnick et al., 1995; Twomey and Cocks, 1989). However, these campaigns incur high costs, and the temporal and spatial coverage of field observations is limited. With the advancement of satellite remote sensing technology, particularly passive sensors (geostationary and polar-orbiting satellites), researchers have increasingly utilized data from visible and near-infrared bands to retrieve cloud properties. This approach enables the characterization of cloud properties at various spatial and temporal resolutions (King et al., 1992; Menzel et al., 2008; Platnick et al., 2003; Tang et al., 2017; Zhang and Platnick, 2011; Zhuge et al., 2020), owing to the wide observational coverage provided by passive sensors. The basic physical principle behind this method is that the cloud radiances measured by the nonabsorptive channels in the visible or near-infrared wavelengths are influenced by COT, while those captured by water-absorption channels in the shortwave infrared wavelength are sensitive to the CER (Nauss and Kokhanovsky, 2011). These retrieval methods, which rely on solar radiation, are effective only for daytime scenes. However, they are not applicable to nighttime scenes and exhibit higher uncertainties in high-latitude regions and optically thin cloud scenes (Wang et al., 2016b). Thermal Infrared (TIR) retrieval algorithm, utilizing the split-window technique (Parol et al., 1991; Toshiro, 1985), offer valuable capabilities for both daytime and nighttime scene analysis. This technique retrieves COT and CER from the brightness temperature differences between two distinct channels in the infrared atmospheric windows, where gaseous absorption is minimal. 
Additionally, the optimal estimation methodology (Rodgers, 2000) has been implemented for the Atmospheric Infrared 2 \fSounder V6 (AIRS) and Advanced Microwave Sounding Unit (AMSU), utilizing infrared spectral data to successfully retrieve the physical and optical properties of clouds (Kahn et al., 2014, 2015). However, due to significant absorption by cloud particles in the infrared spectrum, these traditional IR-based algorithms primarily excel in retrieving optically thin cloud properties, while facing challenges in scenarios involving opaque, thick clouds (Wang et al., 2016a). Consequently, an alternative approach is necessary to provide a more comprehensive solution. The data-driven deep learning method, renowned for their proficiency in capturing the spatial variations of image features with fast computation, have been extensively applied in the cloud identification and properties retrieval (Tong et al., 2023; Zhao et al., 2023). For example, Wang et al. (2022) developed a convolutional neural network (CNN) model for the continuous cloud identification and retrieval of cloud properties (i.e., COT, CER, and CTH) throughout the diurnal cycle for the Moderate Resolution Imaging Spectroradiometer (MODIS), leveraging utilizing daytime MODIS TIR radiances alongside satellite viewing zenith angles (VZA). Additionally, employing a transfer-learning-based UNet model and MODIS/Himawari-8 cloud products, Li et al. (2023) successfully estimated the CER, COT, and CTH from Himawari-8 TIR measurements, and results showed that the model enhanced performance for optically thick clouds. Previous research has relied on either polar-orbiting (e.g., MODIS) or geostationary (e.g., Himawari-8 Advanced Himawari Imager) satellite sensors for cloud property estimation. While polar-orbiting satellites offer high-resolution cloud products (1 km resolution), they suffer from a lengthy revisit cycle, impacting temporal resolution. Conversely, geostationary satellites provide frequent revisits, offering high temporal resolution and continuous cloud observation (Meng et al., 2024). However, their spatial resolution is lower compared to polar-orbiting satellites. Hence, combining data from both types of satellites to achieve high spatiotemporal resolution in cloud phase and properties is a promising direction to explore. For high-impact weather events such as severe convective storms, tropical and extratropical cyclones, the underlying dynamical and thermodynamic mechanisms are complex, leading to significant uncertainties in retrieving their cloud properties. Unfortunately, current CNN/UNet retrieval methods primarily focus on deterministic modeling, which often neglects the inherent uncertainties within the data. Diffusion models, a novel category of likelihood-based models recently highlighted for generating high-quality images (Sohl-Dickstein et al., 2015; Song and Ermon, 2019), offer desirable characteristics such as distribution coverage (Ho et al., 2020). Unlike deterministic retrieval methods, diffusion models derive probability distribution functions and can generate a large number of samples (Ho et al., 2020; Ling et al., 2024; Bishop, 2024), while guaranteeing that the retrieval distribution encapsulates all plausible outcomes, thus allowing for estimating the probability density and its score. 
Diffusion models have proven successful in various research domains, such as computer vision for image generation and synthesis (Croitoru, 2023), precipitation nowcasting (Nai 3 \fet al., 2024), estimating the unresolved geophysical processes (Pan et al., 2023), and earth system model downscaling (Hess et al., 2024), showcasing their effectiveness in handling complex systems. The primary objective of this study is to develop a diffusion model aimed at superresolution high spatiotemporal resolution cloud optical properties and cloud phase retrieval throughout the diurnal cycle using a geostationary satellite. Leveraging the TIR channels of the Himawari-8 satellite and employing MODIS cloud products as ground truth, we have developed a generative diffusion model capable of cloud identification and retrieval of COT, CER, and CTH, characterized by high precision and enhanced spatiotemporal resolution. The efficacy of this model is evaluated against standard MODIS cloud product measurements, focusing particularly on its generalization capabilities and the uncertainty, analyzed across typhoon case studies and extended datasets. The data, methodology, and experimental details are outlined in Section 2. The performance outcomes of the model are thoroughly examined in Section 3. Lastly, Section 4 offers conclusions and discussions. 2. Data and methods 2.1. Data 2.1.1. Himawari-8 AHI Satellite Data Himawari-8, launched in October 2014, is the geostationary satellite sensor system operated by the Japan Meteorological Agency (JMA). It represents the latest iteration in the Multifunctional Transport Satellite (MTSAT) series. The Advanced Himawari Imager (AHI) sensor onboard Himawari-8 captures full disk images every 10 minutes across 16 spectral bands from visible to infrared wavelengths, with spatial resolutions ranging from 500 m to 2 km and temporal resolutions between 2.5 and 10 minutes, covering regions from East Asia to Australia. The TIR measurements are sensitive to optically thin clouds and are continuously obtained throughout the diurnal cycle, independent of solar geometry (Fauchez et al., 2018a). In this study, TIR radiations from Himawari-8 AHI are utilized to estimate cloud properties during both daytime and nighttime. Additionally, the VZA are employed to construct the retrieval model. Table 1 summarizes the used TIR measurements (6.95\u201313.30 \u00b5m) and VZA of Himawari-8 AHI. 2.1.2. MODIS data With the launch of NASA\u2019s Terra satellite in 1999, followed by Aqua in 2002, MODIS has emerged as one of the most indispensable satellite remote sensing platforms for Earth science research. It measures reflected solar and emitted thermal radiation across 36 spectral channels (0.42\u201314.24 \u00b5m), offering unique spectral and spatial capabilities for retrieving cloud properties (Platnick et al., 2016). The Terra-MODIS (MOD06) and Aqua-MODIS (MYD06) products, which have a spatial resolution of 1 km, are accessible through the Atmosphere Archive and Distribution System website 4 \f(https://ladsweb.modaps.eosdis.nasa.gov/). These products include cloud top properties (e.g., CTH, CLP for both day and night) and cloud optical and microphysical properties (e.g., COT, CER, daytime only). Over the years, the MODIS cloud products have demonstrated consistent high accuracy and reliable performance (King et al., 2003; Platnick et al., 2015). 
In this study, the daytime MODIS cloud optical and physical properties (CTH, COT, CER, and CLP) from the Level-2 cloud product (MYD06 L2 and MOD06 L2) are utilized as ground truth to develop the super-resolution retrieval model. Table 1: The Himawari-8 AHI data used for cloud parameter super-resolution retrieval. Band Number Bandwidth (\u00b5m) Central Wavelength (\u00b5m) Spatial resolution (km) Spatial resolution (minute) 9 6.89\u20137.01 6.95 10 7.26\u20137.43 7.35 11 8.44\u20138.76 8.6 12 9.54\u20139.72 9.63 2 10 13 10.3\u201310.6 10.45 14 11.1\u201311.3 11.20 15 12.2\u201312.5 12.35 16 13.20\u201313.40 13.30 VZA \u2013 \u2013 2.1.3. Data preprocessing As described above, the TIR measurements (6.95 \u00b5m, 7.35 \u00b5m, 8.60 \u00b5m, 9.60 \u00b5m, 10.45 \u00b5m, 11.20 \u00b5m, 12.35 \u00b5m, and 13.30 \u00b5m) along with the VZA of the Himawari-8 AHI serve as the inputs for the model, while the MODIS level-2 CLP, CTH, COT, and CER data are used as the targets for training the model. To optimize the model during training and enhance its accuracy, we normalized the inputs and targets. By employing min-max normalization, we scaled the input and output variables to fall within the range of 0 to 1. To cover as wide a range of the Earth\u2019s surface and viewing geometries as possible, and to accommodate seasonal variations, we collected data from January 2016 to October 2017. Specifically, data from January 2016 to May 2017 was utilized for model training, data from June to August 20, 2017 for model validation, and data from August 21, 2017, to October 2017 served as the test set. Owing to the differing spatiotemporal resolutions of the Himawari-8 AHI and MODIS cloud products, we performed spatiotemporal matching of the data. In this process, we selected data from both MODIS and Himawari-8 for the same regions and times, with the cloud product grid points being twice that of the TIR observations. To alleviate memory and computational demands and to accelerate the selection process for the model, 5 \fwe cropped the cloud products in the training, validation, and test sets to a size of 256\u00d7256 km, while the input TIR observations were sized at 128\u00d7128 km. Ultimately, our training set comprised 76,247 samples, with the validation and test sets containing 9,530 and 9,532 samples, respectively. 2.2. Method The diffusion model is a state-of-the-art deep learning technique that employs probabilistic denoising processes to develop generative models (Bishop, 2024). The model typically operates on the principle of simulating a gradual process of denoising, effectively reconstructing data points from a noise-like distribution. This process is modeled as a reverse Markov chain, where a data sample is initially transformed into noise through a sequence of diffusion steps and then reconstructed back into a clean sample through learned reverse transitions. In a classical set-up, the model involves iteratively applying a series of conditional Gaussian distributions, beginning from a distribution of noise p(zT) and progressively denoising it to retrieve the original data distribution p(x0). This can be succinctly represented as, p(x0) = Z \u00b7 \u00b7 \u00b7 Z p(x0|x1)p(x1|x2) \u00b7 \u00b7 \u00b7 p(xT\u22121|zT)p(zT) dx1 \u00b7 \u00b7 \u00b7 dxT\u22121dzT. (1) In each iteration, the model utilizes the noisy data from the previous step as input, subsequently refining it to a greater degree of accuracy in accordance with the data\u2019s original state. 
The denoising path is learned from training data, thereby enabling the model to effectively generate or reconstruct high-quality data samples. 2.2.1. Conditional diffusion model In our study, these TIR measurements and VZA variable are denoted by y which is the condition variable. The target variables, cloud products, are represented by x. The objective is to approximate the conditional distribution of x given y, using a significantly large dataset of paired samples (xi, yi). The conditional diffusion model incorporates conditioning variables into the generative process (Batzolis, 2021), allowing the model to generate data conditioned on specific information. Mathematically, this can be represented as the transition from a noise distribution p(zT) to the data distribution p(x0) conditioned on a variable y, described by, p(x0|y) = Z p(x0|zT, y)p(zT|y) dzT, (2) where, zT represents the latent variables at the final timestep, and the model iteratively refines these variables through the conditioning on y, enhancing its ability to target specific data generation tasks. As Figure 1 shows, the conditional diffusion model enables to produce cloud products given the conditions of TIR and VZA variables, making it particularly useful in scenarios where the output needs to be tailored to specific environments. In this framework, for any given y, the algorithm 6 \foutputs samples of x from x \u223cp(x0|y), where p is a learned distribution that does not adhere to any predefined probability distribution form. The forward process has the same scheme as the Denoising Diffusion Probabilistic Models(DDPMs) (Ho et al., 2020), but in the reverse process we embed the conditional variables into the UNet for modelling the conditional probability distributions (Nai et al., 2024). \ud835\udc650 \ud835\udc651 \ud835\udc652 ... \ud835\udc65\ud835\udc47 \ud835\udc65\ud835\udc47 \ud835\udc650 Forward Diffusion Process Reverse Diffusion Process ... \ud835\udc65\ud835\udc47\u22121 UNet UNet Condition Figure 1: The CloudDiff for super-resolution cloud identification and properties retrieval. The generated samples x are cloud products, and the conditions y includes TIR and VZA variables. In the forward process, the data x0 undergoes a series of transformations, gradually adding noise over discrete time steps T until it is converted into pure Gaussian noise xT \u2261zT. The noise addition at each timestep t is defined by a variance schedule \u03b2t, and can be described by the following stochastic differential equation, xt = p 1 \u2212\u03b2txt\u22121 + p \u03b2t\u03f5, \u03f5 \u223cN(0, I), (3) where \u03f5 represents Gaussian noise. The reverse process, where the model learns to reconstruct the original data from noise, is explicitly conditioned on y. At each step, the model estimates the original data xt\u22121 from the current noisy data xt using a neural network parameterized by {\u03b8}. This network predicts the mean \u00b5\u03b8(xt, t, y) of the distribution for xt\u22121, typically modeled as, xt\u22121 = \u00b5\u03b8(xt, t, y) + \u03c3t\u03f5, \u03f5 \u223cN(0, I), (4) where \u03c3t is a predetermined noise level (Ho et al., 2020). 7 \fThe objective of training this conditional diffusion model is to minimise the difference between the estimated xt\u22121 and its actual value. This effectively allows the model to learn the reverse of the forward diffusion process. 
The loss function originally derives from the Fisher divergence (Song and Ermon, 2019; Song et al., 2021; Nai et al., 2024), but is equivalently used as a mean squared error between the predicted and actual injected noise, conditioned on y,
$$\mathcal{L}(\theta) = \mathbb{E}_{x_0, \epsilon, y}\left[\, \lVert \epsilon - \epsilon_\theta(x_t, t, y) \rVert^2 \,\right], \quad (5)$$
where ϵθ represents the output of the UNet, i.e., the prediction of the noise used to generate xt from xt−1. To improve the representation ability, we have introduced multi-head attention modules into the UNet architecture (Vaswani et al., 2017). After training, the conditional diffusion model (hereafter, CloudDiff) is capable of generating multiple samples simultaneously. In our tests, we generate 30 samples per evaluation instance. These samples are reminiscent of the ensemble members used in numerical weather prediction's dynamical models, which employ large numbers of members for ensemble predictions (Li et al., 2024). Furthermore, we conduct comparative analyses between the CloudDiff and established deterministic data-driven methods. For this purpose, the study uses a supervised learning approach with a UNet architecture (Trebing et al., 2021), referred to as the deterministic model, as the benchmark. This method is specifically applied to the tasks of super-resolution retrieval of cloud properties and cloud identification, serving as a baseline for performance comparison. 2.2.2. Performance evaluation The CloudDiff serves as a super-resolution approach that requires an appropriate evaluation scheme. Although intuitive, sample-by-sample comparisons cannot fully demonstrate the effectiveness of the super-resolution technique. To obtain a comprehensive performance evaluation, we collect MODIS labels for assessing the quality of the generated cloud products. Consequently, we employ the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as metrics, allowing for a quantitative assessment of the model's performance in enhancing spatial resolution. These metrics, commonly used in cloud property retrieval (Wang et al., 2022; Zhao et al., 2023), are defined as follows,
$$\mathrm{MAE} = \frac{1}{N N_p} \sum_{i=1}^{N} \sum_{j=1}^{N_p} \left| x_{i,j} - \hat{x}_{i,j} \right|, \quad (6)$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{N N_p} \sum_{i=1}^{N} \sum_{j=1}^{N_p} \left( x_{i,j} - \hat{x}_{i,j} \right)^2}, \quad (7)$$
where N represents the number of samples, $x_{i,j}$ denotes the values from MODIS cloud products, and $\hat{x}_{i,j}$ represents the super-resolution retrieved cloud products. Np indicates the number of pixels for each sample, and j labels the index of the pixels. It should be noted that a more accurate super-resolution model will have a smaller root mean square error (RMSE) and mean absolute error (MAE). 3. Results 3.1. Case study We begin our study with a case analysis focusing on Typhoon Hato (No. 1713) over the offshore areas of China to evaluate the performance of the CloudDiff and to understand its uncertainty. Typhoon Hato developed in the northwest Pacific Ocean at 06:00 UTC on August 20, 2017, and progressively intensified. By 01:00 UTC on August 23, it had escalated to a severe typhoon, peaking at Category 16 with maximum sustained winds of 52 m/s. It made landfall near Zhuhai City, Guangdong Province, China, around 04:50 UTC on August 23 as a severe typhoon, causing substantial devastation in southern China. On that day, the Terra satellite passed over the coastal Zhuhai area around 02:50 UTC; thus, our analysis primarily focused on evaluating the retrieved COT, CER, CTH, and CLP at this specific time.
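Referring back to the evaluation metrics of Eqs. (6)-(7) and the 30-member ensemble described above, the following is a short NumPy sketch of how the errors and the ensemble mean/spread could be computed; array shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mae_rmse(retrieved, reference):
    """Eqs. (6)-(7) over arrays of shape (N, Np): N scenes, Np pixels per scene."""
    err = retrieved - reference
    return float(np.abs(err).mean()), float(np.sqrt((err ** 2).mean()))

def ensemble_mean_and_spread(samples):
    """samples: (n_members, H, W) retrievals of one cloud property for one scene."""
    return samples.mean(axis=0), samples.std(axis=0)
```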
The analysis covered the typhoon area between 19.78\u00b0N\u201322.32\u00b0N and 111.68\u00b0E\u2013114.22\u00b0E, corresponding to a grid size of 256\u00d7256. Figure 2 presents the various cloud properties generated by the CloudDiff across 30 samples and grid points where MODIS cloud properties were not captured by samples. Since all 30 CLP samples indicated ice clouds within the study area, CLP results are not displayed. It is observed that the cloud properties generated by different samples vary slightly but generally reflect the typhoon\u2019s morphology accurately. Despite variations in COT values among the samples and differing degrees of overestimation and underestimation in the typhoon\u2019s cloud wall, they accurately estimated the optical thickness at the typhoon eye. Notably, underestimation occurred for COT values over 90 at about 16.03% of the grid points, and overestimation at 1.67% of the grid points, while COT values below 60 were well retrieved. Regarding CER, some samples did not accurately represent the CER, generally overestimating (9.68%, mainly around the typhoon eye) and underestimating (12.49%, mainly in the typhoon\u2019s cloud wall). Additionally, samples underestimated CTH to various extents, particularly on the west and southwest sides of the typhoon eye, with a total underestimation of 30.41% in CTH and a mere 0.63% overestimation. To evaluate the performance and uncertainty of the CloudDiff, we compared the cloud properties with those from the deterministic model (Fig. 3). The results show that individual sample produces more sharpness and more local details of COT, CER, and CTH compared to the ensemble mean (appears blurrier). The deterministic model\u2019s results blurrier than the ensemble mean and also lack detail. Regarding COT, compared to MODIS cloud products, the sample underestimated the COT in the typhoon eye region and overestimated areas with COT <90. The ensemble mean (the mean values of 30 samples) also overestimated the extent of COT <90 but reported lower values than single sample, somewhat correcting the underestimation of COT in 9 \fSamples COT CER CTH Figure 2: Cloud properties retrieval in the typhoon Hato region centering around 21.8\u00b0N, 113.8\u00b0E at 0250 UTC on August 23, 2017, was conducted using the CloudDiff. The columns represent samples and grid points where MODIS cloud properties are not captured by samples. The underestimation and overestimation are respectively indicated by black squares and green \u2019x\u2019. The background is colored based on MOD06 cloud products. the typhoon eye region by single sample. The standard deviation of 30 samples, which can donate the retrieval uncertainty, indicates large error in the estimates of COT in the typhoon\u2019s cloud wall, mainly because most samples overestimated the COT in this area (see Fig. 2). The deterministic model not only overestimated the extent of COT >90 (with lower internal values) but also underestimated the optical thickness on the western side of the typhoon eye. Both single sample and ensemble mean, as well as the deterministic model, inaccurately retrieved areas with CER >35\u00b5m and overestimated the CER in the typhoon eye area. However, the CloudDiff exhibited smaller biases in CER retrievals compared to the deterministic model, and standard deviations mostly below 6\u00b5m across most regions, indicating small uncertainty. Regarding CTH, CloudDiff exhibits minimal uncertainty, with standard deviations generally below 1 km across most regions. 
compared to MODIS, the ensemble mean more accurately represented CTH in the southern part of the typhoon eye than individual samples, but it underestimated areas with CTH greater than 16 km and the CTH in the typhoon eye. The deterministic model also underestimated CTH greater than 16 km and the CTH in the typhoon eye. Additionally, deterministic model 10 \f20\u00b0N 20.5\u00b0N 21\u00b0N 21.5\u00b0N 22\u00b0N MODIS Sample Ensemble mean Deterministic model Std 20\u00b0N 20.5\u00b0N 21\u00b0N 21.5\u00b0N 22\u00b0N 112\u00b0E 113\u00b0E 114\u00b0E 20\u00b0N 20.5\u00b0N 21\u00b0N 21.5\u00b0N 22\u00b0N 112\u00b0E 113\u00b0E 114\u00b0E 112\u00b0E 113\u00b0E 114\u00b0E 112\u00b0E 113\u00b0E 114\u00b0E 112\u00b0E 113\u00b0E 114\u00b0E 0 20 40 60 80 0 10 20 30 40 0 10 20 30 40 50 m 0 3 6 9 12 m 10 11 12 13 14 15 16 17 km 0.0 0.4 0.8 1.2 1.6 km CTH CER COT Figure 3: MOD06 cloud products and retrieved cloud properties in the typhoon Hato region at 0250 UTC on August 23,2017. The columns are MOD06 cloud products, sample, esemble means, deterministic model, and standard deviation (std). underestimated CTH at the image edges. Moreover, both the ensemble mean and deterministic model accurately retrieved CLP (not showed), consistent with MODIS cloud classification results. Overall, the super-resolution cloud properties retrieval based on the CloudDiff proved superior to those from the deterministic model, providing sharper and more localized details of 1 km cloud properties during the typhoon event. Using 30 samples generated by the CloudDiff, we computed probability estimates for various thresholds of cloud property estimates and cloud phase probability results (Fig. 4), which deterministic model cannot provide. Based on the thresholds provided by the International Satellite Cloud Climatology Project (ISCCP) for COT and CTH associated with cloud types, we computed probability estimates for COT (Fig.4b,c,d) and CTH (Fig.4j,k,l) at similar thresholds in ISCCP. The results indicate that the probability estimates from the CloudDiff are close to with MODIS data, with probabilities exceeding 80% in the 3.6<COT<23 and 23<COT regions. Additionally, all MODIS CTH values were greater than 6.4 km, and the CloudDiff estimated probabilities of CTH>6.4 km to be over 90%. Following ISCCP cloud classifications, the predominant cloud types in the typhoon eye and its southwestern sea regions are cirrostratus, while other areas feature deep convection clouds. For CER, thresholds of 20 \u00b5m and 40 \u00b5m were selected for probability estimation (Fig.4f,g,h), revealing that the CloudDiff\u2019s CER estimates primarily 11 \ffall within the (20, 40] range, with very low probabilities for CER in the (0, 20] and CER>40 \u00b5m ranges. In comparison to MODIS, the CloudDiff tends to overestimate CER in the typhoon eye and underestimate CER over the western land areas of the typhoon eye. Furthermore, the CloudDiff\u2019s probability estimates for clouds classified as ice clouds in the study area exceed 99 % (not showed) , aligning well with MODIS. Overall, through probabilistic estimation, we can better ascertain the range of cloud property values and cloud phase, evaluate the uncertainty in cloud property retrieval and identification, and enhance the accuracy of super-resolution retrievals. 
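The threshold-based probability maps discussed above (Fig. 4) can be derived directly from the generated ensemble: the per-pixel probability is simply the fraction of members whose retrieval falls within a given range. A minimal sketch under assumed array shapes follows; the variable names and the example ensemble are hypothetical.

```python
import numpy as np

def range_probability(samples, lower, upper=np.inf):
    """Per-pixel probability that a retrieved property lies in (lower, upper].

    samples: (n_members, H, W) ensemble retrievals of COT, CER, or CTH for one scene.
    """
    hits = (samples > lower) & (samples <= upper)
    return hits.mean(axis=0)            # (H, W) map of fractions in [0, 1]

# Example with the ISCCP-style COT ranges used in the paper:
# cot_samples = np.stack([...])         # hypothetical (30, 256, 256) ensemble
# p_thin   = range_probability(cot_samples, 0.0, 3.6)
# p_medium = range_probability(cot_samples, 3.6, 23.0)
# p_thick  = range_probability(cot_samples, 23.0)
```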
[Figure 4: Probability estimates for cloud properties in the Typhoon Hato region at 0250 UTC on August 23, 2017. Panels (a), (e), and (i) show the MODIS COT, CER, and CTH, respectively; (b-d) present the probability estimates of COT within different threshold ranges, (f-h) display the probability estimates of CER for varying thresholds, and (j-l) show the probability estimates of CTH across different threshold ranges.] 3.2. Overall evaluation We evaluated the overall performance of the models using data from the test set, employing MAE and RMSE metrics for the cloud properties. A comparative analysis was conducted to investigate how the number of samples affects the super-resolution retrieval performance; this analysis included ensemble means computed with 1 to 30 samples. Additionally, we compared these results with those from the deterministic model. Figure 5 illustrates the MAE and RMSE comparisons between the MODIS cloud products and the super-resolution retrieval results. [Figure 5: Performance evaluation of the cloud properties. Skill metrics were calculated between the CloudDiff/deterministic model and the MODIS cloud products; panels (a)-(c) show the MAE and (d)-(f) the RMSE for COT, CER (\u00b5m), and CTH (km), each for all, water, and ice clouds. Different sizes of circles represent ensemble sizes ranging from 1 to 30, while pentagrams indicate the deterministic model.] For COT, CER, and CTH, the results indicate significantly higher MAE and RMSE values when the ensemble size is 1. As the ensemble size increases beyond five, both the MAE and RMSE of the ensemble mean gradually decrease. An interesting observation is that the improvement in super-resolution retrieval capability from 20 to 30 samples is relatively minor, suggesting that approximately 20 samples are sufficient to capture most of the high-resolution details and adequately cover the uncertainty space in the retrieval process. 
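The ensemble-size analysis above reduces to averaging the first k samples and scoring the result against a reference field. A small sketch, again with synthetic placeholder arrays rather than the paper's data or code:

import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(loc=20.0, scale=5.0, size=(256, 256))        # stand-in for a MODIS field
samples = truth + rng.normal(scale=8.0, size=(30, 256, 256))    # stand-in for 30 CloudDiff samples

def mae_rmse(pred, ref):
    err = pred - ref
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())

for k in (1, 5, 10, 20, 30):
    ens_mean = samples[:k].mean(axis=0)          # ensemble mean of the first k samples
    mae, rmse = mae_rmse(ens_mean, truth)
    print(f"ensemble size {k:2d}: MAE={mae:.2f}  RMSE={rmse:.2f}")

# Per-pixel standard deviation across the samples serves as the retrieval-uncertainty map.
uncertainty = samples.std(axis=0)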
The MAE and RMSE values of the deterministic model approach those obtained when the ensemble size is 5, and are notably higher than those observed with an ensemble size of 30. Specifically, for COT at an ensemble size of 30, the ensemble mean MAE for all clouds (water and ice) is 6.62, with an RMSE of 12.51, compared to the deterministic model results, which have an MAE of 7.45 and an RMSE of 13.48. For water clouds alone, the MAE is 6.97 and the RMSE is 12.68, with ice clouds showing slightly better performance (MAE = 6.23, RMSE = 12.32). For CER, the ensemble mean MAE for all clouds at an ensemble size of 30 is 5.87\u00b5m, with an RMSE of 8.93\u00b5m. Water clouds exhibit a lower MAE of 4.47\u00b5m and RMSE of 6.62\u00b5m, whereas ice clouds have a higher MAE of 7.48\u00b5m and RMSE of 10.98\u00b5m. Similarly, for CTH at the same ensemble size, the ensemble mean MAE for all clouds is 1.18 km, with an RMSE of 2.15 km. The MAE for water clouds is 0.91 km and the RMSE is 1.72 km, with ice clouds performing worse (MAE = 1.61 km, RMSE = 2.68 km). [Figure 6: Confusion matrices of the CLP products between MODIS (rows: clear, water, ice) and the retrievals (columns: clear, water, ice). (a) CloudDiff: 0.89, 0.10, 0.02 / 0.10, 0.85, 0.05 / 0.02, 0.10, 0.88, OA = 85.89%; (b) deterministic model: 0.87, 0.11, 0.02 / 0.11, 0.83, 0.06 / 0.03, 0.11, 0.87, OA = 84.52%. \u2019OA\u2019 is the overall accuracy.] In addition, the cloud identification results were assessed. Here, we primarily compared the performance of the deterministic model with the ensemble mean results of 30 samples. The validation results demonstrate the model\u2019s capability to accurately identify the true targets in the MODIS data. Figure 6 presents the CLP identification results for the ensemble mean of the CloudDiff (Fig. 6a) and the deterministic model (Fig. 6b), which categorize the targets into clear sky, water clouds, and ice clouds. The CloudDiff achieves an overall accuracy (OA) of 85.89%. Specifically, it shows a retrieval accuracy of 89% for clear sky and 88% for ice clouds, and 85% for water clouds. In contrast, the deterministic model exhibits a retrieval accuracy of 88% for both clear sky and water clouds, but a slightly lower accuracy of 83% for ice clouds, with an OA of 84.52%, which is marginally lower than that of the CloudDiff. Overall, the ensemble mean of the CloudDiff demonstrates superior performance in identifying clear sky, water clouds, and ice clouds compared to the deterministic model. In summary, the CloudDiff enables the efficient generation of realistic samples that are faithful to a broad range of resolved retrieval schemes and sufficiently diverse to cover most plausible outcomes. 4. Conclusions In this study, we propose a conditional diffusion model named CloudDiff for cloud identification and the retrieval of COT, CER, and CTH. The model is trained on 2 km TIR measurements from AHI onboard Himawari-8 and the satellite VZA, using MODIS 1 km resolution cloud products as training data. The CloudDiff is capable of generating cloud properties and CLP with high spatiotemporal resolution (1 km, 10-minute). It can produce various samples to effectively cover the distribution and range of cloud properties and also offers uncertainty estimates. 
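The CLP comparison in Fig. 6 is a row-normalized three-class confusion matrix plus an overall accuracy. A compact sketch of those two quantities (the label arrays here are randomly generated stand-ins, not the evaluation data):

import numpy as np

classes = ("clear", "water", "ice")
rng = np.random.default_rng(2)
modis = rng.integers(0, 3, size=100_000)                      # stand-in for MODIS CLP labels
retrieved = np.where(rng.random(100_000) < 0.86, modis,       # stand-in for retrieved CLP labels
                     rng.integers(0, 3, size=100_000))

cm = np.zeros((3, 3), dtype=np.int64)
np.add.at(cm, (modis, retrieved), 1)                          # rows: MODIS class, columns: retrieval

overall_accuracy = np.trace(cm) / cm.sum()
row_normalized = cm / cm.sum(axis=1, keepdims=True)           # per-class accuracy on the diagonal
print(f"OA = {overall_accuracy:.2%}")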
Evaluation of the model on Typhoon Hato demonstrates that the 30 samples generated by the CloudDiff accurately capture the range of COT, CER, and CTH during the typhoon event and effectively identify cloud phases. Compared to the deterministic model, the CloudDiff\u2019s cloud properties align more closely with MODIS cloud products and improve the sharpness of the super-resolution retrieval. Additionally, the model can provide probability estimates for cloud properties at different thresholds, significantly enhancing retrieval accuracy. Further evaluation on the test set shows that MAE and RMSE decrease as the ensemble size increases, with the lowest errors observed at an ensemble size of 30. The performance of the deterministic model matches that of the ensemble mean when the ensemble size is 5, underscoring the superior results of the CloudDiff. The results clearly demonstrate that increasing the sample size enhances retrieval capabilities, but this improvement is minimal beyond a certain size; for instance, increasing the ensemble size from 20 to 30 offers little improvement. Although the CloudDiff has shown promising results, further improvements are still possible. Integrating additional conditional variables such as ERA5 meteorological data could improve the super-resolution retrieval effectiveness. Given adequate computing resources, it is feasible to generate more samples and determine the optimal ensemble size for even better performance. Future work will involve case studies of high-impact weather events to further assess the CloudDiff\u2019s performance and explore specific applications in ensemble retrieval. We hope that the demonstrated utility of generative artificial intelligence technology for cloud identification and probabilistic retrieval will promote its application in remote sensing, which is crucial for quantifying uncertainty when identifying and forecasting weather events such as typhoons. We believe it is time to explore the potential of diffusion models in cloud remote sensing, offering a promising solution for challenges such as cloud image forecasting and satellite precipitation estimation. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This work was supported by the National Natural Science Foundation of China (42222506 and 42075125). L. Wang also thanks the National Natural Science Foundation of China (12147101) for supporting his visit to Fudan University. The authors would like to thank NASA for freely providing the Himawari-8 products (https://www.eorc.jaxa.jp/ptree/index.html) and MODIS data online (https://ladsweb.modaps.eosdis.nasa.gov/). We acknowledge Xiaoye Wang from Fudan University for assisting with data processing."
16
+ }
title_10K/test_title_short_2405.04496v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04496v1",
3
+ "title": "Edit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing",
4
+ "abstract": "Existing diffusion-based video editing methods have achieved impressive\nresults in motion editing. Most of the existing methods focus on the motion\nalignment between the edited video and the reference video. However, these\nmethods do not constrain the background and object content of the video to\nremain unchanged, which makes it possible for users to generate unexpected\nvideos. In this paper, we propose a one-shot video motion editing method called\nEdit-Your-Motion that requires only a single text-video pair for training.\nSpecifically, we design the Detailed Prompt-Guided Learning Strategy (DPL) to\ndecouple spatio-temporal features in space-time diffusion models. DPL separates\nlearning object content and motion into two training stages. In the first\ntraining stage, we focus on learning the spatial features (the features of\nobject content) and breaking down the temporal relationships in the video\nframes by shuffling them. We further propose Recurrent-Causal Attention\n(RC-Attn) to learn the consistent content features of the object from unordered\nvideo frames. In the second training stage, we restore the temporal\nrelationship in video frames to learn the temporal feature (the features of the\nbackground and object's motion). We also adopt the Noise Constraint Loss to\nsmooth out inter-frame differences. Finally, in the inference stage, we inject\nthe content features of the source object into the editing branch through a\ntwo-branch structure (editing branch and reconstruction branch). With\nEdit-Your-Motion, users can edit the motion of objects in the source video to\ngenerate more exciting and diverse videos. Comprehensive qualitative\nexperiments, quantitative experiments and user preference studies demonstrate\nthat Edit-Your-Motion performs better than other methods.",
5
+ "authors": "Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, Yuwei Guo",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Edit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing",
15
+ "main_content": "INTRODUCTION Diffusion-based [22, 41, 44, 49, 53] video motion editing aims to control the motion (e.g., standing, dancing, running) of objects in the source video based on text prompts or other conditions (e.g., depth map, visible edges, human poses, etc), while preserving the integrity of the source background and object\u2019s content. This technique is especially valuable in multimedia [6, 10, 21, 33, 52, 56, 58, 63], including advertising, artistic creation, and film production. It allows users to effortlessly modify the motion of objects in videos Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. ACM MM, 2024, Melbourne, Australia \u00a9 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM https://doi.org/10.1145/nnnnnnn.nnnnnnn using a video motion editing model, eliminating the necessity for complex software. In prior studies, researchers primarily utilized generative methods to create videos featuring specific actions, with few efforts focusing on editing motions within a specific video. For example, several prior studies [26, 64, 65] have focused on pose-guided video generation, which involves creating videos that align with specified human poses. Other studies [9, 17, 25, 35, 57, 66] to generate videos with the same motion by learning the motion features in the source video. These studies operate within the text-driven space-time diffusion model framework, engineered to learn the link between textual prompt inputs and corresponding video outputs. However, the spatial and temporal features of the video are not separated during the training, which makes them entangled. The spatial features are usually represented as the object\u2019s content, and the temporal features are usually represented as the background and motion. This entangled state leads to overlapping object content, background and motion in the space-time diffusion model. As a result, it is challenging to generate highly aligned videos with the fine-grained foreground and background of the source video, even when detailed text descriptions are used. Intuitively, the key to video motion editing lies in decoupling [8, 54, 60] the temporal and spatial features of the space-time diffusion model. MotionEditor [45] first explored this problem by utilizing a twobranch structure in the inference stage to decouple the object\u2019s content and background in the feature layer by the object\u2019s segmentation mask. However, since the MotionEditor\u2019s model learns the relationship between the prompt and the entire video during the training stage, the features of objects and the background overlap in the feature layer. This overlap makes it challenging to distinguish between the background and the objects using only the segmentation mask [23, 39, 50]. In this paper, we explore methods to separate the learning of temporal and spatial features in space-time diffusion models. 
To this end, we propose a one-shot video motion editing method named Edit-Your-Motion that requires only a single text-video pair for training. Specifically, we propose the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage learning strategy designed to separate spatio-temporal features within space-time diffusion models. Furthermore, we propose Recurrent-Causal Attention (RC-Attn) as an enhancement over Sparse-Causal Attention. The RecurrentCausal Attention allows early frames in a video to receive information from subsequent frames, ensuring consistent content of objects throughout the video without adding computational burden. Additionally, we construct the Noise Constraint Loss [31] to minimize inter-frame differences of the edited video during the second training stage. During DPL, we use the space-time diffusion model (inflated UNet [37]) as the backbone and integrate ControlNet [61] to control the generation of motion. In the first training stage, we activate Recurrent-Causal Attention and freeze the other parameters. Then, we randomly disrupt the order of frames in the source video and mask the background to guide Recurrent-Causal Attention to focus on learning the content features of objects. In the second training stage, we activate Temporal Attention [48] and freeze other parameters to learn motion and background features from ordered video \fEdit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing ACM MM, 2024, Melbourne, Australia frames. Concurrently, Noise Constraint Loss is used to minimize the difference between frames. In the inference stage, we first perform a DDIM [42] inversion for the source video to introduce latent noise and facilitate the smoothness of the edited video. Then, the pose information of the reference video is introduced via ControlNet. Next, to ensure that the content of the objects in the edited video remains consistent with that of the source video, we utilize a two-branch structure (edit branch and reconstruction branch) similar to [45]. However, unlike MotionEditor, DPL distinctly decoupled spatial and temporal features into Recurrent-Causal Attention and Temporal Attention, respectively. Therefore, we only inject the key and value of Recurrent-Causal Attention from the reconstruction branch into the editing branch, eliminating the need for the segmentation mask. In conclusion, our contributions are as follows: \u2022 We further explored how to decouple spatio-temporal features in video motion editing explicitly and proposed a oneshot video motion editing method named Edit-Your-Motion. \u2022 We designed the Detailed Prompt-Guided Learning Strategy (DPL), a two-stage training method. It can decouple the space-time diffusion model\u2019s overlapping spatial and temporal features, thereby avoiding interference from background features during the editing object\u2019s motion. \u2022 We designed Recurrent-Causal Attention to assist DPL in learning the more comprehensive content of objects in the first training stage. In addition, We constructed the Noise Constraint Loss to smooth out inter-frame differences in the second training stage. \u2022 We conduct experiments on in-the-wild videos, where the results show the superiority of our method compared with the state-of-the-art. 2 RELATED WORK In this section, we provide a brief overview of the fields related to video motion editing and point out the connections and differences between them and video motion editing. 
2.1 Image Editing Recently, a large amount of work has been done on image editing using diffusion models [7, 30, 36]. SDEdit [28] is the first method for image synthesis and editing based on diffusion models. Promptto-Prompt [13] edits images by referencing cross-attention in the diffusion process. Plug-and-play [46] provides fine-grained control over the generative structure by manipulating spatial features during generation. UniTune [47] completes text-conditioned image editing tasks by fine-tuning. For non-rigidly transformed image editing, Imagic [19] preserves the overall structure and composition of the image by linearly interpolating between texts, thus accomplishing non-rigid editing while. Masactrl [4] converts selfattention to mutual self-attention for non-rigid image editing. On the other hand, InstructPix2Pix [3] has devised a method of editing images by written instructions rather than textual descriptions of image content. Unlike text-driven image editing, DreamBooth [38] generates new images with theme attributes by using several different images of a given theme. However, these methods lack temporal modeling, and it is difficult to maintain consistency between frames when generating video. 2.2 Pose-guided and Motion-Customization Video Generation Pose-guided image and video generation is a method to control image and video generation by adding additional human poses. ControlNet [61] references additional conditions via auxiliary branches to produce images consistent with the condition map. Follow-YourPose [26] controls video generation given human skeletons. It uses a two-stage training to learn to pose and control temporal consistency. ControlVideo [64] is adapted from ControlNet and uses cross-frame interaction to constrain appearance coherence between frames. Control-A-Video [65] enhances faithfulness and temporal consistency by fine-tuning the attention modules in both the diffusion models and ControlNet. Unlike the pose-guided video generation model, the motioncustomization video generation model generates videos with the same motion by learning the motion features in the source video. Customize-A-Video [35] designed an Appearance Absorber module to decompose the spatial information of motion, thus directing the Temporal LoRA [16] to learn the motion information. MotionCrafter [66] customizes the content and motion of the video by injecting motion information into U-Net\u2019s temporal attention module through a parallel spatial-temporal architecture. VMC [17] fine-tunes only the temporal attention layer in the video diffusion model to achieve successful motion customization. Unlike these methods, video motion editing requires controlling the motion of the source video object while maintaining its content and background. 2.3 Video Editing The current video editing models can be divided into two categories: video content editing models [1, 5, 20, 24, 32, 51, 67] and video motion editing models [45]. The video content editing model is designed to modify the background and object\u2019s content (e.g., the scene in the background, the clothes colour, the vehicle\u2019s shape, etc.) in the source video. In video content editing, Tune-A-Video [51] introduces the OneShot Video Tuning task for the first time, which trains the spacetime diffusion model by a single text-video pair. FateZero [32] uses cross-attention maps to edit the content of videos without any training. 
Mix-of-show [12] fine-tune the model through low-rank adaptions [16] (LoRA) to prevent the crash of knowledge learned by the pre-trained model. Some other approaches [2, 5, 20] use NLA [18] mapping to map the video to a 2D atlas to decouple the object content from the background to edit the content of the object effectively. In video motion editing, MotionEditor [45] uses the object\u2019s segmentation mask to decouple the content and background in the feature layer. Content features are then injected into the editing branch to maintain content consistency. Since the object and the background overlap in the feature layer, it is difficult to accurately separate the object\u2019s content from the background features with the segmentation mask. \fACM MM, 2024, Melbourne, Australia Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, and Yuwei Guo Our approach decouples the object from the background during the training stage and directs RC-Attn and Temporal Attention to learn spatial and temporal features, respectively. This ensures that the source video content is accurately injected. 3 METHOD In video motion editing, the focus is on decoupling the spatiotemporal features of the diffusion model. To this end, we propose Edit-Your-Motion, a one-shot video motion editing method trained only on a pair of source and reference videos. Specifically, we design the Detailed Prompt-Guided Learning strategy (DPL), a two-stage learning strategy capable of decoupling spatio-temporal features in the space-time diffusion model. In the first training stage, we shuffle the video frames to disrupt the temporal relationship of the video. Then, mask the background and learn intently spatial features (object content) from the unordered frames. We further propose Recurrent-Causal Attention (RC-Attn) instead of Sparse-Causal Attention to construct consistent features of objects over the whole sequence. In the second training stage, we recover the temporal relationships in the video frames to learn the temporal features (the background and object motion). To smooth out the inter-frame differences, we also construct Noise Constraint Loss. Finally, in the inference stage, we use the deconstruction with a two-branch structure [66] (reconstruction branch and editing branch). Since the spatial and temporal features have been decoupled in the training stage, we obtain the background and motion features in the editing branch and inject the content features of the objects in the reconstruction branch into the editing branch. Fig. 2 illustrates the pipeline of Edit-Your-Motion. To introduce our proposed Edit-Your-Motion, we first introduce the basics of the text-video diffusion model in Sec. 3.1. Then, Sec. 3.2 introduces our proposed Recurrent-Causal Attention (RC-Attentio). After that, in Sec. 3.3, our proposed Detailed Prompt-Guided Learning strategy and Noise Constraint Loss are described. Finally, we will introduce the inference stage in Sec. 3.4. 3.1 Preliminaries Denoising Diffusion Probabilistic Models. The denoising diffusion probabilistic models [11, 14, 27, 55] (DDPMs) consists of a forward diffusion process and a reverse denoising process. During the forward diffusion process, it gradually adds noise \ud835\udf16to a clean image \ud835\udc990 \u223c\ud835\udc5e(\ud835\udc990) with time step \ud835\udc61, obtaining a noisy sample \ud835\udc65\ud835\udc61. 
The process of adding noise can be represented as: \ud835\udc5e(\ud835\udc99\ud835\udc61|\ud835\udc99\ud835\udc61\u22121) = N (\ud835\udc99\ud835\udc61| \u221a\ufe01 1 \u2212\ud835\udefd\ud835\udc61\ud835\udc99\ud835\udc61\u22121, \ud835\udefd\ud835\udc61I), (1) where \ud835\udefd\ud835\udc61\u2208(0, 1) is a variance schedule. The entire forward process of the diffusion model can be represented as a Markov chain from time \ud835\udc61to time \ud835\udc47, \ud835\udc5e(\ud835\udc991:\ud835\udc47) = \ud835\udc5e(\ud835\udc990) \ud835\udc47 \u00d6 \ud835\udc61=1 \ud835\udc5e(\ud835\udc99\ud835\udc61|\ud835\udc99\ud835\udc61\u22121) . (2) Then, in reverse processing, noise is removed through a denoising autoencoders \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc61) to generate a clean image. The corresponding objective can be simplified to: \ud835\udc3f\ud835\udc37\ud835\udc40= E\ud835\udc65,\ud835\udf16\u223cN(0,1),\ud835\udc61 \u0002 \u2225\ud835\udf16\u2212\ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc61)\u22252 2 \u0003 . (3) Latent Diffusion Models. Latent Diffusion models (LDM) [29, 36, 59] is a newly introduced variant of DDPM that operates in the latent space of the autoencoder. Specifically, the encoder E compresses the image to latent features \ud835\udc9b= E(\ud835\udc99). Then performs a diffusion process over \ud835\udc67, and finally reconstructs latent features back into pixel space using the decoder D. The corresponding objective can be represented as: \ud835\udc3f\ud835\udc3f\ud835\udc37\ud835\udc40= EE(\ud835\udc65),\ud835\udf16\u223cN(0,1),\ud835\udc61 h \u2225\ud835\udf16\u2212\ud835\udf16\ud835\udf03(\ud835\udc67\ud835\udc61,\ud835\udc61)\u22252 2 i . (4) Text-to-Video Diffusion Models. Text-to-Video Diffusion Models [43] train a 3D UNet \ud835\udf163\ud835\udc37 \ud835\udf03 with text prompts \ud835\udc50as a condition to generate videos using the T2V model. Given the \ud835\udc39frames \ud835\udc991...\ud835\udc39of a video, the 3D UNet is trained by \ud835\udc3f\ud835\udc472\ud835\udc49= EE(\ud835\udc651...\ud835\udc39),\ud835\udf16\u223cN(0,1),\ud835\udc61,\ud835\udc50 \u0014\r \r \r\ud835\udf16\u2212\ud835\udf163\ud835\udc37 \ud835\udf03 (\ud835\udc671...\ud835\udc39 \ud835\udc61 ,\ud835\udc61,\ud835\udc50) \r \r \r 2 2 \u0015 , (5) where \ud835\udc671...\ud835\udc39 \ud835\udc61 is the latent features of \ud835\udc991...\ud835\udc39, \ud835\udc671...\ud835\udc39 \ud835\udc61 = E(\ud835\udc991...\ud835\udc39). 3.2 Recurrent-Causal Attention Like Tune-A-Video [51], we use the inflated U-Net network (spacetime diffusion model) as the backbone of Edit-Your-Motion, consisting of stacked 3D convolutional residual blocks and transform blocks. Each transformer block consists of Sparse-Causal Attention, Cross Attention, Temporal Attention, and a Feed-Forward Network (FFN). To save computational overhead, Tune-A-Video uses the current frame latent \ud835\udc67\ud835\udc63\ud835\udc56\u2208 \b \ud835\udc67\ud835\udc630, . . . ,\ud835\udc67\ud835\udc63\ud835\udc56\ud835\udc5a\ud835\udc4e\ud835\udc65 \t as the query for Sparse-Causal Attention. Meanwhile, the previous frame latent \ud835\udc67\ud835\udc63\ud835\udc56\u22121 is combined with the first frame latent \ud835\udc67\ud835\udc631 to obtain the key and value. 
The specific formula is as follows: \ud835\udc44= \ud835\udc4a\ud835\udc44\ud835\udc67\ud835\udc63\ud835\udc56, \ud835\udc3e= \ud835\udc4a\ud835\udc3e\u0002 \ud835\udc67\ud835\udc631,\ud835\udc67\ud835\udc63\ud835\udc56\u22121 \u0003 ,\ud835\udc49= \ud835\udc4a\ud835\udc49\u0002 \ud835\udc67\ud835\udc631,\ud835\udc67\ud835\udc63\ud835\udc56\u22121 \u0003 , (6) where [\u00b7] denotes concatenation operation. where \ud835\udc4a\ud835\udc44, \ud835\udc4a\ud835\udc3eand \ud835\udc4a\ud835\udc49are projection matrices. However, because there is less information in the early frames of a video, Sparse-Causal Attention does not consider the connection with the subsequent frames. As a result, it may lead to inconsistencies between the content at the beginning and the end of the video. To solve this problem, we propose a simple Recurrent-Causal Attention with no increase in computational complexity. In RecurrentCausal Attention, key and value are obtained by combining the previous frame latent \ud835\udc67\ud835\udc63\ud835\udc56\u22121 with the current frame latent \ud835\udc67\ud835\udc63\ud835\udc56, not \ud835\udc67\ud835\udc631 with \ud835\udc67\ud835\udc63\ud835\udc56\u22121. Notably, the key and value of the first frame latent \ud835\udc67\ud835\udc631 are obtained from the last frame latent \ud835\udc67\ud835\udc63\ud835\udc56\ud835\udc5a\ud835\udc4e\ud835\udc65with the first frame latent \ud835\udc67\ud835\udc631. This allows the object\u2019s content to propagate throughout the video sequence without adding any computational complexity. The formula for Recurrent-Causal Attention is as follows: \ud835\udc44= \ud835\udc4a\ud835\udc44\ud835\udc67\ud835\udc63\ud835\udc56, (7) \ud835\udc3e= ( \ud835\udc4a\ud835\udc3e\u0002 \ud835\udc67\ud835\udc63\ud835\udc56\u22121,\ud835\udc67\ud835\udc63\ud835\udc56 \u0003 if \ud835\udc56< \ud835\udc56\ud835\udc5a\ud835\udc4e\ud835\udc65 \ud835\udc4a\ud835\udc3e\u0002 \ud835\udc67\ud835\udc630,\ud835\udc67\ud835\udc63\ud835\udc56 \u0003 \ud835\udc52\ud835\udc59\ud835\udc60\ud835\udc52 , (8) \ud835\udc49= ( \ud835\udc4a\ud835\udc49\u0002 \ud835\udc67\ud835\udc63\ud835\udc56\u22121,\ud835\udc67\ud835\udc63\ud835\udc56 \u0003 if \ud835\udc56< \ud835\udc56\ud835\udc5a\ud835\udc4e\ud835\udc65 \ud835\udc4a\ud835\udc49\u0002 \ud835\udc67\ud835\udc630,\ud835\udc67\ud835\udc63\ud835\udc56 \u0003 \ud835\udc52\ud835\udc59\ud835\udc60\ud835\udc52 . (9) \fEdit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing ACM MM, 2024, Melbourne, Australia \u201cA boy wearing black clothes and gray pants is dancing.\u201d ControlNet Inference Stage: A Two-Branch Structure that Injects Spatial Features The Second Training Stage: Learning Temporal Feature from Ordered Video Frames \u201cA boy wearing black clothes and gray Pants is playing basketball.\u201d The First Training Stage: Learning Spatial Features from Shuffled Images \u201cA boy wearing black clothes and gray pants.\u201d a P s P a P t P rf S rf C Editing Branch Reconstruction Branch k V Edited video sr S Source video Reference video a P Unordered Frames ControlNet Temp-Attn Cross-Attn RC-Attn Temp-Attn Cross-Attn RC-Attn ControlNet Ordered Frames Temp-Attn Cross-Attn RC-Attn Temp-Attn Cross-Attn RC-Attn s P sr S s r C rf C t P Figure 2: The overall pipline of Edit-Your-Motion. Edit-Your-Motion decouples spatial features (object appearance) from temporal features (background and motion information) of the source video using the Detailed Prompt-Guided Learning Strategy (DPL). 
In the first training stage, Recurrent-Causal attention (RC-Attn) is guided to learn spatial features. In the second training stage, Temporal Attention (Temp-Attn) is guided to learn temporal features. During inference, the spatial features of the source video are injected into the editing branch through the key and value of Recurrent-Causal Attention, thus keeping the source content and background unchanged. Overall, Recurrent-Causal Attention enables early frames to acquire more comprehensive content information compared to Sparse-Causal Attention, by establishing a link to the last frame in the first frame. 3.3 The Detailed Prompt-Guided Learning Strategy The purpose of diffusion-based video motion editing is to control the motion of objects in the source video based on a reference video with a prompt and to ensure that the content and background of the objects remain unchanged. The key lies in decoupling the diffusion model\u2019s overlapping temporal and spatial features. MotionEditor uses the object\u2019s segmentation mask to decouple the object content and the background in the feature layer. However, the decoupled features also overlap since the spatio-temporal features have been obfuscated in the model. In order to be able to decouple overlapping spatio-temporal features, we design the Detailed Prompt-Guided Learning Strategy (DPL). DPL is divided into two training stages: (1) The First Training Stage: Learning Spatial Features from Shuffled Images, and (2) The Second Training Stage: Learning Temporal Features from Ordered video frames. Next, we will describe the two stages in detail. The First Training Stage: Learning Spatial Features from Shuffled Images. In this stage, the space-time diffusion model focuses on learning the spatial features of the source object. First, we disrupt the order of video frames to destroy their temporal information and generate unordered video frames U = {\ud835\udc62\ud835\udc56|\ud835\udc56\u2208[1,\ud835\udc5b]}, where \ud835\udc5b is the length of the video. If we train the model directly using unordered frames, the features of the object and the background will overlap. Such overlapping spatio-temporal features are challenging to decouple later and will lead to interference from background features when controlling object motion. Therefore, we use an existing segmentation \fACM MM, 2024, Melbourne, Australia Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, and Yuwei Guo network to extract the segmentation mask \ud835\udc40for the unordered video frames. Therefore, we use an existing segmentation network to extract the segmentation mask M for the video frames and mask out the background as: UM = U \u00b7 M, (10) ZM \ud835\udc61 = E(UM), (11) where ZM \ud835\udc61 is the latent features of UM, and E(\u00b7) is encoder. Then, we utilize an existing skeleton extraction network to obtain the human skeleton \ud835\udc46\ud835\udc60\ud835\udc5fin the source video and feed it into ControlNet along with the prompt \ud835\udc43\ud835\udc4e. \ud835\udc36\ud835\udc60\ud835\udc5f= \ud835\udc36\ud835\udc5c\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc5c\ud835\udc59\ud835\udc41\ud835\udc52\ud835\udc61(\ud835\udc46\ud835\udc60\ud835\udc5f, \ud835\udc43\ud835\udc4e), (12) where \ud835\udc36\ud835\udc60\ud835\udc5fis the pose feature of source video. Next, we will freeze other parameters and only activate Recurrent-Causal Attention. 
Finally, we will \ud835\udc43\ud835\udc4eand \ud835\udc36\ud835\udc60\ud835\udc5finto the space-time diffusion model for training. The reconstruction loss can be written as follows: \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50= E\ud835\udc67\ud835\udc5a \ud835\udc61,\ud835\udf16\u223cN(0,1),\ud835\udc61,\ud835\udc43\ud835\udc4e,\ud835\udc36\ud835\udc60\ud835\udc5f \u0014\r \r \r\ud835\udf16\u2212\ud835\udf163\ud835\udc37 \ud835\udf03 (\ud835\udc67\ud835\udc5a \ud835\udc61,\ud835\udc61, \ud835\udc43\ud835\udc4e,\ud835\udc36\ud835\udc60\ud835\udc5f) \r \r \r 2 2 \u0015 . (13) The Second Training Stage: Learning Temporal Features from Ordered Video Frames. Unlike the first training stage, we restored the temporal relationship of video frames. Then, guide the spacetime diffusion model to learn the temporal features of motion and background from ordered video frames V = {\ud835\udc63\ud835\udc56|\ud835\udc56\u2208[1,\ud835\udc5b]}. Specifically, We construct a new prompt \ud835\udc43\ud835\udc60, which adds a description of the motion to \ud835\udc43\ud835\udc4e. Then, Temporal Attention is activated to learn motion features while other parameters are frozen. To smooth the video, we added Noise Constraint Loss [31]. The noise constraint loss can be written as follows: \ud835\udc3f\ud835\udc5b\ud835\udc5c\ud835\udc56\ud835\udc60\ud835\udc52= 1 \ud835\udc5b\u22121 \ud835\udc5b\u22121 \u2211\ufe01 \ud835\udc56=1 \r \r \r\ud835\udf16\ud835\udc53\ud835\udc56 \ud835\udc9b\ud835\udc61\u2212\ud835\udf16\ud835\udc53\ud835\udc56+1 \ud835\udc9b\ud835\udc61 \r \r \r 2 2 , (14) where \ud835\udc53\ud835\udc56denote the \ud835\udc56-th frame of the video. \ud835\udf16\ud835\udc53\ud835\udc56 \ud835\udc9b\ud835\udc61is the noise prediction at timestep \ud835\udc61. The total loss for the second training stage is constructed as follows: \ud835\udc3f\ud835\udc47\ud835\udc5c\ud835\udc61\ud835\udc4e\ud835\udc59= (1 \u2212\ud835\udf06)\ud835\udc3f\ud835\udc5b\ud835\udc5c\ud835\udc56\ud835\udc60\ud835\udc52+ \ud835\udf06\ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50, (15) where \ud835\udc3f\ud835\udc5f\ud835\udc52\ud835\udc50is constructed from ordered video frames V without segmentation mask \ud835\udc40. \ud835\udf06is set to 0.9. 3.4 Inference Pipelines In the inference stage, we first extract the human skeleton \ud835\udc46\ud835\udc5f\ud835\udc53from the reference video to guide motion generation. Then, to ensure that the object\u2019s content and background are unchanged, we use a two-branch architecture (reconstruction branch and editing branch) similar to [45] to inject the object\u2019s content and background features into the editing branch. Specifically, we first input the latent noise \ud835\udc67\ud835\udc60from the source video DDIM inversion and \ud835\udc43\ud835\udc4einto the reconstruction branch. Simultaneously input \ud835\udc67\ud835\udc60and \ud835\udc43\ud835\udc61into the editing branch. Then, we will input the human skeleton \ud835\udc46\ud835\udc5f\ud835\udc53from the reference video and \ud835\udc43\ud835\udc61 into ControlNet to obtain feature \ud835\udc36\ud835\udc5f\ud835\udc53as: \ud835\udc36\ud835\udc5f\ud835\udc53= \ud835\udc36\ud835\udc5c\ud835\udc5b\ud835\udc61\ud835\udc5f\ud835\udc5c\ud835\udc59\ud835\udc41\ud835\udc52\ud835\udc61(\ud835\udc46\ud835\udc5f\ud835\udc53, \ud835\udc43\ud835\udc61), (16) where \ud835\udc36\ud835\udc5f\ud835\udc53is the pose feature of the reference video to be used to guide the generation of motion in the editing branch. 
Next, we will inject the spatial features from the reconstruction branch into the editing branch. Due to disrupting the time relationship and mask the background in the first training stage of DPL. Therefore, we directly inject the keys and values of the RC-Attn in the reconstruction branch into the editing branch without needing segmentation masks. The specific formula can be written as: \ud835\udc3e\ud835\udc5f= \ud835\udc4a\ud835\udc3e\ud835\udc67\ud835\udc60 \ud835\udc63\ud835\udc56,\ud835\udc49\ud835\udc5f= \ud835\udc4a\ud835\udc49\ud835\udc67\ud835\udc60 \ud835\udc63\ud835\udc56, (17) \ud835\udc3e\ud835\udc52= h \ud835\udc4a\ud835\udc3e\ud835\udc67\ud835\udc52 \ud835\udc63\ud835\udc56\u22121,\ud835\udc4a\ud835\udc3e\ud835\udc67\ud835\udc52 \ud835\udc63\ud835\udc56, \ud835\udc3e\ud835\udc5fi ,\ud835\udc49\ud835\udc52= h \ud835\udc4a\ud835\udc49\ud835\udc67\ud835\udc52 \ud835\udc63\ud835\udc56\u22121,\ud835\udc4a\ud835\udc49\ud835\udc67\ud835\udc52 \ud835\udc63\ud835\udc56,\ud835\udc49\ud835\udc5fi , (18) \ud835\udc49\ud835\udc5fwhere \ud835\udc52represents the editing branch. \ud835\udc5frepresents the reconstruction branch. In the end, we obtained the edited video. 4 EXPERIMENTAL 4.1 Implementation Details Our proposed Edit-Your-Motion is based on the Latent Diffusion Model [36] (Stabel Diffusion). The data in this article comes from TaichiHD [40] and YouTube video datasets, in which each video has a minimum of 70 frames. During training, we finetune 300 steps for each of the two training stages at a learning rate of 3 \u00d7 10\u22125. For inference, we used the DDIM sampler [42] with no classifier guidance [15] in our experiments. For each video, the fine-tuning takes about 15 minutes with a single NVIDIA A100 GPU. 4.2 Comparisons Method To demonstrate the superiority of our Edit-Your-Motion, we have selected methods from motion customization, pose-guided video generation, video content editing, and video motion editing as comparison methods. (1) Tune-A-Video [51]: The first presents the work of one-shot video editing. It inflates a pre-trained T2I diffusion model to 3D to handle the video task. (2) MotionEditor1 [45]: The first examines the work of video motion editing while maintaining the object content and background unchanged. (3) Follow-YourPose [26]: Generating pose-controllable videos using two-stage training. (4) MotionDirector [66]: Generate motion-aligned videos by decoupling appearance and motion in reference videos for videomotion-customization. 4.3 Evaluation Our method can edit the motion of objects in the source video by using the reference video and prompting without changing the object content and the background. Fig. 4 shows some of our examples. As can be seen, our proposed Edit-Your-Motion accurately controls the motion and preserves the object\u2019s content and background well. The more cases are in the appendix. Qualitative Results. Fig. 3 shows the results of the visual comparison of Edit-Your-Motion with other comparison methods on 25 in-the-wild cases. Although Follow-Your-Pose and MotionDirector can align well with the motion of the reference video, it is difficult to 1Since the article\u2019s code is not provided, the experimental results in this paper are obtained by replication. 
\fEdit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing ACM MM, 2024, Melbourne, Australia Source video Reference video Tune-A-Video Ours MotionEditor 4 8 12 16 0 22 6 2 Follow-Your-Pose MotionDirector Source video Reference video Tune-A-Video Ours MotionEditor 0 22 6 2 Follow Your Pose MotionDirector A girl in a plaid top and black skirt is dancing practicing wugong. A boy with a black top and gray pants is playing basketball dancing. Figure 3: Qualitative comparison with state-of-the-art methods. Compared to other baselines, Edit-Your-Motion successfully achieves motion alignment with the reference video and maintains the content consistency of the background and objects. maintain consistency between the object content and background in both the source and reference videos. It demonstrates that generating specific background and content using only text prompts is difficult. Tune-A-Video and MotionEditor show noticeable content changes. In addition, MotionEditor shows motion overlap (arms) caused by using of the segmentation mask to decouple overlapping features. In contrast to the above, our proposed Edit-Your-Motion aligns the motion of the edited video and the reference video well and preserves the content and background of the objects in the source video intact. This also demonstrates the effectiveness of our method in video motion editing. Quantitative results. We evaluate the methods with automatic evaluations and human evaluations on 25 in-the-wild cases. Automatic Evaluations. To quantitatively assess the differences between our proposed Edit-Your-Motion and other comparative methods, we use the following metrics to measure the results: (1) Text Alignment (TA). We use CLIP [34] to compute the average cosine similarity between the prompt and the edited frames. (2) Temporal Consistency (TC). We use CLIP to obtain image features and compute the average cosine similarity between neighbouring video frames. (3) LPIPS-N (L-N): We calculate Learned Perceptual Image Patch Similarity [62] between edited neighbouring frames. (4) LPIPS-S (L-S): We calculate Learned Perceptual Image Patch Table 1: Quantitative evaluation using CLIP and LPIPS. TA, TC, L-N, L-S represent Text Alignment, Temporal Consistency, LPIPS-N and LPIPS-S, respectively. Method TA \u2191 TC \u2191 L-N \u2193 L-S \u2193 Follow-Your-Pose [26] 0.236 0.913 0.213 0.614 MotionDirector [66] 0.239 0.872 0.141 0.430 Tune-A-Video [51] 0.278 0.934 0.137 0.359 MotionEditor [45] 0.286 0.948 0.102 0.300 Ours 0.289 0.950 0.109 0.276 Similarity between edited frames and source frames. Table 1 shows the quantitative results of Edit-Your-Motion with other comparative methods. The results show that Edit-Your-Motion outperforms the other methods on all metrics. User Study. We invited 70 participants to participate in the user study. Each participant could see the source video, the reference video, and the results of our and other comparison methods. For each case, we combined the results of Edit-Your-Motion with the results of each of the four comparison methods. Then, we set three \fACM MM, 2024, Melbourne, Australia Yi Zuo, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Shuyuan Yang, and Yuwei Guo A boy wearing black clothes and gray pants is playing basketball dancing. A woman in a blue top and white skirt is waving her hand dancing. A girl with a black top and black skirt is dancing practicing Tai Chi. A man with a dark green top and black pants is standing practicing Tai Chi. 
0 6 12 18 22 Figure 4: Some examples of motion editing results for Edit-Your-Motion. Table 2: User Study. Higher indicates the users prefer more to our MotionEditor. TA, CA, and MA represent Text Alignment, Content Alignment, and Motion Alignment, respectively. Method TA CA MA Follow-Your-Pose [26] 87.142% 96.663% 90.953% MotionDirector [66] 94.522% 96.190% 86.188% Tune-A-Video [51] 78.810% 82.145% 84.047% MotionEditor [45] 76.428% 82.380% 80.950% questions to evaluate Text Alignment, Content Alignment and Motion Alignment. The three questions are \"Which is more aligned to the text prompt?\", \"Which is more content aligned to the source video?\" and \"Which is more motion aligned to the reference video?\". Table 2 shows that our method outperforms the other compared methods in all three aspects. 4.4 Ablation Study To verify the effectiveness of the proposed module, we show the results of the ablation experiments in Fig. 5. In column 3, we replace RC-Attn with Sparse Attention, which makes the first frame inconsistent with the object content in the subsequent frames. This shows that RC-Attn can better establish content consistency over the entire sequence than with Sparse Attention. In column 4, w/o Noise Constraint Loss (NCL) affects the smoothness between frames, causing the background to be inconsistent between frames. In column 5, we train RC-Attn and Temporal Attention in a training stage. However, the lack of spatio-temporal decoupling results in background and object content interfering, generating undesirable edited videos. At the same time, it also demonstrates the effectiveness of DPL in decoupling time and space. \fEdit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing ACM MM, 2024, Melbourne, Australia 40 44 48 58 Source video Reference video w/o RCA w/o NCL w/o DPT Edit-Your-Motion A girl with a black top and black shorts is waving her hand dancing. Figure 5: Some examples of video motion editing results for Edit-Your-Motion. 5 CONCLUSION In this paper, we explore methods to separate the learning of temporal and spatial features in space-time diffusion models. To this end, we propose a one-shot video motion editing method called EditYour-Motion that requires only a single text-video pair for training. Specifically, we design the Detailed Prompt-Guided Learning Strategy (DPL) to decouple the diffusion model\u2019s space-time features in two training stages. Furthermore, we propose Recurrent-Causal Attention (RC-Attn) as an enhancement over Sparse-Causal Attention. In the first training stage, RC-Attn focuses on learning the spatial feature by shuffling the temporal relations. In the second training stage, we guide the Temporal Attention to learn temporal features. In addition, Noise Constraint Loss is constructed to smooth the video. In the inference stage, we utilize a two-branch structure to inject spatial features into the editing branch to generate edit videos. Extensive experiments demonstrate the effectiveness of our proposed Edit-Your-Action. Limitations and Future Work. Although our proposed Edit-YourMotion achieves compelling results in video motion editing, twostage training consumes more computational resources. Therefore, how to perform video motion editing with limited computational resources still deserves further exploration in future research. We also expect video motion editing to receive more attention from researchers."
16
+ }
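For the Edit-Your-Motion entry above, the two components that are easiest to misread from the extracted equations are the frame pairing used by Recurrent-Causal Attention (Eqs. 7-9) and the Noise Constraint Loss with its lambda-weighted total (Eqs. 14-15). The sketch below is a simplified PyTorch-style illustration under assumed tensor shapes, not the authors' released code:

import torch

def rc_attention_kv_frames(n_frames: int):
    # For frame i, Recurrent-Causal Attention builds keys/values from [previous, current];
    # per the paper's description, the first frame is paired with the last frame.
    pairs = []
    for i in range(n_frames):
        prev = n_frames - 1 if i == 0 else i - 1
        pairs.append((prev, i))
    return pairs

def noise_constraint_loss(eps_pred: torch.Tensor) -> torch.Tensor:
    # eps_pred: (F, C, H, W) per-frame noise predictions at one timestep.
    # Average of squared L2 differences between consecutive frames (Eq. 14).
    diff = eps_pred[1:] - eps_pred[:-1]
    return diff.pow(2).flatten(1).sum(dim=1).mean()

def total_loss(l_noise: torch.Tensor, l_rec: torch.Tensor, lam: float = 0.9) -> torch.Tensor:
    # Second-stage objective: (1 - lambda) * L_noise + lambda * L_rec (Eq. 15).
    return (1.0 - lam) * l_noise + lam * l_rec

# Toy usage on an 8-frame clip.
print(rc_attention_kv_frames(8))        # [(7, 0), (0, 1), (1, 2), ...]
eps = torch.randn(8, 4, 64, 64)
print(noise_constraint_loss(eps).item())

During inference, the reconstruction branch's keys and values would additionally be concatenated into the editing branch, as in Eq. 18 of the entry above.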
title_10K/test_title_short_2405.04534v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04534v1",
3
+ "title": "Tactile-Augmented Radiance Fields",
4
+ "abstract": "We present a scene representation, which we call a tactile-augmented radiance\nfield (TaRF), that brings vision and touch into a shared 3D space. This\nrepresentation can be used to estimate the visual and tactile signals for a\ngiven 3D position within a scene. We capture a scene's TaRF from a collection\nof photos and sparsely sampled touch probes. Our approach makes use of two\ninsights: (i) common vision-based touch sensors are built on ordinary cameras\nand thus can be registered to images using methods from multi-view geometry,\nand (ii) visually and structurally similar regions of a scene share the same\ntactile features. We use these insights to register touch signals to a captured\nvisual scene, and to train a conditional diffusion model that, provided with an\nRGB-D image rendered from a neural radiance field, generates its corresponding\ntactile signal. To evaluate our approach, we collect a dataset of TaRFs. This\ndataset contains more touch samples than previous real-world datasets, and it\nprovides spatially aligned visual signals for each captured touch signal. We\ndemonstrate the accuracy of our cross-modal generative model and the utility of\nthe captured visual-tactile data on several downstream tasks. Project page:\nhttps://dou-yiming.github.io/TaRF",
5
+ "authors": "Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Tactile-Augmented Radiance Fields",
15
+ "main_content": "Introduction As humans, our ability to perceive the world relies crucially on cross-modal associations between sight and touch [19, 50]. Tactile sensing provides a detailed understanding of material properties and microgeometry, such as the intricate patterns of bumps on rough surfaces and the complex motions that soft objects make when they deform. This type of understanding, which largely eludes today\u2019s computer vision models, is a critical component of applications that require reasoning about physical contact, such as robotic locomotion [3, 24, 31, 34, 37, 38] and manipulation [6, 7, 11, 42, 60], and methods that simulate the behavior of materials [4, 13, 40, 41]. In comparison to many other modalities, collecting tactile data is an expensive and tedious process, since it requires direct physical interaction with the environment. A recent line of work has addressed this problem by having humans or robots probe the environment with touch sensors (see Table 1). Early efforts have been focused on capturing the properties of only a few objects either in simulation [16, 17, 52] or in lab-controlled settings [6, 7, 18, 28, 35, 52, 63], which may not fully convey the diversity of tactile signals in natural environments. Other works have gone beyond a 1 arXiv:2405.04534v1 [cs.CV] 7 May 2024 \fDataset Samples Aligned Scenario Source More Than a Feeling [7] 6.5k \u2715 Tabletop Robot Feeling of Success [6] 9.3k \u2715 Tabletop Robot VisGel [35] 12k \u2715 Tabletop Robot SSVTP [28] 4.6k \u2713 Tabletop Robot ObjectFolder 1.0 [16] \u2013 \u2713 Object Synthetic ObjectFolder 2.0 [17] \u2013 \u2713 Object Synthetic ObjectFolder Real [18] 3.7k \u2715 Object Robot Burka et al. [5] 1.1k \u2715 Sub-scene Human Touch and Go [56] 13.9k \u2715 Sub-scene Human YCB-Slide\u2217[52] \u2713 Object Human Touching a NeRF [63] 1.2k \u2713 Object Robot TaRF (Ours) 19.3k \u2713 Full scene Human Table 1. Dataset comparison. We present the number of real visual-tactile pairs and whether such pairs are visually aligned, i.e., whether the visual image includes an occlusion-free view of the touched surface. \u2217YCB-Slide has real-world touch probes but synthetic images rendered with CAD models of YCB objects on a white background [9]. lab setting and have collected touch from real scenes [5, 56]. However, existing datasets lack aligned visual and tactile information, since the touch sensor and the person (or robot) that holds it often occlude large portions of the visual scene (Fig. 2). These datasets also contain only a sparse set of touch signals for each scene, and it is not clear how the sampled touch signals relate to each other in 3D. In this work, we present a simple and low-cost procedure to capture quasi-dense, scene-level, and spatially-aligned visual and touch data (Fig. 1). We call the resulting scene representation a tactile-augmented radiance field (TaRF). We remove the need for robotic collection by leveraging a 3D scene representation (a NeRF [39]) to synthesize a view of the surface being touched, which results in spatially aligned visual-tactile data (Fig. 2). We collect this data by mounting a touch sensor to a camera with commonly available materials (Fig. 3). To calibrate the pair of sensors, we take advantage of the fact that popular vision-based touch sensors [25, 26, 32, 48] are built on ordinary cameras. The relative pose between the vision and tactile sensors can thus be estimated using traditional methods from multi-view geometry, such as camera resectioning [20]. 
We use this procedure to collect a large real-world dataset of aligned visual-tactile data. With this dataset, we train a diffusion model [45, 51] to estimate touch at locations not directly probed by a sensor. In contrast to the recent work of Zhong et al. [63], which also estimates touch from 3D NeRF geometry, we create scene-scale reconstructions, we do not require robotic proprioception, and we use diffusion models [51]. This enables us to obtain tactile data at a much larger scale, and with considerably more diversity. Unlike previous visual-tactile diffusion work [57], we condition the model on spatially aligned visual and depth information, enhancing the generated samples\u2019 quality and their usefulness in downstream applications. After training, the diffusion model can be used to predict tactile informaOF 2.0 [17] VisGel [35] OF Real [18] SSVTP [28] TG [56] TaRF (Ours) Figure 2. Visual-tactile examples. In contrast to the visual-tactile data captured in previous work, our approach allows us to sample unobstructed images that are spatially aligned with the touch signal, from arbitrary 3D viewpoints using a NeRF. tion for novel positions in the scene. Analogous to quasidense stereo methods [15, 33], the diffusion model effectively propagates sparse touch samples, obtained by probing, to other visually and structurally similar regions of the scene. We evaluate our visual-tactile model\u2019s ability to accurately perform cross-modal translation using a variety of quality metrics. We also apply it to several downstream tasks, including localizing a touch within a scene and understanding material properties of the touched area. Our experiments suggest: \u2022 Touch signals can be localized in 3D space by exploiting multi-view geometry constraints between sight and touch. \u2022 Estimated touch measurements from novel views are not only qualitatively accurate, but also beneficial on downstream tasks. \u2022 Cross-modal prediction models can accurately estimate touch from sight for natural scenes. \u2022 Visually-acquired 3D scene geometry improves crossmodal prediction. 2. Related Work Visual-tactile datasets. Previous work has either used simulators [16, 17] or robotic arms [6, 8, 18, 35, 63] for data generation. Our work is closely related to that of Zhong et al. [63], which uses a NeRF and captured touch data to generate a tactile field for several small objects. They use the proprioception of an expensive robot to spatially align vision and touch. In contrast, we leverage the properties of the tactile sensor and novel view synthesis to use commonly available material (a smartphone and a selfie stick) to align vision and touch. This enables the collection of a larger, scene-level, and more diverse dataset, on which we train a higher-capacity diffusion model (rather than a conditional GAN). Like several previous works [5, 56], we also collect scene-level data. In contrast to them, we spatially align the signals by registering them in a unified 3D representation, thereby increasing the prediction power of the visual-tactile generative model. Capturing multimodal 3D scenes. Our work is related to methods that capture 3D visual reconstructions of spaces 2 \fusing RGB-D data [12, 49, 55, 59] and multimodal datasets of paired 3D vision and language [1, 2, 10]. Our work is also related to recent methods that localize objects in NeRFs using joint embeddings between images and language [29] or by semantic segmentation [62]. 
In contrast to language supervision, touch is tied to a precise position in a scene. 3D touch sensing. A variety of works have studied the close relationship between geometry and touch, motivating our use of geometry in imputing touch. Johnson et al. [25, 26] proposed vision-based touch sensing, and showed that highly accurate depth can be estimated from the touch sensor using photometric stereo. Other work has estimated object-scale 3D from touch [54]. By contrast, we combine sparse estimates of touch with quasi-dense tactile signals estimated using generative models. Cross-modal prediction of touch from sight. Recent work has trained generative models that predict touch from images. Li et al. [35] used a GAN to predict touch for images of a robotic arm, while Gao et al. [18] applied them to objects collected on a turntable. Yang et al. [57] used latent diffusion to predict touch from videos of humans touching objects. Our goal is different from these works: we want to predict touch signals that are spatially aligned with a visual signal, to exploit scene-specific information, and to use geometry. Thus, we use a different architecture and conditioning signal, and fit our model to examples from the same scenes at training and test time. Other work has learned joint embeddings between vision and touch [28, 36, 56, 58, 61]. 3. Method We collect visual and tactile examples from a scene and register them together with a 3D visual reconstruction to build a TaRF. Specifically, we capture a NeRF F\u03b8 : (x, r) 7\u2192(c, \u03c3) that maps a 3D point x = (x, y, z) and viewing direction r to its corresponding RGB color c and density \u03c3 [39]. We associate to the visual representation a touch model F\u03d5 : vt 7\u2192\u03c4 that generates the tactile signal that one would obtain by touching at the center of the image vt. In the following, we explain how to estimate F\u03b8 and F\u03d5 and put them into the same shared 3D space. 3.1. Capturing vision and touch signals Obtaining a visual 3D reconstruction. We build the visual NeRF, F\u03b8, closely following previous work [12, 55]. A human data collector moves through a scene and records a video, covering as much of the space as possible. We then estimate camera pose using structure from motion [47] and create a NeRF using off-the-shelf packages [53]. Additional details are provided in the supplement. Capturing and registering touch. We simultaneously collect tactile and visual signals by mounting a touch sensor Visual Camera Tactile Sensor Tactile frames Visual frames Visual-Tactile Correspondences Figure 3. Capturing setup. (a) We record paired vision and touch signals using a camera attached to a touch sensor. (b) We estimate the relative pose between the touch sensor and the camera using correspondences between sight and touch. on a camera (Fig. 3), obtaining synchronized touch signals {\u03c4 i}N i=1 and video frames v. We then estimate the pose of the video frames using off-the-shelf structure from motion methods [47], obtaining poses {pv i }N i=1. Finally, we use the calibration of the mount to obtain the poses {pt i}N i=1 of the tactile measurements with respect to the scene\u2019s global reference frame. As a collection device, we mount an iPhone 14 Pro to one end of a camera rod, and a DIGIT [32] touch sensor to the other end. Note that the devices can be replaced with any RGB-D camera and vision-based tactile sensor. Capturing setup calibration. To find the relative pose between the camera and the touch sensor (Fig. 
3), we exploit the fact that arbitrary viewpoints can be synthesized from F\u03b8, and that ubiquitous vision-based touch sensors are based on perspective cameras. In these sensors, an elastomer gel is placed on the lens of a commodity camera, which is illuminated by colored lights. When the gel is pressed into an object, it deforms, and the camera records an image of the deformation; this image is used as the tactile signal. This design allows us to estimate the pose of the tactile sensor through multi-view constraints from visualtactile correspondences: pixels in visual images and tactile images that are of the same physical point. We start the calibration process by synthesizing novel views from F\u03b8. The views are generated at the camera location {pv i }N i=1, but rotated 90\u25e6on the x-axis. This is because the camera is approximately orthogonal to the touch sensor (see Fig. 3). Then, we manually annotate corresponding pixels between the touch measurements and the generated frames (Fig. 3). To simplify and standardize this process, we place a braille board in each scene and probe it with the touch sensor. This will generate a distinctive touch signal that is easy to localize [23]. We formulate the problem of estimating the six degrees of freedom relative pose (R, t) between the touch sensor and the generated frames as a resectioning problem [20]. We use the estimated 3D structure from the NeRF F\u03b8 to obtain 3D points {xi}M i=1 for each of the annotated corre3 \fspondences. Each point has a pixel position ui \u2208R2 in the touch measurement. We find (R, t) by minimizing the reprojection error: \\ min _ { { \\ma thbf R } , { \\ma t hbf t}} \\frac {1}{M}\\sum _{i=1}^M \\lVert \\pi ({\\mathbf K}[\\mathbf {R}\\,\\,|\\,\\,\\mathbf {t}], \\mathbf {X}_i) \\bu _i \\rVert _1, (1) where \u03c0 projects a 3D point using a given projection matrix, K are the known intrinsics of the tactile sensor\u2019s camera, and the point Xi is in the coordinate system of the generated vision frames. We perform the optimization on 6-15 annotated correspondences from the braille board. For robustness, we compute correspondences from multiple frames. We represent the rotation matrix using quaternions and optimize using nonlinear least-squares. Once we have (R, t) with respect to the generated frames, we can derive the relative pose between the camera and the touch sensor. 3.2. Imputing the missing touch We use a generative model to estimate the touch signal (represented as an image from a vision-based touch sensor) for other locations within the scene. Specifically, we train a diffusion model p\u03d5(\u03c4 | v, d, b), where v and d are images and depth maps extracted from F\u03b8 (see Fig. 4). We also pass as input to the diffusion model a background image captured by the touch sensor when it is not in contact with anything, denoted as b. Although not essential, we have observed that this additional input empirically improves the model\u2019s performance (e.g., Fig. 1 the background provides the location of defects in the gel, which appear as black dots). We train the model p\u03d5 on our entire vision-touch dataset (Sec. 4). The training of p\u03d5 is divided into two stages. In the first, we pre-train a cross-modal visual-tactile encoder with self-supervised contrastive learning on our dataset. This stage, initially proposed by [23, 57], is equivalent to the self-supervised encoding pre-training that is common for image generation models [45]. 
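The reprojection-error minimization in Eq. (1) above is a small nonlinear least-squares problem. The sketch below illustrates it with SciPy under the assumptions stated in the text (6-15 annotated 2D-3D correspondences, known tactile-camera intrinsics K, quaternion rotation parameterization); it is an illustrative sketch rather than the authors' implementation, and the soft-L1 robust loss only approximates the L1 norm of Eq. (1).

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, R, t, X):
    """Project 3D points X (N, 3) with intrinsics K and pose (R, t)."""
    Xc = X @ R.T + t              # world -> tactile-camera coordinates
    uv = Xc @ K.T                 # apply intrinsics (homogeneous pixels)
    return uv[:, :2] / uv[:, 2:3]

def reprojection_residuals(params, K, X, u):
    """Residuals of Eq. (1): pi(K [R|t] X_i) - u_i, flattened for the solver."""
    quat, t = params[:4], params[4:]
    R = Rotation.from_quat(quat / np.linalg.norm(quat)).as_matrix()
    return (project(K, R, t, X) - u).ravel()

def estimate_relative_pose(K, X, u):
    """Fit (R, t) from 2D-3D correspondences via nonlinear least squares.

    K: (3, 3) tactile-camera intrinsics, X: (N, 3) points from the NeRF,
    u: (N, 2) annotated pixels in the tactile image (N is roughly 6-15).
    """
    x0 = np.concatenate([np.array([0.0, 0.0, 0.0, 1.0]),   # identity quaternion
                         np.zeros(3)])                      # zero translation
    # soft_l1 approximates the L1 norm of Eq. (1); a plain squared loss also works.
    sol = least_squares(reprojection_residuals, x0, args=(K, X, u), loss="soft_l1")
    quat, t = sol.x[:4], sol.x[4:]
    R = Rotation.from_quat(quat / np.linalg.norm(quat)).as_matrix()
    return R, t
```

An off-the-shelf PnP solver could supply an initialization before this refinement, although the text does not specify one; with only a handful of correspondences the optimization is essentially instantaneous.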
We use a ResNet-50 [21] as the backbone for this contrastive model. In the second stage, we use the contrastive model to generate the input for a conditional latent diffusion model, which is built upon Stable Diffusion [45]. A frozen pretrained VQ-GAN [14] is used to obtain the latent representation with a spatial dimension of 64 \u00d7 64. We start training the diffusion model from scratch and pre-train it on the task of unconditional tactile image generation on the YCBSlide dataset [52]. After this stage, we train the conditional generative model p\u03d5 on our spatially aligned visual-tactile dataset, further fine-tuning the contrastive model end-to-end with the generation task. At inference time, given a novel location in the 3D scene, we first render the visual signals \u02c6 v and \u02c6 d from NeRF, and then estimate the touch signal \u02c6 \u03c4 of the position using the diffusion model. Latent Diffusion Gaussian Noise \u001f\u001e\u001e\u001e\u001e\u001d\u001e\u001e\u001e\u001e\u001c Depth RGB Est. Touch NeRF { Figure 4. Touch estimation. We estimate the tactile signal for a given touch sensor pose (R, t). To do this, we synthesize a viewpoint from the NeRF, along with a depth map. We use conditional latent diffusion to predict the tactile signal from these inputs. 4. A 3D Visual-Tactile Dataset In the following, we show the details of the data collection process and statistics of our dataset. 4.1. Data Collection Procedure The data collection procedure is divided into two stages. First, we collect multiple views from the scene, capturing enough frames around the areas we plan to touch. During this stage, we collect approximately 500 frames. Next, we collect synchronized visual and touch data, maximizing the geometry and texture being touched. We then estimate the camera location of the vision frames collected in the previous two stages using off-the-shelf mapping tools [47]. After estimating the camera poses for the vision frames, the touch measurements\u2019 poses can be derived by using the mount calibration matrix. More details about the pose estimation procedure can be found in the supplement. Finally, we associate each touch sensor with a color image by translating the sensor poses upwards by 0.4 meters and querying the NeRF with such poses. The field of view we use when querying the NeRF is 50\u25e6. This provides us with approximately 1,500 temporally aligned vision-touch image pairs per scene. Note that this collection procedure is scalable since it does not require specific expertise or equipment and generates abundant scene-level samples. 4.2. Dataset Statistics We collect our data in 13 ordinary scenes including two offices, a workroom, a conference room, a corridor, a tabletop, a corridor, a lounge, a room with various clothes and four outdoor scenes with interesting materials. Typically, we collect 1k to 2k tactile probes in each scene, resulting in a total of 19.3k image pairs in the dataset. Some representative samples from the collected dataset are shown in Fig. 5. Our data includes a large variety of geometry (edges, surfaces, corners, etc.) and texture (plastic, clothes, snow, wood, etc.) of different materials in the scene. During capturing process, the collector will try to 4 \fFigure 5. Representative examples from the captured dataset. Our dataset is obtained from nine everyday scenes, such as offices, classrooms, and kitchens. We show three such scenes in the figure above, together with samples of spatially aligned visual and tactile data. 
In each scene, 1k to 2k tactile probes were collected, resulting in a total of 19.3k image pairs. The data encompasses diverse geometries (edges, surfaces, corners, etc.) and textures (plastic, clothes, snow, wood, etc.) of various materials. The collector systematically probed different objects, covering areas with distinct geometry and texture using different sensor poses. thoroughly probe various objects and cover the interesting areas with more distinguishable geometry and texture with different sensor poses. To the best of our knowledge, our dataset is the first dataset that captures full, scene-scale spatially aligned vision-touch image pairs. We provide more details about the dataset in the supplement. 5. Experiments Leveraging the spatially aligned image and touch pairs from our dataset, we first conduct experiments on dense touch estimation. We then show the effectiveness of both the aligned data pairs and the synthesized touch signals by conducting tactile localization and material classification as two downstream tasks. 5.1. Implementation Details NeRF. We use the Nerfacto method from Nerfstudio [53]. For each scene, we utilize approximately 2,000 images as training set, which thoroughly cover the scene from various view points. We train the network with a base learning rate of 1 \u00d7 10\u22122 using Adam [30] optimizer for 200,000 steps on a single NVIDIA RTX 2080 Ti GPU to achieve optimal performance. Visual-tactile contrastive model. Following prior works [27, 57], we leverage contrastive learning methods to train a ResNet-50 [21] as visual encoder. The visual and tactile encoders share the same architecture but have different weights. We encode visual and tactile data into latent vectors in the resulting shared representation space. We set the dimension of the latent vectors to 32. Similar to CLIP [43], the model is trained on InfoNCE loss obtained from the pairwise dot products of the latent vectors. We train the model for 20 epochs by Adam [30] optimizer with a learning rate of 10\u22124 and batch size of 256 on 4 NVIDIA RTX 2080 Ti GPUs. Visual-tactile generative model. Our implementation of the diffusion model closely follows Stable Diffusion [46], with the difference that we use a ResNet-50 to generate the visual encoding from RGB-D images for conditioning. Specifically, we also add the RGB-D images rendered from the tactile sensors\u2019 poses into the conditioning, which we refer to in Sec. 5.2 as multiscale conditioning. The model is optimized for 30 epochs by Adam [30] optimizer with a base learning rate of 10\u22125. The learning rate is scaled by gpu number \u00d7 batch size. We train the model with batch size of 48 on 4 NVIDIA A40 GPUs. At inference time, the model conducts 200 steps of denoising process with a 7.5 guidance scale. Following prior cross-modal synthesis work [44], we use reranking to improve the prediction quality. We obtain 16 samples from the diffusion model for every instance and re-rank the samples with our pretrained contrastive model. The sample with highest similarity is the final prediction. 5.2. Dense Touch Estimation Experimental setup. We now evaluate the diffusion model\u2019s ability to generate touch images. To reduce overlap between the training and test set, we first split the frames into sequences temporally (following previous work [56]). We split them into sequences of 50 touch samples, then divide these sequences into train/validation/test with a ratio of 8/1/1. 
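As a concrete illustration of the visual-tactile contrastive model described in the implementation details above, the snippet below sketches a CLIP-style InfoNCE objective over a batch of paired visual and tactile crops. It is a minimal sketch rather than the training code: the ResNet-50 encoders are plain torchvision backbones with a 32-dimensional head, the temperature value is a placeholder, and the L2 normalization is an assumption on top of the pairwise dot products mentioned in the text.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def make_encoder(embed_dim: int = 32) -> torch.nn.Module:
    """ResNet-50 backbone with a linear head projecting to a 32-d latent."""
    net = resnet50(weights=None)
    net.fc = torch.nn.Linear(net.fc.in_features, embed_dim)
    return net

def infonce_loss(vis_emb, tac_emb, temperature: float = 0.07):
    """Symmetric InfoNCE over the pairwise similarities of a batch.

    vis_emb, tac_emb: (B, D) embeddings of corresponding image/touch pairs;
    matching indices are positives, all other pairs in the batch are negatives.
    """
    vis_emb = F.normalize(vis_emb, dim=-1)
    tac_emb = F.normalize(tac_emb, dim=-1)
    logits = vis_emb @ tac_emb.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage sketch: one optimization step on a batch of paired crops.
vis_enc, tac_enc = make_encoder(), make_encoder()
images = torch.randn(8, 3, 224, 224)    # visual crops rendered around the touch
touches = torch.randn(8, 3, 224, 224)   # vision-based tactile images
loss = infonce_loss(vis_enc(images), tac_enc(touches))
loss.backward()
```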
We evaluate the generated samples on Frechet Inception Distance (FID), a standard evaluation metric for cross-modal generation [56]. We also include Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM), though we note that these metrics are highly sensitive to spatial position of the generated content, and can be optimized by models that minimize simple pixelwise losses [22]. We also include CVTP metric proposed by prior work [57], which measures the similarity between visual and tactile embeddings of a contrastive model, analogous to 5 \fedge Condition VisGel Condition G.T. Ours L1 Ours G.T. L1 VisGel brick rock chair sofa desk wall surface desk carpet Figure 6. Qualitative touch estimation results. Each model is conditioned on the RGB image and depth map rendered from the NeRF (left). The white box indicates the tactile sensor\u2019s approximate field of view (which is much smaller than the full conditional image). The G.T. column shows the ground truth touch images measured from a DIGIT sensor. L1 and VisGel often generate blurry textures and inaccurate geometry. By contrast, our model better captures the features of the tactile image, e.g., the rock\u2019s microgeometry and complex textures and shapes of furniture. The last row shows two failure cases of our model. In both examples, our model generates a touch image that is geometrically misaligned with the ground truth. All of the examples shown here are at least 10cm away from any training sample. CLIP [43] score. We compare against two baselines: VisGel, the approach from Li et. [35], which trains a GAN for touch generation, and L1, a model with the same architecture of VisGel but trained to minimize an L1 loss in pixel space. Results. As is shown in Table 2, our approach performs much better on the high-level metrics, with up to 4x lower FID and 80x higher CVTP. This indicates that our proposed diffusion model captures the distribution and characteristics of the real tactile data more effectively. On the low-level metrics (PSNR and SSIM), all methods are comparable. In particular, the L1 model slightly outperforms the other methods since the loss it is trained on is highly correlated with low-level, pixel-wise metrics. Fig. 6 qualitatively compares samples from the different models. Indeed, our generated samples exhibit enhanced details in micro-geometry of fabrics and richer textures, including snow, wood and carpeting. However, all methods fail on fine details that are barely visible in the image, such as the tree bark. Ablation study. We evaluate the importance of the main components of our proposed touch generation approach (Table 3). Removing the conditioning on the RGB image results in the most prominent performance drop. This is expected since RGB image uniquely determines the fineModel PSNR \u2191 SSIM \u2191 FID \u2193 CVTP \u2191 L1 24.34 0.82 97.05 0.01 VisGel [35] 23.66 0.81 130.22 0.03 Ours 22.84 0.72 28.97 0.80 Table 2. Quantitative results on touch estimation for novel views. While comparable on low-level metrics with the baselines, our approach captures the characteristics of the real tactile data more effectively, resulting in a lower FID score. grained details of a tactile image. Removing depth image or contrastive pretraining has small effect on CVTP but results in a drop on FID. Contrastive re-ranking largely improves CVTP, indicating the necessity of obtaining multiple samples from the diffusion model. We also find that multiscale conditioning provide a small benefit on FID and CVTP. 5.3. 
Downstream Task I: Tactile Localization To help understand the quality of the captured TaRFs, we evaluate the performance of the contrastive model (used for conditioning our diffusion model) on the task of tactile localization. Given a tactile signal, our goal is to find the corresponding regions in a 2D image or in a 3D scene that are associated with it, i.e., we ask the question: what part of this image/scene feel like this? We perform the following 6 \fQuery Heatmap Query Query Heatmap Heatmap Query Heatmap Figure 7. Tactile localization heatmaps. Given a tactile query image, the heatmap shows the image patches with a higher affinity to this tactile signal, as measured by a contrastive model trained on our dataset. We use a sliding window and compare each extracted patch with the touch signal. In each case, the center patch is the true position. Our model successfully captures the correlation between the two signals. This enables it to localize a variety of touch signals, including fine-grained geometry, e.g., a cable or a keyboard, various types of corners and edges, and large uniform regions, such as a clothing. This ability enables our diffusion model to effectively propagate sparse touch samples to other visually and structurally similar regions of the scene. Model variation PSNR \u2191SSIM \u2191FID \u2193CVTP \u2191 Full 22.84 0.72 28.97 0.80 No RGB conditioning 22.13 0.70 34.31 0.76 No depth conditioning 22.57 0.71 33.16 0.80 No contrastive pretraining 22.82 0.71 32.98 0.79 No re-ranking 22.92 0.72 29.46 0.61 No multiscale 23.19 0.72 30.89 0.77 Table 3. Ablation study. Since the fine-grained details of touch images can be determined from a RGB image, removing conditioning on the latter results in the largest performance drops. Reranking has notable impact on CVTP, indicating the necessity of obtaining multiple samples from the diffusion model. evaluations on the test set of our dataset. Note that we run no task-specific training. 2D Localization. To determine which part of an image are associated with a given tactile measurement, we follow the same setup of SSVTP [28]. We first split the image into patches and compute their embedding. Then, we generate the tactile embedding of the input touch image. Finally, we compute the pairwise similarities between the tactile and visual embeddings, which we plot as a heatmap. As we can see in Fig. 7, our constrastive encoder can successfully capture the correlations between the visual and tactile data. For instance, the tactile embeddings of edges are associated to edges of similar shape in the visual image. Note that the majority of tactile embeddings are highly ambiguous: all edges with a similar geometry feel the same. 3D Localization. In 3D, the association of an image to tactile measurements becomes less ambiguous. Indeed, since tactile-visual samples are rotation-dependent, objects with similar shapes but different orientations will generate different tactile measurements. Lifting the task to 3D still does not remove all ambiguities (for example, each side of a rectangular table cannot be precisely localized). Nonetheless, we believe it to be a good fit for a quantitative evaluation since it\u2019s rare for two ambiguous parts of the scene to be touched with exactly the same orientation. We use the following experimental setup for 3D localization. Given a tactile image as a query, we compute its distance in embedding space to all visual test images from the same scene. Note that all test images are associated with a 3D location. 
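The 2D localization procedure just described (sliding-window patch embeddings compared against a tactile query) reduces to a few lines once the contrastive encoders are trained. Below is a minimal sketch; `vis_enc` and `tac_enc` stand for the trained visual and tactile encoders, and the patch size, stride, and 224-pixel resize are illustrative choices rather than values from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def touch_heatmap(image, touch, vis_enc, tac_enc, patch=64, stride=32):
    """Affinity between a tactile query and sliding-window patches of an image.

    image: (3, H, W) tensor, touch: (3, h, w) tensor.
    Returns a (rows, cols) grid of cosine similarities, one per patch.
    """
    q = F.normalize(tac_enc(F.interpolate(touch.unsqueeze(0), size=224)), dim=-1)
    # unfold extracts every patch x patch window with the given stride
    windows = image.unsqueeze(0).unfold(2, patch, stride).unfold(3, patch, stride)
    rows, cols = windows.shape[2], windows.shape[3]
    windows = windows.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, patch, patch)
    windows = F.interpolate(windows, size=224)            # encoder input size
    emb = F.normalize(vis_enc(windows), dim=-1)           # (rows * cols, D)
    return (emb @ q.t()).reshape(rows, cols)              # similarity heatmap
```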
We define as ground-truth correspondences all test images at a distance of at most r from the 3D location of the test sample. We vary r to account for local ambiguities. As typical in the retrieval literature, we benchmark the performance with metric mean Average Precision (mAP). We consider three baselines: (1) chance, which randomly selects corresponding samples; (2) real, which uses the contrastive model trained on our dataset; and (3) real + estimated, which trains the contrastive model on both dataset samples and a set of synthetic samples generated via the scenes\u2019 NeRF and our touch generation model. Specifically, we render a new image and corresponding touch by interpolating the position of two consecutive frames in the training dataset. This results in a training dataset for the contrastive model that is twice as large. 7 \fr(m) Dataset 0.001 0.005 0.01 0.05 0.1 Chance 3.55 6.82 10.25 18.26 21.33 Real 12.10 22.93 32.10 50.30 57.15 Real + Est. 14.92 26.69 36.17 53.62 60.61 Table 4. Quantitative results on 3D tactile localization. We evaluate using mean Average Precision (mAP) as a metric. Training the contrastive model on our dataset of visually aligned real samples together with estimated samples from new locations in the scene results in the highest performance. The results, presented in Table 4, demonstrate the performance benefit of employing both real and synthetic tactile pairs. Combining synthetic tactile images with the original pairs achieves highest performance on all distance thresholds. Overall, this indicates that touch measurements from novel views are not only qualitatively accurate, but also beneficial for this downstream task. 5.4. Downstream Task II: Material Classification We investigate the efficacy of our visual-tactile dataset for understanding material properties, focusing on the task of material classification. We follow the formulation by Yang et al. [56], which consists of three subtasks: (i) material classification, requiring the distinction of materials among 20 possible classes; (ii) softness classification, a binary problem dividing materials as either hard or soft; and (iii) hardness classification, which requires the classification of materials as either rough or smooth. We follow the same experimental procedure of [56]: we pretrain a contrastive model on a dataset and perform linear probing on the sub-tasks\u2019 training set. Our experiments only vary the pretraining dataset, leaving all architectural choices and hyperparameters the same. We compare against four baselines. A random classifier (chance); the ObjectFolder 2.0 dataset [17]; the VisGel dataset [35]; and the Touch and Go dataset [56]. Note that the touch sensor used in the test data (GelSight) differs from the one used in our dataset (DIGIT). Therefore, we use for pretraining a combination of our dataset and Touch and Go. To ensure a fair comparison, we also compare to the combination of each dataset and Touch and Go. The findings from this evaluation, as shown in Table 5, suggest that our data improves the effectiveness of the contrastive pretraining objective, even though our data is from a different distribution. Moreover, we find that adding estimated touch probes for pretraining results in a higher performance on all the three tasks, especially the smoothness classification. This indicates that not only does our dataset covers a wide range of materials but also our diffusion model captures the distinguishable and useful patterns of different materials. 
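The linear-probing protocol described above for the material subtasks follows the standard recipe of training a single linear layer on frozen features. Below is a minimal sketch, assuming a frozen pretrained visual encoder `vis_enc` and a labeled loader of image crops; all names and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def linear_probe(vis_enc, train_loader, num_classes=20, epochs=10, lr=1e-3):
    """Train one linear layer on frozen visual features (linear probing)."""
    vis_enc.eval()                                   # backbone stays frozen
    with torch.no_grad():
        feat_dim = vis_enc(torch.randn(1, 3, 224, 224)).shape[-1]
    head = torch.nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in train_loader:          # e.g. 20 material classes
            with torch.no_grad():
                feats = vis_enc(images)              # frozen features
            loss = F.cross_entropy(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```

The same probe is reused for the binary subtasks by setting `num_classes=2`.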
Dataset Material Hard/ Soft Rough/ Smooth Chance 18.6 66.1 56.3 ObjectFolder 2.0 [17] 36.2 72.0 69.0 VisGel [35] 39.1 69.4 70.4 Touch and Go [56] 54.7 77.3 79.4 + ObjectFolder 2.0 [17] 54.6 87.3 84.8 + VisGel [35] 53.1 86.7 83.6 + Ours\u2217(Real) 57.6 88.4 81.7 + Ours\u2217(Real + Estimated) 59.0 88.7 86.1 Table 5. Material classification. We show the downstream material recognition accuracy of models pre-trained on different datasets. The final rows show the performance when combining different datasets with Touch and Go [56]. \u2217The task-specific training and testing datasets for this task are collected with a GelSight sensor. We note that our data comes from a different distribution, since it is collected with a DIGIT sensor [32]. 6. Conclusion In this work, we present the TaRF, a scene representation that brings vision and touch into a shared 3D space. This representation enables the generation of touch probes for novel scene locations. To build this representation, we collect the largest dataset of spatially aligned vision and touch probes.We study the utility of both the representation and the dataset in a series of qualitative and quantitative experiments and on two downstream tasks: 3D touch localization and material recognition. Overall, our work makes the first step towards giving current scene representation techniques an understanding of not only how things look, but also how they feel. This capability could be critical in several applications ranging from robotics to the creation of virtual worlds that look and feel like the real world. Limitations. Since the touch sensor is based on a highly zoomed-in camera, small (centimeter-scale) errors in SfM or visual-tactile registration can lead to misalignments of several pixels between the views of the NeRF and the touch samples, which can be seen in our TaRFs. Another limitation of the proposed representation is the assumption that the scene\u2019s coarse-scale structure does not change when it is touched, an assumption that may be violated for some inelastic surfaces. Acknowledgements. We thank Jeongsoo Park, Ayush Shrivastava, Daniel Geng, Ziyang Chen, Zihao Wei, Zixuan Pan, Chao Feng, Chris Rockwell, Gaurav Kaul and the reviewers for the valuable discussion and feedback. This work was supported by an NSF CAREER Award #2339071, a Sony Research Award, the DARPA Machine Common Sense program, and ONR MURI award N00014-21-1-2801. 8"
16
+ }
title_10K/test_title_short_2405.04674v1.json ADDED
The diff for this file is too large to render. See raw diff
 
title_10K/test_title_short_2405.04682v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04682v1",
3
+ "title": "TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation",
4
+ "abstract": "Recent advances in diffusion-based generative modeling have led to the\ndevelopment of text-to-video (T2V) models that can generate high-quality videos\nconditioned on a text prompt. Most of these T2V models often produce\nsingle-scene video clips that depict an entity performing a particular action\n(e.g., `a red panda climbing a tree'). However, it is pertinent to generate\nmulti-scene videos since they are ubiquitous in the real-world (e.g., `a red\npanda climbing a tree' followed by `the red panda sleeps on the top of the\ntree'). To generate multi-scene videos from the pretrained T2V model, we\nintroduce Time-Aligned Captions (TALC) framework. Specifically, we enhance the\ntext-conditioning mechanism in the T2V architecture to recognize the temporal\nalignment between the video scenes and scene descriptions. For instance, we\ncondition the visual features of the earlier and later scenes of the generated\nvideo with the representations of the first scene description (e.g., `a red\npanda climbing a tree') and second scene description (e.g., `the red panda\nsleeps on the top of the tree'), respectively. As a result, we show that the\nT2V model can generate multi-scene videos that adhere to the multi-scene text\ndescriptions and be visually consistent (e.g., entity and background). Further,\nwe finetune the pretrained T2V model with multi-scene video-text data using the\nTALC framework. We show that the TALC-finetuned model outperforms the baseline\nmethods by 15.5 points in the overall score, which averages visual consistency\nand text adherence using human evaluation. The project website is\nhttps://talc-mst2v.github.io/.",
5
+ "authors": "Hritik Bansal, Yonatan Bitton, Michal Yarom, Idan Szpektor, Aditya Grover, Kai-Wei Chang",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation",
17
+ "main_content": "Introduction The ability to generate videos that simulate the physical world has been a long-standing goal of artificial intelligence [1, 2, 3, 4]. In this regard, text-to-video (T2V) models have seen rapid advancements by pretraining on internet-scale datasets of images, videos, and texts [5, 6]. Previous works [7, 8, 9, 10, 11, 12] primarily focus on training conditional denoising diffusion probabilistic models [13] on paired video-text data [14, 15]. After training, these models allow for video generation by sampling from the trained diffusion model, conditioned on a text prompt. However, most of the open-models such as ModelScope[10] VideoCrafter [16, 17], OpenSora [18] are trained with single-scene video-text dataset [14, 19], which is widely available and easy to acquire. However, real-world scenarios often require the generation of multi-scene videos from multi-scene descriptions (e.g., Scene1: \u2018A koala is napping on a tree.\u2019 Scene2: \u2018The koala eats leaves on the tree.\u2019). In such cases, the generated video should accurately depict the events in their temporal order (e.g., Scene2 \u2020 Equal Contribution. \u2217Equal Advising. Contact [email protected],[email protected]. Preprint. arXiv:2405.04682v1 [cs.CV] 7 May 2024 \fScene 1: \u201cA red panda climbing a tree\u201d Scene 2 : \u201cThe red panda sleeps on the top of the tree\u201d Text2Video (a) Merging Captions (b) Merging Videos (c) Time-Aligned Captions (TALC) Text2Video Text2Video Text2Video \u201c{Scene 1} then {scene 2}\u201d Figure 1: Multi-scene video generation methods. (a) Generating a video by merging scene 1 and scene 2 descriptions. (b) The resulting video is composed from the video generated by the description of scene 1 and the video generated by the description of scene 2. (c) In our method (TALC) the generated video is conditioned on the description of scene 1 for the first half of the video frames and on the description of scene 2 for the later video frames. follows Scene1) while maintaining visual consistency, meaning that backgrounds and entities should remain consistent across scenes. While high-performance text-to-video models such as Sora [4] might be able to generate multi-scene videos, we point out that they are closed-source models trained with massive compute resources and lack sufficient details on the model design, training protocol, and datasets. In this work, we present a complementary approach and tackle the challenge of effectively leveraging the capabilities of base T2V models for multi-scene video generation. The multi-scene text-to-video generation differs from long video synthesis where the goal is to either interpolate (few frames to many frames) [8] or create continuing patterns of the single event in the generated video [11]. Prior works [20, 9] use a transformers [21, 22] to generate video frames for a given scene autoregressively. However, it is hard for their model to generate multiple scenes reliably as the context length increases with history of text descriptions and visual tokens [23] of the previous generated videos (e.g., generating Scene 4 conditioned on the Scene1, 2, 3 videos and descriptions). Other works [24] utilize a latent diffusion model [25] to generate video frames autoregressively by conditioning on the entire history of generated videos and scene descriptions. 
However, the approach is (a) slow due to repeated sampling, (b) generates only one frame per scene description, and (c) shown to work with only limited cartoon characters [26, 27] instead of wide range of visual concepts in the real-world. In this work, our goal is to generate multi-scene videos in the end-to-end manner, using a diffusion text-to-video generative model that is capable of producing content for a wide range of visual entities and actions. As shown in Figure 1(a), the naive approach to generating a multi-scene video for the scene descriptions (T \u2032 1, T \u2032 2) would condition the T2V generative model on the merged descriptions. In this setup, the diffusion model processes the entire scene description together, and lacks any information regarding the expected temporal order of events in the generated videos. As a result, we find that this approach leads to poor text-video alignment. As shown in Figure 1(b), an alternative approach generates videos for the individual text descriptions independently and concatenates them in the raw input space along the temporal dimension. While this approach achieves good alignment between the scene description and the scene-specific video segment, the resulting video lacks visual consistency in terms of entity and background appearances. Prior work [28, 29] generates multi-scene videos by utilizing knowledge of the entity, background, and their movements from large language models [30]. However, these videos are generated independently for each scene before being merged. Moreover, these methods do not offer a way to learn from realworld multi-scene video-text data. To remedy these challenges, we propose TALC (Time-ALigned Captions), a simple and effective framework to generate consistent and faithful multi-scene videos. As shown in Figure 1(c), our approach conditions the T2V generative model with the knowledge of the temporal alignment between the parts of the multi-scene video and multi-scene descriptions. 2 \f(c) Time-Aligned Captions (TALC) (b) Merging Videos (a) Merging Captions \u201cA grizzly bear catches a \ufb01sh in a rushing river\u201d \u201cThe grizzly bear looks over its territory.\u201d \u201cA grizzly bear catches a \ufb01sh in a rushing river then the grizzly bear looks over its territory.\u201d \u201cA grizzly bear catches a \ufb01sh in a rushing river\u201d \u201cThe grizzly bear looks over its territory.\u201d Figure 2: Examples of multi-scene video generation baselines. (a) Generating video on the merged descriptions, leads to a poor text-video alignment. (b) Generating videos for the individual text descriptions and concatenate them temporally, leads to a lack of background consistency. (c) Our approach (TALC) enhances the scene-level text-video alignment and maintains background consistency. Specifically, TALC conditions the visual representations of earlier video frames on the embeddings of the earlier scene description, and likewise, it conditions the representations of later video frames on the embeddings of the later scene description in the temporal dimension. Additionally, the temporal modules in the T2V diffusion architecture allows information sharing between video frames (the first half and the second half) to maintain visual consistency. Thus, TALC enhances the scene-level textvideo alignment while providing all the scene descriptions to the diffusion model at once. Further, our TALC framework can enhance the multi-scene text-to-video generation capabilities with real-world multi-scene data (\u00a73.3). 
In our experiments, we assess the visual consistency (background and entity consistency) and multiscene script adherence of the generated videos from Modelscope [10] and Lumiere [6]. Through our automatic and human evaluation, we find that merging scene descriptions leads to high visual consistency but poor text adherence. On the other hand, we observe that merging videos independently achieves the highest text adherence while the visual consistency is compromised. Interestingly, switching to TALC strikes an effective balance between visual consistency and text adherence, outperforming the baseline methods by 11.1 points on the overall score. This score represents the average of visual consistency and text adherence scores, as determined by human evaluation. Furthermore, we construct a multi-scene text-video dataset from real-world videos and fine-tune the T2V generative model using TALC. On our human evaluation, the generated videos from the TALC-finetuned model exhibit higher text adherence than the base model in multi-scene scenarios. Specifically, it outperforms the baseline methods by 15.5 points on the overall score. In summary, our contributions are: 2 Preliminaries In this work, we focus on generating multi-scene videos from scene descriptions using a diffusionbased Text-to-Video (T2V) generative model. The initial step is to equip the generative model with the knowledge of a wide range of visual concepts and actions. This is achieved during the pretraining stage (\u00a72.1). Subsequently, we aim to utilize the base model for multi-scene text-to-video generation task, which we formalize in (\u00a72.3). In \u00a73, we propose our TALC framework and discuss collection of real-world multi-scene text-video data for finetuning the base T2V model. 3 \f2.1 Diffusion Models for Text-to-Video Generation Diffusion models [13, 31] p\u03b8(x) are a class of generative models that learn data distribution pdata(x). Due to their flexible design, we can train their class-conditional versions to learn class-conditional data distributions pdata(x|y) where y is the conditioning variable, that can take various forms such as labels from a dataset or text description accompanying in a video [32]. We assume a dataset S \u2282V \u00d7 T consisting of pairs of (Vj, Tj) where Vj \u2208RL\u00d73\u00d7H\u00d7W is a raw video consisting of 3 RGB channels, L frames, H height, W width, and Tj is a text caption. We use V and T to denote the domain of videos and text, respectively. The aim of T2V generative modeling is to learn the conditional distribution of the videos conditioned on the text pS(Vj|Tj). In this work, we consider diffusion-based generative models that learn the data distribution via iterative denoising of the input video zj \u2208RL\u00d7C\u00d7H\u2032\u00d7W \u2032. Here, zj can either represent the input video in the raw pixel space Vj [6] or it can represent the latent representation of the video zj = E(Vj) for the latent diffusion models [25] where E is an encoder network such as VAE [33]. Given zj, diffused variable z\u03c4,j = \u03b1\u03c4zj + \u03b2\u03c4\u03f5 are constructed where \u03f5 \u223cN(0, I) where \u03b1\u03c4 and \u03b2\u03c4 are sampled from the noise scheduler p\u03c4 [34] which define the noise levels the model is trained on. 
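The diffused-variable construction above, combined with the epsilon-prediction objective stated in Eq. (1) below, amounts to a training step of the following form. This is a generic sketch of denoising score matching for video latents, not ModelScope's or Lumiere's code; shapes and the discrete noise schedule are illustrative.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(f_theta, z, text_emb, alphas, betas):
    """One denoising-score-matching step on a batch of (latent) videos.

    z: (B, L, C, H, W) clean video latents, text_emb: caption embeddings h_j,
    alphas/betas: (T,) noise-schedule coefficients (alpha_tau, beta_tau).
    """
    B = z.shape[0]
    tau = torch.randint(0, alphas.shape[0], (B,))        # sample a noise level
    a = alphas[tau].view(B, 1, 1, 1, 1)
    b = betas[tau].view(B, 1, 1, 1, 1)
    eps = torch.randn_like(z)
    z_tau = a * z + b * eps                              # diffused variable
    eps_hat = f_theta(tau, z_tau, text_emb)              # denoiser prediction
    return F.mse_loss(eps_hat, eps)                      # || eps - f_theta(...) ||^2
```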
Finally, we train a denoiser network f\u03b8 [35, 36] that inputs the diffused variable z\u03c4 and embeddings of the text caption to predict the target vector y where y can be the original noise \u03f5, which minimizes the denoising score matching objective [13]: E(Vj,Tj)\u2208S,\u03c4\u223cp\u03c4 ,\u03f5\u223cN(0,I) \u0002 ||\u03f5 \u2212f\u03b8(\u03c4, z\u03c4,j, hj)||2 2 \u0003 (1) where hj = H(Tj) \u2208Rd is the embedding of the text caption Tj where H is the text embedding model [37] and d is the dimension size. 2.2 Text Conditioning Mechanism To ensure the effective textual controllability of video generation, the structure of the denoiser networks is equipped with a cross-attention mechanism [10, 8]. Specifically, it conditions the visual content z\u03c4 \u2208RL\u00d7C\u00d7H\u2032\u00d7W \u2032 on the text. To do so, we first repeat the text embeddings of the text caption rj = R(hj) \u2208RL\u00d7d where R is a function that repeats the input text embedding hj for L times in the temporal dimension. Intuitively, the repeat operation represents that the L frames of the video zj are semantically aligned with the textual description Tj or its text embedding rj. In \u00a73, we will manipulate this operation to make the model architecture aware of the video-text alignment in the multi-scene scenario. These repeated text embeddings rj are inputs to the spatial attention block as the key and value in the multi-head attention block. The cross-attention enables the intermediate visual features to capture the semantic information that facilitates an alignment between the language and vision embeddings. Formally, z\u2032 \u03c4,j = CAf\u03b8(Q = z\u03c4,j; K = rj; V = rj) (2) where CAf\u03b8 is the cross attention mechanism with Q, K, V as the query, key, and value, respectively, in the spatial blocks of the denoiser network. Additionally, z\u2032 \u03c4,j is the intermediate representation that is informed with the visual and textual content of the data. In addition to the spatial blocks, the denoiser network also consists temporal blocks that aggregate features across video frames which are useful for maintaining visual consistency in the generated video. 2.3 Multi-Scene Text-to-Video Generation In many real-world scenarios, such as movies, stories, and instructional videos [38], a video may depict multiple transitions with the same or changing entities, as well as multiple actions or events. In addition, the different video segments often share contextual information such as the background or location. These videos are considered multi-scene videos. In this work, we aim to generate multi-scene video X = {x1, x2, . . . , xn} from multi-scene descriptions Y = {y1, y2, . . . , yn} where n are the number of sentences and each sentence yj is a scene description for scene j. Additionally, the index j also defines the temporal order of events in the multi-scene script i.e., we want the events 4 \fText2Video Denoising UNet Scene 1: \u201cA red panda climbing a tree.\u201d Scene 2 : \u201cThe red panda sleeps on the top of the tree\u201d Figure 3: The architecture of Time-Aligned Captions (TALC). During the generation process of the video, the initial half of the video frames are conditioned on the embeddings of the description of scene 1 (ry1), while the subsequent video frames are conditioned on the embeddings of the description of scene 2 (ry2). described in the scene j to be depicted earlier than the events described in the scene k where k > j. 
Further, we want the parts of the entire generated video X, given by xj, to have high video-text semantic alignment with the corresponding scene description yj, also referred to as text adherence. For instance, consider a two-scene description Y = {\u2018A red panda climbs on a bamboo forest.\u2019, \u2018The red panda sleeps peacefully in the treetop.\u2019}. Here, we need the T2V generative model to synthesize the appearance of the red panda (an entity) that remains consistent throughout the generated video, also referred to as entity consistency. In addition, we will expect that the context of the multi-scene video of a forest (a background) to remain consistent, also referred to as background consistency. 3 Method 3.1 TALC: Time-Aligned Captions for Multi-Scene T2V Generation Most of the existing T2V generative models [10, 16, 6] are trained with large-scale short video-text datasets (10 seconds 30 seconds) such as WebVid-10M [14]. Here, each instance of the dataset consists of a video and a human-written video description. These videos either lack the depiction of multiple events, or the video descriptions do not cover the broad set of events in the video, instead focusing on the major event shown. As a result, the pretrained T2V generative models only synthesize single video scenes depicting individual events. We introduce TALC, a novel and effective framework to generate multi-scene videos from diffusion T2V generative models based on the scene descriptions. Our approach focuses on the role of text conditioning mechanism that is widely used in the modern T2V generative models (\u00a72.2). Specifically, we take inspiration from the fact that the parts of the generated video xj should depict the events described in the scene description yj. To achieve this, we ensure that the representations for the part of the generated video aggregates language features from the scene description yj. Consider that we want to generate a multi-scene video X \u2208RL\u00d73\u00d7H\u00d7W from the scene descriptions yj \u2208Y , using a T2V generative model f\u03b8. Furthermore, we assume that individual video segments xj are allocated L/n frames within the entire video X. Let zX = [zx1; zx2; . . . ; zxn] \u2208RL\u00d7C\u00d7H\u2032\u00d7W \u2032 represent the representation for the entire video X, and zxj \u2208R(L/n)\u00d7C\u00d7H\u2032\u00d7W \u2032 for the jth part of the video that are concatenated in the temporal dimension. In addition, consider rY = {ry1, . . . , ryn} be the set of text embeddings for the multi-scene description Y and yj be an individual scene description. In the TALC framework, the Eq. 2 is changed to: z\u2032 \u03c4,xj = CAf\u03b8(Q = z\u03c4,xj, K = ryj, V = ryj) (3) z\u2032 \u03c4,X = [z\u2032 x1; z\u2032 x2; . . . ; z\u2032 xn] (4) Here, \u03c4 represents the timestamp in the diffusion modeling setup, which is applied during training as well as inference. We illustrate the framework in Figure 3. While TALC aims to equip the generative model with the ability to depict all the events in the multi-scene descriptions, the visual consistency 5 \fis ensured by the temporal modules (attentions and convolution blocks) in the denoiser network. By design, our approach can be applied to the pretrained T2V model during inference. 3.2 Baselines Here, we describe the baseline methods that could be used to generate videos for the multi-scene descriptions from a given diffusion text-to-video generative model. 
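Before turning to the baselines, Eqs. (3) and (4) can be contrasted with the standard repetition of Eq. (2) in a short sketch. For brevity each scene description is treated as a single pooled embedding (in practice each r_yj is a sequence of token embeddings), frames are split evenly across scenes, and the cross-attention itself is left abstract since it is unchanged; this illustrates the conditioning layout, not the released implementation.

```python
import torch

def repeat_single_caption(h, L):
    """Eq. (2)-style conditioning: one caption embedding repeated for all L frames."""
    # h: (d,) embedding of the merged caption -> (L, d)
    return h.unsqueeze(0).expand(L, -1)

def time_aligned_captions(scene_embs, L):
    """TALC conditioning (Eqs. 3-4): frame t attends to the embedding of its own scene.

    scene_embs: (n, d) embeddings r_{y_1..y_n}; returns (L, d) with the first
    L/n rows holding r_{y_1}, the next L/n rows holding r_{y_2}, and so on.
    """
    n, d = scene_embs.shape
    assert L % n == 0, "frames are split evenly across scenes in this sketch"
    return scene_embs.repeat_interleave(L // n, dim=0)

# Usage: per-frame keys/values for cross-attention in the denoising U-Net.
L, d = 16, 1024
scene_embs = torch.randn(2, d)             # e.g. scene 1 and scene 2 descriptions
kv = time_aligned_captions(scene_embs, L)  # frames 0-7 see scene 1, frames 8-15 scene 2
```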
3.2.1 Merging Captions In this setup, we create a single caption by merging all the multi-scene descriptions. Specifically, the multi-scene descriptions Y = {y1, y2, . . . , yn} can be written as a single prompt \u2018P = y1.Then, y2. . . . Then, yn.\u2019 For instance, the two-scene description Y = {\u2018A red panda climbs on a bamboo forest.\u2019, \u2018The red panda sleeps peacefully in the treetop.\u2019} will change to P = \u2018A red panda climbs on a bamboo forest. Then, the red panda sleeps peacefully in the treetop.\u2019 Subsequently, we generate a video from the T2V model f\u03b8 by conditioning it on P. While this approach mentions the temporal sequence of the events in a single prompt, the T2V model does not understand the temporal boundaries between the two events. Specifically, the Eq. 2 suggests that the visual features for all the video frames will aggregate information from the entire multi-scene description, at once, without any knowledge about the alignment between the scene description and its expected appearance in the generated video. 3.2.2 Merging Videos In this setup, we generate videos for each scene description individually and merge them in the raw input space. Formally, the individual scene description yi conditions the T2V model f\u03b8 to generate the parts of the multi-video xi. Finally, we stitch the individual videos together to synthesize the entire video X = x1, x2, . . . , xn. In this process, the parts of the multi-scene video closely adhere to the scene descriptions, leading to high text fidelity. However, since the generated videos do not have access to all the multi-scene descriptions (e.g., the video for Scene 2 is not informed about Scene 1), the visual consistency across the entire video is quite poor. 3.3 Multi-Scene Video-Text Data Generation 0:00 0:08 0:12 0:17 0:22 Seconds Gemini Multi-Image Captions The lady gets the dried/smoked prawns ready for use She then adds the dried crayfish to the pot Next, she includes tomato puree for that rich, tangy flavor Salt is added to taste, and everything is stirred together PyScene Scene Cuts Caption A woman in a colorful scarf is showing how to make a stew Figure 4: Our approach for generating time-aligned video captions. The process begins with PyScene cuts identifying the boundaries of distinct scenes within a video. Keyframes are then selected from the median of each scene. These frames are processed collectively through the Gemini model to produce multi-image captions that maintain narrative continuity by contextualizing each scene within the video\u2019s overall sequence. While our approach generates better multi-scene videos, the text adherence capabilities of the pretrained T2V generative model are limited. This is due to the lack of multi-scene video-text data during its pretraining. Unlike single video-text datasets, the multi-scene video-text datasets are not widely available and are hard to curate for model training. This is attributed to the fact that high-quality caption generation requires a lot of human labor which is time-consuming and expensive. Prior work such as ActivityNet [39] has curated human captions for specific video scenes 6 \fdepicting useful actions in long videos. However, the video scenes are either overlapping or have a large temporal gap between them that will be harmful for natural and smooth variations between the generated multi-scene videos. 
Hence, the absence of high-quality captions for continuous video scenes in the dataset makes unsuitable for T2V generative training. To this end, we aim to create a real-world multi-scene video-text dataset to allow further training of the pretrained T2V models. Specifically, we leverage the capability of the multimodal foundation model, Gemini-Pro-Vision [40], to generate high-quality synthetic data for enhanced video-text training [41]. Formally, we start with a video-text dataset M = A \u00d7 B consisting of pairs of (Ai, Bi) where Ai is a raw video and Bi is the corresponding video description from the dataset. Subsequently, we utilize PySceneDetect library 1 to generate continuous video scenes from Ai = {Ai,1, Ai,2, . . . , Ai,m} where m is the number of scene cuts in the video. A similar approach was used in a prior work [12] to detect scene changes in the video data. Then, we sample the middle video frame Fi,j as a representative of the semantic content in the video scene Ai,j. Finally, we input all the video frames Fi = {Fi,1, . . . , Fi,m} for a single video Ai and the entire video caption Bi to a large multimodal model [40]. Specifically, the model is prompted to generate high-quality captions for each of the frames Fi,j such they form a coherent narrative guided by the common caption Bi. We provide the prompt provided to the multimodal model in Appendix \u00a7A. In Figure 4 we provide an instance for the multi-scene video-text data generation. Datasets. To construct a multi-scene video-text dataset, we utilize existing dataset that include natural (real) videos and associated high-quality human-written captions that summarize the entire video. Specifically, we choose MSR-VTT [42] and VaTeX [43]. Most of the videos in MSR-VTT are 10-30 seconds long while VaTeX consists 10 seconds long videos. In addition, each video in MSR-VTT and VaTex consists 20 captions and 10 captions, respectively, out of which one is selected at random for multi-scene data generation. As described above, a single video is cut into multiple video segments using Pyscene library. In our experiments, we retain the first four video segments and discard any additional segments if the library generates more than four. Since the accuracy of the multi-scene captioning and the computational demands during finetuning are influenced by the number of scenes, we opt to limit the scene count to four for our experiments. However, future work could employ similar methodologies to scale the number of scenes, given more computing power and advanced multi-scene captioning models. We provide the data statistics for the final multi-scene data in Appendix \u00a7G. 4 Evaluation In this section, we describe the evaluation scheme for videos generated from multi-scene text descriptions. First, we describe the evaluation metrics that we aim to assess in this work (\u00a74.1). Then, we generate multi-scene descriptions for a diverse set of tasks (\u00a74.2). Finally, we present the details for automatic and human evaluation of the generated videos (\u00a74.3). 4.1 Metrics The ability to assess the quality of the generated multi-scene videos is a challenging task itself. As humans, we can judge the multi-scene videos across diverse perceptual dimensions [44] that the existing automatic methods often fails to capture [45]. Following [28], we focus on the visual consistency of the generated video, text adherence capabilities of the T2V models, and video quality of the video. 
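The scene-segmentation and keyframe-selection steps of the data pipeline in Section 3.3 can be sketched with PySceneDetect and OpenCV as below. This assumes the PySceneDetect 0.6+ `detect` API and leaves the captioning call to the multimodal model as a commented placeholder (`caption_frames_with_mllm` is not a real API); it illustrates the shape of the pipeline, not the authors' script.

```python
import cv2
from scenedetect import detect, ContentDetector

def middle_frames(video_path, max_scenes=4):
    """Split a video into scenes and return the middle frame of each scene."""
    scenes = detect(video_path, ContentDetector())[:max_scenes]   # [(start, end), ...]
    cap = cv2.VideoCapture(video_path)
    frames = []
    for start, end in scenes:
        mid = (start.get_frames() + end.get_frames()) // 2        # median frame index
        cap.set(cv2.CAP_PROP_POS_FRAMES, mid)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

# The extracted frames, together with the original video caption, are then sent
# to a multimodal model (Gemini-Pro-Vision in the paper) to obtain one
# time-aligned caption per scene, e.g.:
# scene_captions = caption_frames_with_mllm(frames, video_caption)  # placeholder
```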
Here, we present the metrics with the aspects that they intend to assess in the generated video for multi-scene text description. Visual Consistency. This metric aims to assess the (entity or background) consistency between the frames of the multi-scene videos. Here, the entity consistency aims to test whether the entities in the multi-scene video are consistent across the video frames. For instance, the appearance of an animal should not change without a change described in the text description. In addition, the background consistency aims to test whether the background of the multi-scene video remains consistent across the video frames. For instance, the room should not change without a change description in the text. 1https://github.com/Breakthrough/PySceneDetect 7 \fText Adherence. This metric aims to test whether the generated video adheres to the multi-scene text description. For instance, the events and actions described in the text script should be presented in the video accurately, and in the correct temporal order. In our experiments, we compute the visual consistency and text adherence with the automatic and human evaluators. Further, we compute the overall score, which is the average of the visual consistency and text adherence scores. In addition, we also assess the visual quality of the generated videos using human evaluation to understand whether the video contains any flimsy frames, shaky images, or undesirable artifacts (Table 1. 4.2 Task Prompts Here, we curate a set of task prompts for diverse scenarios, aiming to holistically assess the quality of the generated videos. Single character in multiple visual contexts (S1). In this scenario, we instruct an LLM, GPT-4, to create a coherent script consisting of four scenes. Each scene features a specific animal character performing diverse activities in every scene. This task assesses the capability of the T2V model to generate consistent appearance of the entity and its background while adhering to the different actions (or events) described in the multi-scene text script. For instance, a generated script could be \u2018Scene 1: A red panda is climbing a tree. Scene 2: The red panda eats the leaves on the tree. Scene 3: The red panda lies down on the branch of the tree. Scene 4: The red panda sleeps on the branch\u2019. In total, we generate 100 prompts in this scenario. Different characters in a specific visual context (S2). In this scenario, we instruct a language model, GPT-4, to create a coherent script consisting of four scenes. Each scene features different animal characters engaging in the same activity in every scene [20]. This task assesses the capability of the T2V model to generate consistent appearance of the background while adhering to the appearance of the different characters in the multi-scene text script. For instance, a generated script could be \u2018Scene 1: A cat leaps onto countertop. Scene 2: A dog leaps onto the same countertop. Scene 3: A rabbit leaps onto the same countertop. Scene 4: A raccoon leaps onto the same countertop\u2019. In total, we generate 100 prompts in this scenario. Multi-scene captions from real videos (S3). Here, we aim to assess the ability of the model to generate multi-scene videos for open-ended prompts that are derived from real-world videos. This task also assesses the ability of the T2V model to generate consistent appearances of the entity and its background while adhering to multi-scene descriptions. 
Specifically, we use our multi-scene video-text data generation pipeline (\u00a73.3) to create such prompts for the real videos from the test splits of the video-text datasets. For example, a multi-scene text script could be \u2018Scene 1: A beauty vlogger introduces her skincare routine. Scene 2: She applies a serum to her face, smoothing it in\u2019. We present a sample of the various task prompts in the Appendix \u00a7B. In total, we generate 100 prompts in this scenario. 4.3 Evaluator In this work, we devise an automatic evaluation framework and perform human evaluation to assess the quality of the multi-scene generated videos. Automatic Evaluation. Here, we utilize the capability of a large multimodal model, GPT-4-Vision [46], to reason over multiple image sequences. First, we sample four video frames, uniformly, from each scene in the generated video (e.g., 8 videos frames for two-scene video). Then, we prompt the multimodal model with the temporal sequence of video frames from different scenes and the multi-scene text description. Specifically, we instruct the multimodal model to decide the quality of the generated video across various metrics including entity consistency, background consistency, and text adherence. For each metric, the multimodal model assigns one of three possible response {yes = 1, partial = 0.5, no = 0}. For instance, yes for the entity consistency metric implies that the video frames sampled from the generated video have consistent appearance of the entity described in the multi-scene script. In this work, we do not utilize any existing video-text alignment models [47, 41] for evaluating text adherence as they are trained on single-scene video-text datasets. We present the automatic evaluation prompt in Appendix \u00a7C. 8 \fHuman Evaluation. We also conduct a human evaluation to assess the multi-scene generated videos along the dimensions of visual consistency, text adherence, and visual quality. Specifically, we ask the annotators from Amazon Mechanical Turk (AMT) to choose one of three options for each metric {yes, partial, no}, similar to the automatic evaluation. In addition, we choose the annotators that pass a preliminary qualification exam. We present the screenshot of the UI in Appendix \u00a7D. 4.4 Evaluation Setup Since merging captions (\u00a73.2) and TALC (\u00a73.1) methods input the entire multi-scene text description at once, the quality of the video generated by these methods is influenced by the number of scenes described in the text script. Hence, we calculate the performance of the baselines and TALC by averaging the scores assigned to videos generated for two, three, and four scenes. Additionally, we report on visual consistency by averaging the performance across the entity and background consistency metrics. Here, the entity consistency scores are calculated for the task prompts S1 and S3 (since S2 aims to change the characters across scenes), and the background consistency and text adherence scores are computed for all the task prompts. We also evaluate the impact of TALC-based finetuning on the single scene generation in Appendix \u00a7I. 5 Experiments 5.1 Text-to-Video Generative Models In this work, we utilize ModelScope [10] and Lumiere [6] T2V models for multi-scene video generation. Here, ModelScope is an open-source T2V model with 1.7 billion parameters including the video encoder, text encoder, and denoising U-net network. 
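For reference, the evaluation steps of \u00a74.3 and \u00a74.4 (uniform frame sampling, mapping of {yes, partial, no} judgments to scores, and the averaging used for reporting) can be summarized in a short sketch; the GPT-4-Vision call is a placeholder and the array shapes are illustrative.
```python
# Sketch of the automatic evaluation and the score aggregation:
# sample four frames per scene, map judgments to {1, 0.5, 0}, then average.
import numpy as np

SCORE = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def sample_frames(scene_frames: np.ndarray, k: int = 4) -> np.ndarray:
    """Uniformly pick k frames from one scene (array of shape [T, H, W, C])."""
    idx = np.linspace(0, len(scene_frames) - 1, k).round().astype(int)
    return scene_frames[idx]

def evaluate_video(scenes, script, ask_gpt4v) -> dict:
    frames = np.concatenate([sample_frames(s) for s in scenes])  # e.g. 8 frames for 2 scenes
    answers = ask_gpt4v(frames, script)  # e.g. {"entity": "yes", "background": "partial", "text": "no"}
    return {metric: SCORE[ans] for metric, ans in answers.items()}

def aggregate(per_video_scores: list[dict]) -> dict:
    entity = np.mean([s["entity"] for s in per_video_scores if "entity" in s])  # S1/S3 prompts only
    background = np.mean([s["background"] for s in per_video_scores])
    text = np.mean([s["text"] for s in per_video_scores])
    visual = (entity + background) / 2
    return {"visual_consistency": visual, "text_adherence": text,
            "overall": (visual + text) / 2}
```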
Specifically, it is trained to generate 16 video frames on the mix of WebVid [14] video-text dataset and LAION [48] image-text dataset. We perform most of our experiments on ModelScope due to its easy-of-access and adoption in prior works [28]. In addition, we also include Lumiere-T2V, a model that leverages space-time U-Net denoising networks to generate high-quality videos. In this work, we include early experiments with Lumiere to showcase the flexibility of the TALC approach for multi-scene video generation. Base model with TALC. As described in \u00a73.1, our approach modifies the traditional text-conditioning mechanism to be aware of the alignment between text descriptions and individual video scenes. By design, the TALC framework can be applied to the base T2V model during inference, without any multi-scene finetuning. Thus, we compare the performance of the multi-scene videos generated from ModelScope and Lumiere T2V base models under three settings: merging captions, merging videos, and TALC. In this setting, we generate 16 frames per scene from ModelScope and 80 frames per scene from Lumiere. We provide more details on the inference in Appendix \u00a7F. Finetuning with TALC. Since the base model is pretrained with single-scene data, we aim to show the usefulness of TALC framework when we have access to the multi-scene video-text data. To this end, we finetune ModelScope on the multi-scene video-text data (\u00a73.3) with TALC framework. As a pertinent baseline, we also finetune the ModelScope without TALC framework by naively merging the scene-specific captions in the raw text space. In this setting, we finetune the T2V model with 8 frames per scene and the maximum number of scenes in an instance is set to 4. We provide further details on the finetuning setup in Appendix \u00a7H. The inference settings are identical to the prior method of generating videos from the base model without finetuning. In this section, we present the results for the baselines and TALC framework averaged over a diverse task prompts and multiple scenes using automatic evaluation (\u00a75.2) and human evaluation (\u00a75.3). Finally, we provide qualitative examples for the multi-scene generated videos to showcase the usefulness of our approach (\u00a75.4). 5.2 Automatic Evaluation We compare the performance of the baselines (e.g., merging captions and merging videos) with the TALC framework for ModelScope and Lumiere using the automatic evaluation in Figure 5. TALC outperforms the baselines without any finetuning. In Figure 5(a), we find that the overall score, average of visual consistency and text adherence, of the multi-scene videos generated using 9 \fVisual Consistency T ext Adherence Overall Score 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 Average performance (0-100) 91.0 65.0 89.9 77.0 89.0 32.4 70.0 47.2 37.5 62.3 61.7 67.5 68.6 57.3 75.6 Merging Captions (Base) Merging Videos (Base) TALC (Base) Merging Captions (F .T.) TALC (F .T.) (a) Performance on ModelScope T2V model. Visual Consistency T ext Adherence Overall Score 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 Average performance (%) 94.7 68.0 97.8 34.0 65.0 39.0 64.4 66.5 68.4 Merging Captions (Base) Merging Videos (Base) TALC (Base) (b) Performance on Lumiere T2V model. Figure 5: Automatic evaluation results for (a) ModelScope and (b) Lumiere. In (a), we observe that TALC-finetuned ModelScope model achieves the highest overall score, that is the average of the visual consistency and text adherence scores. 
In (b), we find that TALC framework with the Lumiere base model outperforms merging captions and merging videos on the overall scores. We report the average performance across the diverse multi-scene prompts and the number of generated scenes. the base ModelScope with TALC (68.6 points), outperforms the overall score achieved by the videos generated using merging captions (61.7 points) and merging videos (67.5 points) with the base ModelScope. Specifically, we observe that the visual consistency of the generated video is high for merging captions (91 points) and TALC (89.9 points) while it is low for merging videos (65 points). This indicates that merging videos independently for the individual scene descriptions does not preserve the background and entity appearances across the different frames. In addition, we observe that the text adherence using TALC outperforms merging captions by 14.8 points, while the text adherence is the highest with a score of 70 points using merging videos. This can be attributed to the design of the merging videos baseline where individual video scenes adhere to the scene-specific descriptions well. Hence, merging videos independently approach can be viewed as an upper bound on the text adherence metric. In Figure 5(b), we observe similar trends for the Lumiere T2V generative model. Specifically, we find that the overall score for TALC outperforms merging captions and merging videos by 4 points and 2 points, respectively. In addition, we observe that merging captions and TALC achieve a high visual consistency score while merging videos independently has poor visual consistency. Further, we find that TALC outperforms merging captions by 5 points on text adherence, while merging videos achieves the highest text adherence 65 points. This highlights that the model more easily generates 10 \fmulti-scene videos that adhere to individual text scripts, whereas adherence to the text diminishes when the model is given descriptions of multiple scenes all at once. Finetuning with TALC achieves the best performance. Earlier, we evaluated the usefulness of the TALC framework with the base model. However, the base models are trained with the singlescene video-text data that might limit their capability for multi-scene video generation. To alleviate this issue, we finetune ModelScope T2V model on the multi-scene video-text data (\u00a73.3). Specifically, we finetune the model using the merging captions method and TALC framework, independently. In Figure 5(a), we find that finetuning with TALC achieves the highest overall score of 75.6 points in comparison to all the baselines. Specifically, we observe that the visual consistency does not change much with finetuning using the TALC method (89.9 points vs 89 points). Interestingly, we observe that finetuning with merging captions reduces the visual consistency by a large margin of 14 points. This can be attributed to the lack of knowledge about the natural alignment between video scenes and individual scene descriptions, which gets lost during the merging of captions. Additionally, we find that the text adherence of the TALC-finetuned model is 15.1 points more than the text adherence of the TALC-base model. Similarly, we find that the text adherence of the merging captions-finetuned model is 5.1 points more than the text adherence of the merging captions-base model. This highlights that finetuning a T2V model with multi-scene video-text data helps the most with enhancing its text adherence capability. Fine-grained Results. 
To perform fine-grained analysis of the performance, we assess the visual consistency and text adherence scores for the baselines and TALC framework across diverse task prompts and number of scenes on ModelScope. We present their results in Appendix \u00a7E. In our analysis, we find that finetuning with TALC achieves the highest overall score over the baselines across all the scenarios. In addition, we notice that the highest performance is achieved in the scenario that consist of the different entities in a specific visual context. Further, we observe that the performance of the all the methods reduces when the task prompts get more complex i.e., multiscene captions from real videos. In addition, we observe that finetuning with TALC achieves the highest overall score over the baselines across all the number of scenes. Specifically, we observe that the performance of the merging captions and TALC framework reduces as the number of scenes being generated increases. Overall, we show that the TALC strikes a good balance between visual consistency and text adherence to generate high-quality multi-scene videos. 5.3 Human Evaluation Table 1: Human evaluation results on the visual quality of the generated videos from ModelScope. We observe that the visual quality of the generated videos are close to each other for the base model. However, finetuning the model with merging captions reduces the video quality by a large margin while TALC-finetuned model retains the video quality. Method Quality Merging Captions (Base) 80.5 Merging Videos (Base) 86.5 TALC (Base) 84.5 Merging Captions (F.T.) 63.4 TALC (F.T.) 83.3 TALC achieves the best performance in human evaluation. We compare the performance of the baselines and TALC framework for ModelScope using human evaluation in Figure 6. We find that TALC-finetuned model outperforms the merging captions and merging video methods with the base model by 12 points and 15.5 points, respectively, on the overall score. In addition, we find that using TALC framework in the base model outperforms the merging captions and merging video methods with the base model by 7.6 points and 11.1 points, respectively, on the overall score. Further, we observe that the merging captions with the base model achieves the highest visual consistency score of 96.5 points while it is the lowest for merging videos generated from the base model. In addition, we find that the text adherence of the TALCfinetuned and TALC-base model is better than merging captions-finetuned and merging captions-base model, respectively. Our results highlight at the benefit of including the inductive bias of temporal alignment between the video scenes and their scene descriptions for multi-scene video generation. Visual quality of the generated videos. We compare the visual quality of the generated videos using human evaluation in Table 1. We find that the visual quality of videos generated from the base 11 \fVisual Consistency T ext Adherence Overall Score 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 Average performance (%) 96.5 55.0 92.3 80.0 86.4 33.0 67.5 52.5 42.3 67.2 64.8 61.3 72.4 61.1 76.8 Merging Captions (Base) Merging Videos (Base) TALC (Base) Merging Captions (F .T.) TALC (F .T.) Figure 6: Human evaluation results for ModelScope model. We observe that the base model using the TALC framework outperforms the merging captions and merging videos baselines on the overall score. In addition, TALC-finetuned model enhances the text adherence and achieves the highest overall score. 
We report the average performance across the diverse multi-scene prompts and the number of generated scenes. model ranges from 80.5 \u221286.5 using the baselines and TALC framework. However, we observe that the visual quality of generated videos is quite poor for the model finetuned with merging captions with a score of 63.4 points. This highlights that finetuning a T2V model with multi-scene video-text data by naively merging the scene-specific descriptions in the raw text space leads to undesirable artifacts in the generated video. Finally, we find that the TALC-finetuned model (83.3) achieves a video quality score similar to that of the TALC-base model (84.5), indicating that our finetuning data preserves the visual quality observed during the model\u2019s pretraining. While our work is centered around multi-scene evaluation, we also perform single-scene evaluation in Appendix \u00a7I. 5.4 Qualitative Analysis We provide qualitative examples of generating multi-scene videos using ModelScope (fine-tuned with TALC) and Lumiere (base model with TALC) for diverse scenarios in Figure 12. Our analysis reveals that both ModelScope and Lumiere are capable of producing multi-scene videos that exhibit high text adherence and visual consistency. Considering the case of the same animal engaging in multiple actions (referred to as \"one character multiple contexts\"). The videos generated by ModelScope successfully maintained the same animal while varying the background and action between the scenes. Conversely, the videos generated by Lumiere displayed the same animal performing different actions with minimal background alterations. We believe that this distinction is attributed to ModelScope\u2019s fine-tuning with TALC. Considering different animals within a particular visual setting (referred to as \"multiple-characters same context\"), both ModelScope and Lumiere demonstrated impressive abilities in preserving the consistency of the background across the videos and adhering closely to the provided text. During our analysis, we noticed that the multi-scene captions derived from real videos (referred to as \"open-ended captions\") exhibited a substantial number of changes between the various scenes. In this scenario, Lumiere, when employed without fine-tuning, displayed challenges in adhering to the text, while ModelScope achieved a higher degree of text adherence but was also prone to visual artifacts. 6 Related Work Text-to-Video Generative Modeling. The field of text-to-video (T2V) synthesis has significantly evolved from its inception with models like VGAN [2] and MoCoGAN [49], leveraging the foun12 \fdational technologies of GANs [50] and VAEs [51] to produce concise, single-scene videos. The narrative depth was further expanded through transformer-based architectures such as CogVideo [52] and VideoGPT [53], enhancing the complexity of video content yet remaining within the confines of single scenes. The advent of diffusion models, exemplified by Imagen Video [54], marked a notable advancement in T2V synthesis. Despite these strides, the challenge of creating multi-scene videos that reflect the complexity of the physical world [1, 2, 3] remains. Our work, TALC, extends the capabilities of T2V models to multi-scene storytelling, filling a crucial gap in the synthesis landscape. Image-to-Video Animation. 
The exploration of multi-scene video generation, innovative methods such as Lumiere [6] and Make-a-Video [55] have employed a two-step process, transforming text to images and then animating these images into videos. While these approaches have advanced visual quality, they often fall short in weaving seamless multi-scene narratives. This limitation is echoed in the work of Emu Video [8], which underscores the difficulty of achieving narrative coherence across multiple scenes. TALC focuses on direct generation of multi-scene narratives from textual prompts aiming for a narrative flow and visual consistency across scenes. Multi-Scene Video Generation. The pursuit of multi-scene T2V synthesis has been furthered by recent innovations like Phenaki [20] and Stable Video Diffusion [12], which have explored new frontiers in video generation from textual prompts and the scaling of latent diffusion models, respectively. Additionally, Dreamix [56] and Pix2Video [57] have broadened the scope of diffusion models, applying them to video editing and animation. Despite these advancements, the task of generating videos that convey coherent narratives across multiple scenes remains formidable, highlighted by recent works such as VideoPoet [9], ModelScope [10] and Make-A-Scene [58]. TALC tackles this task and offers a framework produces videos spanning multiple scenes. We also introduce nuanced evaluation approach. This approach integrates both automated assessments and human evaluations to rigorously gauge the quality and narrative coherence of the generated content, evaluating text adherence, object consistency and background consistency, contributing to the ongoing refinement of T2V synthesis. 7 Conclusion We introduced TALC, a simple and effective method for improving the text-to-video (T2V) models for multi-scene generation. Specifically, it incorporates the knowledge of the natural alignment between the video segments and the scene-specific descriptions. Further, we show that TALCfinetuned T2V model achieve high visual consistency and text adherence while the baselines suffer from one or both of the metrics. Given its design, our framework can be easily adapted into any diffusion-based T2V model. An important future direction will be to scale the amount of multi-scene video-text data and deploy TALC framework during pretraining of the T2V models. 8 Acknowledgement We would like to thank Ashima Suvarna for providing feedback on the draft. Hritik Bansal is supported in part by AFOSR MURI grant FA9550-22-1-0380."
+ }
title_10K/test_title_short_2405.04700v1.json ADDED
@@ -0,0 +1,19 @@
+ {
+ "url": "http://arxiv.org/abs/2405.04700v1",
+ "title": "Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures",
+ "abstract": "Large Language Models (LLMs) deployed on edge devices learn through\nfine-tuning and updating a certain portion of their parameters. Although such\nlearning methods can be optimized to reduce resource utilization, the overall\nrequired resources remain a heavy burden on edge devices. Instead,\nRetrieval-Augmented Generation (RAG), a resource-efficient LLM learning method,\ncan improve the quality of the LLM-generated content without updating model\nparameters. However, the RAG-based LLM may involve repetitive searches on the\nprofile data in every user-LLM interaction. This search can lead to significant\nlatency along with the accumulation of user data. Conventional efforts to\ndecrease latency result in restricting the size of saved user data, thus\nreducing the scalability of RAG as user data continuously grows. It remains an\nopen question: how to free RAG from the constraints of latency and scalability\non edge devices? In this paper, we propose a novel framework to accelerate RAG\nvia Computing-in-Memory (CiM) architectures. It accelerates matrix\nmultiplications by performing in-situ computation inside the memory while\navoiding the expensive data transfer between the computing unit and memory. Our\nframework, Robust CiM-backed RAG (RoCR), utilizing a novel contrastive\nlearning-based training method and noise-aware training, can enable RAG to\nefficiently search profile data with CiM. To the best of our knowledge, this is\nthe first work utilizing CiM to accelerate RAG.",
+ "authors": "Ruiyang Qin, Zheyu Yan, Dewen Zeng, Zhenge Jia, Dancheng Liu, Jianbo Liu, Zhi Zheng, Ningyuan Cao, Kai Ni, Jinjun Xiong, Yiyu Shi",
+ "published": "2024-05-07",
+ "updated": "2024-05-07",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.DC",
+ "cs.IR"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
+ "gt": "Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures",
+ "main_content": "INTRODUCTION The emerging Large Language Models (LLMs) are deployed primarily on centralized cloud platforms [1, 2] (Cloud LLMs), raising concerns about user privacy and trustworthy issues [3]. These issues become even more prominent in areas such as healthcare [4], companionship [5], and personal assistance [6], where the user privacy and trustworthiness of LLMs are crucial. To address these issues, the cloud LLMs will eventually transform into personalized LLMs, capable of generating personalized responses, deployed on edge devices (Edge LLMs), where users can keep all their private data and the model learns from those data locally. To better suit the needs of individual users, Edge LLMs must learn from user interactions. However, their capability of learning is constrained by their limited RAM and computational power. Similar to Cloud LLMs, the Edge LLMs primarily learn by finetuning their model parameters. Yet, given that these models often contain over 3 billion parameters, updates can be challenging, even with numerous efforts to accelerate them [7\u20139]. For example, using the experimental high-performance embedded system like NVIDIAAGX, the pockengine method [9] can still take 90 hours to learn from a middle-sized dataset Alpaca with only 52k documents, making this option impractical for normal users. E x : \u201cI am sick?\u201d Sentence Embedding Model E(x) User Query Profile Data Embedding Space S \u2026 P(x, d83) P(x, d29) P(x, d37) Top k (k = 1) DAC ADC \u2026 CiM E(d83) Data: d83, d29, d37 User Query LLM Output NVM Digital Logic \ud835\udc6c(\ud835\udc99) \u2219\ud835\udc6c(\ud835\udc85\ud835\udc8a) = \ud835\udc0f(\ud835\udc31, \ud835\udc85\ud835\udc8a) E(d1) E(d2) E(d3) E(d4) E(dn) Document 2 Document 1 Document n \u2026 Figure 1: The workflow of RAG on edge-based CiM. CiM performs max inner product search (MIPS) to retrieve the top-ranked documents, concatenating them with user query to allow the LLM to generate personalized responses. Retrieval-augmented generation (RAG), on the other hand, is a more resource-efficient choice [10], and hence becoming the de facto learning method for Edge LLMs. In a typical RAG system, it consists of a retriever and a generator. The retriever is commonly backed by max inner product search (MIPS). When the retriever receives a user query, it will retrieve the most relevant document from profile data, as shown in Figure 1. The profile data has many documents, and each document \ud835\udc51\ud835\udc56contains specific information that may be relevant to user queries. The generator can be seen as a LLM, which takes the user query \ud835\udc65and retriever-obtained documents as a prompt and generates a corresponding response. For every document\ud835\udc51\ud835\udc56and the user query \ud835\udc65, RAG utilizes a sentence embedding model shown in Figure 1 to convert them into vectors (i.e., \ud835\udc38(\ud835\udc51\ud835\udc56) and \ud835\udc38(\ud835\udc65), respectively). The vectors for documents can be named as document embeddings and stored as a matrix as shown in Figure 1. The vector for user query, named query embedding \ud835\udc38(\ud835\udc65), will be used in MIPS to perform inner product with every document embedding. The larger the product \ud835\udc43(\ud835\udc65,\ud835\udc51\ud835\udc56), the more semantic similar it will be between the user query and the document. 
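As a minimal illustration of this retrieval step, the sketch below embeds a query with all-MiniLM-L6-v2 (the sentence embedding model used in \u00a74.1.2), takes inner products with the stored document-embedding matrix, and returns the top-k documents; the example documents are placeholders.
```python
# Minimal MIPS-based retrieval sketch: E(x) . E(d_i) = P(x, d_i), top-k lookup.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
documents = ["user is allergic to peanuts", "user prefers short answers", "..."]
doc_emb = model.encode(documents)              # E(d_i), the matrix stored in memory

def mips(query: str, k: int = 1) -> list[str]:
    q = model.encode([query])[0]               # E(x)
    scores = doc_emb @ q                       # inner products P(x, d_i)
    top = np.argsort(-scores)[:k]              # indices of the k largest products
    return [documents[i] for i in top]

print(mips("I am sick?"))
```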
Using RAG, Edge LLMs can provide user-preferred responses by retrieving relevant documents from profile data, and the profile data can be incrementally updated with new documents. This is an efficient learning process without costly updating the model parameters via fine-tuning [11]. Other than the inevitable LLM inference cost, the primary computational cost of RAG is about retrieval, which is more than ten times less than the cost of updating model parameters. While the computational cost of RAG is more edge-friendly, there still exist two issues impeding RAG from being deployed for real-time user interaction on Edge LLMs. Firstly, the growing profile data as stored cannot be unlimited without affecting the access time. If the size of the profile data exceeds the RAM capacity, arXiv:2405.04700v1 [cs.LG] 7 May 2024 \fRuiyang Qin1, Zheyu Yan1, Dewen Zeng1, Zhenge Jia1, Dancheng Liu2, Jianbo Liu1, Ahmed Abbasi1,Zhi Zheng1, Ningyuan Cao1, Kai Ni1, Jinjun Xiong2, Yiyu Shi1 it will need to be offloaded into the storage, such as a hard disk drive (HDD) or solid-state drive (SSD). Accessing data from HDD or SSD will significantly increase the data transfer latency [12], rendering real-time user interaction impractical. Secondly, the core retrieval method of RAG, MIPS, may experience decreased efficiency as profile data grows, and it can become potentially prohibitive when dealing with overwhelmingly large datasets. For example, on Raspberry Pi 4B, MIPS can take 5 minutes to find one appropriate profile data among 21M documents [10], which is even longer than the 2-minute inference time of an Edge LLM. Unfortunately, few efforts have been made to optimize RAG towards Edge LLMs. Thus, we propose to utilize the Computing-in-Memory (CiM) architecture to address this issue. As shown in Figure 1, CiM architectures using memory arrays have shown substantial promise in accelerating matrix-vector multiplication [13], which is the key operation of MIPS. The CiM architectures often utilize massive parallel processing to perform computations directly within the memory array where the data is stored, such that they can minimize the data movement through in-situ data access and significantly increase the throughput [14]. Given the same amount of documents, CiM can finish computation within 50ms [15], which is negligible compared to the computation latency on normal edge devices. Furthermore, by incorporating non-volatile memory (NVM) devices, such as phase-change memories (PCMs), resistive random-access memories (RRAMs), and ferroelectric field-effect transistors (FeFETs), CiM can outperform conventional MOSFET-based designs in terms of energy efficiency [16]. 0.00 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 Level of noise ( ) 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Citation Movie Rating News DBLP Figure 2: The impact on MIPS accuracy when the RAG\u2019s document embedding is perturbed by various levels of Gaussian noise caused by the device variations. An accurate retrieval means the document retrieved under the impact of the noise is the same as that retrieved without noise. Unfortunately, simply changing the underlying hardware is not enough, as the non-idealities of the NVM devices in CiM array could greatly deteriorate the RAG performance. First, the operations performed in CiM architectures are susceptible to various sources of noise, including electronic noise (thermal, shot, and flicker), deviceto-device variability, and line noise from the supporting circuitry [17]. 
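The preliminary study of Figure 2 can be reproduced in spirit with a few lines: perturb the document embeddings with zero-mean Gaussian noise of increasing standard deviation and count how often top-1 retrieval still matches the noise-free result. The embedding shapes and noise levels below are illustrative.
```python
# Figure-2 style study: MIPS accuracy under Gaussian perturbation of the
# document embeddings (accurate retrieval = same top-1 as the noise-free case).
import numpy as np

rng = np.random.default_rng(0)

def retrieval_accuracy(doc_emb, queries, sigma):
    clean = np.argmax(queries @ doc_emb.T, axis=1)               # noise-free top-1
    noisy_emb = doc_emb + rng.normal(0.0, sigma, doc_emb.shape)  # simulated device variation
    noisy = np.argmax(queries @ noisy_emb.T, axis=1)
    return float(np.mean(clean == noisy))

doc_emb = rng.standard_normal((2000, 384))
queries = rng.standard_normal((100, 384))
for sigma in (0.0, 0.1, 0.3, 0.6):
    print(sigma, retrieval_accuracy(doc_emb, queries, sigma))
```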
These noise sources can corrupt the computations, especially when the signal levels are close to the noise floor, which is a common scenario in high-precision tasks. Such noise issues are critical in RAG applications where the accuracy and quality of the generated content heavily rely on the precision of the underlying computations. Additionally, the CiM architecture is primarily designed and optimized for low-resolution computation [18]. Moreover, CiM arrays are typically sized at a fixed dimension, such as 64x64 [19], which is different from the documents\u2019 embedding dimension (e.g., 128). Therefore, both RAG\u2019s data precision (typically FP32) and its embedding dimension need to be reduced to fit in the size of CiM\u2019s crossbar arrays. To illustrate the impact of these on RAG, as an example, we present a preliminary study on MIPS performance in Figure 2, where we use a simple yet representative Gaussian noise to simulate the noise from the device variations in CiM. As shown in Figure 2, as the noise level increases, MIPS accuracy (specified in section 4.1.3) drops dramatically, approaching random guessing. To address these issues, we further propose a novel optimization framework for CiM-backed RAG, called Robust CiM-backed RAG (RoCR). The framework consists of three parts. The first part is a contrastive learning method. We use it to optimize the document embedding model. The second part is a novel data construction method to generate both positively and negatively labeled data pairs for contrastive learning. For the profile data, they can be either labeled to indicate the explicit user-preferred response to certain input, or simply statements without explicit labels that only implicitly indicate user preferences. Our data construction method is capable of dealing with both types of profile data. The third part is a noise-aware training method. It goes in tandem with contrastive learning to obtain a sentence embedding model that can generate document and user query embeddings with high noise-resilient capability, while such embeddings can fit into CiM architectures under different designs and configurations. Our major contributions can be summarized as: \u2022 We propose the first work to harvest CiM advantages for RAG acceleration on the edge. We provide a pathway to utilize emerging CiM devices to expand the Edge LLMs\u2019 capability in terms of storing a high volume of profile data with fast MIPS computing. \u2022 We introduce noise-aware training to enhance the noiseresilient capabilities of RAG\u2019s document embedding. The resulting noise-resilient embeddings can be reused robustly, saving resources needed to calibrate and regenerate embeddings. \u2022 Our experiments on various datasets show that our proposed framework can improve the RAG performance on multiple CiM devices up to 35%, approaching to the theoretical RAG performance. Across a wide device variation (noise) range on a single CiM device, our proposed framework can still improve the RAG performance. 2 RELATED WORK 2.1 CiM Architectures and their NVMs As shown in the middle part of Figure 1, memory arrays are the key component for vector-matrix multiplication. In this array, matrix values are stored at NVM cells, such as emerging NVM technologies like PCMs, RRAMs, and FeFETs, at the cross-points of vertical and horizontal lines. Simultaneously, vector values flow along the horizontal lines of the array. Operations within the memory array take place in the analog domain by exploiting law of physics directly. 
However, for other essential functions like shift-and-add for multiple bits and sorting to find the top-k ranked values would be done in the digital domain. Thus, digital-to-analog and analog-to-digital \fRobust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures profile data Data Construction Module positive examples anchor examples \u2026 Reshape Module embeddings Contrastive Learning close far Device Variation NVMs Sentence Embedding Model negative examples Flexible Noise-aware Training Module optimize constraints Figure 3: Overview of the proposed Robust CiM-backed RAG framework (RoCR). It optimizes the sentence embedding model to adapt different types of NVMs utilized by CiM. converters (DACs and ADCs) are used to connect these different components. CiM arrays suffer from various sources of variations and noises. Two major ones include spatial variations and temporal variations. Spatial variations result from fabrication defects and have both local and global correlations. FeFET devices also suffer from temporal variations due to the stochasticity in memory switching and also aging, which causes fluctuations in conductance when programmed at different times. Temporal variations are typically independent from device to device and are irrelevant to the value to be programmed [20]. In this work, as a proof of concept, we focus on the impact of temporal variations in the programming process on DNN performance. Temporal variation makes the programmed resistance of a device deviate from what is expected. The proposed framework can also be extended to other sources of variations with modification. Measurement results [21, 22] show that the noise on DNN weights caused by device variations can be safely modeled as a Gaussian noise with zero mean, each with a standard deviation associated with the weight value. A detailed representation is given by: v = v0 + \u0394v, \u0394v \u223cN (0, \ud835\udf0e\ud835\udc63) (1) where v is the actual embedding deployed on the accelerators, v0 is the target embedding value, and \ud835\udf0e\ud835\udc63is a value measured by the experiments. We collect the measurement results from RRAM and FeFET devices and the specific value will be discussed in Section 4.1. 2.2 Past Noise Mitigation Methods Several strategies have been introduced to tackle the challenge of device variations in CiM accelerators. These methods can be separated into software and hardware-based techniques. The software-based techniques are generally developed to obtain more robust DNN models [19, 22\u201324] or recommendation systems [25], and are thus not suitable for generating more robust MIPS solutions. For the hardware techniques, the write-verify procedure [26, 27] is one of the most commonly used approach during programming. Initially, a NVM device is programmed to a set state via a designated pulse pattern. Subsequent to this, the device\u2019s value is verified to ascertain if its conductance aligns with a stipulated range of the desired value, essentially assessing its accuracy. If discrepancies arise, a supplemental update pulse is initiated to reset the device conductance nearer to the target. This loop persists until the disparity between the programmed device value and the target value diminishes to a satisfactory margin, typically taking a handful of cycles. Cutting-edge research suggests that by selectively applying write-verify to a subset of pivotal devices, one can uphold the average accuracy of a DNN [21]. 
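The write-verify procedure is essentially a program-read-adjust loop; the sketch below runs it against a simulated noisy cell, with the noise model and tolerance chosen purely for illustration.
```python
# Schematic write-verify loop on a simulated noisy memory cell: program,
# read back, and apply update pulses until within tolerance of the target.
import random

class SimulatedCell:
    def __init__(self):
        self.value = 0.0
    def apply_pulse(self, delta):                  # programming is imprecise
        self.value += delta + random.gauss(0.0, 0.02)
    def read(self):
        return self.value

def write_verify(cell, target, tol=0.01, max_iters=10):
    cell.apply_pulse(target - cell.read())         # initial set pulse
    for _ in range(max_iters):
        value = cell.read()                        # verify step
        if abs(value - target) <= tol:
            return value
        cell.apply_pulse(target - value)           # supplemental update pulse
    return cell.read()

print(write_verify(SimulatedCell(), target=0.5))
```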
Additionally, a variety of circuit design initiatives [18, 28] have been put forth to counteract device variations. 3 PROPOSED WORK 3.1 Framework Overview As shown in Figure 3, our proposed framework, Robust CiM-backed RAG (RoCR), consists of three stages. First, we apply contrastive learning to utilize the training data to optimize the training module. To do that, in the second stage, we take the profile data and construct via a data construction module to obtain contrastive training data pairs, which are then used in the flexible noise-aware training module. In the third stage, we obtain the constraints of NVMs in CiM via profiling. These constraints will be encoded into the flexible noise-aware training module and used to train the sentence embedding model so that it can generate embedding that are robust against device variation of the target NVMs. After training, the training module can be turned into a new sentence embedding model and generate CiM-friendly embeddings. 3.2 Contrastive Learning: Triplet Loss Function When we apply RAG using CiM, we first need to store embeddings into NVMs as shown in Figure 1. Such embeddings are generated by the sentence embedding model, and they are the numerical representations of profile data. Each single document in the profile data can have its unique embedding, which is a vector. The embeddings stored on NVMs can consist of a matrix as the orange blocks shown in Figure 1. Given a user query, which will also be converted into an embedding, CiM can operate MIPS between this user query embedding and all profile embeddings simultaneously via vector-matrix multiplication. The top-ranked values in the product will be used as the index to retrieve the corresponding document data, as the pink block shown in Figure 1. This retrieved user-relevant document is the output of MIPS. However, as we have explained in Section 2.1, writing the document embeddings into NVMs can cause them to suffer from temporal variations (device variations). Then, the NVM-stored embeddings will be different from the original sentence embedding model generated embeddings. As shown in Figure 4, the vanilla embedding model generates desired embedding, which will deviate to the noise embedding under device variation, such that the irrelevant embedding is ranked higher than desired embedding due to its larger inner product. Contrastive learning can learn the representations via push away dissimilar examples and pull close similar examples [29]. In particular, the contrastive loss function can be used to increase the distance between dissimilar examples. In our work, we propose to improve the noise-resilient capability by contrastive learning. By increasing the distance between \fRuiyang Qin1, Zheyu Yan1, Dewen Zeng1, Zhenge Jia1, Dancheng Liu2, Jianbo Liu1, Ahmed Abbasi1,Zhi Zheng1, Ningyuan Cao1, Kai Ni1, Jinjun Xiong2, Yiyu Shi1 noise embedding irrelevant embedding query retrieve the wrong data irrelevant embedding query NVMs Device Variation lead to Vanilla CiM-backed RAG Robust CiM-backed RAG Our embedding model noise-resilient embeddings user profile data NVMs Device Variation desired embedding vanilla embedding model user profile data retrieve the desired data embeddings Figure 4: Improvement by our Robust CiM-backed RAG. 
Our framework generates noise-resilient embeddings, as shown the orange and blue point in right subfigure dissimilar examples, as shown the right subfigure in Figure 4, deviated desired embedding will still have a larger inner product with the query compared to the irrelevant embedding. Our contrastive learning loss function is based on Weinberger et al. [30]. For each example \ud835\udc65\ud835\udc56in a mini-batch of N anchor examples, our data construction method will construct \ud835\udc3epositive and \ud835\udc3enegative examples corresponding to \ud835\udc65\ud835\udc56. We can have {{(\ud835\udc65\ud835\udc56,\ud835\udc65\u2212 \ud835\udc56,\ud835\udc65+ \ud835\udc56)\ud835\udc58}\ud835\udc56=1,...,\ud835\udc41}\ud835\udc58=1,...,\ud835\udc3e, in which \ud835\udc65\u2212and \ud835\udc65+ are negative and positive examples corresponding to \ud835\udc65\ud835\udc56, where \ud835\udc65\ud835\udc56is closer to \ud835\udc65+ \ud835\udc56compared to \ud835\udc65\u2212 \ud835\udc56. Also, \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56) represents the learned embedding of \ud835\udc65\ud835\udc56. Then the loss function L can be defined as: L = \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 1 \ud835\udc3e \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 max \u0010 0, d(\ud835\udc65\ud835\udc56,\ud835\udc65\u2212 \ud835\udc56(\ud835\udc58)) \u2212d(\ud835\udc65\ud835\udc56,\ud835\udc65+ \ud835\udc56(\ud835\udc58)) + \ud835\udc5a \u0011 , d(\ud835\udc65\ud835\udc4e,\ud835\udc65\ud835\udc4f) = sim(emb(\ud835\udc65\ud835\udc4e), emb(\ud835\udc65\ud835\udc4f)) (2) The distance \ud835\udc51(\ud835\udc65\ud835\udc4e,\ud835\udc65\ud835\udc4f) is calculated by the Euclidean distance between embeddings of two data \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc4e) and \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc4f). The function \ud835\udc60\ud835\udc56\ud835\udc5a() calculate the semantic similarity. 3.3 Data Construction To train the sentence embedding model via contrastive learning, it is critical to construct pairs of examples where the positive examples and negative examples need to be distinct from each other [31]. In our work, since we use triplet contrastive loss, instead of pairs of examples, we will construct trios of examples where each triplet contains an anchor, positive, and negative example. We use profile data to construct triplets of examples. For the profile data, it is generated by the user during the user-LLM interaction and contains the user preference information. There exists two situations for such data. First, the profile data can contain explicit labels indicating the user preferred response to the corresponding content. Second, the profile data also can be statements containing the user-related information but without explicit user preferences As shown in Figure 5, to deal with the two situations, we come up with two data construction methods: Construction Data with Explicit labels (CDE) and Construction Data with Implicit labels (CDI). 
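The triplet objective of Eq. (2) corresponds to the standard triplet margin loss; a PyTorch sketch with Euclidean distance is shown below, with illustrative batch shapes rather than the exact training configuration.
```python
# Triplet margin loss sketch for Eq. (2), using Euclidean distance (p=2).
import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)

# emb(x_i), emb(x_i^+), emb(x_i^-) for a mini-batch of N * K triplets
anchor   = torch.randn(32, 384, requires_grad=True)
positive = torch.randn(32, 384)
negative = torch.randn(32, 384)

loss = triplet_loss(anchor, positive, negative)  # mean of max(0, d(a, p) - d(a, n) + m)
loss.backward()
```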
\u201cJake Blues, just released from prison, puts his old band back together to save the Catholic home where he and his brother Elwood were raised.\u201d is \u201cdystopia\u201d negative r = 0.1 r = 0 r = 0 \u201cFresh out of prison, Jake Blues rallies his old band to save their childhood Catholic home\u201d is \u201cclassic\u201d positive example (embedding) \u201cJake Blues, just released\u2026\u201d is \u201cclassic\u201d anchor example (embedding) \u201cJake Blues, just released \u2026\u201d is \u201cdystopia\u201d negative example (embedding) r = 0.1 \u201cVictims of traumatized \u2026\u201d r = 0 r = 0.9 CDE CDI E anchor/positive example negative example \u201cTwo victims of traumatized childhoods become lovers and serial murderers irresponsibly glorified by the mass media.\u201d anchor/positive/negative example \u201cTwo people with traumatic pasts turn into a couple on a crime spree, mistakenly idolized by the media.\u201d \u201cIndividuals, mired in traumas, unite *() crime-ridden bond, enthrall\u2606\u2609\u00a7ing the media's distorted spotlight.\" \u201cJake Blues, just released from prison, puts his old band back together to save the Catholic home where he and his brother Elwood were raised.\u201d is \u201cclassic\u201d explicit label Statement/implicit label Figure 5: Examples of the two data construction methods. For data with explicit labels, CDE is used to construct the training data. For data without explicit labels (implicit labeled data), CDI is used to construct the training data. 3.3.1 Construction Trios via Data with Explicit Labels (CDE). For the data with explicit labels, each of the data consists of a textual content c and its corresponding label l which indicates the user preferred response regarding to the content c. As shown in the CDE part in Figure 5, there exists explicit label circled by dashed line. Using the profile data, we will construct triplet examples in the format of (\ud835\udc65\ud835\udc56,\ud835\udc65\u2212 \ud835\udc56,\ud835\udc65+ \ud835\udc56). Given a dataset D with size of \ud835\udc5bprofile documents, each piece of data consists of a content \ud835\udc50\ud835\udc56and the corresponding label \ud835\udc59\ud835\udc56where \ud835\udc56\u2208{1, 2, ...,\ud835\udc5b}. The anchor example \ud835\udc65\ud835\udc56can be constructed as: \ud835\udc65\ud835\udc56= \ud835\udc50\ud835\udc56\u2295\ud835\udc59\ud835\udc56, for \ud835\udc56= 1, 2, . . . ,\ud835\udc5b (3) where \u2295denotes a concatenation operation, specifically used here to combine label and content. Negative examples \ud835\udc65\u2212 \ud835\udc56can be constructed by concatenating \ud835\udc50\ud835\udc56with a random label \ud835\udc59\ud835\udc57that is different from \ud835\udc59\ud835\udc56as follows: \ud835\udc65\u2212 \ud835\udc56= \ud835\udc50\ud835\udc56\u2295\ud835\udc59\ud835\udc57, where \ud835\udc59\ud835\udc56\u2260\ud835\udc59\ud835\udc57. (4) Randomly assigning a different label ensures diversity in the negative examples while maintaining the same content from the anchor. Different from constructing anchor and its negative examples, it is challenging to construct positive examples corresponding to the anchor examples since it is more difficult to formalize semantically similar data than to formalize semantically dissimilar data. To construct positive examples, we follow the SimCSE method [32] to add a dropout rate \ud835\udc5finto the sentence embedding model M. The process for constructing positive examples involves two main steps. 
First, the textual positive example is formalized as: \ud835\udc65+ \ud835\udc56= \ud835\udc65\ud835\udc56, for \ud835\udc56= 1, 2, ...,\ud835\udc5b (5) where we align each anchor with the corresponding positive example. This step effectively duplicates the anchor data as a starting point for generating the embeddings. \fRobust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures Second, the embedding generation process varies based on the dropout rate applied within the model M. When model M is utilized to generate embeddings for anchor and negative examples, the dropout rate is set to 0. In contrast, for generating embeddings for positive examples, a non-zero dropout rate \ud835\udc5fis used. The anchor, negative, positive examples, as shown in Figure 5, can be constructed as: \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56) = M(\ud835\udc65\ud835\udc56,\ud835\udc51\ud835\udc5f\ud835\udc5c\ud835\udc5d\ud835\udc5c\ud835\udc62\ud835\udc61= 0) \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\u2212 \ud835\udc56) = M(\ud835\udc65\u2212 \ud835\udc56,\ud835\udc51\ud835\udc5f\ud835\udc5c\ud835\udc5d\ud835\udc5c\ud835\udc62\ud835\udc61= 0) \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65+ \ud835\udc56) = M(\ud835\udc65+ \ud835\udc56,\ud835\udc51\ud835\udc5f\ud835\udc5c\ud835\udc5d\ud835\udc5c\ud835\udc62\ud835\udc61= \ud835\udc5f) (6) The condition of \ud835\udc5f\u22600 can induce variation in the embeddings, enhancing the model\u2019s ability to recognize semantically similar yet variably expressed content. Given the construction factor \ud835\udc3e, we can construct the triplet data examples as: D\ud835\udc61\ud835\udc5f\ud835\udc56\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc61= \ud835\udc41 \u00d8 \ud835\udc56=1 n (\ud835\udc65\ud835\udc56(\ud835\udc58),\ud835\udc65\u2212 \ud835\udc56(\ud835\udc58),\ud835\udc65+ \ud835\udc56(\ud835\udc58)) : \ud835\udc58= 1, 2, . . . , \ud835\udc3e o (7) For the triplet data examples D\ud835\udc61\ud835\udc5f\ud835\udc56\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc61, their embeddings for each augmentation \ud835\udc58are given by: E = \ud835\udc41 \u00d8 \ud835\udc56=1 n (\ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56(\ud835\udc58)),\ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\u2212 \ud835\udc56(\ud835\udc58)),\ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65+ \ud835\udc56(\ud835\udc58)) : \ud835\udc58= 1, 2, . . . , \ud835\udc3e o (8) As shown in Figure 5, for data with explicit labels, a content\ud835\udc50can concatenate with its corresponding label \ud835\udc59to formalize the positive and anchor example. That content \ud835\udc50can also concatenate with other labels \ud835\udc59\u2032 to formalize the negative example. The positive example can be finally obtained from the sentence embedding model with dropout rate \ud835\udc5f. The anchor and negative example can be finally obtained from the sentnece embedding model with \ud835\udc5f= 0. 3.3.2 Construction Trios via Data with Implicit Labels (CDI). For data with implicit labels, each of the data consists of solely textual content c. As shown of the CDI part in Figure 5, there is no explicit label to indicate user preferences. Instead, the data can be seen as a statement containing some user-related information. To construct the anchor examples and positive examples, we can use the exact same method in EDC. Given a dataset D with size of n profile data, each piece of data consits of a content \ud835\udc50\ud835\udc56. 
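Putting Eqs. (3)-(8) together, CDE construction can be sketched as follows. The label-concatenation format follows the Figure 5 example, and the dropout rates are applied later, inside the sentence embedding model, when the triplets are encoded.
```python
# Sketch of CDE triplet construction: anchor = content + true label,
# negative = content + a randomly chosen wrong label, positive = the anchor
# text (its embedding is later produced with a non-zero dropout rate r).
import random

def build_cde_triplets(profile, label_set, K=5):
    """profile: list of (content, label) pairs; label_set: all possible labels."""
    triplets = []
    for content, label in profile:
        for _ in range(K):                                   # augmentation factor K
            wrong = random.choice([l for l in label_set if l != label])
            anchor   = f"{content} is {label}"               # x_i   = c_i concatenated with l_i
            negative = f"{content} is {wrong}"               # x_i^- uses l_j != l_i
            positive = anchor                                # x_i^+ = x_i
            triplets.append((anchor, negative, positive))
    return triplets
# Encoding: emb(anchor), emb(negative) use dropout = 0; emb(positive) uses dropout = r.
```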
The anchor data \ud835\udc65\ud835\udc56can be constructed as: \ud835\udc65\ud835\udc56= \ud835\udc50\ud835\udc56, for \ud835\udc56= 1, 2, . . . ,\ud835\udc5b (9) For each anchor data \ud835\udc65\ud835\udc56, constructing its corresponding negative example is not as simple as merely concatenating the content\ud835\udc50\ud835\udc56with a non-corresponding label \ud835\udc59\ud835\udc58. To construct negative examples, we employ a reciprocal approach with the positive examples, applying a similar method to both. We first initialize the negative example and positive example following the equation 5: \ud835\udc65\u2212 \ud835\udc56= \ud835\udc65+ \ud835\udc56= \ud835\udc65\ud835\udc56, for \ud835\udc56= 1, 2, . . . ,\ud835\udc5b (10) For the positive example \ud835\udc65+ \ud835\udc56, it can be finalized by incorporating a dropout rate \ud835\udc5finto the sentence embedding model M, where a rate of 0 < \ud835\udc5f\u22640.2 can generate a sentence embedding with a semantic representation similar to \ud835\udc65\ud835\udc56and ensure good model training performance [32]. Increasing the dropout rate to a higher value, such as 0.5, can distort the semantic representation of \ud835\udc65+ \ud835\udc56, making it dissimilar to that of \ud835\udc65\ud835\udc56. Training the model with such positive examples can result in poorer performance. For positive examples in training the sentence embedding model, the higher dropout rate performs more like a noise rather than a data augmentation method. In our work, we train the sentence embedding model to generate embeddings that maintain their integrity under noisy conditions, such as during writing into Compute-in-Memory (CiM). The noise can alter or fragment the original semantic representations. For instance, as illustrated in Figure 5, using a high dropout rate \ud835\udc5f= 0.9 can lead to a negative example with a corrupted representation. Although it may lack certain informative content, this negative example becomes semantically distinct from both the anchor and positive examples, effectively simulating the effect of CiM corruption. This approach not only differentiates the negative examples semantically but also aligns them with the corrupted data scenarios for noise-aware training. Given the triple examples (\ud835\udc65\ud835\udc56,\ud835\udc65\u2212 \ud835\udc56,\ud835\udc65+ \ud835\udc56), for \ud835\udc56= 1, 2, ...,\ud835\udc5bas shown in equation 10, we have the dropout rate \ud835\udc5ffor formalizing the positive examples where 0 < \ud835\udc5f\u22640.2. Correspondingly, the dropout rate for formailzing the negative examples can be 1 \u2212\ud835\udc5f. Given the sentence embedding model M, the anchor example, positive example, and negative example can be constructed as: emb(\ud835\udc65\ud835\udc56) = M(\ud835\udc65\ud835\udc56, dropout = 0) emb(\ud835\udc65\u2212 \ud835\udc56) = M(\ud835\udc65\u2212 \ud835\udc56, dropout = 1 \u2212\ud835\udc5f) emb(\ud835\udc65+ \ud835\udc56) = M(\ud835\udc65+ \ud835\udc56, dropout = \ud835\udc5f) (11) 3.4 Flexible Noise-aware Training In the previous two stages, we construct the data to train the sentence embedding model based on contrastive learning. Meanwhile, the training can be more effective when injecting the simulated device variation [33] so that the model can be optimized with consideration of the device variation. Additionally, the sentence embedding model needs to produce embeddings that can fit with the different CiMs, which might have various NVM designs. 
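The corresponding CDI construction differs only in where the triplet is split: all three examples start from the same statement and are separated purely by the dropout rate used at encoding time, as in Eq. (11). The sketch below assumes the embedding model M exposes a dropout argument, mirroring the notation above.
```python
# Sketch of CDI encoding (Eq. 11): dropout 0 for the anchor, r for the positive,
# and 1 - r for the heavily corrupted negative. `M` is an assumed interface.
r = 0.1

def encode_cdi_triplet(M, statement):
    anchor   = M(statement, dropout=0.0)       # emb(x_i)
    positive = M(statement, dropout=r)         # emb(x_i^+): mild perturbation
    negative = M(statement, dropout=1.0 - r)   # emb(x_i^-): semantics largely destroyed
    return anchor, positive, negative
```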
To do that, we need the sentence embedding model reshapes its output embeddings into certain dimensions and precision. Hence, we propose a flexible noise-aware training method, which can generate the noise-resilient embedding, fitting to various CiMs. As shown in Figure 3, in the flexible noise-aware training module, the embedding generated by sentence embedding model will be shaped based on the CiM\u2019s NVMs constraints where required dimension is \ud835\udc51and required precision is \ud835\udc5d, and being injected device variation to formalize the embeddings. The reshape module, shown in Figure 3, seen as an autoencoder to reconstruct its input embedding [34], can be expressed as \ud835\udc60\u210e\ud835\udc5d(), initialized by \ud835\udc51 and \ud835\udc5d, takes the anchor embedding \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56) as input. We can have \ud835\udc60\u210e\ud835\udc5d(\ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)) = \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)\ud835\udc51\u2217\ud835\udc5d. Based on the device variation shown as Table 2, we can have: \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)\ud835\udc51\u2217\ud835\udc5d \ud835\udf0e = (\ud835\udc52\u2032 \u2217\ud835\udc3f0 + \ud835\udc52\u2032 \u2217\ud835\udc3f1 + \ud835\udc52\u2032 \u2217\ud835\udc3f2 + \ud835\udc52\u2032 \u2217\ud835\udc3f3) \u2217\ud835\udf0e, (12) \fRuiyang Qin1, Zheyu Yan1, Dewen Zeng1, Zhenge Jia1, Dancheng Liu2, Jianbo Liu1, Ahmed Abbasi1,Zhi Zheng1, Ningyuan Cao1, Kai Ni1, Jinjun Xiong2, Yiyu Shi1 Table 1: Performance comparison between our framework and four baselines on five CiM devices with device variation specified in Table 2 across five datasets. Evaluate the performance of our framework using EDC (RoCR-EDC) and using IDC (RoCR-IDC) to optimize the performance of RAG, which utilizes Gemma-2 as its LLM. 
Dataset Citation Movie Rating News DBLP CiM Method Acc \u2191 F1 \u2191 Acc \u2191 F1 \u2191 MAE \u2193 RMSE \u2193 ROUGE-1 \u2191 ROUGE-L \u2191 ROUGE-1 \u2191 ROUGE-L \u2191 Device-1 SWV 0.4208 0.3339 0.1305 0.1974 0.3850 0.8093 0.0754 0.0731 0.1709 0.1590 CxDNN 0.4223 0.3576 0.1516 0.1762 0.4404 0.9135 0.0640 0.0632 0.1646 0.1449 CorrectNet 0.4155 0.3791 0.0996 0.1305 0.3609 0.7071 0.0512 0.0764 0.1603 0.1538 Vanilla RAG 0.4401 0.3476 0.1017 0.0838 0.3903 0.8944 0.0754 0.0731 0.1731 0.1473 RoCR-CDE 0.5536 0.3956 0.2242 0.2303 0.3108 0.6856 0.1041 0.0987 0.2066 0.1924 RoCR-CDI 0.5409 0.5117 0.2273 0.2487 0.2767 0.6083 0.0831 0.0808 0.2317 0.2176 Device-2 SWV 0.1831 0.1552 0.1992 0.1957 0.4205 0.8775 0.0296 0.0289 0.1968 0.1874 CxDNN 0.4013 0.3557 0.2167 0.2019 0.4423 0.8367 0.0604 0.0791 0.1517 0.1401 CorrectNet 0.3827 0.3209 0.1625 0.1909 0.3762 0.8062 0.0513 0.0505 0.2042 0.1945 Vanilla RAG 0.4801 0.3462 0.1576 0.2079 0.4153 0.9354 0.0296 0.0289 0.1618 0.1353 RoCR-CDE 0.5407 0.4396 0.2924 0.2509 0.2553 0.5385 0.1209 0.0946 0.2025 0.1906 RoCR-CDI 0.5299 0.4591 0.2971 0.2386 0.2124 0.5763 0.0884 0.0853 0.2240 0.2098 Device-3 SWV 0.2450 0.2564 0.1695 0.1641 0.3460 0.7416 0.0725 0.069 0.1018 0.0954 CxDNN 0.4811 0.4006 0.2367 0.2113 0.2851 0.6928 0.0761 0.0707 0.1425 0.1111 CorrectNet 0.4510 0.3918 0.0792 0.1029 0.3704 0.7937 0.0585 0.0555 0.1715 0.1346 Vanilla RAG 0.4852 0.3618 0.1614 0.1636 0.3255 0.7649 0.0725 0.0690 0.1647 0.1437 RoCR-CDE 0.5139 0.4116 0.2242 0.2215 0.3208 0.6481 0.0825 0.0805 0.1893 0.1754 RoCR-CDI 0.5515 0.4984 0.2152 0.2131 0.2916 0.6245 0.1099 0.1049 0.2294 0.2140 Device-4 SWV 0.5135 0.4260 0.1271 0.1178 0.3610 0.8196 0.0259 0.0256 0.1871 0.1786 CxDNN 0.4733 0.3964 0.1267 0.2158 0.3468 0.7616 0.0646 0.0634 0.1603 0.1538 CorrectNet 0.4628 0.4019 0.1592 0.1847 0.4013 0.9274 0.0705 0.0750 0.1628 0.1292 Vanilla RAG 0.2101 0.2401 0.1219 0.2019 0.4015 0.8544 0.0505 0.0489 0.1929 0.1814 RoCR-CDE 0.5836 0.5555 0.1706 0.2817 0.3139 0.6856 0.0873 0.0851 0.1984 0.1882 RoCR-CDI 0.5352 0.4289 0.1642 0.2445 0.2706 0.5916 0.1154 0.1128 0.2148 0.1978 Device-5 SWV 0.4320 0.3541 0.1250 0.1076 0.3652 0.7616 0.0434 0.0427 0.0985 0.0923 CxDNN 0.4301 0.0538 0.0751 0.0458 0.3503 0.8185 0.0707 0.0682 0.2042 0.1945 CorrectNet 0.4145 0.3926 0.1083 0.1395 0.5526 0.8185 0.0735 0.0776 0.2096 0.1879 Vanilla RAG 0.4256 0.3522 0.0847 0.0863 0.3951 0.8515 0.0676 0.0653 0.2018 0.1846 RoCR-CDE 0.5698 0.5223 0.2152 0.1669 0.2959 0.6245 0.0936 0.0891 0.1946 0.1844 RoCR-CDI 0.5254 0.4504 0.2394 0.2458 0.2624 0.6325 0.0799 0.0764 0.2238 0.2095 where \ud835\udc52\u2032 = \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)\ud835\udc51\u2217\ud835\udc5d. The device variation, as noise, is injected into embeddings to formalize \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)\ud835\udc51\u2217\ud835\udc5d \ud835\udf0e , which will be used in contrastive learning to train the sentence embedding model, as shown in Figure 3. 4 EXPERIMENTAL EVALUATION 4.1 Experimental Setup 4.1.1 Datasets. To demonstrate our robust CiM-backed RAG, we employ five datasets with different tasks and domains, including Citation Identification [35] (Citation), Movie Tagging [36] (Movie), Product Rating [37] (Rationg), News Headline Generation [38] (News), and DBLP-Citation-network V14 [39] (DBLP) to evaluate the proposed framework. The data in each dataset consists of query data and profile data. 
In our evaluation, the profile data is used to formalize the user history, and the query data corresponding to each profile is used as the user input. The first three datasets contain binary, five-class, and fifteen-class classification tasks, respectively; the last two datasets contain text generation tasks. In the Citation Identification dataset, every piece of query data consists of a paper title and two references, and the correct reference is provided; RAG uses the profile data corresponding to the paper titles, with their detailed contents, to choose the appropriate reference. In the Movie Tagging dataset, each query item contains a description of a movie, and RAG uses a similar description and its corresponding tag in the profile data to tag the query item. The Product Rating dataset has a similar structure to the Movie Tagging dataset. In the News Headline Generation and DBLP datasets, each query item contains an abstract, which can be summarized into a title; RAG uses a similar abstract and its corresponding title in the profile data to generate the title for the query item. All five datasets have labels in their query data.

4.1.2 Default Experimental Setting. Our framework uses all-MiniLM-L6-v2 [40] as the sentence embedding model. For each dataset, we randomly select 2000 documents from the profile data as anchor examples. To examine the CDE data construction method, we set the augmentation factor $k = 5$ to obtain 10000 negative and positive examples; we set the dropout rate to 0.1 to obtain positive examples while keeping it at 0 when processing anchor and negative examples. To examine the CDI data construction method, we set the dropout rate to 0.1 for positive examples and 0.9 for negative examples. To align with the CDE experiments, we also set $k = 5$ for CDI. We run each experiment five times and report the average. In the experiments, we set the device variation $\sigma = 0.1$ and shape embeddings into a dimension of 64 with int8 precision. The learning rate is 2e-5. In all experiments, we adhere to the device variation model previously described.
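As a concrete illustration of the CDE/CDI example construction just described (dropout 0.1 for positives, 0.9 for negatives, augmentation factor k = 5), here is a minimal sketch; the stub encoder stands in for running the sentence embedding model with the given dropout rate, so the function names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(text, dropout=0.0, dim=64):
    """Stub encoder: a text-seeded pseudo-embedding with optional dropout masking.
    In the actual framework, this would be the sentence embedding model run with
    the specified dropout rate."""
    local = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = local.standard_normal(dim)
    if dropout > 0.0:
        mask = rng.random(dim) >= dropout          # a fresh dropout mask per call
        vec = vec * mask / (1.0 - dropout)
    return vec

def build_cdi_examples(anchor_text, k=5, pos_dropout=0.1, neg_dropout=0.9):
    """CDI-style construction: light dropout yields k positives, heavy dropout
    yields k negatives, all derived from the same anchor document."""
    anchor = encode(anchor_text, dropout=0.0)
    positives = [encode(anchor_text, dropout=pos_dropout) for _ in range(k)]
    negatives = [encode(anchor_text, dropout=neg_dropout) for _ in range(k)]
    return anchor, positives, negatives

anchor, pos, neg = build_cdi_examples("an example profile document")
```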
The specific parameters are abstracted and then simplified from three representative NVM devices: two of them are resistive random-access memory (RRAM) devices extracted from [27, 41], and the other is a ferroelectric field-effect transistor (FeFET) device extracted from [42]. We name them RRAM1, RRAM4, and FeFET2, respectively. We also extrapolate the modeling data to obtain two synthesized devices, FeFET3 and FeFET6. Detailed device modeling results are given in Table 2. An $x$-level device can represent $x$ distinct values, and $\sigma_{L2} = 0.01$ means the variation of the device is 0.01 when it represents level value 2.

Figure 6 (plots not shown; panels (a) Citation on Gemma-2B, (b) Citation on Phi-2, (c) Citation on Mistral-7B, (d) Citation on Llama-2-3B, (e) Movie on Gemma-2B, (f) Movie on Phi-2, (g) Movie on Mistral-7B, (h) Movie on Llama-2-3B; methods: SWV, CxDNN, CorrectNet, Vanilla RAG, RoCR-CDE, RoCR-CDI): Performance comparison between our framework and four baselines on RAG utilizing the LLMs Gemma-2B, Phi-2, Mistral-7B, and Llama-2-3B, with the device variation specified in Table 2, given the Citation and Movie datasets.

Table 2: Device non-ideality modeling for different real and synthesized devices. For devices with more than two levels, the device variation for each level is depicted as $L_x$.
Name               | # of Levels | Device variations $\sigma_v$: L0   L1      L2      L3
RRAM1 (Device-1)   | 1           | 0.0100  0.0100  0.0100  0.0100
FeFET2 (Device-2)  | 4           | 0.0067  0.0135  0.0135  0.0067
FeFET3 (Device-3)  | 4           | 0.0049  0.0146  0.0146  0.0049
RRAM4 (Device-4)   | 4           | 0.0038  0.0151  0.0151  0.0038
FeFET6 (Device-5)  | 4           | 0.0026  0.0155  0.0155  0.0026

Using the device variations obtained from real CiM devices, we perform our experiments on a single Nvidia A10 GPU. Document embeddings are shaped based on the different CiM devices and stored as parallel arrays, similar to how they would be mapped to multiple NVM devices in practical scenarios. For example, if an embedding is shaped to contain all uint8 values, then when it is mapped to 4-level (2-bit) devices such as FeFET2, each element of the vector is represented by four devices.
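To make that storage mapping concrete, a small sketch of one possible uint8-to-2-bit decomposition (one 8-bit embedding element spread across four 4-level cells); the exact bit ordering used in the paper is not specified, so this layout is an assumption.

```python
import numpy as np

def to_2bit_levels(values_uint8):
    """Split each uint8 value into four 2-bit levels (most significant pair first),
    so one 8-bit embedding element occupies four 4-level NVM cells."""
    v = np.asarray(values_uint8, dtype=np.uint16)
    return np.stack([(v >> shift) & 0b11 for shift in (6, 4, 2, 0)], axis=-1)

def from_2bit_levels(levels):
    """Reassemble uint8 values from the four 2-bit levels."""
    shifts = np.array([6, 4, 2, 0])
    return (levels << shifts).sum(axis=-1).astype(np.uint8)

emb = np.array([200, 17, 255], dtype=np.uint8)
cells = to_2bit_levels(emb)          # shape (3, 4), entries in {0, 1, 2, 3}
assert np.array_equal(from_2bit_levels(cells), emb)
```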
4.1.3 Evaluation Methods. The first three datasets examine classification capability, and the remaining two examine text generation capability. In particular, the Citation and Movie datasets have two and fifteen labels, respectively, so we can examine the binary and multiclass classification capabilities of the LLMs enhanced by our framework. Here we use accuracy to examine the ability of the models to correctly classify instances across different classes, and the F1 score to examine the balance between precision and recall in classification tasks. The Rating dataset has five labels and also involves multiclass classification, but we use mean absolute error (MAE) and root mean square error (RMSE) to evaluate it from a regression perspective [43]. MAE measures the average magnitude of errors in the predictions, providing a straightforward assessment of the model's overall accuracy in predicting the rating values. RMSE captures the square root of the average squared differences between predicted and actual ratings, offering a metric that is sensitive to larger errors and can highlight significant discrepancies between the model's predictions and the true values. For the News and DBLP datasets, the labels are sentences, so these datasets examine text generation capabilities. We use ROUGE-1 and ROUGE-L to evaluate the overlap between generated texts and reference texts [44], capturing both the precision and recall of individual words (ROUGE-1) and the longest matching sequence (ROUGE-L), ensuring a comprehensive evaluation of text generation quality. Higher values of accuracy, F1, ROUGE-1, and ROUGE-L reflect better performance, whereas lower values of MAE and RMSE reflect better performance. Additionally, we use accuracy to measure the MIPS performance (MIPS accuracy), defined as the ratio of MIPS results under device variation that match the MIPS results without device variation (the references).
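A small sketch of the MIPS-accuracy metric just defined (the fraction of top-1 inner-product retrievals under device variation that match the variation-free references); the brute-force search below is an illustrative stand-in for the actual MIPS engine.

```python
import numpy as np

def mips_top1(query_embs, doc_embs):
    """Brute-force maximum inner product search: index of the best document per query."""
    return (query_embs @ doc_embs.T).argmax(axis=1)

def mips_accuracy(query_embs, clean_doc_embs, noisy_doc_embs):
    """Ratio of queries whose top-1 retrieval under device variation matches
    the retrieval computed from variation-free embeddings (the reference)."""
    ref = mips_top1(query_embs, clean_doc_embs)
    noisy = mips_top1(query_embs, noisy_doc_embs)
    return float((ref == noisy).mean())

rng = np.random.default_rng(0)
docs = rng.standard_normal((1000, 64))
queries = rng.standard_normal((32, 64))
noisy_docs = docs * (1.0 + 0.1 * rng.standard_normal(docs.shape))  # sigma = 0.1 perturbation
print(mips_accuracy(queries, docs, noisy_docs))
```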
4.1.4 Baselines. As this is the first work to improve RAG robustness on edge-based CiM, there is no prior state-of-the-art for direct comparison. We therefore construct baselines from past noise-mitigation methods originally designed to boost DNN robustness. The first baseline is selective write-verify [21] (SWV). While it originally utilizes the second derivative to evaluate the impact of device variation on neural network weights, we use the second derivative to measure the deviation between the ground-truth embedding and the embedding under device variation. The second baseline is CxDNN [45]. While it uses a compensation factor to improve the robustness of vector-matrix multiplication, we use the compensation factor to calibrate the embedding affected by device variation. The third baseline is CorrectNet [46], which utilizes cross-entropy loss and regularization to improve the robustness of neural networks in CiM. To use it as a baseline, we also use the cross-entropy loss with regularization as the loss function to calibrate the device output embedding. Additionally, we examine Vanilla RAG, which contains no noise-mitigation method, as our fourth baseline. The baselines use the same experimental settings as our framework.

4.2 Results
RAG can be simplified as the combination of MIPS and an LLM, where MIPS acts as a retriever that searches for the appropriate information and the LLM acts as a generator that processes the retrieved results. Hence, in our experiments we first evaluate the performance of MIPS under the device variation of Device-1. We take the MIPS results obtained without device variation as the references (i.e., ground truth). Using the MIPS accuracy metric, we examine how many MIPS results under device variation match the references. Since the quality of the retrieved content largely depends on the base sentence embedding model, and we focus on mitigating the impact of device variation on the embedding model, we do not assess the quality of the references. As shown in Table 3, our framework with either data construction method outperforms the four baselines across the five datasets, showing that our framework can mitigate the embedding perturbation due to device variation. These results also correspond to the preliminary study shown in Figure 2, where increasing $\sigma$ of naive Gaussian noise degrades MIPS performance.

Table 3: Performance (MIPS accuracy) comparison between our framework and the baselines. Accuracy is computed between MIPS-retrieved documents under the device variation of Device-1 and those retrieved without device variation.
Method       | Citation | Movie  | Rating | News   | DBLP
SWV          | 0.4200   | 0.1728 | 0.1050 | 0.0855 | 0.2295
CxDNN        | 0.4401   | 0.2017 | 0.0503 | 0.0754 | 0.1681
CorrectNet   | 0.4013   | 0.0699 | 0.0509 | 0.0533 | 0.1609
Vanilla RAG  | 0.4547   | 0.1694 | 0.0933 | 0.0649 | 0.1747
RoCR-CDE     | 0.9231   | 0.4639 | 0.1583 | 0.1921 | 0.2750
RoCR-CDI     | 0.9344   | 0.4355 | 0.1266 | 0.1708 | 0.2905

After comparing the MIPS performance of our framework and the baselines, we further present a comprehensive evaluation of their end-to-end RAG performance. We use Gemma-2B as the LLM in RAG. Additionally, with Gemma-2B, we run RAG without device variation to observe its ideal performance: 0.5200 accuracy for Citation, 0.3728 accuracy for Movie, 0.3150 MAE for Rating, 0.0855 ROUGE-1 for News, and 0.2295 ROUGE-1 for DBLP. On the five CiM devices, whose device variations are given in Table 2, we examine RAG with the five datasets. As shown in Table 1, for the same dataset each device variation significantly compromises RAG robustness, whereas our framework mitigates the different device variations. For example, the baseline accuracy on the Citation dataset on Device-2 ranges from 0.18 to 0.48, while our framework keeps Citation accuracy above 0.5 on all five devices. Compared with the four baselines, whose performance falls well below the ideal performance, our framework closely approaches and sometimes outperforms the ideal performance by generating better sentence embeddings; this is because RoCR also serves as a regularizer that improves the model's generalization. In addition, we evaluate the impact of different LLMs on the performance of our framework.
As shown in Figure 1, the LLM takes the concatenation of the MIPS-retrieved data and the user query as input and generates the response to the user query. Since different LLMs may give different responses to the same query, we select four emerging, edge-friendly, medium-size LLMs to examine the performance of our framework. Gemma-2B [47] is a recent SOTA open model introduced by Google, with 4.95 GB of model weights; according to Google, Gemma can outperform the similarly sized Llama-2 in reasoning capabilities. Hence, we also use Llama-2-3B [48], one of the earliest open LLMs introduced by Meta, with 6.85 GB of model weights. Similarly, Phi-2 [49], released by Microsoft, is a powerful small LLM with 5 GB of model weights. Additionally, Mistral-7B-GPTQ [50], made by Mistral AI, is a well-performing LLM released after the Llama models. We select the Citation and Movie datasets and use the default experimental setting with $\sigma = 0.1$ and CiM Device-1 as the experimental environment. The results are shown in Figure 6. It is evident that our framework outperforms each baseline across the five CiM devices. Besides, the performance of each baseline on the same dataset can vary largely across devices, while our framework produces more robust performance.

Figure 7 (plot not shown; ROUGE-1 versus device variation $\sigma$ from 0.000 to 0.150 for SWV, CxDNN, CorrectNet, Vanilla RAG, RoCR-CDE, and RoCR-CDI): Performance comparison between our framework and four baselines on CiM Device-1 with different device variations $\sigma$, given the DBLP dataset.

By default, we use $\sigma = 0.1$ to calculate the device variation of the five CiM devices. We also conduct an additional study to evaluate our framework under different $\sigma$ values. Since we have already used the Citation and Movie datasets to study the performance of our framework in Figure 6, we choose a different dataset, DBLP, using ROUGE-1 as the metric. For the LLM in RAG, we choose Mistral-7B. We examine $\sigma$ values both higher and lower than 0.1, namely 0, 0.025, 0.05, 0.075, 0.125, and 0.15; the case $\sigma = 0$ reflects the ideal performance. For the CiM device, we use Device-1. As shown in Figure 7, our framework outperforms the baselines across the different device variation values. Finally, RoCR is a training method that generates more robust weights for the sentence embedding model; it does not change the model structure, so there is no hardware (e.g., energy or latency) overhead during inference.

5 CONCLUSION
In this paper, we present a novel framework for retrieval-augmented generation (RAG) acceleration via computing-in-memory (CiM) architectures. Our approach provides a solution that frees RAG from the constraints of latency and scalability on edge devices. By optimizing the sentence embedding model, our framework enables the use of CiM devices for storing and processing document embeddings while minimizing the impact of CiM device variations. Experimental results show that our framework achieves superior RAG performance and largely mitigates the impact of device variations. This paper marks the first framework for RAG acceleration via CiM."
19
+ }
title_10K/test_title_short_2405.04781v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04781v1",
3
+ "title": "CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization",
4
+ "abstract": "Large language models (LLMs) have demonstrated astonishing capabilities in\nnatural language processing (NLP) tasks, sparking interest in their application\nto professional domains with higher specialized requirements. However,\nrestricted access to closed-source LLMs via APIs and the difficulty in\ncollecting massive high-quality datasets pose obstacles to the development of\nlarge language models in education fields of various courses. Given these\nchallenges, we propose CourseGPT-zh, a course-oriented education LLM that\nsupports customization and low-cost deployment. To address the\ncomprehensiveness and diversity requirements of course-specific corpora, we\ndesign a high-quality question-answering corpus distillation framework\nincorporating prompt optimization, which effectively mines textbook knowledge\nand enhances its diversity. Moreover, considering the alignment of LLM\nresponses with user needs, a novel method for discrete prompt optimization\nbased on LLM-as-Judge is introduced. During optimization, this framework\nleverages the LLM's ability to reflect on and exploit error feedback and\npatterns, allowing for prompts that meet user needs and preferences while\nsaving response length. Lastly, we obtain CourseGPT-zh based on the open-source\nLLM using parameter-efficient fine-tuning. Experimental results show that our\ndiscrete prompt optimization framework effectively improves the response\nquality of ChatGPT, and CourseGPT-zh exhibits strong professional capabilities\nin specialized knowledge question-answering, significantly outperforming\ncomparable open-source models.",
5
+ "authors": "Zheyan Qu, Lu Yin, Zitong Yu, Wenbo Wang, Xing zhang",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
14
+ "gt": "CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization",
15
+ "main_content": "Introduction Large language models, such as ChatGPT [1], GPT4 [2], LLaMA [3], and ChatGLM [4], have demonstrated remarkable performance and generalization capabilities across various NLP tasks, significantly expanding the boundaries of language applications. With the increase in model parameters and pretraining corpus size, capabilities such as logical reasoning, instruction following, and In-Context Learning [5],[6],[7] have emerged. Based on these breakthroughs, the latest LLMs have shown profound understanding and professionalism in various fields, such as virtual assistants, text generation, and code annotation. Utilizing LLMs to disrupt industries has become an inevitable trend, including the field of education[8],[9]. Recently, there has been a desire to leverage the extensive knowledge of large language models to construct domainspecific LLMs in various vertical fields, which require greater expertise and accuracy. To address the issue that general-purpose LLMs cannot meet specific domain requirements, a variety of methods have been proposed. For instance, steering foundation models through role-playing or prompt engineering have been used to tap into the knowledge learned during the pre-training phase, which can unleash their deep-seated expert capabilities [10],[11]. Other approaches involve pretraining or continual pre-training with domain-specific corpus to incorporate domainspecific knowledge into large language models [8],[12],[13],[14]. In addition, to reduce the hallucination during the response generation, retrieval augmentation has also been applied to provide reliable references [8],[15]. Based on these \u2217Xing zhang is the corresponding author. arXiv:2405.04781v1 [cs.CL] 8 May 2024 \fapproaches, successful implementations such as MedAgents [10], ChatLaw [15], EduChat [8], and FinGPT [16] have demonstrated the potential of LLMs to provide professional responses and insights in various vertical fields, including healthcare, law, finance, and education. However, constructing domain-specific large language models is still labor-consuming and expensive. To begin with, for closed-source large language models like ChatGPT, the high costs of text generation and fine-tuning services are often prohibitive. As for open-source LLMs, there is a significant gap in parameter size and pre-training corpus compared to closed-source LLMs, resulting in significantly weaker general capabilities such as reasoning, and domain-specific knowledge extraction [9],[17],[18],[19]. Faced with complex professional terminology, open-source large language models often fail to meet user requirements for domain knowledge. In this context, it often requires a large amount of in-domain pre-training corpus or expertise datasets to enhance professionalism in vertical fields. Although various existing works have developed specialized datasets and evaluation criteria for various fields such as philosophy, medicine, and law, as well as for scenarios including network operation and geospatial semantics [17],[18],[19],[20],[21], there is still a considerable demand for manual effort in constructing datasets for courses or privatized scenarios that are not covered by these datasets. This challenge is particularly pronounced when accessible corpora in the field are scarce, making it extremely difficult to construct tens of thousands of specialized instruction data. 
Furthermore, the majority of models are primarily pre-trained on English corpora, which may lead to a degradation in their performance in other languages [22],[23]. In addition to the challenges of constructing specialized corpora, the high cost of inference incurred by open-source large language models cannot be overlooked. Compared to the concise responses provided by humans, the responses generated by large language models, while more comprehensive, also include a significant amount of redundant information, resulting in unnecessary inference overhead. Typically, to further align the responses of large language models with specific preferences, methods such as RLHF (Reinforcement Learning from Human Feedback)[24] are introduced for fine-tuning models. However, this approach still requires a substantial amount of human-labeled preference data. Consequently, promoting alignment between the responses and human preferences, as well as reducing inference costs, is also a key factor in fostering the widespread adoption of open-source large models in specialized vertical domains. Targeted at these issues, we propose CourseGPT-zh, an open-source education large language model, and design a pipeline for constructing high-quality question-answer pairs through mining textbook knowledge. By utilizing the constructed diverse question-answer pairs, we perform parameter-efficient fine-tuning on the open-source model to mitigate the resource constraints required for deployment. In addition, in the data construction process, we incorporate LLM-as-Judge and utilize discrete prompt optimization to generate optimal prompts, steering ChatGPT to produce high-quality training data aligned with human preferences. Through this method, we ensure high-quality responses while reducing the deployment costs associated with response length. Our main contributions can be summarized as: \u2022 In this paper, we propose CourseGPT-zh, an open-source education large language model, with a pipeline for constructing high-quality and diverse question-answer pairs. Based on textbooks, we guide the model to conduct thorough exploration and questioning of textbooks, extracting knowledge from both closed-source large language models and specialized texts. Additionally, we employ a method inspired by self-instruct to guide the large language models in generating related questions, further enhancing the diversity. \u2022 Considering that although large language models can generate comprehensive answers, some content may be redundant or incorrect. Therefore, we employ prompt engineering to guide ChatGPT in generating responses that align with human preferences. To obtain the optimal prompts, we have designed an iterative discrete prompt optimization framework, which incorporates LLM-as-Judge to facilitate automatic evaluation of the quality of responses guided by prompts. Furthermore, the optimized prompt allows the large language model to achieve a balance between the quality of responses and their length, achieving information compression in responses. \u2022 A parameter-efficient fine-tuning method of the ChatGLM3 model is conducted based on constructed highquality question-answering data, resulting in the CourseGPT-zh. Experimental evidence has shown that CourseGPT-zh exhibits improved alignment with human responses, and delivers more concise answers while maintaining a high level of response quality. On various NLP task evaluation metrics, CourseGPT-zh significantly outperforms other open-source large models. 
2 \f2 Related-work With fierce competition and rapid development, large language models ranging from billions to trillions of parameters have achieved remarkable performance across various NLP tasks after being pre-trained on massive amounts of text. Represented by LLMs such as ChatGPT, GPT4, and GPT4-Turbo, the OpenAI model family has successively reset the benchmarks for NLP tasks, being regarded as one of the greatest inventions in history. Concurrently, a multitude of open-source large language models, including llama-2-13b, ChatGLM3-6b, and Mistral-8x7B-MoE[25], have also shown astonishing improvements, even surpassing the level of ChatGPT on some dimensions. More importantly, they can be deployed on a single to several GPUs and can be flexibly customized through fine-tuning. 2.1 Domain-specific LLMs Although general-purpose large language models have achieved exceptional performance on generic NLP tasks, they often fall short in vertical domains that necessitate extensive specialized knowledge and high accuracy requirements. The performance of zero-shot large language models in these domains is typically inadequate, thereby granting domainspecific LLMs significant attention. Closed-source large language models, while exhibiting superior performance across various capabilities, present challenges for continual pre-training and fine-tuning with private corpora. Therefore, the construction of domain-specific models based on closed-source LLMs frequently leverages role-playing or collaboration abilities to extract knowledge in the specialized field during the pre-training phase. In contrast, open-source LLMs can be further pre-trained or fine-tuned with extensive high-quality domain-specific data, and they have achieved multiple successful applications in fields such as medicine, law, education, finance, etc. HuatuoGPT [26] employs a mixed dataset comprising distilled data from ChatGPT and real-world data provided by physicians\u2019 medical advice to fine-tune an open-source model. Furthermore, it aligns the model\u2019s response with human preferences through RLAIF (Reinforcement Learning from Artificial Intelligence Feedback). By learning from the response styles of real-world doctor-patient interactions, the fine-tuned model can engage with users in a human-like manner and significantly surpasses other models at a similar level across various metrics. MedChatZH [12] has developed a dialogue model specifically designed for Traditional Chinese Medicine, incorporating extensive Chinese medical literature for continual pre-training. After fine-tuning millions of question-answer data from the Internet and various Chinese hospitals, the model achieves state-of-the-art performance in the field of Chinese medicine. ChatLaw [15], targeting the legal domain, not only provides professional responses concerning legal knowledge but also acquires problem-solving abilities through training on multiple-choice question data. Furthermore, it employs a method combining vector database retrieval with keyword search, effectively reducing the hallucination in responses. EduChat [8] offers a range of functionalities, including open-ended question answering, paper assessment, and Socratic teaching, enhancing various skills through fine-tuning and the integration of tools. The model gains interdisciplinary knowledge through continual pre-training and strengthens its question-answering and instruction-following capabilities with large-scale instruction and open-domain dialogue datasets. 
FinGPT [16] adopts a data-centric approach, focusing on automated data management pipelines and lightweight adaptive technologies, establishing a comprehensive framework from data processing to feature engineering and application, while also enhancing the transparency of the overall framework. One of its strengths lies in its ability to integrate seamlessly with both open-source and closed-source large language models without the need for further training.
2.2 Discrete prompt engineering
Prompt engineering aims to guide large language models to fully leverage their potential through the meticulous design of prompts. Extensive research has demonstrated that well-crafted prompts can significantly improve the performance of large language models across various NLP tasks [27],[28]. Prompt engineering encompasses continuous prompt learning and discrete prompt optimization. Continuous prompt learning aims to adapt large language models to various tasks by incorporating learnable parameters within the prompts [29],[30]. However, continuous prompt learning typically requires access to the gradient vectors of the LLMs, which restricts its application in closed-source models that are accessed only through APIs. For discrete prompts, traditional methods often rely on meticulous manual design, which not only demands considerable human effort but also may not necessarily maximize the model's performance. Consequently, numerous methods for automatically generating optimal discrete prompts have been explored, leveraging the large model itself as an optimizer to autonomously enhance its performance on NLP tasks. Recently, several leading automated discrete prompt optimization frameworks have been proposed. EVOPROMPT [31] draws on the principles of evolutionary algorithms (EAs) to iteratively guide LLMs to generate new prompts through evolutionary operators. It does not require any gradient information from LLMs and can achieve a balance between exploration and exploitation. Experiments on nine datasets have shown that optimized prompts can significantly improve task performance. APE [32], inspired by program synthesis, represents discrete prompting optimization as [Figure: pipeline labels: Open-source Pre-trained Model, Course-oriented Chat Model, Factual Accuracy, User Satisfaction, Clarity, Condensability, Paragraphs, Reflection, Resample]"
16
+ }
title_10K/test_title_short_2405.04795v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04795v1",
3
+ "title": "Variational Schr\u00f6dinger Diffusion Models",
4
+ "abstract": "Schr\\\"odinger bridge (SB) has emerged as the go-to method for optimizing\ntransportation plans in diffusion models. However, SB requires estimating the\nintractable forward score functions, inevitably resulting in the costly\nimplicit training loss based on simulated trajectories. To improve the\nscalability while preserving efficient transportation plans, we leverage\nvariational inference to linearize the forward score functions (variational\nscores) of SB and restore simulation-free properties in training backward\nscores. We propose the variational Schr\\\"odinger diffusion model (VSDM), where\nthe forward process is a multivariate diffusion and the variational scores are\nadaptively optimized for efficient transport. Theoretically, we use stochastic\napproximation to prove the convergence of the variational scores and show the\nconvergence of the adaptively generated samples based on the optimal\nvariational scores. Empirically, we test the algorithm in simulated examples\nand observe that VSDM is efficient in generations of anisotropic shapes and\nyields straighter sample trajectories compared to the single-variate diffusion.\nWe also verify the scalability of the algorithm in real-world data and achieve\ncompetitive unconditional generation performance in CIFAR10 and conditional\ngeneration in time series modeling. Notably, VSDM no longer depends on warm-up\ninitializations and has become tuning-friendly in training large-scale\nexperiments.",
5
+ "authors": "Wei Deng, Weijian Luo, Yixin Tan, Marin Bilo\u0161, Yu Chen, Yuriy Nevmyvaka, Ricky T. Q. Chen",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Variational Schr\u00f6dinger Diffusion Models",
15
+ "main_content": "Introduction Diffusion models have showcased remarkable proficiency across diverse domains, spanning large-scale generations *Equal contribution (Alphabetical) 1Machine Learning Research, Morgan Stanley 2Peking University 3Duke University 4Meta AI (FAIR). Correspondence to: Wei Deng <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). of image, video, and audio, conditional text-to-image tasks, and adversarial defenses (Dhariwal & Nichol, 2022; Ho et al., 2022; Kong et al., 2021; Ramesh et al., 2022; Zhang et al., 2024). The key to their scalability lies in the closedform updates of the forward process, highlighting both statistical efficiency (Koehler et al., 2023) and diminished dependence on dimensionality (Vono et al., 2022). Nevertheless, diffusion models lack a distinct guarantee of optimal transport (OT) properties (Lavenant & Santambrogio, 2022) and often necessitate costly evaluations to generate higherfidelity content (Ho et al., 2020; Salimans & Ho, 2022; Lu et al., 2022; Xue et al., 2023; Luo, 2023). Alternatively, the Schr\u00a8 odinger bridge (SB) problem (L\u00b4 eonard, 2014; Chen & Georgiou, 2016; Pavon et al., 2021; Caluya & Halder, 2022; De Bortoli et al., 2021), initially rooted in quantum mechanics (L\u00b4 eonard, 2014), proposes optimizing a stochastic control objective through the use of forward-backward stochastic differential equations (FBSDEs) (Chen et al., 2022b). The alternating solver gives rise to the iterative proportional fitting (IPF) algorithm (Kullback, 1968; Ruschendorf, 1995) in dynamic optimal transport (Villani, 2003; Peyr\u00b4 e & Cuturi, 2019). Notably, the intractable forward score function plays a crucial role in providing theoretical guarantees in optimal transport (Chen et al., 2023c; Deng et al., 2024). However, it simultaneously sacrifices the simulation-free property and largely relies on warm-up checkpoints for conducting large-scale experiments (De Bortoli et al., 2021; Chen et al., 2022b). A natural follow-up question arises: Can we train diffusion models with efficient transport? To this end, we introduce the variational Schr\u00a8 odinger diffusion model (VSDM). Employing variational inference (Blei et al., 2017), we perform a locally linear approximation of the forward score function, and denote it by the variational score. The resulting linear forward stochastic differential equations (SDEs) naturally provide a closed-form update, significantly enhancing scalability. Compared to the singlevariate score-based generative model (SGM), VSDM is a multivariate diffusion (Singhal et al., 2023). Moreover, hyperparameters are adaptively optimized for more efficient transportation plans within the Schr\u00a8 odinger bridge framework (Chen et al., 2022b). 1 arXiv:2405.04795v1 [cs.LG] 8 May 2024 \fVariational Schr\u00a8 odinger Diffusion Models Theoretically, we leverage stochastic approximation (Robbins & Monro, 1951) to demonstrate the convergence of the variational score to the optimal local estimators. Although the global transport optimality is compromised, the notable simulation-free speed-ups in training the backward score render the algorithm particularly attractive for training various generation tasks from scratch. Additionally, the efficiency of simulation-based training for the linearized variational score significantly improves owing to computational advancements in convex optimization. 
We validate the strength of VSDM through simulations, achieving compelling performance on standard image generation tasks. Our contributions unfold in four key aspects: \u2022 We introduce the variational Schr\u00a8 odinger diffusion model (VSDM), a multivariate diffusion with optimal variational scores guided by optimal transport. Additionally, the training of backward scores is simulationfree and becomes much more scalable. \u2022 We study the convergence of the variational score using stochastic approximation (SA) theory, which can be further generalized to a class of state space diffusion models for future developments. \u2022 VSDM is effective in generating data of anisotropic shapes and motivates straighter transportation paths via the optimized transport. \u2022 VSDM achieves competitive unconditional generation on CIFAR10 and conditional generation in time series modeling without reliance on warm-up initializations. 2. Related Works Flow Matching and Beyond Lipman et al. (2023) utilized the McCann displacement interpolation (McCann, 1997) to train simulation-free CNFs to encourage straight trajectories. Consequently, Pooladian et al. (2023); Tong et al. (2023) proposed straightening by using minibatch optimal transport solutions. Similar ideas were achieved by Liu (2022); Liu et al. (2023) to iteratively rectify the interpolation path. Albergo & Vanden-Eijnden (2023); Albergo et al. (2023) developed the stochastic interpolant approach to unify both flow and diffusion models. However, \u201cstraighter\u201d transport maps may not imply optimal transportation plans in general and the couplings are still not effectively optimized. Dynamic Optimal Transport Finlay et al. (2020); Onken et al. (2021) introduced additional regularization through optimal transport to enforce straighter trajectories in CNFs and reduce the computational cost. De Bortoli et al. (2021); Chen et al. (2022b); Vargas et al. (2021) studied the dynamic Schr\u00a8 odinger bridge with guarantees in entropic optimal transport (EOT) (Chen et al., 2023c); Shi et al. (2023); Peluchetti (2023); Chen et al. (2023b) generalized bridge matching and flow matching based EOT and obtained smoother trajectories, however, scalability remains a significant concern for Schr\u00a8 odinger-based diffusions. 3. Preliminaries 3.1. Diffusion Models The score-based generative models (SGMs) (Ho et al., 2020; Song et al., 2021b) first employ a forward process (1a) to map data to an approximate Gaussian and subsequently reverse the process in Eq.(1b) to recover the data distribution. d\u2212 \u2192 x t = f t(\u2212 \u2192 x t)dt + p \u03b2td\u2212 \u2192 wt (1a) d\u2190 \u2212 x t = \u0002 f t(\u2190 \u2212 x t) \u2212\u03b2t\u2207log \u03c1t \u0000\u2190 \u2212 x t \u0001\u0003 dt + p \u03b2td\u2190 \u2212 wt, (1b) where \u2190 \u2212 x t, \u2212 \u2192 x t \u2208Rd; \u2212 \u2192 x 0 \u223c\u03c1data and \u2190 \u2212 x T \u223c\u03c1prior; f t denotes the vector field and is often set to 0 (a.k.a. VE-SDE) or linear in x (a.k.a. VP-SDE); \u03b2t > 0 is the time-varying scalar; \u2212 \u2192 wt is a forward Brownian motion from t \u2208[0, T] with \u03c1T \u2248\u03c1prior; \u2190 \u2212 wt is a backward Brownian motion from time T to 0. The marginal density \u03c1t of the forward process (1a) is essential for generating the data but remains inaccessible in practice due to intractable normalizing constants. 
Explicit Score Matching (ESM) Instead, the conditional score function \u2207log \u03c1t|0 (\u00b7) \u2261\u2207log \u03c1t \u0000\u00b7|\u2212 \u2192 x 0 \u0001 is estimated by minimizing a user-friendly ESM loss (weighted by \u03bb) between the score estimator st \u2261s\u03b8(\u00b7, t) and exact score (Song et al., 2021b) such that Et \u0002 \u03bbtE\u2212 \u2192 x 0E\u2212 \u2192 x t|\u2212 \u2192 x 0[\u2225st(\u2212 \u2192 x t) \u2212\u2207log \u03c1t|0 \u0000\u2212 \u2192 x t \u0001 \u22252 2] \u0003 . (2) Notably, both VPand VE-SDEs yield closed-form expressions for any \u2212 \u2192 x t given \u2212 \u2192 x 0 in the forward process (Song et al., 2021b), which is instrumental for the scalability of diffusion models in real-world large-scale generation tasks. Implicit Score Matching (ISM) By integration by parts, ESM is equivalent to the ISM loss (Hyv\u00a8 arinen, 2005; Huang et al., 2021; Luo et al., 2024b) and the evidence lower bound (ELBO) follows log \u03c10 (x0) \u2265E\u03c1T |0(\u00b7) \u0002 log \u03c1T |0 (xT ) \u0003 \u22121 2 Z T 0 E\u03c1t|0(\u00b7) h \u03b2t \u2225st\u22252 2 + 2\u2207\u00b7 (\u03b2tst \u2212f t) i dt. ISM is naturally connected to Song et al. (2020), which supports flexible marginals and nonlinear forward processes but becomes significantly less scalable compared to ESM. 3.2. Schr\u00a8 odinger Bridge The dynamic Schr\u00a8 odinger bridge aims to solve a full bridge inf P\u2208D(\u03c1data,\u03c1prior) KL(P|Q), (3) 2 \fVariational Schr\u00a8 odinger Diffusion Models where D(\u03c1data, \u03c1prior) is the family of path measures with marginals \u03c1data and \u03c1prior at t = 0 and t = T, respectively; Q is the prior process driven by dxt = f t(xt)dt+\u221a2\u03b2t\u03b5d\u2212 \u2192 wt. It also yields a stochastic control formulation (Chen et al., 2021; Pavon et al., 2021; Caluya & Halder, 2022). inf u\u2208U E \u001a Z T 0 1 2\u2225ut(\u2212 \u2192 x t)\u22252 2dt \u001b s.t. d\u2212 \u2192 x t = h f t(\u2212 \u2192 x ) + p \u03b2tut(\u2212 \u2192 x ) i dt + p 2\u03b2t\u03b5d\u2212 \u2192 wt (4) \u2212 \u2192 x 0 \u223c\u03c1data, \u2212 \u2192 x T \u223c\u03c1prior, where U is the family of controls. The expectation is taken w.r.t \u2212 \u2192 \u03c1 t(\u00b7), which denotes the PDF of the controlled diffusion (4); \u03b5 is the temperature of the diffusion and the regularizer in EOT (Chen et al., 2023c). Solving the underlying Hamilton\u2013Jacobi\u2013Bellman (HJB) equation and invoking the time reversal (Anderson, 1982) with \u03b5 = 1 2, Schr\u00a8 odinger system yields the desired forward-backward stochastic differential equations (FBSDEs) (Chen et al., 2022b): d\u2212 \u2192 x t = h f t(\u2212 \u2192 x t) + \u03b2t\u2207log \u2212 \u2192 \u03c8 t(\u2212 \u2192 x t) i dt + p \u03b2td\u2212 \u2192 wt, (5a) d\u2190 \u2212 x t = \u0002 f t(\u2190 \u2212 x t) \u2212\u03b2t\u2207log \u2190 \u2212 \u03c6 t(\u2190 \u2212 x t) \u0003 dt + p \u03b2td\u2190 \u2212 wt, (5b) where \u2212 \u2192 \u03c8 t(\u00b7)\u2190 \u2212 \u03c6 t(\u00b7) = \u2212 \u2192 \u03c1 t(\u00b7), \u03c10(\u00b7) \u223c\u03c1data, \u03c1T (\u00b7) \u223c\u03c1prior. To solve the optimal controls (scores) (\u2207log \u2212 \u2192 \u03c8 , \u2207log \u2190 \u2212 \u03c6 ), a standard tool is to leverage the nonlinear Feynman-Kac formula (Ma & Yong, 2007; Karatzas & Shreve, 1998; Chen et al., 2022b) to learn a stochastic representation. Proposition 1 (Nonlinear Feynman-Kac representation). 
Assume Lipschitz smoothness and linear growth condition on the drift f and diffusion g in the FB-SDE (5). Define \u2212 \u2192 y t = log \u2212 \u2192 \u03c8 t(xt) and \u2190 \u2212 y t = log \u2190 \u2212 \u03c6 t(xt). Then the stochastic representation follows \u2190 \u2212 y s = E \u0014 \u2190 \u2212 y T \u2212 Z T s \u0393\u03b6(\u2190 \u2212 z t; \u2212 \u2192 z t)dt \f \f \f \f\u2212 \u2192 x s = xs \u0015 , \u0393\u03b6(\u2190 \u2212 z t; \u2212 \u2192 z t)\u22611 2\u2225\u2190 \u2212 z t\u22252 2 + \u2207\u00b7 \u0000p \u03b2t\u2190 \u2212 z t \u2212f t \u0001 + \u03b6\u27e8\u2190 \u2212 z t, \u2212 \u2192 z t\u27e9, (6) where \u2212 \u2192 z t = \u221a\u03b2t\u2207\u2212 \u2192 y t, \u2190 \u2212 z t = \u221a\u03b2t\u2207\u2190 \u2212 y t, and \u03b6 = 1. 4. Variational Schr\u00a8 odinger Diffusion Models SB outperforms SGMs in the theoretical potential of optimal transport and an intractable score function \u2207log \u2212 \u2192 \u03c8 t(xt) is exploited in the forward SDE for more efficient transportation plans. However, there is no free lunch in achieving such efficiency, and it comes with three notable downsides: \u2022 Solving \u2207log \u2212 \u2192 \u03c8 t in Eq.(5a) for optimal transport is prohibitively costly and may not be necessary (Marzouk et al., 2016; Liu et al., 2023). \u2022 The nonlinear diffusion no longer yields closed-form expression of \u2212 \u2192 x t given \u2212 \u2192 x 0 (Chen et al., 2022b). \u2022 The ISM loss is inevitable and the estimator suffers from a large variance issue (Hutchinson, 1989). 4.1. Variational Inference via Linear Approximation FB-SDEs naturally connect to the alternating-projection solver based on the IPF (a.k.a. Sinkhorn) algorithm, boiling down the full bridge (3) to a half-bridge solver (Pavon et al., 2021; De Bortoli et al., 2021; Vargas et al., 2021). With P1 given and k = 1, 2, ..., we have: P2k := arg min P\u2208D(\u03c1data, \u00b7) KL(P\u2225P2k\u22121), (7a) P2k+1 := arg min P\u2208D(\u00b7, \u03c1prior) KL(P\u2225P2k). (7b) More specifically, Chen et al. (2022b) proposed a neural network parameterization to model (\u2190 \u2212 z t, \u2212 \u2192 z t) using (\u2190 \u2212 z \u03b8 t , \u2212 \u2192 z \u03c9 t ), where \u03b8 and \u03c9 refer to the model parameters, respectively. Each stage of the half-bridge solver proposes to solve the models alternatingly as follows \u2190 \u2212 L (\u03b8) = \u2212 Z T 0 E\u2212 \u2192 x t\u223d(5a) \u0014 \u03931(\u2190 \u2212 z \u03b8 t ; \u2212 \u2192 z \u03c9 t )dt \f \f \f \f\u2212 \u2192 x 0 = x0 \u0015 (8a) \u2212 \u2192 L (\u03c9) = \u2212 Z T 0 E\u2190 \u2212 x t\u223d(5b) \u0014 \u03931(\u2212 \u2192 z \u03c9 t ; \u2190 \u2212 z \u03b8 t )dt \f \f \f \f\u2190 \u2212 x T = xT \u0015 , (8b) where \u03931 is defined in Eq.(6) and \u223ddenotes the approximate simulation parametrized by neural networks * However, solving the backward score in Eq.(8a) through simulations, akin to the ISM loss, is computationally demanding and affects the scalability in generative models. 
To motivate simulation-free property, we leverage variational inference (Blei et al., 2017) and study a linear approximation of the forward score \u2207log \u2212 \u2192 \u03c8 (x, t) \u2248Atx with f t(\u2212 \u2192 x t) \u2261\u22121 2\u03b2t\u2212 \u2192 x t, which ends up with the variational FB-SDE (VFB-SDE): d\u2212 \u2192 x t = \u0014 \u22121 2\u03b2t\u2212 \u2192 x t + \u03b2tAt\u2212 \u2192 x t \u0015 dt + p \u03b2td\u2212 \u2192 wt, (9a) d\u2190 \u2212 x t = \u0014 \u22121 2\u03b2t\u2190 \u2212 x t \u2212\u03b2t\u2207log \u2212 \u2192 \u03c1 t(\u2190 \u2212 x t) \u0015 dt + p \u03b2td\u2190 \u2212 wt, (9b) where t \u2208[0, T] and \u2207log \u2212 \u2192 \u03c1 t is the score function of (9a) and the conditional version is to be derived in Eq.(15). The half-bridge solver is restricted to a class of OU processes OU(\u03c1data, \u00b7) with the initial marginal \u03c1data. arg min P\u2208D(\u03c1data,\u00b7) KL(P\u2225P2k\u22121) \u21d2 arg min b P\u2208OU(\u03c1data,\u00b7) KL(b P\u2225P2k\u22121). *\u223c(resp. \u223d) denotes the exact (resp. parametrized) simulation. 3 \fVariational Schr\u00a8 odinger Diffusion Models By the mode-seeking property of the exclusive (reverse) KL divergence (Chan et al., 2022), we can expect the optimizer b P to be a local estimator of the nonlinear solution in (7a). Additionally, the loss function (8b) to learn the variational score At, where t \u2208[0, T], can be simplified to \u2212 \u2192 L (A) = \u2212 Z T 0 Ext\u223d(9b) \u0014 \u0393\u03b6(Atxt; \u2190 \u2212 z \u03b8 t )dt \f \f \f \f\u2190 \u2212 x T = xT \u0015 , (10) where \u0393\u03b6 is defined in Eq.(6). Since the structure property \u2212 \u2192 \u03c8 t\u2190 \u2212 \u03c6 t = \u2212 \u2192 \u03c1 t in Eq.(5) is compromised by the variational inference, we propose to tune \u03b6 in our experiments. 4.2. Closed-form Expression of Backward Score Assume a prior knowledge of At is given, we can rewrite the forward process (9a) in the VFB-SDE and derive a multivariate forward diffusion (Singhal et al., 2023): d\u2212 \u2192 x t = \u0014 \u22121 2\u03b2tI + \u03b2tAt \u0015 \u2212 \u2192 x tdt + p \u03b2td\u2212 \u2192 wt = \u22121 2Dt\u03b2t\u2212 \u2192 x tdt + p \u03b2td\u2212 \u2192 wt, (11) where Dt = I \u22122At \u2208Rd\u00d7d is a positive-definite matrix \u2020. Consider the multivariate OU process (11). The mean and covariance follow d\u00b5t|0 dt = \u22121 2\u03b2tDt\u00b5t|0 (12a) d\u03a3t|0 dt = \u22121 2\u03b2t \u0000Dt\u03a3t|0 + \u03a3t|0D\u22ba t \u0001 + \u03b2tI. (12b) Solving the differential equations with the help of integration factors, the mean process follows \u00b5t|0 = e\u22121 2 [\u03b2D]tx0, (13) where [\u03b2D]t = R t 0 \u03b2sDsds. By matrix decomposition \u03a3t|0 = CtH\u22121 t (S\u00a8 arkk\u00a8 a & Solin, 2019), the covariance process follows that: \u0012Ct Ht \u0013 = exp \" \u0012\u22121 2[\u03b2D]t [\u03b2I]t 0 1 2[\u03b2D\u22ba]t \u0013 # \u0012\u03a30 I \u0013 , (14) where the above matrix exponential can be easily computed through modern computing libraries. Further, to avoid computing the expensive matrix exponential for highdimensional problems, we can adopt a diagonal and timeinvariant Dt. Suppose \u03a3t|0 has the Cholesky decomposition \u03a3t|0 = LtL\u22ba t for some lower-triangular matrix Lt. We can have a closed-form update that resembles the SGM. \u2212 \u2192 x t = \u00b5t|0 + Lt\u03f5, \u2020Dt = \u22122At \u2208Rd\u00d7d when the forward SDE is VE-SDE. 
where \u00b5t|0 is defined in Eq.(13) and \u03f5 is the standard ddimensional Gaussian vector. The score function follows \u2207log \u2212 \u2192 \u03c1 t|0(\u2212 \u2192 x t) = \u22121 2\u2207[(\u2212 \u2192 x t \u2212\u00b5t)\u22ba\u03a3\u22121 t|0(\u2212 \u2192 x t \u2212\u00b5t)] = \u2212\u03a3\u22121 t|0(\u2212 \u2192 x t \u2212\u00b5t) (15) = \u2212L\u2212\u22ba t L\u22121 t Lt\u03f5 := \u2212L\u2212\u22ba t \u03f5. Invoking the ESM loss function in Eq.(2), we can learn the score function \u2207log \u2212 \u2192 \u03c1 t|0(\u2212 \u2192 x t|\u2212 \u2192 x 0) using a neural network parametrization st(\u00b7) and optimize the loss function: \u2207A\u2225L\u2212\u22ba t \u03f5 \u2212st(xt)\u22252 2. (16) One may further consider preconditioning techniques (Karras et al., 2022) or variance reduction (Singhal et al., 2023) to stabilize training and accelerate training speed. Speed-ups via time-invariant and diagonal Dt If we parametrize Dt as a time-invariant and diagonal positivedefinite matrix, the formula (14) has simpler explicit expressions that do not require calling matrix exponential operators. We present such a result in Corollary 1. For the image generation experiment in Section 7.3, we use such a diagonal parametrization when implementing the VSDM. Corollary 1. If Dt = \u039b := diag(\u03bb), where \u03bbi \u22650, \u22001 \u2264 i \u2264d. If we denote the \u03c32 t := R t 0 \u03b2sds, then matrices Ct and Ht has simpler expressions with Ct = \u039b\u22121\b exp(1 2\u03c32 t \u039b) \u2212exp(\u22121 2\u03c32 t \u039b) \t Ht = exp(1 2\u03c32 t \u039b), which leads to CtH\u22121 t = \u039b\u22121\b I \u2212exp(\u2212\u03c32 t \u039b) \t . As a result, the corresponding forward transition writes \u00b5t|0 = exp(\u22121 2\u03c32 t \u039b)x0, Lt = \u039b\u22121 2 q I \u2212exp(\u2212\u03c32 t \u039b). In Corrolary 1 detailed in Appendix A, since the matrix \u039b = diag(\u03bb) is diagonal and time-invariant, the matrix exponential and square root can be directly calculated elementwise on each diagonal elements \u03bbi independently. 4.2.1. BACKWARD SDE Taking the time reversal (Anderson, 1982) of the forward multivariate OU process (11), the backward SDE satisfies d\u2190 \u2212 x t = (\u22121 2Dt\u03b2t\u2190 \u2212 x t \u2212\u03b2tst(\u2190 \u2212 x t))dt + p \u03b2td\u2190 \u2212 wt. (17) Notably, with a general PD matrix Dt, the prior distribution follows that xT \u223cN(0, \u03a3T |0)\u2021. We also note that the prior is now limited to Gaussian distributions, which is not a general bridge anymore. \u2021See the Remark on the selection of \u03c1prior in section B.1. 4 \fVariational Schr\u00a8 odinger Diffusion Models 4.2.2. PROBABILITY FLOW ODE We can follow Song et al. (2021b) and obtain the deterministic process directly: d\u2190 \u2212 x t = \u0012 \u22121 2Dt\u03b2t\u2190 \u2212 x t \u22121 2\u03b2tst(\u2190 \u2212 x t) \u0013 dt, (18) where xT \u223cN(0, \u03a3T |0) and the sample trajectories follow the same marginal densities \u2212 \u2192 \u03c1 t(xt) as in the SDE. 4.3. Adaptive Diffusion via Stochastic Approximation Our major goal is to generate high-fidelity data with efficient transportation plans based on the optimal A\u22c6 t in the forward process (11). However, the optimal A\u22c6 t is not known a priori. To tackle this issue, we leverage stochastic approximation (SA) (Robbins & Monro, 1951; Benveniste et al., 1990) to adaptively optimize the variational score A(k) t through optimal transport and simulate the backward trajectories. 
(1) Simulate backward trajectoriest {\u2190 \u2212 x (k+1) nh }N\u22121 n=0 via the Euler\u2013Maruyama (EM) scheme of the backward process (17) with a learning rate h. (2) Optimize variational scores \b A(k) nh }N\u22121 n=0 : A(k+1) nh = A(k) nh \u2212\u03b7k+1\u2207\u2212 \u2192 L nh(A(k) nh ; \u2190 \u2212 x (k+1) nh ), where \u2207\u2212 \u2192 L nh(A(k) nh ; \u2190 \u2212 x (k+1) nh ) is the loss function (10) at time nh and is known as the random field. We expect that the simulation of backward trajectories {\u2190 \u2212 x (k+1) nh }N\u22121 n=0 given s(k+1) nh helps the optimization of A(k+1) nh and the optimized A(k+1) nh in turn contributes to a more efficient transportation plan for estimating s(k+2) nh and simulating the backward trajectories {\u2190 \u2212 x (k+2) nh }N\u22121 n=0 . Trajectory Averaging The stochastic approximation algorithm is a standard framework to study adaptive sampling algorithms (Liang et al., 2007). Moreover, the formulation suggests to stabilize the trajectories (Polyak & Juditsky, 1992) with averaged parameters A (k) nh as follows A (k) nh = k X i=1 A(i) nh = \u0012 1 \u22121 k \u0013 A (k\u22121) nh + 1 k A(k) nh , where A (k) nh is known to be an asymptotically efficient (optimal) estimator (Polyak & Juditsky, 1992) in the local state space A by assumption A1. Exponential Moving Average (EMA) Despite guarantees in convex scenarios, the parameter space differs tremendously in different surfaces in non-convex state space A. Empirically, if we want to exploit information from multiple modes, a standard extension is to employ the EMA technique (Trivedi & Kondor, 2017): A (k) nh = (1 \u2212\u03b7)A (k\u22121) nh + \u03b7A(k) nh , where \u03b7 \u2208(0, 1). The EMA techniques are widely used empirically in diffusion models and Schr\u00a8 odinger bridge (Song & Ermon, 2020; De Bortoli et al., 2021; Chen et al., 2022b) to avoid oscillating trajectories. Now we are ready to present our methodology in Algorithm 1. Computational Cost Regarding the wall-clock computational time: i) training (linear) variational scores, albeit in a simulation-based manner, becomes significantly faster than estimating nonlinear forward scores in Schr\u00a8 odinger bridge; ii) the variational parametrization greatly reduced the number of model parameters, which yields a muchreduced variance in the Hutchinson\u2019s estimator (Hutchinson, 1989); iii) since we don\u2019t need to update At as often as the backward score model, we can further amortize the training of At. In the simulation example in Figure.9(b), VSDM is only 10% slower than the SGM with the same training complexity of backward scores while still maintaining efficient convergence of variational scores. 5. Convergence of Stochastic Approximation In this section, we study the convergence of A(k) t to the optimal A\u22c6 t , where t \u2208[0, T] \u00a7. The primary objective is to show the iterates (19) follow the trajectories of the dynamical system asymptotically: dAt = \u2207\u2212 \u2192 L t(At)ds, (20) where dAt ds = lim\u03b7\u21920 A(k+1) t \u2212A(k) t \u03b7 and \u2207\u2212 \u2192 L t(\u00b7) is the mean field at time t: \u2207\u2212 \u2192 L t(At) = Z X \u2207\u2212 \u2192 L t(At; \u2190 \u2212 x (\u00b7) t )\u2190 \u2212 \u03c1 t(d\u2190 \u2212 x (\u00b7) t ), (21) where X denotes the state space of data x and \u2207\u2212 \u2192 L t denotes the gradient w.r.t. At; \u2190 \u2212 \u03c1 t is the distribution of the continuous-time interpolation of the discretized backward SDE (22) from t = T to 0. 
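As a schematic illustration of the stochastic-approximation update in Eq. (19) together with the EMA stabilization described above, here is a minimal numpy sketch; the loss gradient and the backward sample are stubs, so it only demonstrates the update rules, not Algorithm 1 itself.

```python
import numpy as np

def sa_step_with_ema(A, A_ema, grad_fn, x_backward, eta_k, ema_rate=0.01):
    """One stochastic-approximation update of the variational score A_t at a fixed
    time grid point, followed by an exponential-moving-average update:
        A^{(k+1)} = A^{(k)} - eta_{k+1} * grad L(A^{(k)}; x^{(k+1)})
        A_ema     = (1 - ema_rate) * A_ema + ema_rate * A^{(k+1)}
    """
    A_new = A - eta_k * grad_fn(A, x_backward)
    A_ema = (1.0 - ema_rate) * A_ema + ema_rate * A_new
    return A_new, A_ema

# Stub gradient of the loss in Eq. (10); in practice this comes from the simulated
# backward trajectories and the learned backward score.
def toy_grad(A, x):
    return A - 0.1 * np.outer(x, x) / x.size

d = 4
A = np.zeros((d, d))
A_ema = np.zeros((d, d))
rng = np.random.default_rng(0)
for k in range(1, 101):
    x_backward = rng.standard_normal(d)      # stand-in for a sampled backward state
    A, A_ema = sa_step_with_ema(A, A_ema, toy_grad, x_backward, eta_k=1.0 / k)
```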
5. Convergence of Stochastic Approximation

In this section, we study the convergence of $A^{(k)}_t$ to the optimal $A^\star_t$ for $t\in[0,T]$.§ The primary objective is to show that the iterates (19) asymptotically follow the trajectories of the dynamical system
\[
\mathrm{d}A_t = \nabla\overrightarrow{L}_t(A_t)\,\mathrm{d}s, \quad (20)
\]
where $\frac{\mathrm{d}A_t}{\mathrm{d}s} = \lim_{\eta\to 0}\frac{A^{(k+1)}_t - A^{(k)}_t}{\eta}$ and $\nabla\overrightarrow{L}_t(\cdot)$ is the mean field at time $t$:
\[
\nabla\overrightarrow{L}_t(A_t) = \int_{\mathcal{X}} \nabla\overrightarrow{L}_t\big(A_t;\overleftarrow{x}^{(\cdot)}_t\big)\,\overleftarrow{\rho}_t\big(\mathrm{d}\overleftarrow{x}^{(\cdot)}_t\big), \quad (21)
\]
where $\mathcal{X}$ denotes the state space of the data $x$, $\nabla\overrightarrow{L}_t$ denotes the gradient w.r.t. $A_t$, and $\overleftarrow{\rho}_t$ is the distribution of the continuous-time interpolation of the discretized backward SDE (22) from $t=T$ to $0$.

§ We slightly abuse the notation and generalize $A^{(k)}_{nh}$ to $A^{(k)}_t$.

We denote by $A^\star_t$ one of the solutions of the mean-field equation $\nabla\overrightarrow{L}_t(A^\star_t)=0$; the aim is to find such an optimal solution. However, the equilibrium is not unique in general nonlinear dynamical systems. To tackle this issue, we focus our analysis on a neighborhood $\Theta$ of the equilibrium by assumption A1. After running sufficiently many iterations with a small enough step size $\eta_k$, suppose $A^{(k)}_t\in\Theta$ lies near one equilibrium $A^\star_t$ (out of all equilibria); then, by an induction argument, the iterates tend to remain trapped in the same region, as shown in Eq. (32), which yields convergence to that equilibrium $A^\star_t$. We also present the variational gap of the (sub-)optimal transport and show that our transport is more efficient than diffusion models with Gaussian marginals. Next, we outline informal assumptions and sketch our main results, reserving the formal statements for interested readers in the appendix. We also formulate the optimization of the variational score $A_t$ via stochastic approximation in Algorithm 2 in the supplementary material.

Assumption A1 (Regularity). (Positive definiteness) For any $t\ge 0$ and $A_t\in\mathcal{A}$, $D_t = I - 2A_t$ is positive definite. (Local strong convexity) For any stable local minimum $A^\star_t$ with $\nabla\overrightarrow{L}_t(A^\star_t)=0$, there is a neighborhood $\Theta$ such that $A^\star_t\in\Theta\subset\mathcal{A}$ and $\overrightarrow{L}_t$ is strongly convex on $\Theta$.

By the mode-seeking property of the exclusive (reverse) KL divergence (Chan et al., 2022), we only make a mild assumption on a small neighborhood of the solution and expect convergence under proper regularity.

Assumption A2 (Lipschitz Score). For any $t\in[0,T]$, the score $\nabla\log\overrightarrow{\rho}_t$ is $L$-Lipschitz.
Assumption A3 (Second Moment Bound). The data distribution has a bounded second moment.

Assumption A4 (Score Estimation Error). The score estimation error is bounded in $L^2$ and quantified by $\epsilon_{\mathrm{score}}$.

We first use the multivariate diffusion to train our score estimators $\{s^{(k)}_{nh}\}_{n=0}^{N-1}$ via the loss function (16) based on the pre-specified $A^{(k)}_t$ at stage $k$. Similar in spirit to Chen et al. (2023a; 2022a), we can show in Theorem 1 that the samples generated with $\{s^{(k)}_{nh}\}_{n=0}^{N-1}$ are close in distribution to the ideal samples. The novelty lies in the extension from single-variate to multivariate diffusions.

Theorem 1 (Generation quality, informal). Assume assumptions A1-A4 hold with a fixed $A^{(k)}_t$. The generated data distribution is close to the data distribution $\rho_{\mathrm{data}}$ such that
\[
\mathrm{TV}\big(\overleftarrow{\rho}^{(k)}_0, \rho_{\mathrm{data}}\big) \lesssim \exp(-T) + \big(\sqrt{dh} + \epsilon_{\mathrm{score}}\big)\sqrt{T}.
\]

To show the convergence of $A^{(k)}_t$ to $A^\star_t$, the proof hinges on a stability condition under which the solution asymptotically tracks the equilibrium $A^\star_t$ of the mean field (20).

Lemma 2 (Local stability, informal). Assume assumptions A1 and A2 hold. For all $t\in[0,T]$ and all $A\in\Theta$, the solution satisfies the local stability condition
\[
\big\langle A - A^\star_t,\ \nabla\overrightarrow{L}_t(A)\big\rangle \gtrsim \|A - A^\star_t\|_2^2.
\]

The preceding result illustrates the convergence of the solution toward the equilibrium on average. The next assumption imposes a standard slow update of the SA process, which is standard for theoretical analysis but may not always be needed in empirical evaluations.

Assumption A5 (Step size). The step size $\{\eta_k\}_{k\in\mathbb{N}}$ is a positive, decreasing sequence with
\[
\eta_k\to 0, \qquad \sum_{k=1}^\infty \eta_k = +\infty, \qquad \sum_{k=1}^\infty \eta_k^2 < +\infty.
\]

Next, we use stochastic approximation theory to prove the convergence of $A^{(k)}_t$ to an equilibrium $A^\star_t$.

Theorem 2 (Convergence in $L^2$). Assume assumptions A1-A5 hold. The variational score $A^{(k)}_t$ converges to an equilibrium $A^\star_t$ in $L^2$ such that
\[
\mathbb{E}\big[\|A^{(k)}_t - A^\star_t\|_2^2\big] \le 2\eta_k,
\]
where the expectation is taken w.r.t. samples from $\overleftarrow{\rho}^{(k)}_t$.

Finally, we adapt Theorem 1 again to show in Theorem 3 that the adaptively generated samples are asymptotically close to the samples based on the optimal $A^\star_t$, which quantifies the quality of data generated from the more efficient transportation plans.

Theorem 3 (Generation quality of adaptive samples). Given assumptions A1-A5, the generated sample distribution at stage $k$ is close to the exact sample distribution based on the equilibrium $A^\star_t$ such that
\[
\mathrm{TV}\big(\overleftarrow{\rho}^{\star}_0, \rho_{\mathrm{data}}\big) \lesssim \exp(-T) + \big(\sqrt{dh} + \epsilon_{\mathrm{score}} + \sqrt{\eta_k}\big)\sqrt{T}.
\]

6. Variational Gap

Recall that the optimal and variational forward SDEs follow
\[
\mathrm{d}\overrightarrow{x}_t = \big[f_t(\overrightarrow{x}_t) + \beta_t\nabla\log\overrightarrow{\psi}_t(\overrightarrow{x}_t)\big]\mathrm{d}t + \sqrt{\beta_t}\,\mathrm{d}\overrightarrow{w}_t,
\]
\[
\mathrm{d}\overrightarrow{x}_t = \big[f_t(\overrightarrow{x}_t) + \beta_t A^{(k)}_t\overrightarrow{x}_t\big]\mathrm{d}t + \sqrt{\beta_t}\,\mathrm{d}\overrightarrow{w}_t,
\]
\[
\mathrm{d}\overrightarrow{x}_t = \big[f_t(\overrightarrow{x}_t) + \beta_t A^{\star}_t\overrightarrow{x}_t\big]\mathrm{d}t + \sqrt{\beta_t}\,\mathrm{d}\overrightarrow{w}_t,
\]
where we abuse the notation $\overrightarrow{x}_t$ for the sake of clarity; the three equations represent three different processes.
Despite the improved efficiency based on the ideal $A^\star_t$ compared to the vanilla choice $A_t\equiv 0$, the variational score inevitably yields a sub-optimal transport for general nonlinear transport problems. We denote the laws of the above processes by $\mathcal{L}$, $\mathcal{L}^{(k)}$, and $\mathcal{L}^\star$. To assess the disparity, we leverage the Girsanov theorem to study the variational gap.

Theorem 4 (Variational gap). Assume assumption A2 and Novikov's condition hold, and that $f_t$ and $\nabla\log\overrightarrow{\psi}_t$ are Lipschitz smooth and satisfy linear growth. The variational gap satisfies
\[
\mathrm{KL}(\mathcal{L}\,\|\,\mathcal{L}^\star) = \frac{1}{2}\int_0^T \mathbb{E}\Big[\beta_t\big\|A^\star_t\overrightarrow{x}_t - \nabla\log\overrightarrow{\psi}_t(\overrightarrow{x}_t)\big\|_2^2\Big]\mathrm{d}t,
\qquad
\mathrm{KL}(\mathcal{L}\,\|\,\mathcal{L}^{(k)}) \lesssim \eta_k + \mathrm{KL}(\mathcal{L}\,\|\,\mathcal{L}^\star).
\]

Connections to the Gaussian Schrödinger Bridge (GSB). When the data follow a Gaussian distribution, VSDM approximates the closed-form OT solution of the Schrödinger bridge (Janati et al., 2020; Bunne et al., 2023). We refer readers to Theorem 3 of Bunne et al. (2023) for the detailed transportation plans. Compared to the vanilla choice $A_t\equiv 0$, we can significantly reduce the variational gap $\mathrm{KL}(\mathcal{L}\,\|\,\mathcal{L}^\star)$ with a proper parametrization and sufficient training.

7. Empirical Studies

7.1. Comparison to the Gaussian Schrödinger Bridge

VSDM approximates the GSB (Bunne et al., 2023) when both marginals are Gaussian distributions. To evaluate the solutions, we run VSDM with a fixed $\beta_t\equiv 4$ in Eq. (25) of Song et al. (2021b) and use the same marginals to replicate the VP-SDE of the Gaussian SB with $\alpha_t\equiv 0$ and $c_t\equiv -2$ in Eq. (7) of Bunne et al. (2023). We train VSDM for 20 stages and randomly pick 256 samples for presentation. We compare the flow trajectories from both models and observe in Figure 1 that the ground-truth solution forms an almost linear path, while the VSDM sample trajectories exhibit a consistent alignment with the trajectories from the Gaussian SB. We attribute the bias predominantly to score estimation and numerical discretization.

Figure 1. Flow trajectories: (a) Gaussian SB vs. (b) VSDM.

7.2. Synthetic Data

We test our variational Schrödinger diffusion models (VSDMs) on two synthetic datasets: spiral and checkerboard (detailed in Section D.2.1). We include SGMs as the baseline models and aim to show the strength of VSDMs on general shapes with straighter trajectories. To that end, we stretch the Y-axis of the spiral data by 8 times and the X-axis of the checkerboard data by 6 times, and denote the resulting datasets by spiral-8Y and checkerboard-6X, respectively. We adopt a monotone increasing schedule $\{\beta_{nh}\}_{n=0}^{N-1}$ similar to Song et al. (2021b) and denote by $\beta_{\min}$ and $\beta_{\max}$ its minimum and maximum. We fix $\zeta = 0.75$ and $\beta_{\min} = 0.1$ and focus on the study with different $\beta_{\max}$. We find that SGMs work quite well with $\beta_{\max}=10$ (SGM-10) on standard isotropic shapes. However, on spiral-8Y, SGM-10 struggles to recover the boundary regions, as shown in Figure 2 (top).

Generations of Anisotropic Shapes. To illustrate the effectiveness of our approach, Figure 2 (bottom) shows that VSDM-10 accurately reconstructs the edges of the spiral and generates high-quality samples.
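For readers who want to reproduce the flavor of this setup, the sketch below builds an anisotropic spiral and a monotone $\beta$ schedule. The spiral generator and the linear ramp are generic stand-ins (the exact data construction lives in Appendix D.2.1, which is not reproduced here), so treat every name and constant as an illustrative assumption.

```python
import numpy as np

def make_spiral(n, stretch_y=8.0, noise=0.1, rng=None):
    # Generic 2-D spiral with the Y-axis stretched, standing in for "spiral-8Y".
    rng = np.random.default_rng() if rng is None else rng
    theta = 3.0 * np.pi * rng.uniform(0.3, 1.0, size=n)
    x = theta * np.cos(theta) + noise * rng.standard_normal(n)
    y = theta * np.sin(theta) + noise * rng.standard_normal(n)
    data = np.stack([x, y], axis=1) / theta.max()   # roughly normalize to O(1) scale
    data[:, 1] *= stretch_y                         # anisotropic stretch on the Y-axis
    return data

def beta_schedule(N, beta_min=0.1, beta_max=10.0):
    # Monotone increasing {beta_nh}; a linear ramp is one simple choice.
    return np.linspace(beta_min, beta_max, N)

data = make_spiral(2000, stretch_y=8.0, rng=np.random.default_rng(0))
betas = beta_schedule(N=100, beta_max=10.0)   # the "SGM-10" / "VSDM-10" setting
```

The point of the stretched axes is only to create strongly anisotropic marginals, which is the regime where the learned diagonal $\Lambda$ can assign different diffusion scales per coordinate.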
Figure 2. Variational Schrödinger diffusion models (VSDMs, bottom row) vs. SGMs (top row) with the same hyperparameters ($\beta_{\max}=10$); panels show samples at $t = 0.00,\ 0.33,\ 0.67,\ 1.00$.

Straighter Trajectories. SGM-10 fails to fully generate the anisotropic spiral-8Y, and increasing $\beta_{\max}$ to 20 or 30 (SGM-20 and SGM-30) significantly alleviates this issue. However, we observe that excessive $\beta_{\max}$ values in SGMs compromise straightness and lead to inefficient transport, especially along the X-axis of spiral-8Y.

Figure 3. Probability-flow ODE trajectories via SGMs and VSDM: (a) SGM-10, (b) SGM-20, (c) SGM-30, (d) VSDM-10. The SGM with $\beta_{\max}=10$ is denoted SGM-10 for convenience.

Instead of setting an excessive $\beta_{\max}$ on both axes, our VSDM-10, by contrast, proposes conservative diffusion scales on the X-axis of spiral-8Y and explores more along the Y-axis. As a result, we obtain around a 40% improvement in straightness, reported in Figure 3 and Table 4. Additional insights into a similar analysis of the checkerboard dataset, convergence analysis, computational time, assessments of straightness, and evaluations with smaller numbers of function evaluations (NFEs) can be found in Appendix D.2.

7.3. Image Data Modeling

Experiment Setup. In this experiment, we evaluate the performance of VSDM on image modeling tasks. We choose the CIFAR10 dataset as representative image data to demonstrate the scalability of the proposed VSDM for generative modeling of high-dimensional distributions. We refer to the code base of FB-SDE (Chen et al., 2022b) and use the same forward diffusion process as the EDM model (Karras et al., 2022). Since VSDM training alternates between forward and backward updates, we build our implementation on the open-source diffusion distillation code base (Luo et al., 2024a)¶, which provides a high-quality empirical implementation of alternating training with the EDM model on CIFAR10. To keep the VSDM algorithm stable, we simplify the matrix $D_t$ to be diagonal with learnable diagonal elements, the case introduced in Corollary 1. We train the VSDM model from scratch on two NVIDIA A100-80G GPUs for two days and generate images from the trained model with the Euler–Maruyama numerical solver using 200 discretization steps.

¶ See the code at https://github.com/pkulwj1994/diff_instruct

Figure 4. Unconditional generated samples from VSDM on CIFAR10 (32×32 resolution), trained from scratch.

Performances. We measure generative performance with the Fréchet Inception Distance (FID; Heusel et al., 2017; lower is better), a widely used metric for evaluating generative models. Table 2 summarizes the FID values of VSDM along with other optimal-transport-based and score-based generative models on CIFAR10 (unconditional, without labels). VSDM outperforms the other optimal-transport-based models with an FID of 2.28, demonstrating that VSDM scales to modeling high-dimensional distributions. Figure 4 shows non-cherry-picked unconditional samples from VSDM trained on CIFAR10.

Convergence Speed. To demonstrate the convergence speed of VSDM along the training process, we record the FID values in Table 1 for a training trial with no warm-up on CIFAR10 (unconditional). We use a batch size of 256 and a learning rate of 1e-4, and sample with the 2nd-order Heun numerical solver. The results show that VSDM converges smoothly.
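Before turning to the quantitative results in Tables 1 and 2, here is a minimal NumPy sketch of the kind of 2nd-order Heun integrator mentioned above, applied to the probability-flow ODE (18) with a diagonal, time-invariant variational matrix. The inputs (`score_fns`, `A`, `beta`, `h`) are illustrative placeholders; the actual experiments use the EDM pipeline rather than this bare-bones integrator.

```python
import numpy as np

def heun_probability_flow(score_fns, A, beta, h, x_T):
    # Heun (predictor-corrector) integration of the probability-flow ODE (18),
    # run backward from t = Nh toward 0; score_fns[n] approximates the score at time nh,
    # and D = I - 2A is assumed diagonal via the vector A.
    def drift(x, n):
        return -0.5 * beta[n] * (1.0 - 2.0 * A) * x - 0.5 * beta[n] * score_fns[n](x)

    x = x_T.copy()
    for n in reversed(range(1, len(score_fns))):
        d_cur = drift(x, n)
        x_pred = x - h * d_cur                      # Euler predictor toward time (n-1)h
        d_next = drift(x_pred, n - 1)
        x = x - 0.5 * h * (d_cur + d_next)          # trapezoidal corrector (Heun's method)
    return x

# Toy check with a standard-normal "score" (score(x) = -x) and A = 0:
# the drift vanishes identically, so a standard-normal draw is returned unchanged.
N, d = 200, 2
fns = [lambda x: -x] * N
x0 = heun_probability_flow(fns, A=np.zeros(d), beta=np.full(N, 2.0), h=1.0 / N,
                           x_T=np.random.default_rng(0).standard_normal(d))
```

The Heun corrector reuses the drift at the predicted point, so each step costs two score evaluations but typically tolerates far fewer steps than plain Euler for the same accuracy.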
Table 1. Convergence speed of FID values for VSDM on CIFAR10 (FID, lower is better; NFE = 35).

    K images   0        10K     20K    30K    40K    50K    100K   150K   200K   Converged
    FID        406.13   13.13   8.65   6.83   5.66   5.21   3.62   3.29   3.01   2.28

Table 2. CIFAR10 evaluation using sample quality (FID score). Our VSDM outperforms other optimal-transport baselines by a large margin.

    Class   Method                           FID (lower is better)
    OT      VSDM (ours)                      2.28
    OT      SB-FBSDE (Chen et al., 2022b)    3.01
    OT      DOT (Tanaka, 2019)               15.78
    OT      DGflow (Ansari et al., 2020)     9.63
    SGMs    SDE (Song et al., 2021b)         2.92
    SGMs    ScoreFlow (Song et al., 2021a)   5.7
    SGMs    VDM (Kingma et al., 2021)        4.00
    SGMs    LSGM (Vahdat et al., 2021)       2.10
    SGMs    EDM (Karras et al., 2022)        1.97

7.4. Time Series Forecasting

We use multivariate probabilistic forecasting as a real-world conditional modeling task. Let $\{(t_1,x_1),\ldots,(t_n,x_n)\}$, $x\in\mathbb{R}^d$, denote a single multivariate time series. Given a dataset of such time series, we want to predict the next $P$ values $x_{n+1},\ldots,x_{n+P}$. In probabilistic modeling, we want to generate forecasts from the learned $p(x_{n+1:n+P}\,|\,x_{1:n})$. The usual approach is to use an encoder that represents a sequence $x_{1:i}$ with a fixed-size vector $h_i\in\mathbb{R}^h$ for every $i$, and then parameterize the output distribution $p(x_{i+1}\,|\,h_i)$. At inference time, we encode the history into $h_n$ and sample the next value from $p(x_{n+1}\,|\,h_n)$, then use $x_{n+1}$ to obtain the updated $h_{n+1}$, and repeat until we obtain $x_{n+P}$. In previous works, the output distribution has been specified with copulas (Salinas et al., 2019) and with denoising diffusion (Rasul et al., 2021). We augment our approach to allow conditional generation, which only requires extending the model with the conditioning vector $h_i$; for this we adopt a U-Net architecture and use an LSTM neural network as the sequence encoder. We use three real-world datasets, as described in Appendix D.3, and compare to the SGM and to the denoising diffusion approach of Rasul et al. (2021), which we refer to as DDPM. Table 3 shows that our method matches or outperforms the competitors. Figure 5 is a demo of conditional time series generation, and more details are presented in Figure 12 to demonstrate the quality of the forecasts.

Table 3. Forecasting results, CRPS-sum (lower is better).

    Method        Electricity     Exchange rate   Solar
    DDPM          0.026±0.007     0.012±0.001     0.506±0.058
    SGM           0.045±0.005     0.012±0.002     0.413±0.045
    VSDM (ours)   0.038±0.006     0.008±0.002     0.395±0.011

Figure 5. Example forecast on Electricity for 2 (out of 370) dimensions.
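As a small illustration of the autoregressive sampling loop described in Section 7.4, the sketch below rolls a conditional sampler forward $P$ steps. Here `encode` stands in for the LSTM encoder producing $h_i$ and `sample_next` for drawing from the conditional diffusion $p(x_{i+1}\,|\,h_i)$; both are hypothetical placeholders rather than the paper's U-Net/LSTM implementation.

```python
import numpy as np

def forecast(history, encode, sample_next, P, rng):
    # Autoregressive probabilistic forecasting: encode the history, sample the next value,
    # feed the sample back in, and repeat until P future steps have been drawn.
    history = list(history)
    samples = []
    for _ in range(P):
        h = encode(np.asarray(history))     # fixed-size summary h_i of x_{1:i}
        x_next = sample_next(h, rng)        # one draw from p(x_{i+1} | h_i)
        samples.append(x_next)
        history.append(x_next)
    return np.stack(samples)

# Dummy usage with stand-in components (mean of the history as the "encoding",
# a Gaussian around it as the "conditional sampler"):
rng = np.random.default_rng(0)
hist = rng.standard_normal((24, 3))                         # 24 past steps, 3 series
enc = lambda xs: xs.mean(axis=0)
samp = lambda h, r: h + 0.1 * r.standard_normal(h.shape)
path = forecast(hist, enc, samp, P=12, rng=rng)             # one 12-step sample path
```

Repeating the loop many times yields an ensemble of sample paths, from which forecast quantiles or the CRPS-sum values reported in Table 3 can be computed.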
8. Conclusions and Future Works

The Schrödinger bridge diffusion model offers a principled approach to solving optimal transport, but estimating the intractable forward score relies on implicit training through costly simulated trajectories. To address this scalability issue, we present the variational Schrödinger diffusion model (VSDM), which uses linear variational forward scores to enable simulation-free training of the backward score functions. The theoretical foundations leverage stochastic approximation theory, demonstrating the convergence of the variational scores to a local equilibrium and characterizing the variational gap in optimal transport. Empirically, VSDM shows strength in generating data with anisotropic shapes and yields the desired straighter transport paths, reducing the number of function evaluations. VSDM also scales to large image datasets without relying on warm-up initializations. In future research, we aim to explore critically damped (momentum) acceleration (Dockhorn et al., 2022) and Hessian approximations to develop an "ADAM"-style alternative for diffusion models.

9. Impact Statements

This paper proposes a principled approach to accelerating the training and sampling of generative models using optimal transport. This work can contribute to the development of text-to-image generation, artwork creation, and product design. However, it may also raise challenges around fake-content generation and pose threats to online privacy and security.

Acknowledgements

We would like to thank Valentin De Bortoli, Tianyang Hu, and the reviewers for their insightful suggestions."
16
+ }