XaiverZ committed on
Commit
2b5579a
·
1 Parent(s): 8325b8e
Files changed (39)
  1. intro_8K/test_introduction_short_2405.03606v1.json +18 -0
  2. intro_8K/test_introduction_short_2405.03690v2.json +16 -0
  3. intro_8K/test_introduction_short_2405.03894v1.json +17 -0
  4. intro_8K/test_introduction_short_2405.03958v1.json +18 -0
  5. intro_8K/test_introduction_short_2405.03962v1.json +17 -0
  6. intro_8K/test_introduction_short_2405.03989v2.json +16 -0
  7. intro_8K/test_introduction_short_2405.04003v1.json +17 -0
  8. intro_8K/test_introduction_short_2405.04233v1.json +17 -0
  9. intro_8K/test_introduction_short_2405.04272v1.json +18 -0
  10. intro_8K/test_introduction_short_2405.04356v1.json +16 -0
  11. intro_8K/test_introduction_short_2405.04370v1.json +16 -0
  12. intro_8K/test_introduction_short_2405.04403v1.json +17 -0
  13. intro_8K/test_introduction_short_2405.04483v1.json +16 -0
  14. intro_8K/test_introduction_short_2405.04534v1.json +16 -0
  15. intro_8K/test_introduction_short_2405.04674v1.json +0 -0
  16. intro_8K/test_introduction_short_2405.04682v1.json +18 -0
  17. intro_8K/test_introduction_short_2405.04700v1.json +19 -0
  18. intro_8K/test_introduction_short_2405.04781v1.json +16 -0
  19. intro_8K/test_introduction_short_2405.04795v1.json +16 -0
  20. intro_8K/test_introduction_short_2405.04834v1.json +16 -0
  21. intro_8K/test_introduction_short_2405.04925v1.json +16 -0
  22. intro_8K/test_introduction_short_2405.04940v1.json +16 -0
  23. intro_8K/test_introduction_short_2405.05007v1.json +17 -0
  24. intro_8K/test_introduction_short_2405.05216v1.json +16 -0
  25. intro_8K/test_introduction_short_2405.05252v1.json +20 -0
  26. intro_8K/test_introduction_short_2405.05259v1.json +17 -0
  27. intro_8K/test_introduction_short_2405.05330v1.json +19 -0
  28. intro_8K/test_introduction_short_2405.05380v1.json +16 -0
  29. intro_8K/test_introduction_short_2405.05433v1.json +17 -0
  30. intro_8K/test_introduction_short_2405.05615v1.json +18 -0
  31. intro_8K/test_introduction_short_2405.05691v1.json +17 -0
  32. intro_8K/test_introduction_short_2405.05707v1.json +16 -0
  33. intro_8K/test_introduction_short_2405.05791v1.json +16 -0
  34. intro_8K/test_introduction_short_2405.05846v1.json +17 -0
  35. intro_8K/test_introduction_short_2405.05852v1.json +21 -0
  36. intro_8K/test_introduction_short_2405.05945v1.json +16 -0
  37. intro_8K/test_introduction_short_2405.05949v1.json +16 -0
  38. intro_8K/test_introduction_short_2405.05953v1.json +16 -0
  39. intro_8K/test_introduction_short_2405.05959v1.json +18 -0
intro_8K/test_introduction_short_2405.03606v1.json ADDED
@@ -0,0 +1,18 @@
+ {
+ "url": "http://arxiv.org/abs/2405.03606v1",
+ "title": "Strang Splitting for Parametric Inference in Second-order Stochastic Differential Equations",
+ "abstract": "We address parameter estimation in second-order stochastic differential\nequations (SDEs), prevalent in physics, biology, and ecology. Second-order SDE\nis converted to a first-order system by introducing an auxiliary velocity\nvariable raising two main challenges. First, the system is hypoelliptic since\nthe noise affects only the velocity, making the Euler-Maruyama estimator\nill-conditioned. To overcome that, we propose an estimator based on the Strang\nsplitting scheme. Second, since the velocity is rarely observed we adjust the\nestimator for partial observations. We present four estimators for complete and\npartial observations, using full likelihood or only velocity marginal\nlikelihood. These estimators are intuitive, easy to implement, and\ncomputationally fast, and we prove their consistency and asymptotic normality.\nOur analysis demonstrates that using full likelihood with complete observations\nreduces the asymptotic variance of the diffusion estimator. With partial\nobservations, the asymptotic variance increases due to information loss but\nremains unaffected by the likelihood choice. However, a numerical study on the\nKramers oscillator reveals that using marginal likelihood for partial\nobservations yields less biased estimators. We apply our approach to\npaleoclimate data from the Greenland ice core and fit it to the Kramers\noscillator model, capturing transitions between metastable states reflecting\nobserved climatic conditions during glacial eras.",
+ "authors": "Predrag Pilipovic, Adeline Samson, Susanne Ditlevsen",
+ "published": "2024-05-06",
+ "updated": "2024-05-06",
+ "primary_cat": "stat.ME",
+ "cats": [
+ "stat.ME",
+ "math.ST",
+ "stat.TH"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Second-order stochastic differential equations (SDEs) are an effective instrument for modeling complex systems showcasing both deterministic and stochastic dynamics, which incorporate the second derivative of a variable - the acceleration. These models are extensively applied in many fields, including physics (Rosenblum and Pikovsky, 2003), molecular dynamics (Leimkuhler and Matthews, 2015), ecology (Johnson et al., 2008; Michelot and Blackwell, 2019), paleoclimate research (Ditlevsen et al., 2002), and neuroscience (Ziv et al., 1994; Jansen and Rit, 1995). arXiv:2405.03606v1 [stat.ME] 6 May 2024 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT The general form of a second-order SDE in Langevin form is given as follows: \u00a8 Xt = F(Xt, \u02d9 Xt, \u03b2) + \u03a3\u03bet. (1) Here, Xt \u2208Rd denotes the variable of interest, the dot indicates derivative with respect to time t, drift F represents the deterministic force, and \u03bet is a white noise representing the system\u2019s random perturbations around the deterministic force. We assume that \u03a3 is constant, that is the noise is additive. The main goal of this study is to estimate parameters in second-order SDEs. We first reformulate the d-dimensional second-order SDE (1) into a 2d-dimensional SDE in It\u00f4\u2019s form. We define an auxiliary velocity variable, and express the second-order SDE in terms of its position Xt and velocity Vt: dXt = Vt dt, X0 = x0, dVt = F (Xt, Vt; \u03b2) dt + \u03a3 dWt, V0 = v0, (2) where Wt is a standard Wiener process. We refer to Xt and Vt as the smooth and rough coordinates, respectively. A specific example of model (2) is F(x, v) = \u2212c(x, v)v \u2212\u2207U(x), for some function c(\u00b7) and potential U(\u00b7). Then, model (2) is called a stochastic damping Hamiltonian system. This system describes the motion of a particle subjected to potential, dissipative, and random forces (Wu, 2001). An example of a stochastic damping Hamiltonian system is the Kramers oscillator introduced in Section 2.1. Let Yt = (X\u22a4 t , V\u22a4 t )\u22a4, e F(x, v; \u03b2) = (v\u22a4, F(x, v; \u03b2)\u22a4)\u22a4and e \u03a3 = (0\u22a4, \u03a3\u22a4)\u22a4. Then (2) is formulated as dYt = e F (Yt; \u03b2) dt + e \u03a3 dWt, Y0 = y0. (3) The notation e over an object indicates that it is associated with process Yt. Specifically, the object is of dimension 2d or 2d \u00d7 2d. When it exists, the unique solution of (3) is called a diffusion or diffusion process. System (3) is usually not fully observed since the velocity Vt is not observable. Thus, our primary objective is to estimate the underlying drift parameter \u03b2 and the diffusion parameter \u03a3, based on discrete observations of either Yt (referred to as complete observation case), or only Xt (referred to as partial observation case). Diffusion Yt is said to be hypoelliptic since the matrix e \u03a3e \u03a3\u22a4= \u00140 0 0 \u03a3\u03a3\u22a4 \u0015 (4) is not of full rank, while Yt admits a smooth density. Thus, (2) is a subclass of a larger class of hypoelliptic diffusions. Parametric estimation for hypoelliptic diffusions is an active area of research. Ditlevsen and S\u00f8rensen (2004) studied discretely observed integrated diffusion processes. They proposed to use prediction-based estimating functions, which are suitable for non-Markovian processes and which do not require access to the unobserved component. 
They proved consistency and asymptotic normality of the estimators for N \u2192\u221e, but without any requirements on the sampling interval h. Certain moment conditions are needed to obtain results for fixed h, which are often difficult to fulfill for nonlinear drift functions. The estimator was applied to paleoclimate data in Ditlevsen et al. (2002), similar to the data we analyze in Section 5. Gloter (2006) also focused on parametric estimation for discretely observed integrated diffusion processes, introducing a contrast function using the Euler-Maruyama discretization. He studied the asymptotic properties as the sampling interval h \u21920 and the sample size N \u2192\u221e, under the so-called rapidly increasing experimental design Nh \u2192\u221e and Nh2 \u21920. To address the ill-conditioned contrast from the Euler-Maruyama discretization, he suggested using only the rough equations of the SDE. He proposed to recover the unobserved integrated component through the finite difference approximation (Xtk+1 \u2212Xtk)/h. This approximation makes the estimator biased and requires a correction factor of 3/2 in one of the terms of the contrast function for partial observations. Consequently, the correction increases the asymptotic variance of the estimator of the diffusion parameter. Samson and Thieullen (2012) expanded the ideas of (Gloter, 2006) and proved the results of (Gloter, 2006) in more general models. Similar to (Gloter, 2006), their focus was on contrasts using the Euler-Maruyama discretization limited to only the rough equations. Pokern et al. (2009) proposed an It\u00f4-Taylor expansion, adding a noise term of order h3/2 to the smooth component in the numerical scheme. They argued against the use of finite differences for approximating unobserved components. Instead, he suggested using the It\u00f4-Taylor expansion leading to non-degenerate conditionally Gaussian approximations of the transition density and using Markov Chain Monte Carlo (MCMC) Gibbs samplers for conditionally imputing missing components based on the observations. They found out that this approach resulted in a biased estimator of the drift parameter of the rough component. 2 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Ditlevsen and Samson (2019) focused on both filtering and inference methods for complete and partial observations. They proposed a contrast estimator based on the strong order 1.5 scheme (Kloeden and Platen, 1992), which incorporates noise of order h3/2 into the smooth component, similar to (Pokern et al., 2009). Moreover, they retained terms of order h2 in the mean, which removed the bias in the drift parameters noted in (Pokern et al., 2009). They proved consistency and asymptotic normality under complete observations, with the standard rapidly increasing experimental design Nh \u2192\u221eand Nh2 \u21920. They adopted an unconventional approach by using two separate contrast functions, resulting in marginal asymptotic results rather than a joint central limit theorem. The model was limited to a scalar smooth component and a diagonal diffusion coefficient matrix for the rough component. Melnykova (2020) developed a contrast estimator using local linearization (LL) (Ozaki, 1985; Shoji and Ozaki, 1998; Ozaki et al., 2000) and compared it to the least-squares estimator. 
She employed local linearization of the drift function, providing a non-degenerate conditional Gaussian discretization scheme, enabling the construction of a contrast estimator that achieves asymptotic normality under the standard conditions Nh \u2192\u221eand Nh2 \u21920. She proved a joint central limit theorem, bypassing the need for two separate contrasts as in Ditlevsen and Samson (2019). The models in Ditlevsen and Samson (2019) and Melnykova (2020) allow for parameters in the smooth component of the drift, in contrast to models based on second-order differential equations. Recent work by Gloter and Yoshida (2020, 2021) introduced adaptive and non-adaptive methods in hypoelliptic diffusion models, proving asymptotic normality in the complete observation regime. In line with this work, we briefly review their non-adaptive estimator. It is based on a higher-order It\u00f4-Taylor expansion that introduces additional Gaussian noise onto the smooth coordinates, accompanied by an appropriate higher-order mean approximation of the rough coordinates. The resulting estimator was later termed the local Gaussian (LG), which should be differentiated from LL. The LG estimator can be viewed as an extension of the estimator proposed in Ditlevsen and Samson (2019), with fewer restrictions on the class of models. Gloter and Yoshida (2020, 2021) found that using the full SDE to create a contrast reduces the asymptotic variance of the estimator of the diffusion parameter compared to methods using only rough coordinates in the case of complete observations. The most recent contributions are Iguchi et al. (2023a,b); Iguchi and Beskos (2023), building on the foundation of the LG estimator and focusing on high-frequency regimes addressing limitations in earlier methods. Iguchi et al. (2023b) presented a new closed-form contrast estimator for hypoelliptic SDEs (denoted as Hypo-I) based on Edgeworth-type density expansion and Malliavin calculus that achieves asymptotic normality under the less restrictive condition of Nh3 \u21920. Iguchi et al. (2023a) focused on a highly degenerate class of SDEs (denoted as Hypo-II) where smooth coordinates split into further sub-groups and proposed estimators for both complete and partial observation settings. Iguchi and Beskos (2023) further refined the conditions for estimators asymptotic normality for both Hypo-I and Hypo-II under a weak design Nhp \u21920, for p \u22652. The existing methods are generally based on approximations with varying degrees of refinements to correct for possible nonlinearities. This implies that they quickly degrade for highly nonlinear models if the step size is increased. In particular, this is the case for Hamiltonian systems. Instead, we propose to use splitting schemes, more precisely the Strang splitting scheme. Splitting schemes are established techniques initially developed for solving ordinary differential equations (ODEs) and have proven to be effective also for SDEs (Ableidinger et al., 2017; Buckwar et al., 2022; Pilipovic et al., 2024). These schemes yield accurate results in many practical applications since they incorporate nonlinearities in their construction. This makes them particularly suitable for second-order SDEs, where they have been widely used. 
Early work in dissipative particle dynamics (Shardlow, 2003; Serrano et al., 2006), applications to molecular dynamics (Vanden-Eijnden and Ciccotti, 2006; Melchionna, 2007; Leimkuhler and Matthews, 2015) and studies on internal particles (Pavliotis et al., 2009) all highlight the scheme\u2019s versatility. Burrage et al. (2007), Bou-Rabee and Owhadi (2010), and Abdulle et al. (2015) focused on the long-run statistical properties such as invariant measures. Bou-Rabee (2017); Br\u00e9hier and Gouden\u00e8ge (2019) and Adams et al. (2022) used splitting schemes for stochastic partial differential equations (SPDEs). Despite the extensive use of splitting schemes in different areas, statistical applications have been lacking. We have recently proposed statistical estimators for elliptic SDEs (Pilipovic et al., 2024). The straightforward and intuitive schemes lead to robust, easy-to-implement estimators, offering an advantage over more numerically intensive and less user-friendly state-of-the-art methods. We use the Strang splitting scheme to approximate the transition density between two consecutive observations and derive the pseudo-likelihood function since the exact likelihood function is often unknown or intractable. Then, to estimate parameters, we employ maximum likelihood estimation (MLE). However, two specific statistical problems arise due to hypoellipticity and partial observations. First, hypoellipticity leads to degenerate Euler-Maruyama transition schemes, which can be addressed by constructing the pseudo-likelihood solely from the rough equations of the SDE, referred to as the rough likelihood hereafter. The 3 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Strang splitting technique enables the estimator to incorporate both smooth and rough components (referred to as the full likelihood). It is also possible to construct Strang splitting estimators using only the rough likelihood, raising the question of which estimator performs better. Our results are in line with Gloter and Yoshida (2020, 2021) in the complete observation setting, where we find that using the full likelihood reduces the asymptotic variance of the diffusion estimator. We found the same results in the simulation study for the LL estimator proposed by Melnykova (2020). Second, we suggest to treat the unobserved velocity by approximating it using finite difference methods. While Gloter (2006) and Samson and Thieullen (2012) exclusively use forward differences, we investigate also central and backward differences. The forward difference approach leads to a biased estimator unless it is corrected. One of the main contributions of this work is finding suitable corrections of the pseudo-likelihoods for different finite difference approximations such that the Strang estimators are asymptotically unbiased. This also ensures consistency of the diffusion parameter estimator, at the cost of increasing its asymptotic variance. When only partial observations are available, we explore the impact of using the full likelihood versus the rough likelihood and how different finite differentiation approximations influence the parametric inference. We find that the choice of likelihood does not affect the asymptotic variance of the estimator. However, our simulation study on the Kramers oscillator suggests that using the full likelihood in finite sample setups introduce more bias than using only the rough marginal likelihood, which is the opposite of the complete observation setting. 
Finally, we analyze a paleoclimate ice core dataset from Greenland using a second-order SDE. The main contributions of this paper are: 1. We extend the Strang splitting estimator of (Pilipovic et al., 2024) to hypoelliptic models given by second-order SDEs, including appropriate correction factors to obtain consistency. 2. When complete observations are available, we show that the asymptotic variance of the estimator of the diffusion parameter is smaller when maximizing the full likelihood. In contrast, for partial observations, we show that the asymptotic variance remains unchanged regardless of using the full or marginal likelihood of the rough coordinates. 3. We discuss the influence on the statistical properties of using the forward difference approximation for imputing the unobserved velocity variables compared to using the backward or the central difference. 4. We evaluate the performance of the estimators through a simulation study of a second-order SDE, the Kramers oscillator. Additionally, we show numerically in a finite sample study that the marginal likelihood for partial observations is more favorable than the full likelihood. 5. We fit the Kramers oscillator to a paleoclimate ice core dataset from Greenland and estimate the average time needed to pass between two metastable states. The structure of the paper is as follows. In Section 2, we introduce the class of SDE models, define hypoellipticity, introduce the Kramers oscillator, and explain the Strang splitting scheme and its associated estimators. The asymptotic properties of the estimator are established in Section 3. The theoretical results are illustrated in a simulation study on the Kramers Oscillator in Section 4. Section 5 illustrates our methodology on the Greenland ice core data, while the technical results and the proofs of the main theorems and properties are in Section 6 and Supplementary Material S1, respectively. Notation. We use capital bold letters for random vectors, vector-valued functions, and matrices, while lowercase bold letters denote deterministic vectors. \u2225\u00b7 \u2225denotes both the L2 vector norm in Rd. Superscript (i) on a vector denotes the i-th component, while on a matrix it denotes the i-th column. Double subscript ij on a matrix denotes the component in the i-th row and j-th column. The transpose is denoted by \u22a4. Operator Tr(\u00b7) returns the trace of a matrix and det(\u00b7) the determinant. Id denotes the d-dimensional identity matrix, while 0d\u00d7d is a d-dimensional zero square matrix. We denote by [ai]d i=1 a vector with coordinates ai, and by [bij]d i,j=1 a matrix with coordinates bij, for i, j = 1, . . . , d. For a real-valued function g : Rd \u2192R, \u2202x(i)g(x) denotes the partial derivative with respect to x(i) and \u22022 x(i)x(j)g(x) denotes the second partial derivative with respect to x(i) and x(j). The nabla operator \u2207x denotes the gradient vector of g with respect of x, that is, \u2207xg(x) = [\u2202x(i)g(x)]d i=1. H denotes the Hessian matrix of function g, Hg(x) = [\u2202x(i)x(j)g(x)]d i,j=1. For a vector-valued function F : Rd \u2192Rd, the differential operator Dx denotes the Jacobian matrix DxF(x) = [\u2202x(i)F (j)(x)]d i,j=1. Let R represent a vector (or a matrix) valued function defined on (0, 1) \u00d7 Rd (or (0, 1) \u00d7 Rd\u00d7d), such that, for some constant C, \u2225R(a, x)\u2225< aC(1 + \u2225x\u2225)C for all a, x. When denoted by R, it refers to a scalar function. For an open set A, the bar A indicates closure. 
We write $\xrightarrow{P}$ for convergence in probability P.",
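The introduction above rewrites the second-order Kramers-type model as the position-velocity system (2), in which noise enters only the velocity (rough) coordinate. The following R code is a minimal sketch, not the authors' code, that simulates that system for the Kramers oscillator, eq. (8), with a plain Euler-Maruyama step to make the hypoelliptic structure concrete. The function name, initial values, and the coarse step size are illustrative assumptions; the paper's own study simulates at a much finer step and subsamples. Parameter values follow the simulation study (eta = 6.5, a = 1, b = 0.6, sigma^2 = 0.1).

```r
# Illustrative Euler-Maruyama simulation of the position-velocity system (2)
# for the Kramers oscillator, eq. (8): noise enters only the velocity equation.
simulate_kramers_em <- function(N = 5000, h = 0.1, eta = 6.5, a = 1, b = 0.6,
                                sigma = sqrt(0.1), x0 = 1, v0 = 0) {
  x <- numeric(N + 1)
  v <- numeric(N + 1)
  x[1] <- x0
  v[1] <- v0
  for (k in 1:N) {
    # Rough-coordinate drift F(x, v) = -eta * v - U'(x),
    # with U(x) = -a * x^2 / 2 + b * x^4 / 4, so U'(x) = -a * x + b * x^3.
    drift_v <- -eta * v[k] + a * x[k] - b * x[k]^3
    x[k + 1] <- x[k] + h * v[k]                                   # smooth coordinate: no noise
    v[k + 1] <- v[k] + h * drift_v + sigma * sqrt(h) * rnorm(1)   # rough coordinate: additive noise
  }
  data.frame(t = (0:N) * h, x = x, v = v)
}

set.seed(1)
path <- simulate_kramers_em()
```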
+ "main_content": "Let Y = (Yt)t\u22650 in (3) be defined on a complete probability space (\u2126, F, P\u03b8) with a complete right-continuous filtration F = (Ft)t\u22650, and let the d-dimensional Wiener process W = (Wt)t\u22650 be adapted to Ft. The probability measure P\u03b8 is parameterized by the parameter \u03b8 = (\u03b2, \u03a3). Rewrite equation (3) as follows: dYt = \ufffd A(\u03b2)(Yt \u2212\ufffd b(\u03b2)) dt + \ufffd N (Yt; \u03b2) dt + \ufffd \u03a3 dWt, (5) 0d\u00d7d Id \ufffd \ufffd \ufffd b(\u03b2) \ufffd \ufffd \ufffd 0d \ufffd (6) where \ufffd \ufffd A(\u03b2) = \ufffd 0d\u00d7d Id Ax(\u03b2) Av(\u03b2) ) is thus split as F(x, v; \u03b2) = \ufffd \ufffd \ufffd , \ufffd b(\u03b2) = \ufffd b(\u03b2) 0d = Ax(\u03b2)(x \u2212b(\u03b2)) \ufffd \ufffd , \ufffd N(x, v; \u03b2) = \ufffd 0d N(x, v; \u03b2) ) + Av(\u03b2)v + N(x, v; \u03b2). \ufffd . (6) \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd Function F in (2) is thus split as F(x, v; \u03b2) = Ax(\u03b2)(x \u2212b(\u03b2)) + Av(\u03b2)v + N(x, v; \u03b2). Let denote the closure of the parameter space with and being two conv \u2212 Let \u0398\u03b2 \u00d7 \u0398\u03a3 = \u0398 denote the closure of the parameter space with \u0398\u03b2 and \u0398\u03a3 being two convex open bounded subsets of Rr and Rd\u00d7d, respectively. The function N : R2d \u00d7 \u0398\u03b2 \u2192Rd is assumed locally Lipschitz; functions Ax and Av are defined on \u0398\u03b2 and take values in Rd\u00d7d; and the parameter matrix \u03a3 takes values in Rd\u00d7d. The matrix \u03a3\u03a3\u22a4is assumed to be positive definite, shaping the variance of the rough coordinates. As any square root of \u03a3\u03a3\u22a4induces the same distribution, \u03a3 is identifiable only up to equivalence classes. Hence, estimation of the parameter \u03a3 means estimation of \u03a3\u03a3\u22a4. The drift function \ufffd F in (3) is divided into a linear part given by the matrix \ufffd A and a nonlinear part given by \ufffd N. The true value of the parameter is denoted by \u03b80 = (\u03b20, \u03a30), and we assume that \u03b80 \u2208\u0398. When referring to the true parameters, we write Ax,0, Av,0, b0, N0(x), F0(x) and \u03a3\u03a3\u22a4 0 instead of Ax(\u03b20), Av(\u03b20), b(\u03b20), N(x; \u03b20), \ufffd \ufffd \ufffd The true value of the parameter is denoted by \u03b80 = (\u03b20, \u03a30), and we assume that \u03b80 \u2208\u0398. When referring to the true parameters, we write Ax,0, Av,0, b0, N0(x), F0(x) and \u03a3\u03a3\u22a4 0 instead of Ax(\u03b20), Av(\u03b20), b(\u03b20), N(x; \u03b20), F(x; \u03b20) and \u03a30\u03a3\u22a4 0 , respectively. We write Ax, Av, b, N(x), F(x), and \u03a3\u03a3\u22a4for any parameter \u03b8. 2.1 Example: The Kramers oscillator The abrupt temperature changes during the ice ages, known as the Dansgaard\u2013Oeschger (DO) events, are essential elements for understanding the climate (Dansgaard et al., 1993). These events occurred during the last glacial era spanning approximately the period from 115,000 to 12,000 years before present and are characterized by rapid warming phases followed by gradual cooling periods, revealing colder (stadial) and warmer (interstadial) climate states (Rasmussen et al., 2014). To analyze the DO events in Section 5, we propose a stochastic model of the escape dynamics in metastable systems, the Kramers oscillator (Kramers, 1940), originally formulated to model the escape rate of Brownian particles from potential wells. 
The escape rate is related to the mean first passage time, i.e., the time needed for a particle to exceed the potential's local maximum for the first time, starting at a neighboring local minimum. This rate depends on variables such as the damping coefficient, noise intensity, temperature, and specific potential features, including the barrier's height and curvature at the minima and maxima. We apply this framework to quantify the rate of climate transitions between stadial and interstadial periods. This provides an estimate of the probability distribution of the occurrence of DO events, contributing to our understanding of the global climate system. Following Arnold and Imkeller (2000), we introduce the Kramers oscillator as the stochastic Duffing oscillator, an example of a second-order SDE and a stochastic damping Hamiltonian system. The Duffing oscillator (Duffing, 1918) is a forced nonlinear oscillator, featuring a cubic stiffness term. The governing equation is given by: $\ddot{x}_t + \eta \dot{x}_t + \frac{d}{dx}U(x_t) = f(t)$, where $U(x) = -a\frac{x^2}{2} + b\frac{x^4}{4}$, with $a, b > 0$, $\eta \ge 0$. (7) The parameter $\eta$ in (7) indicates the damping level, $a$ regulates the linear stiffness, and $b$ determines the nonlinear component of the restoring force. In the special case where $b = 0$, the equation simplifies to a damped harmonic oscillator. Function $f$ represents the driving force and is usually set to $f(t) = \eta \cos(\omega t)$, which introduces deterministic chaos (Korsch and Jodl, 1999). When the driving force is $f(t) = \sqrt{2\eta T}\,\xi(t)$, where $\xi(t)$ is white noise, equation (7) characterizes the stochastic movement of a particle within a bistable potential well, interpreting $T > 0$ as the temperature of a heat bath. Setting $\sigma = \sqrt{2\eta T}$, equation (7) can be reformulated as an Itô SDE for variables $X_t$ and $V_t = \dot{X}_t$, expressed as: $dX_t = V_t\, dt$, $dV_t = \left(-\eta V_t - \frac{d}{dx}U(X_t)\right) dt + \sigma\, dW_t$, (8) where $W_t$ denotes a standard Wiener process. The parameter set of SDE (8) is $\theta = \{\eta, a, b, \sigma^2\}$. The existence and uniqueness of the invariant measure $\nu_0(dx, dv)$ of (8) is proved in Theorem 3 in (Arnold and Imkeller, 2000). The invariant measure $\nu_0$ is linked to the invariant density $\pi_0$ through $\nu_0(dx, dv) = \pi_0(x, v)\, dx\, dv$. Here we write $\pi_0(x, v)$ instead of $\pi(x, v; \theta_0)$, and $\pi(x, v)$ instead of $\pi(x, v; \theta)$. The Fokker-Planck equation for $\pi$ is given by $-v \frac{\partial}{\partial x}\pi(x, v) + \eta\pi(x, v) + \eta v \frac{\partial}{\partial v}\pi(x, v) + \frac{d}{dx}U(x) \frac{\partial}{\partial v}\pi(x, v) + \frac{\sigma^2}{2} \frac{\partial^2}{\partial v^2}\pi(x, v) = 0$. (9) The invariant density that solves the Fokker-Planck equation is: $\pi(x, v) = C \exp\left(-\frac{2\eta}{\sigma^2} U(x)\right) \exp\left(-\frac{\eta}{\sigma^2} v^2\right)$, (10) where $C$ is the normalizing constant. The marginal invariant probability of $V_t$ is thus Gaussian with zero mean and variance $\sigma^2/(2\eta)$. The marginal invariant probability of $X_t$ is bimodal, driven by the potential $U(x)$: $\pi(x) = C \exp\left(-\frac{2\eta}{\sigma^2} U(x)\right)$. (11) At steady state, for a particle moving in any potential $U(x)$ and driven by random Gaussian noise, the position $x$ and velocity $v$ are independent of each other. 
This is reflected by the decomposition of the joint density \u03c0(x, v) into \u03c0(x)\u03c0(v). Fokker-Plank equation (9) can also be used to derive the mean first passage time \u03c4 which is inversely related to Kramers\u2019 escape rate \u03ba (Kramers, 1940): \u03c4 = 1 \u03ba \u2248 2\u03c0 \u0012q 1 + \u03b72 4\u03c92 \u2212 \u03b7 2\u03c9 \u0013 \u2126 exp \u0012\u2206U T \u0013 , where xbarrier = 0 is the local maximum of U(x) and xwell = \u00b1 p a/b are the local minima, \u03c9 = p |U \u2032\u2032(xbarrier)| = \u221aa, \u2126= p U \u2032\u2032(xwell) = \u221a 2a, and \u2206U = U(xbarrier) \u2212U(xwell) = a2/4b, . The formula is derived assuming strong friction, or an over-damped system (\u03b7 \u226b\u03c9), and a small parameter T/\u2206U \u226a1, indicating sufficiently deep potential wells. For the potential defined in (7), the mean waiting time \u03c4 is then approximated by \u03c4 \u2248 \u221a 2\u03c0 q a + \u03b72 4 \u2212\u03b7 2 exp \u0012 a2\u03b7 2b\u03c32 \u0013 . (12) 2.2 Hypoellipticity The SDE (5) is said to be hypoelliptic if its quadratic diffusion matrix e \u03a3e \u03a3\u22a4is not of full rank, while its solutions admit a smooth transition density with respect to the Lebesgue measure. According to H\u00f6rmander\u2019s theorem (Nualart, 2006), this is fulfilled if the SDE in its Stratonovich form satisfies the weak H\u00f6rmander condition. Since \u03a3 does not depend on y, the It\u00f4 and Stratonovich forms coincide. We begin by recalling the concept of Lie brackets: for smooth vector fields f, g : R2d \u2192R2d, the i-th component of the Lie bracket, [f, g](i), is defined as [f, g](i) := D\u22a4 y g(i)(y)f(y) \u2212D\u22a4 y f (i)(y)g(y). We define the set H of vector fields by initially including e \u03a3(i), i = 1, 2, ..., 2d, and then recursively adding Lie brackets H \u2208H \u21d2[e F, H], [e \u03a3(1), H], . . . , [e \u03a3(2d), H] \u2208H. The weak H\u00f6rmander condition is met if the vectors in H span R2d at every point y \u2208R2d. The initial vectors span {(0, v) \u2208R2d | v \u2208Rd}, a d-dimensional subspace. We therefore need to verify the existence of some H \u2208H with a non-zero first element. The first iteration of the system yields [e F, e \u03a3(i)](1) = \u2212\u03a3(i), [e \u03a3(i), e \u03a3(j)](1) = 0, for i, j = 1, 2, ..., 2d. The first equation is non-zero, as are all subsequent iterations. Thus, the second-order SDE defined in (5) is always hypoelliptic. 6 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.3 Assumptions The following assumptions are a generalization of those presented in (Pilipovic et al., 2024). Let T > 0 be the length of the observed time interval. We assume that (5) has a unique strong solution Y = {Yt | t \u2208[0, T]}, adapted to F = {Ft | t \u2208[0, T]}, which follows from the following first two assumptions (Theorem 2 in Alyushina (1988), Theorem 1 in Krylov (1991), Theorem 3.5 in Mao (2007)). We need the last three assumptions to prove the properties of the estimators. (A1) Function N is twice continuously differentiable with respect to both y and \u03b8, i.e., N \u2208C2. Moreover, it is globally one-sided Lipschitz continuous with respect to y on R2d \u00d7 \u0398\u03b2. That is, there exists a constant C > 0 such that for all y1, y2 \u2208R2d, (y1 \u2212y2)\u22a4(N(y1; \u03b2) \u2212N(y2; \u03b2)) \u2264C\u2225y1 \u2212y2\u22252. (A2) Function N exhibits at most polynomial growth in y, uniformly in \u03b8. 
Specifically, there exist constants C > 0 and \u03c7 \u22651 such that for all y1, y2 \u2208R2d, \u2225N (y1; \u03b2) \u2212N (y2; \u03b2) \u22252 \u2264C \u00001 + \u2225y1\u22252\u03c7\u22122 + \u2225y2\u22252\u03c7\u22122\u0001 \u2225y1 \u2212y2\u22252. Additionally, its derivatives exhibit polynomial growth in y, uniformly in \u03b8. (A3) The solution Y to SDE (5) has invariant probability \u03bd0(dy). (A4) \u03a3\u03a3\u22a4is invertible on \u0398\u03a3. (A5) \u03b2 is identifiable, that is, if F(y, \u03b21) = F(y, \u03b22) for all y \u2208R2d, then \u03b21 = \u03b22. Assumption (A1) ensures finiteness of the moments of the solution X (Tretyakov and Zhang, 2013), i.e., E[ sup t\u2208[0,T ] \u2225Yt\u22252p] < C(1 + \u2225y0\u22252p), \u2200p \u22651. (13) Assumption (A3) is necessary for the ergodic theorem to ensure convergence in distribution. Assumption (A4) ensures that the model (5) is hypoelliptic. Assumption (A5) ensures the identifiability of the drift parameter. 2.4 Strang splitting scheme Consider the following splitting of (5): dY[1] t = e A(Y[1] t \u2212e b) dt + e \u03a3 dWt, Y[1] 0 = y0, (14) dY[2] t = e N(Y[2] t ) dt, Y[2] 0 = y0. (15) There are no assumptions on the choice of e A and e b, and thus the nonlinear function e N. Indeed, we show that the asymptotic results hold for any choice of e A and e b in both the complete and the partial observation settings. This extends the results in Pilipovic et al. (2024), where it is shown to hold in the elliptic complete observation case, as well. While asymptotic results are invariant to the choice of e A and e b, finite sample properties of the scheme and the corresponding estimators are very different, and it is important to choose the splitting wisely. Intuitively, when the process is close to a fixed point of the drift, the linear dynamics are dominating, whereas far from the fixed points, the nonlinearities might be dominating. If the drift has a fixed point y\u22c6, we therefore suggest setting e A = Dye F(y\u22c6) and e b = y\u22c6. This choice is confirmed in simulations (for more details see Pilipovic et al. (2024)). Solution of SDE (14) is an Ornstein\u2013Uhlenbeck (OU) process given by the following h-flow: Y[1] tk = \u03a6[1] h (Y[1] tk\u22121) = e \u00b5h(Y[1] tk\u22121; \u03b2) + e \u03b5h,k, (16) e \u00b5h(y; \u03b2) := e e Ah(y \u2212e b) + e b, (17) e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du, (18) where e \u03b5h,k i.i.d \u223cN2d(0, e \u2126h) for k = 1, . . . , N. It is useful to rewrite e \u2126h in the following block matrix form, e \u2126h = \" \u2126[SS] h \u2126[SR] h \u2126[RS] h \u2126[RR] h # , (19) 7 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT where S in the superscript stands for smooth and R stands for rough. The Schur complement of e \u2126h with respect to \u2126[RR] h and the determinant of e \u2126h are given by: \u2126[S|R] h := \u2126[SS] h \u2212\u2126[SR] h (\u2126[RR] h )\u22121\u2126[RS] h , det e \u2126h = det \u2126[RR] h det \u2126[S|R] h . Assumptions (A1)-(A2) ensure the existence and uniqueness of the solution of (15) (Theorem 1.2.17 in Humphries and Stuart (2002)). Thus, there exists a unique function e fh : R2d \u00d7 \u0398\u03b2 \u2192R2d, for h \u22650, such that Y[2] tk = \u03a6[2] h (Y[2] tk\u22121) = e fh(Y[2] tk\u22121; \u03b2). 
(20) For all \u03b2 \u2208\u0398\u03b2, the h-flow e fh fulfills the following semi-group properties: e f0(y; \u03b2) = y, e ft+s(y; \u03b2) = e ft( e fs(y; \u03b2); \u03b2), t, s \u22650. For y = (x\u22a4, v\u22a4)\u22a4, we have: e fh(x, v; \u03b2) = \u0014 x fh(x, v; \u03b2) \u0015 , (21) where fh(x, v; \u03b2) is the solution of the ODE with vector field N(x, v; \u03b2). We introduce another assumption needed to define the pseudo-likelihood based on the splitting scheme. (A6) Inverse function e f \u22121 h (y; \u03b2) is defined asymptotically for all y \u2208R2d and all \u03b2 \u2208\u0398\u03b2, when h \u21920. Then, the inverse of \u02dc fh can be decomposed as: e f \u22121 h (x, v; \u03b2) = \u0014 x f \u22c6\u22121 h (x, v; \u03b2) \u0015 , (22) where f \u22c6\u22121 h (x, v; \u03b2) is the rough part of the inverse of e f \u22121 h . It does not equal f \u22121 h since the inverse does not propagate through coordinates when fh depends on x. We are now ready to define the Strang splitting scheme for model (5). Definition 2.1 (Strang splitting) Let Assumptions (A1)-(A2) hold. The Strang approximation of the solution of (5) is given by: \u03a6[str] h (Y[str] tk\u22121) = (\u03a6[2] h/2 \u25e6\u03a6[1] h \u25e6\u03a6[2] h/2)(Y[str] tk\u22121) = e fh/2(e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k). (23) Remark 1 The order of composition in the splitting schemes is not unique. Changing the order in the Strang splitting leads to a sum of 2 independent random variables, one Gaussian and one non-Gaussian, whose likelihood is not trivial. Thus, we only use the splitting (23). 2.5 Strang splitting estimators In this section, we introduce four estimators, all based on the Strang splitting scheme. We distinguish between estimators based on complete observations (denoted by C when both X and V are observed) and partial observations (denoted by P when only X is observed). In applications, we typically only have access to partial observations, however, the full observation estimator is used as a building block for the partial observation case. Additionally, we distinguish the estimators based on the type of likelihood function employed. These are the full likelihood (denoted by F) and the marginal likelihood of the rough component (denoted by R). We furthermore use the conditional likelihood based on the smooth component given the rough part (denoted by S | R) to decompose the full likelihood. 2.5.1 Complete observations Assume we observe the complete sample Y0:tN := (Ytk)N k=1 from (5) at time steps 0 = t0 < t1 < ... < tN = T. For notational simplicity, we assume equidistant step size h = tk \u2212tk\u22121. Strang splitting scheme (23) is a nonlinear transformation of a Gaussian random variable e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k. We define: e Zk,k\u22121(\u03b2) := e f \u22121 h/2(Ytk; \u03b2) \u2212e \u00b5h( e fh/2(Ytk\u22121; \u03b2); \u03b2), (24) 8 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT and apply change of variables to get: p(ytk | ytk\u22121) = pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121)| det Dy e f \u22121 h/2(ytk)|. 
Using \u2212log | det Dy e f \u22121 h/2 (y; \u03b2) | = log | det Dy e fh/2 (y; \u03b2) | and det Dy e fh/2 (y; \u03b2) = det Dvfh/2 (y; \u03b2), together with the Markov property of Y0:tN , we get the following objective function based on the full log-likelihood: L[CF](Y0:tN ; \u03b8) := N X k=1 \u0010 log det e \u2126h(\u03b8) + e Zk,k\u22121(\u03b2)\u22a4e \u2126h(\u03b8)\u22121e Zk,k\u22121(\u03b2) + 2 log | det Dvfh/2(Ytk; \u03b2)| \u0011 . (25) Now, split e Zk,k\u22121 from (24) into the smooth and rough parts e Zk,k\u22121 = ((Z[S] k,k\u22121)\u22a4, (Z[R] k,k\u22121)\u22a4)\u22a4defined as: Z[S] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]d i=1 = Xtk \u2212\u00b5[S] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (26) Z[R] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]2d i=d+1 = f \u22c6\u22121 h/2 (Ytk; \u03b2) \u2212\u00b5[R] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (27) where \u00b5[S] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]d i=1, \u00b5[R] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]2d i=d+1. (28) We also define the following sequence of vectors Z[S|R] k,k\u22121(\u03b2) := Z[S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z[R] k,k\u22121(\u03b2). (29) The formula for jointly normal distributions yields: pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121) = pN (0,\u2126[RR] h )(z[R] k,k\u22121 | ytk\u22121) \u00b7 pN (\u2126[SR] h (\u2126[RR] h )\u22121z[R] k,k\u22121,\u2126[S|R] h )(z[S] k,k\u22121 | z[R] k,k\u22121, ytk\u22121). This leads to dividing the full log-likelihood L[CF] into a sum of the marginal log-likelihood L[CR](Y0:tN ; \u03b8) and the smooth-given-rough log-likelihood L[CS|R](Y0:tN ; \u03b8): L[CF](Y0:tN ; \u03b8) = L[CR](Y0:tN ; \u03b8) + L[CS|R](Y0:tN ; \u03b8), where L[CR] (Y0:tN ; \u03b8) := N X k=1 log det \u2126[RR] h (\u03b8) + Z[R] k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z[R] k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 (Ytk; \u03b2) \f \f ! , (30) L[CS|R] (Y0:tN ; \u03b8) := N X k=1 \u0010 log det \u2126[S|R] h (\u03b8) + Z[S|R] k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z[S|R] k,k\u22121(\u03b2) \u0011 . (31) The terms containing the drift parameter in L[CR] in (30) are of order h1/2, as in the elliptic case, whereas the terms containing the drift parameter in L[CS|R] in (31) are of order h3/2. Consequently, under a rapidly increasing experimental design where Nh \u2192\u221eand Nh2 \u21920, the objective function (31) is degenerate for estimating the drift parameter. However, it contributes to the estimation of the diffusion parameter when the full objective function (25) is used. We show in later sections that employing (25) results in a lower asymptotic variance for the diffusion parameter making it more efficient in complete observation scenarios. The estimators based on complete observations are then defined as: \u02c6 \u03b8[obj] N := arg min \u03b8 L[obj] (Y0:tN ; \u03b8) , obj \u2208{[CF], [CR]}. (32) Although the full objective function is based on twice as many equations as the marginal likelihood, its implementation complexity, speed, and memory requirements are similar to the marginal objective function. Therefore, if the complete observations are available, we recommend using the objective function (25) based on the full likelihood. 9 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.5.2 Partial observations Assume we only observe the smooth coordinates X0:tN := (Xtk)N k=0. The observed process Xt alone is not a Markov process, although the complete process Yt is. 
To approximate Vtk, we define the backward difference process: \u2206hXtk := Xtk \u2212Xtk\u22121 h . (33) From SDE (2) it follows that \u2206hXtk = 1 h Z tk tk\u22121 Vt dt. (34) We propose to approximate Vtk using \u2206hXtk by any of the three approaches: 1. Backward difference approximation: Vtk \u2248\u2206hXtk; 2. Forward difference approximation: Vtk \u2248\u2206hXtk+1; 3. Central difference approximation: Vtk \u2248 \u2206hXtk +\u2206hXtk+1 2 . The forward difference approximation performs best in our simulation study, which is also the approximation method employed in Gloter (2006) and Samson and Thieullen (2012). In the field of numerical approximations of ODEs, backward and forward finite differences have the same order of convergence, whereas the central difference has a higher convergence rate. However, the diffusion parameter estimator based on the central difference (Xtk+1 \u2212Xtk\u22121)/2h is less suitable because this approximation skips a data point and thus increases the estimator\u2019s variance. For further discussion, see Remark 6. Thus, we focus exclusively on forward differences, following Gloter (2006); Samson and Thieullen (2012), and all proofs are done for this approximation. Similar results also hold for the backward difference, with some adjustments needed in the conditional moments due to filtration issues. We start by approximating e Z for the case of partial observations denoted by e Z: e Zk+1,k,k\u22121(\u03b2) := e f \u22121 h/2(Xtk, \u2206hXtk+1; \u03b2) \u2212e \u00b5h( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2). (35) The smooth and rough parts of e Z are thus equal to: Z [S] k,k\u22121(\u03b2) := Xtk \u2212\u00b5[S] h ( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2), (36) Z [R] k+1,k,k\u22121(\u03b2) := f \u22c6\u22121 h/2 (Xtk, \u2206hXtk+1; \u03b2) \u2212\u00b5[R] h ( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2), (37) and Z [S|R] k+1,k,k\u22121(\u03b2) := Z [S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z [R] k+1,k,k\u22121(\u03b2). (38) Compared to Z[R] k,k\u22121 in (27), Z [R] k+1,k,k\u22121 in (37) depends on three consecutive data points, with the additional point Xtk+1 entering through \u2206hXtk+1. Furthermore, Xtk enters both f \u22c6\u22121 h/2 and e \u00b5[R] h , rending them coupled. This coupling has a significant influence on later derivations of the estimator\u2019s asymptotic properties, in contrast to the elliptic case where the derivations simplify. While it might seem straightforward to incorporate e Z, Z [S] k,k\u22121 and Z [R] k,k\u22121 into the objective functions (25), (30) and (31), it introduces bias in the estimators of the diffusion parameters, as also discussed in (Gloter, 2006; Samson and Thieullen, 2012). The bias arises because Xtk enters in both f \u22c6\u22121 h/2 and e \u00b5[R] h , and the covariances of e Z, Z [S] k,k\u22121, and Z [R] k,k\u22121 differ from their complete observation counterparts. To eliminate this bias, Gloter (2006); Samson and Thieullen (2012) applied a correction of 2/3 multiplied to log det of the covariance term in the objective functions, which is log det \u03a3\u03a3\u22a4in the Euler-Maruyama discretization. We also need appropriate corrections to our objective functions (25), (30) and (31), however, caution is necessary because log det e \u2126h(\u03b8) depends on both drift and diffusion parameters. To counterbalance this, we also incorporate an adjustment to h in \u2126h. 
Moreover, we add the term 4 log | det Dvfh/2| to objective function (31) to obtain consistency of the drift estimator under partial observations. The detailed derivation of these correction factors will be elaborated in the following sections. 10 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT We thus propose the following objective functions: L[PF](X0:tN ; \u03b8) := 4 3(N \u22122) log det e \u21263h/4(\u03b8) (39) + N\u22121 X k=1 \u0010e Zk+1,k,k\u22121(\u03b2)\u22a4e \u2126h(\u03b8)\u22121e Zk+1,k,k\u22121(\u03b2) + 6 log | det Dvfh/2(Xtk, \u2206hXtk+1; \u03b2)| \u0011 , L[PR] (X0:tN ; \u03b8) := 2 3(N \u22122) log det \u2126[RR] 3h/2(\u03b8) (40) + N\u22121 X k=1 \u0010 Z [R] k+1,k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z [R] k+1,k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001\f \f \u0011 , L[PS|R] (X0:tN ; \u03b8) := 2(N \u22122) log det \u2126[S|R] h (\u03b8) (41) + N\u22121 X k=1 \u0010 Z [S|R] k+1,k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z [S|R] k+1,k,k\u22121(\u03b2) + 4 log | det Dvfh/2(Xtk, \u2206hXtk+1; \u03b2)| \u0011 . (42) Remark 2 Due to the correction factors in the objective functions, we now have that L[PF](X0:tN ; \u03b8) \u0338= L[PR](X0:tN ; \u03b8) + L[PS|R](X0:tN ; \u03b8). (43) However, when expanding the objective functions (39)-(41) using Taylor series to the lowest necessary order in h, their approximations will satisfy equality in (43), as shown in Section 6. Remark 3 Adding the extra term 4 log | det Dvfh/2| in (41) is necessary to keep the consistency of the drift parameter. However, this term is not initially present in objective function (31), making this correction somehow artificial. This can potentially make the objective function further from the true log-likelihood. The estimators based on the partial sample are then defined as: \u02c6 \u03b8[obj] N := arg min \u03b8 L[obj] (X0:tN ; \u03b8) , obj \u2208{[PF], [PR]}. (44) In the partial observation case, the asymptotic variances of the diffusion estimators are identical whether using (39) or (40), in contrast to the complete observation scenario. This variance is shown to be 9/4 times higher than the variance of the estimator \u02c6 \u03b8[CF] N , and 9/8 times higher than that of the estimator based on the marginal likelihood \u02c6 \u03b8[CR] N . The numerical study in Section 4 shows that the estimator based on the marginal objective function (40) is less biased than the one based on the full objective function (39) in finite sample scenarios with partial observations. A potential reason for this is discussed in Remark 3. Therefore, we recommend using the objective function (40) for partial observations. 3 Main results This section states the two main results \u2013 consistency and asymptotic normality of all four proposed estimators. The key ideas for proofs are presented in Supplementary Materials S1. First, we state the consistency of the estimators in both complete and partial observation cases. Let L[obj] N be one of the objective functions (25), (30), (39) or (40) and b \u03b8[obj] N the corresponding estimator. Thus, obj \u2208{[CF], [CR], [PF], [PR]}. We use superscript [C\u00b7] to refer to any objective function in the complete observation case. Likewise, [\u00b7R] stands for an objective function based on the rough marginal likelihood either in the complete or the partial observation case. Theorem 3.1 (Consistency of the estimators) Assume (A1)-(A6), h \u21920, and Nh \u2192\u221e. 
Then under the complete observation or partial observation case, it holds: b \u03b2[obj] N P\u03b80 \u2212 \u2212 \u2192\u03b20, d \u03a3\u03a3 [obj] N P\u03b80 \u2212 \u2212 \u2192\u03a3\u03a3\u22a4 0 . 11 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Remark 4 We split the full objective function (25) into the sum of the rough marginal likelihood (30) and the conditional smooth-given-rough likelihood (31). Even if (31) cannot identify the drift parameter \u03b2, it is an important intermediate step in understanding the full objective function (25). This can be seen in the proof of Theorem 3.1, where we first establish consistency of the diffusion estimator with a convergence rate of \u221a N, which is faster than \u221a Nh, the convergence rate of the drift estimators. Then, under complete observations, we show that 1 Nh(L[CR] N (\u03b2, \u03c30) \u2212L[CR] N (\u03b20, \u03c30)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 Z (F0(y) \u2212F(y))\u22a4(\u03a3\u03a3\u22a4)\u22121(F0(y) \u2212F(y)) d\u03bd0(y). (45) The right-hand side of (45) is non-negative, with a unique zero for F = F0. Conversely, for objective function (31), it holds: 1 Nh(L[CS|R] N (\u03b2, \u03c3) \u2212L[CS|R] N (\u03b20, \u03c3)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0. (46) Hence, (46) does not have a unique minimum, making the drift parameter unidentifiable. Similar conclusions are drawn in the partial observation case. Now, we state the asymptotic normality of the estimator. First, we need some preliminaries. Let \u03c1 > 0 and B\u03c1 (\u03b80) = {\u03b8 \u2208\u0398 | \u2225\u03b8 \u2212\u03b80\u2225\u2264\u03c1} be a ball around \u03b80. Since \u03b80 \u2208\u0398, for sufficiently small \u03c1 > 0, B\u03c1(\u03b80) \u2208\u0398. For \u02c6 \u03b8[obj] N \u2208B\u03c1 (\u03b80), the mean value theorem yields: \u0012Z 1 0 HL[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt \u0013 (\u02c6 \u03b8[obj] N \u2212\u03b80) = \u2212\u2207\u03b8L[obj] N (\u03b80) . (47) Define: C[obj] N (\u03b8) := \uf8ee \uf8ef \uf8f0 h 1 Nh\u22022 \u03b2(i1)\u03b2(i2)L[obj] N (\u03b8) ir i1,i2=1 h 1 N \u221a h\u22022 \u03b2(i)\u03c3(j)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u221a h\u22022 \u03c3(j)\u03b2(i)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u22022 \u03c3(j1)\u03c3(j2)L[obj] N (\u03b8) is j1,j2=1 \uf8f9 \uf8fa \uf8fb, (48) s[obj] N := \"\u221a Nh( \u02c6 \u03b2[obj] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[obj] N \u2212\u03c30) # , \u03bb[obj] N := \uf8ee \uf8ef \uf8f0 \u2212 1 \u221a Nh \u2207\u03b2L[obj] N (\u03b80) \u22121 \u221a N \u2207\u03c3L[obj] N (\u03b80) \uf8f9 \uf8fa \uf8fb, (49) and D[obj] N := R 1 0 C[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt. Then, (47) is equivalent to D[obj] N s[obj] N = \u03bb[obj] N . Let: [C\u03b2(\u03b80)]i1,i2 := Z (\u2202\u03b2(i1)F0(y))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03b2(i2)F0(y)) d\u03bd0(y), 1 \u2264i1, i2 \u2264r, (50) [C\u03c3(\u03b80)]j1,j2 := Tr((\u2202\u03c3(j1)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03c3(j2)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121), 1 \u2264j1, j2 \u2264s. (51) Theorem 3.2 Let assumptions (A1)-(A6) hold, and let h \u21920, Nh \u2192\u221e, and Nh2 \u21920. 
Then under complete observations, it holds: \"\u221a Nh( \u02c6 \u03b2[CR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[CF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. If only partial observations are available and the unobserved coordinates are approximated using the forward or backward differences, then \"\u221a Nh( \u02c6 \u03b2[PR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[PF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. 12 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Here, we only outline the proof. According to Theorem 1 in Kessler (1997) or Theorem 1 in S\u00f8rensen and Uchida (2003), Lemmas 3.3 and 3.4 below are enough for establishing asymptotic normality of \u02c6 \u03b8N. For more details, see proof of Theorem 1 in S\u00f8rensen and Uchida (2003). Lemma 3.3 Let CN(\u03b80) be defined in (48). For h \u21920 and Nh \u2192\u221e, it holds: C[CR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015 , C[PR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2 3C\u03c3(\u03b80) \u0015 , C[CF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015 , C[PF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 8 3C\u03c3(\u03b80) \u0015 . Moreover, let \u03c1N be a sequence such that \u03c1N \u21920, then in all cases it holds: sup \u2225\u03b8\u2225\u2264\u03c1N \u2225C[obj] N (\u03b80 + \u03b8) \u2212C[obj] N (\u03b80)\u2225 P\u03b80 \u2212 \u2212 \u21920. Lemma 3.4 Let \u03bbN be defined (49). For h \u21920, Nh \u2192\u221eand Nh2 \u21920, it holds: \u03bb[CR] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015\u0013 , \u03bb[PR] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015\u0013 , \u03bb[CF] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 4C\u03c3(\u03b80) \u0015\u0013 , \u03bb[PF] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 16C\u03c3(\u03b80) \u0015\u0013 , under P\u03b80. Now, the two previous lemmas suggest s[obj] N = (D[obj] n )\u22121\u03bb[obj] N d \u2212 \u2192C[obj] N (\u03b80)\u22121\u03bb[obj] N . The previous line is not completely formal, but it gives the intuition. For more details on formally deriving the result, see Section 7.4 in Pilipovic et al. (2024) or proof of Theorem 1 in S\u00f8rensen and Uchida (2003). 4 Simulation study This Section illustrates the simulation study of the Kramers oscillator (8), demonstrating the theoretical aspects and comparing our proposed estimators against estimators based on the EM and LL approximations. 
We chose to compare our proposed estimators to these two, because the EM estimator is routinely used in applications, and the LL estimator has shown to be one of the best state-of-the-art methods, see Pilipovic et al. (2024) for the elliptic case. The true parameters are set to \u03b70 = 6.5, a0 = 1, b0 = 0.6 and \u03c32 0 = 0.1. We outline the estimators specifically designed for the Kramers oscillator, explain the simulation procedure, describe the optimization implemented in the R programming language R Core Team (2022), and then present and interpret the results. 4.1 Estimators used in the study For the Kramers oscillator (8), the EM transition distribution is: \u0014 Xtk Vtk \u0015 | \u0014 Xtk\u22121 Vtk\u22121 \u0015 = \u0014 x v \u0015 \u223cN \u0012\u0014 x + hv v + h \u0000\u2212\u03b7v + ax \u2212bx3\u0001 \u0015 , \u0014 0 0 0 h\u03c32 \u0015\u0013 . The ill-conditioned variance of this discretization restricts us to an estimator that only uses the marginal likelihood of the rough coordinate. The estimator for complete observations directly follows from the Gaussian distribution. The estimator for partial observations is defined as (Samson and Thieullen, 2012): b \u03b8[PR] EM = arg min \u03b8 ( 2 3(N \u22123) log \u03c32 + 1 h\u03c32 N\u22122 X k=1 (\u2206hXtk+1 \u2212\u2206hXtk \u2212h(\u2212\u03b7\u2206hXtk\u22121 + aXtk\u22121 \u2212bX3 tk\u22121))2 ) . To our knowledge, the LL estimator has not previously been applied to partial observations. Given the similar theoretical and computational performance of the Strang and LL discretizations, we suggest (without formal proof) to adjust the LL objective functions with the same correction factors as used in the Strang approach. The numerical evidence indicates 13 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT that the LL estimator has the same asymptotic properties as those proved for the Strang estimator. We omit the definition of the LL estimator due to its complexity (see Melnykova (2020); Pilipovic et al. (2024) and accompanying code). To define S estimators based on the Strang splitting scheme, we first split SDE (8) as follows: d \u0014 Xt Vt \u0015 = \u0014 0 1 \u22122a \u2212\u03b7 \u0015 | {z } A \u0014 Xt Vt \u0015 \u2212 \u0014 x\u22c6 \u00b1 0 \u0015 | {z } b ! dt + \u0014 0 aXt \u2212bX3 t + 2a(Xt \u2212x\u22c6 \u00b1) \u0015 | {z } N(Xt,Vt) dt + \u0014 0 \u03c3 \u0015 dWt, where x\u22c6 \u00b1 = \u00b1 p a/b are the two stable points of the dynamics. Since there are two stable points, we suggest splitting with x\u22c6 +, when Xt > 0, and x\u22c6 \u2212, when Xt < 0. This splitting follows the guidelines from (Pilipovic et al., 2024). Note that the nonlinear ODE driven by N(x, v) has a trivial solution where x is a constant. To obtain Strang estimators, we plug in the corresponding components in the objective functions (25), (30), (39) and (40). 4.2 Trajectory simulation We simulate a sample path using the EM discretization with a step size of hsim = 0.0001 to ensure good performance. To reduce discretization errors, we sub-sample from the path at wider intervals to get time step h = 0.1. The path has N = 5000 data points. We repeat the simulations to obtain 250 data sets. 4.3 Optimization in R For optimizing the objective functions, we proceed as in Pilipovic et al. (2024) using the R package torch (Falbel and Luraschi, 2022), which allows automatic differentiation. The optimization employs the resilient backpropagation algorithm, optim_rprop. 
We use the default hyperparameters and limit the number of optimization iterations to 2000. The convergence criterion is set to a precision of 10^-5 for the difference between estimators in consecutive iterations. The initial parameter values are set to (-0.1, -0.1, 0.1, 0.1). 4.4 Results The results of the simulation study are presented in Figure 1. Figure 1A) presents the distributions of the normalized estimators in the complete and partial observation cases. The S and LL estimators exhibit nearly identical performance, particularly in the complete observation scenario. In contrast, the EM method displays significant underperformance and notable bias. The variances of the S and LL rough-likelihood estimators of σ² are higher compared to those derived from the full likelihood, aligning with theoretical expectations. Interestingly, in the partial observation scenario, Figure 1A) reveals that estimators employing the full likelihood display greater finite sample bias compared to those based on the rough likelihood. Possible reasons for this bias are discussed in Remark 3. However, it is noteworthy that this bias is eliminated for smaller time steps, e.g. h = 0.0001 (not shown), thus confirming the theoretical asymptotic results. This observation suggests that the rough likelihood is preferable under partial observations due to its lower bias. Backward finite difference approximations of the velocity variables perform similarly to the forward differences and are therefore excluded from the figure for clarity. We closely examine the variances of the S estimators of σ² in Figure 1B). The LL estimators are omitted due to their similarity to the S estimators, and because the computation times for the LL estimators are prohibitive. To align more closely with the asymptotic predictions, we opt for h = 0.02 and conduct 1000 simulations. Additionally, we set σ0² = 100 to test different noise levels. Atop each empirical distribution, we overlay theoretical normal densities that match the variances as per Theorem 3.2. The theoretical variance is derived from C_σ²(θ0) in (51), which for the Kramers oscillator in (8) is C_σ²(θ0) = 1/σ0⁴. (52) Figure 1 illustrates that the lowest variance of the diffusion estimator is observed when using the full likelihood with complete observations. The second lowest variance is achieved using the rough likelihood with complete observations. The largest variance is observed in the partial observation case; however, it remains independent of whether the full or rough likelihood is used. Once again, we observe that using the full likelihood introduces additional finite sample bias. In Figure 1C), we compare running times calculated using the tictoc package in R. Running times are measured from the start of the optimization step until convergence. The figure depicts the median over 250 repetitions to mitigate the influence of outliers. The EM method is notably the fastest; however, the S estimators exhibit only slightly slower performance. The LL estimators are 10-100 times slower than the S estimators, depending on whether complete or partial observations are used and whether the full or rough likelihood is employed. Figure 1: Parameter estimates in a simulation study for the Kramers oscillator, eq. (8). The color code remains consistent across all three figures. 
A) Normalized distributions of parameter estimation errors (θ̂_N - θ0) ⊘ θ0 in both complete and partial observation cases, based on 250 simulated data sets with h = 0.1 and N = 5000. Each column corresponds to a different parameter, while the color indicates the type of estimator. Estimators are distinguished by superscripted objective functions (F for full and R for rough). B) Distribution of the σ̂²_N estimators based on 1000 simulations with h = 0.02 and N = 5000 across different observation settings (complete or partial) and likelihood choices (full or rough) using the Strang splitting scheme. The true value of σ² is set to σ0² = 100. Theoretical normal densities are overlaid for comparison. Theoretical variances are calculated based on C_σ²(θ0), eq. (52). C) Median computing time in seconds for one estimation of various estimators based on 250 simulations with h = 0.1 and N = 5000. Shaded color patterns represent times in the partial observation case, while no color pattern indicates times in the complete observation case. Figure 2: Ice core data from Greenland. Left: Trajectories over time (in kilo years) of the centered negative logarithm of the Ca2+ measurements (top) and forward difference approximations of its rate of change (bottom). The two vertical dark red lines represent the estimated stable equilibria of the double-well potential function. Green points denote up- and down-crossings of level ±0.6, conditioned on having crossed the other level. Green vertical lines indicate empirical estimates of occupancy in either of the two metastable states. Right: Empirical densities (black) alongside estimated invariant densities with confidence intervals (dark red), prediction intervals (light red), and the empirical density of a simulated sample from the estimated model (blue). 5 Application to Greenland Ice Core Data During the last glacial period, significant climatic shifts known as Dansgaard-Oeschger (DO) events have been documented in paleoclimatic records (Dansgaard et al., 1993). Proxy data from Greenland ice cores, particularly stable water isotope composition (δ18O) and calcium ion concentrations (Ca2+), offer valuable insights into these past climate variations (Boers et al., 2017, 2018; Boers, 2018; Ditlevsen et al., 2002; Lohmann and Ditlevsen, 2019; Hassanibesheli et al., 2020). The δ18O ratio, reflecting the relative abundance of 18O and 16O isotopes in ice, serves as a proxy for paleotemperatures during snow deposition. Conversely, calcium ions, originating from dust deposition, exhibit a strong negative correlation with δ18O, with higher calcium ion levels indicating colder conditions. Here, we prioritize the Ca2+ time series due to its finer temporal resolution. In Greenland ice core records, the DO events manifest as abrupt transitions from colder climates (stadials) to approximately 10 degrees warmer climates (interstadials) within a few decades. Although the waiting times between state switches last a couple of thousand years, their spacing exhibits significant variability. The underlying mechanisms driving these changes remain largely elusive, prompting discussions on whether they follow cyclic patterns, result from external forcing, or emerge from noise-induced processes (Boers, 2018; Ditlevsen et al., 2007). We aim to determine whether the observed data can be explained by noise-induced transitions of the Kramers oscillator. 
The measurements were conducted at the summit of the Greenland ice sheet as part of the Greenland Ice Core Project (GRIP) (Anklin et al., 1993; Andersen et al., 2004). Originally, the data were sampled at 5 cm intervals, resulting in a non-equidistant time series due to ice compression at greater depths, where 5 cm of ice core spans longer time periods. For our analysis, we use a version of the data transformed into a uniformly spaced series through 20-year binning and averaging. This transformation simplifies the analysis and highlights significant climatic trends. The dataset is available in the supplementary material of Rasmussen et al. (2014) and Seierstad et al. (2014). To address the large amplitudes and negative correlation with temperature, we transform the data to minus the logarithm of Ca2+, where higher values of the transformed variable indicate warmer climates at the time of snow deposition. Additionally, we center the transformed measurements around zero. With the 20-year binning, we average across bins to obtain one point per 20 years, resulting in a time step of h = 0.02 kyr (1 kyr = 1000 years). We also address a few missing values using the na.approx function from the zoo package. Following the approach of Hassanibesheli et al. (2020), we analyze a subset of the data with a sufficiently good signal-to-noise ratio. Hassanibesheli et al. (2020) examined the data from 30 to 60 kyr before present. Here, we extend the analysis to cover 30 kyr to 80 kyr, resulting in a time interval of T = 50 kyr and a sample size of N = 2500. We approximate the velocity of the transformed Ca2+ by the forward difference method. The trajectories and empirical invariant distributions are illustrated in Figure 2. We fit the Kramers oscillator to the -log Ca2+ time series and estimate parameters using the Strang estimator. Following Theorem 3.2, we compute C_β(θ0) from (50). Applying the invariant density π0(x, v) from (10), which decouples into π0(x) in (11) and a Gaussian with zero mean and variance σ0²/(2η0), leads us to: C_β(θ0) = [ 1/(2η0), 0, 0 ; 0, (1/σ0²)∫x²π0(x)dx, -(1/σ0²)∫x⁴π0(x)dx ; 0, -(1/σ0²)∫x⁴π0(x)dx, (1/σ0²)∫x⁶π0(x)dx ], (53) with all integrals taken over the real line. Thus, to obtain 95% confidence intervals (CI) for the estimated parameters, we plug θ̂_N into (52) and (53). The estimators and confidence intervals are shown in Table 1. We also calculate the expected waiting time τ, eq. (12), of crossing from one state to another, and its confidence interval using the Delta Method. Table 1: Estimated parameters of the Kramers oscillator from Greenland ice core data, with 95% CIs: η = 62.5 (59.4-65.6), a = 296.7 (293.6-299.8), b = 219.1 (156.4-281.7), σ² = 9125 (8589-9662), τ = 3.97 (3.00-4.94). The model fit is assessed in the right panels of Figure 2. Here, we present the empirical distributions of the two coordinates along with the fitted theoretical invariant distribution and a 95% confidence interval. Additionally, a prediction interval for the distribution is provided by simulating 1000 datasets from the fitted model, matching the size of the empirical data. 
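The confidence intervals in Table 1 follow from Theorem 3.2 by inverting C_β(θ0) in (53) (and using (52) for σ²). Below is a minimal numerical sketch of that computation. Since eq. (11) is not restated in this excerpt, the usual double-well form π0(x) ∝ exp((2η/σ²)(a x²/2 - b x⁴/4)) is assumed here, and the Table 1 point estimates are used purely as a worked example.

```python
import numpy as np
from scipy import integrate

def x_marginal_moments(eta, a, b, sigma2):
    """Even moments of the x-marginal invariant density, assuming the double-well form
    pi0(x) proportional to exp((2*eta/sigma2) * (a*x**2/2 - b*x**4/4))."""
    log_w = lambda x: (2.0 * eta / sigma2) * (a * x**2 / 2.0 - b * x**4 / 4.0)
    norm = integrate.quad(lambda x: np.exp(log_w(x)), -np.inf, np.inf)[0]
    mom = lambda p: integrate.quad(lambda x: x**p * np.exp(log_w(x)), -np.inf, np.inf)[0] / norm
    return mom(2), mom(4), mom(6)

def beta_confidence_intervals(eta, a, b, sigma2, N, h, z=1.96):
    """Asymptotic 95% CIs for (eta, a, b), using Cov(beta_hat) ~ C_beta(theta0)^{-1} / (N*h)
    with C_beta as in eq. (53)."""
    m2, m4, m6 = x_marginal_moments(eta, a, b, sigma2)
    C_beta = np.array([[1.0 / (2.0 * eta), 0.0, 0.0],
                       [0.0,  m2 / sigma2, -m4 / sigma2],
                       [0.0, -m4 / sigma2,  m6 / sigma2]])
    se = np.sqrt(np.diag(np.linalg.inv(C_beta)) / (N * h))
    est = np.array([eta, a, b])
    return np.column_stack([est - z * se, est + z * se])

# Worked example with the point estimates of Table 1 (N = 2500, h = 0.02 kyr)
ci = beta_confidence_intervals(eta=62.5, a=296.7, b=219.1, sigma2=9125.0, N=2500, h=0.02)
```

For η, the block-diagonal structure gives the standard error sqrt(2η0/(Nh)), which reproduces the reported interval 59.4-65.6 up to rounding; for σ², Theorem 3.2 with the partial-observation factor 9/4 and C_σ²(θ0) = 1/σ0⁴ from (52) gives the standard error (3/2)σ̂²/√N, likewise consistent with Table 1.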
We estimate the empirical distributions for each simulated dataset and construct a 95% prediction interval using the pointwise 2.5th and 97.5th percentiles of these estimates. A single example trace is included in blue. While the fitted distribution for \u2212log Ca2+ appears to fit well, even with this symmetric model, the velocity variables are not adequately captured. This discrepancy is likely due to the presence of extreme values in the data that are not effectively accounted for by additive Gaussian noise. Consequently, the model compensates by estimating a large variance. We estimate the waiting time between metastable states to be approximately 4000 years. However, this approximation relies on certain assumptions, namely 62.5 \u2248\u03b7 \u226b\u221aa \u224817.2 and 73 \u2248\u03c32/2\u03b7 \u226aa2/4b \u2248100. Thus, the accuracy of the approximation may not be highly accurate. Defining the current state of the process is not straightforward. One method involves identifying successive upand down-crossings of predefined thresholds within the smoothed data. However, the estimated occupancy time in each state depends on the level of smoothing applied and the distance of crossing thresholds from zero. Using a smoothing technique involving running averages within windows of 11 data points (equivalent to 220 years) and detecting downand up-crossings of levels \u00b10.6, we find an average occupancy time of 4058 years in stadial states and 3550 years in interstadial states. Nevertheless, the actual occupancy times exhibit significant variability, ranging from 60 to 6900 years, with the central 50% of values falling between 665 and 2115 years. This classification of states is depicted in green in Figure 2. Overall, the estimated mean occupancy time inferred from the Kramers oscillator appears reasonable. 6 Technical results In this Section, we present all the necessary technical properties that are used to derive the main results of the paper. 17 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT We start by expanding e \u2126h and its block components \u2126[RR] h (\u03b8)\u22121, \u2126[S|R] h (\u03b8)\u22121, log det \u2126[RR] h (\u03b8), log det \u2126[S|R] h (\u03b8) and log | det Dfh/2 (y; \u03b2) | when h goes to zero. Then, we expand e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) around Ytk\u22121 when h goes to zero. The main tools used are It\u00f4\u2019s lemma, Taylor expansions, and Fubini\u2019s theorem. The final result is stated in Propositions 6.6 and 6.7. The approximations depend on the drift function F, the nonlinear part N, and some correlated sequences of Gaussian random variables. Finally, we obtain approximations of the objective functions (25), (30), (31) and (39) (41). Proofs of all the stated propositions and lemmas in this section are in Supplementary Material S1. 6.1 Covariance matrix e \u2126h The covariance matrix e \u2126h is approximated by: e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du = he \u03a3e \u03a3\u22a4+ h2 2 ( e Ae \u03a3e \u03a3\u22a4+ e \u03a3e \u03a3\u22a4e A\u22a4) + h3 6 ( e A2 e \u03a3e \u03a3\u22a4+ 2 e Ae \u03a3e \u03a3\u22a4e A\u22a4+ e \u03a3e \u03a3\u22a4( e A2)\u22a4) + h4 24( e A3 e \u03a3e \u03a3\u22a4+ 3 e A2 e \u03a3e \u03a3\u22a4e A\u22a4+ 3 e Ae \u03a3e \u03a3\u22a4( e A2)\u22a4+ e \u03a3e \u03a3\u22a4( e A3)\u22a4) + R(h5, y0). (54) The following lemma approximates each block of e \u2126h up to the first two leading orders of h. 
The result follows directly from equations (4), (6), and (54). Lemma 6.1 The covariance matrix e \u2126h defined in (54)-(19) approximates block-wise as: \u2126[SS] h (\u03b8) = h3 3 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0), \u2126[SR] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (Av(\u03b2)\u03a3\u03a3\u22a4+ 2\u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RS] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (2Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RR] h (\u03b8) = h\u03a3\u03a3\u22a4+ h2 2 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h3, y0). Building on Lemma 6.1, we calculate products, inverses, and logarithms of the components of e \u2126h in the following lemma. Lemma 6.2 For the covariance matrix e \u2126h defined in (54) it holds: (i) \u2126[RR] h (\u03b8)\u22121 = 1 h(\u03a3\u03a3\u22a4)\u22121 \u22121 2((\u03a3\u03a3\u22a4)\u22121Av(\u03b2) + Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h, y0); (ii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121 = h 2 I \u2212h2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h3, y0); (iii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121\u2126[RS] h (\u03b8) = h3 4 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0); (iv) \u2126[S|R] h (\u03b8) = h3 12 \u03a3\u03a3\u22a4+ R(h5, y0); (v) log det \u2126[RR] h (\u03b8) = d log h + log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0); (vi) log det \u2126[S|R] h (\u03b8) = 3d log h + log det \u03a3\u03a3\u22a4+ R(h2, y0); (vii) log det e \u2126h(\u03b8) = 4d log h + 2 log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0). Remark 5 We adjusted the objective functions for partial observations using the term c log det \u2126[\u00b7] h/c, where c is a correction constant. This adjustment keeps the term h Tr Av(\u03b2) in (v)-(vii) constant, not affecting the asymptotic distribution of the drift parameter. There is no h4-term in \u2126[S|R] h (\u03b8) which simplifies the approximation of \u2126[S|R] h (\u03b8)\u22121 and log det \u2126[S|R] h (\u03b8). Consequently, this makes (41) a bad choice for estimating the drift parameter. 18 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.2 Nonlinear solution e fh We now state a useful proposition for the nonlinear solution e fh (Section 1.8 in (Hairer et al., 1993)). Proposition 6.3 Let Assumptions (A1), (A2) and (A6) hold. When h \u21920, the h-flow of (15) approximates as: e fh(y) = y + h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y), (55) e f \u22121 h (y) = y \u2212h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y). (56) Applying the previous proposition on (21) and (22), we get: fh(y) = v + hN(y) + h2 2 (DvN(y))N(y) + R(h3, y), (57) f \u22c6\u22121 h (y) = v \u2212hN(y) + h2 2 (DvN(y))N(y) + R(h3, y). (58) The following lemma approximates log | det Dfh/2 (y; \u03b2) | in the objective functions and connects it with Lemma 6.2. Lemma 6.4 Let e fh be the function defined in (21). It holds: 2 log | det Dfh/2 (Ytk; \u03b2) | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121), 2 log | det Dfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001 | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). 
An immediate consequence of the previous lemma and that DvF(y; \u03b2) = Av(\u03b2) + DvN(y; \u03b2) is log det \u2126[RR] h (\u03b8) + 2 log | det Dfh/2 (Ytk; \u03b2) | = log det h\u03a3\u03a3\u22a4+ h Tr DvF(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). The same equality holds when Ytk is approximated by (Xtk, \u2206hXtk+1). The following lemma expands function \u00b5h( e fh/2(y)) up to the highest necessary order of h. Lemma 6.5 For the functions e fh in (21) and e \u00b5h in (28), it holds \u00b5[S] h ( e fh/2(y)) = x + hv + h2 2 F(y) + R(h3, y), (59) \u00b5[R] h ( e fh/2(y)) = v + h(F(y) \u22121 2N(y)) + R(h2, y). (60) 6.3 Random variables e Zk,k\u22121 and e Zk+1,k,k\u22121 To approximate the random variables Z[S] k,k\u22121(\u03b2), Z[R] k,k\u22121(\u03b2), Z [S] k,k\u22121(\u03b2), and Z [R] k+1,k,k\u22121(\u03b2) around Ytk\u22121, we start by defining the following random sequences: \u03b7k\u22121 := 1 h1/2 Z tk tk\u22121 dWt, (61) \u03bek\u22121 := 1 h3/2 Z tk tk\u22121 (t \u2212tk\u22121) dWt, \u03be\u2032 k := 1 h3/2 Z tk+1 tk (tk+1 \u2212t) dWt, (62) \u03b6k\u22121 := 1 h5/2 Z tk tk\u22121 (t \u2212tk\u22121)2 dWt, \u03b6\u2032 k := 1 h5/2 Z tk+1 tk (tk+1 \u2212t)2 dWt. (63) The random variables (61)-(63) are Gaussian with mean zero. Moreover, at time tk they are Ftk+1 measurable and independent of Ftk. The following linear combinations of (61)-(63) appear in the expansions in the partial observation case: Uk,k\u22121 := \u03be\u2032 k + \u03bek\u22121, (64) Qk,k\u22121 := \u03b6\u2032 k + 2\u03b7k\u22121 \u2212\u03b6k\u22121. (65) 19 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT It is not hard to check that \u03be\u2032 k + \u03b7k\u22121 \u2212\u03be\u2032 k\u22121 = Uk,k\u22121. This alternative representation of Uk,k\u22121 will be used later in proofs. The It\u00f4 isometry yields: E\u03b80[\u03b7k\u22121\u03b7\u22a4 k\u22121 | Ftk\u22121] = I, E\u03b80[\u03b7k\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03b7k\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 2I, (66) E\u03b80[\u03bek\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 6I, E\u03b80[\u03bek\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03be\u2032 k\u03be\u2032\u22a4 k | Ftk\u22121] = 1 3I, (67) E\u03b80[Uk,k\u22121U\u22a4 k,k\u22121 | Ftk\u22121] = 2 3I, E\u03b80[Uk,k\u22121(Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4| Ftk\u22121] = I. (68) The covariances of other combinations of the random variables (61)-(63) are not needed for the proofs. However, to derive asymptotic properties, we need some fourth moments calculated in Supplementary Materials S1. The following two propositions are the last building blocks for approximating the objective functions (30)-(31) and (40)-(41). 
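Before stating them, the covariance identities in (66)-(68) can be checked numerically. The sketch below is purely illustrative: it treats scalar noise (d = 1) and discretizes the integrals (61)-(64) by Riemann sums over a fine grid, recovering E[η²] = 1, E[ηξ] = 1/2, E[ξξ'] = 1/6, E[ξ²] = 1/3 and E[U²] = 2/3 up to Monte Carlo and discretization error.

```python
import numpy as np

def check_moments(n_mc=50_000, n_grid=100, h=0.1, seed=1):
    """Monte Carlo check of the second-moment identities (66)-(68) for scalar noise.
    eta, xi, xi' discretize the normalized stochastic integrals (61)-(62) over one
    interval of length h; U = xi'_k + xi_{k-1} combines two consecutive intervals (64)."""
    rng = np.random.default_rng(seed)
    dt = h / n_grid
    t = (np.arange(n_grid) + 0.5) * dt                        # midpoints of the subintervals
    dW1 = rng.normal(scale=np.sqrt(dt), size=(n_mc, n_grid))  # increments on [0, h]
    dW2 = rng.normal(scale=np.sqrt(dt), size=(n_mc, n_grid))  # increments on [h, 2h]
    eta   = dW1.sum(axis=1) / h**0.5                          # (61)
    xi    = (t * dW1).sum(axis=1) / h**1.5                    # (62), weight (t - t_{k-1})
    xi_p  = ((h - t) * dW1).sum(axis=1) / h**1.5              # (62), weight (t_{k+1} - t)
    xi_p2 = ((h - t) * dW2).sum(axis=1) / h**1.5              # same weight, next interval
    U     = xi_p2 + xi                                        # (64)
    print("E[eta^2]  =", round(float(np.mean(eta**2)), 3),  "(theory 1)")
    print("E[eta*xi] =", round(float(np.mean(eta*xi)), 3),  "(theory 1/2)")
    print("E[xi*xi'] =", round(float(np.mean(xi*xi_p)), 3), "(theory 1/6)")
    print("E[xi^2]   =", round(float(np.mean(xi**2)), 3),   "(theory 1/3)")
    print("E[U^2]    =", round(float(np.mean(U**2)), 3),    "(theory 2/3)")

check_moments()
```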
Proposition 6.6 The random variables e Zk,k\u22121(\u03b2) in (24) and e Zk+1,k,k\u22121(\u03b2) in (35) are approximated by: Z[S] k,k\u22121(\u03b2) = h3/2\u03a30\u03be\u2032 k\u22121 + h2 2 (F0(Ytk\u22121) \u2212F(Ytk\u22121)) + h5/2 2 DvF0(Ytk\u22121)\u03a30\u03b6\u2032 k\u22121 + R(h3, Ytk\u22121), Z[R] k,k\u22121(\u03b2) = h1/2\u03a30\u03b7k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 + h3/2DvF0(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h2, Ytk\u22121), Z [S] k,k\u22121(\u03b2) = \u2212h2 2 F(Ytk\u22121) \u2212h5/2 2 DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h3, Ytk\u22121), Z [R] k+1,k,k\u22121(\u03b2) = h1/2\u03a30Uk,k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h3/2DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + h3/2 2 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h2, Ytk\u22121). Remark 6 Proposition 6.6 yield E\u03b80[Z[R] k,k\u22121(\u03b2)Z[R] k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = \u2126[RR] h + R(h2, Ytk\u22121), E\u03b80[Z [R] k+1,k,k\u22121(\u03b2)Z [R] k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 2 3h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = 2 3\u2126[RR] h + R(h2, Ytk\u22121). Thus, the correction factor 2/3 in (40) compensates for the underestimation of the covariance of Z [R] k+1,k,k\u22121(\u03b2). Similarly, it can be shown that the same underestimation happens when using the backward difference. On the other hand, when using the central difference, it can be shown that E\u03b80[Z [R],central k+1,k,k\u22121(\u03b2)Z [R],central k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 5 12h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121), which is a larger deviation from \u2126[RR] h , yielding a larger correcting factor and larger asymptotic variance of the diffusion parameter estimator. Proposition 6.7 Let e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) be defined in (24) and (35), respectively. Then, Z[S|R] k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 \u2212h5/2 2 DvF0(Ytk\u22121)\u03a30(\u03be\u2032 k\u22121 \u2212\u03b6\u2032 k\u22121) + R(h3, Ytk\u22121), Z [S|R] k+1,k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30Uk,k\u22121 \u2212h2 2 F0(Ytk\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30Uk,k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h5/2 4 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h3, Ytk\u22121). 20 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.4 Objective functions Starting with the complete observation case, we approximate objective functions (30) and (31) up to order R(h3/2, Ytk\u22121) to prove the asymptotic properties of the estimators \u02c6 \u03b8[CR] N and \u02c6 \u03b8[CS|R] N . 
After omitting the terms of order R(h, Ytk\u22121) that do not depend on \u03b2, we obtain the following approximations: L[CR] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 (69) + 2 \u221a h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 + h N X k=1 Tr DvF(Ytk; \u03b2), L[CS|R] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ 3 N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) (70) \u22123h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121DvN(Ytk\u22121; \u03b2)\u03a30\u03b7k\u22121 \u2212h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 L[CF] N (Y0:tN ; \u03b8) = L[CR] N (Y0:tN ; \u03b8) + L[CS|R] N (Y0:tN ; \u03b8) . (71) The two last sums in (70) converge to zero because E\u03b80[(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u03b7\u22a4 k\u22121|Ftk\u22121] = 0. Moreover, (70) lacks the quadratic form of F(Ytk\u22121) \u2212F0(Ytk\u22121), that is crucial for the asymptotic variance of the drift estimator. This implies that the objective function L[CS|R] N is not suitable for estimating the drift parameter. Conversely, (70) provides a correct and consistent estimator of the diffusion parameter, indicating that the full objective function (the sum of L[CR] N and L[CS|R] N ) consistently estimates \u03b8. Similarly, the approximated objective functions in the partial observation case are: L[PR] N (Y0:tN ; \u03b8) = 2 3(N \u22122) log det \u03a3\u03a3\u22a4+ N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (72) + 2 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N\u22121 X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N\u22121 X k=1 (Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + h N\u22121 X k=1 Tr DvF(Ytk; \u03b2), L[PS|R] N (Y0:tN ; \u03b8) = 2(N \u22122) log det \u03a3\u03a3\u22a4+ 3 N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (73) + 6 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121F(Ytk\u22121; \u03b20) \u22123h N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 DvN(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + 2h N\u22121 X k=1 Tr DvN(Ytk; \u03b2), 21 Strang Splitting Parameter Estimator for Second-order SDEs A PREPRINT L[PF] N (Y0:tN ; \u03b8) = L[PR] N (Y0:tN ; \u03b8) + L[PS|R] N (Y0:tN ; \u03b8) . 
(74) This time, the term with Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121 vanishes because Tr(\u03a30Uk,k\u22121U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)) = 0 due to the symmetry of the matrices and the trace cyclic property. Even though the partial observation objective function L[PR] (X0:tN ; \u03b8) (40) depends only on X0:tN , we could approximate it with L[PR] N (Y0:tN ; \u03b8) (72). This is useful for proving the asymptotic normality of the estimator since its asymptotic distribution will depend on the invariant probability \u03bd0 defined for the solution Y. The absence of the quadratic form F(Ytk\u22121) \u2212F0(Ytk\u22121) in (73) indicates that L[PS|R] N is not suitable for estimating the drift parameter. Additionally, the penultimate term in (73) does not vanish, needing an additional correction term of 2h PN\u22121 k=1 Tr DvN(Ytk; \u03b2) for consistency. This correction is represented as 4 log | det Dvfh/2| in (41). Notably, this term is absent in the complete objective function (31), making this adjustment somewhat artificial and could potentially deviate further from the true log-likelihood. Consequently, the objective function based on the full likelihood (39) inherits this characteristic from (73), suggesting that in the partial observation scenario, using only the rough likelihood (72) may be more appropriate. 7 Conclusion Many fundamental laws of physics and chemistry are formulated as second-order differential equations, a model class important for understanding complex dynamical systems in various fields such as biology and economics. The extension of these deterministic models to stochastic second-order differential equations represents a natural generalization, allowing for the incorporation of uncertainties and variability inherent in real-world systems. However, robust statistical methods for analyzing data generated from such stochastic models have been lacking, presenting a significant challenge due to the inherent degeneracy of the noise and partial observation. In this study, we propose estimating model parameters using a recently developed methodology of Strang splitting estimator for SDEs. This estimator has demonstrated finite sample efficiency with relatively large sample time steps, particularly in handling highly nonlinear models. We adjust the estimator to the partial observation setting and employ either the full likelihood or only the marginal likelihood based on the rough coordinates. For all four obtained estimators, we establish the consistency and asymptotic normality. The application of the Strang estimator to a historical paleoclimate dataset obtained from ice cores in Greenland has yielded valuable insights and analytical tools for comprehending abrupt climate shifts throughout history. Specifically, we employed the stochastic Duffing oscillator, also known as the Kramers oscillator, to analyze the data. While our focus in this paper has been primarily confined to second-order SDEs with no parameters in the smooth components, we are confident that our findings can be extended to encompass models featuring parameters in the drift of the smooth coordinates. This opens up directions for further exploration and application of our methodology to a broader range of complex dynamical systems, promising deeper insights into their behavior and underlying mechanisms. 
Acknowledgement This work has received funding from the European Union\u2019s Horizon 2020 research and innovation program under the Marie Sk\u0142odowska-Curie grant agreement No 956107, \"Economic Policy in Complex Environments (EPOC)\"; and Novo Nordisk Foundation NNF20OC0062958."
18
+ }
intro_8K/test_introduction_short_2405.03690v2.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03690v2",
3
+ "title": "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs",
4
+ "abstract": "Recent advancements in Large Language Models (LLMs) have led to the\ndevelopment of Video Large Multi-modal Models (Video-LMMs) that can handle a\nwide range of video understanding tasks. These models have the potential to be\ndeployed in real-world applications such as robotics, AI assistants, medical\nsurgery, and autonomous vehicles. The widespread adoption of Video-LMMs in our\ndaily lives underscores the importance of ensuring and evaluating their robust\nperformance in mirroring human-like reasoning and interaction capabilities in\ncomplex, real-world contexts. However, existing benchmarks for Video-LMMs\nprimarily focus on general video comprehension abilities and neglect assessing\ntheir reasoning capabilities over complex videos in the real-world context, and\nrobustness of these models through the lens of user prompts as text queries. In\nthis paper, we present the Complex Video Reasoning and Robustness Evaluation\nSuite (CVRR-ES), a novel benchmark that comprehensively assesses the\nperformance of Video-LMMs across 11 diverse real-world video dimensions. We\nevaluate 9 recent models, including both open-source and closed-source\nvariants, and find that most of the Video-LMMs, especially open-source ones,\nstruggle with robustness and reasoning when dealing with complex videos. Based\non our analysis, we develop a training-free Dual-Step Contextual Prompting\n(DSCP) technique to enhance the performance of existing Video-LMMs. Our\nfindings provide valuable insights for building the next generation of\nhuman-centric AI systems with advanced robustness and reasoning capabilities.\nOur dataset and code are publicly available at:\nhttps://mbzuai-oryx.github.io/CVRR-Evaluation-Suite/.",
5
+ "authors": "Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Jameel Hassan, Muzammal Naseer, Federico Tombari, Fahad Shahbaz Khan, Salman Khan",
6
+ "published": "2024-05-06",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Multi AND Modal AND LLM",
14
+ "gt": "Recently, Large Language Models (LLMs) [Touvron et al., 2023, Zheng et al., 2023, Jiang et al., 2024] have demonstrated impressive reasoning and planning capabilities while simultaneously handling a wide range of NLP tasks [Wei et al., 2022a, Brown et al., 2020]. Consequently, their integration with the vision modality, specifically for video understanding tasks, has given rise to Video Large Multi-modal Models (Video-LMMs) [Li et al., 2023b]. These models act as visual chatbots that accept both text and video as input and handle a diverse set of tasks, including video comprehension [Maaz et al., 2023], detailed video understanding [Lin et al., 2023], and action grounding [Zhang et al., 2023]. As these models directly capture video data, they hold substantial potential for deployment in real-world applications such as robotics, surveillance, medical surgery, and autonomous vehicles. However, as these models assume an expanding role in our everyday lives, assessing their performance in comprehending complex videos and demonstrating reliable reasoning and robustness capabilities arXiv:2405.03690v2 [cs.CV] 8 May 2024 Benchmark Textual Complex In the wild Contextual Multiple Temporal Order Robustness Reasoning (OOD) Dependency Actions & Fine-grained MSVD-QA [Xu et al., 2017] MSRVTT-QA [Xu et al., 2017] TGIF-QA [Jang et al., 2017] Activity Net-QA [Yu et al., 2019] VideoChat-GPT [Maaz et al., 2023] MVBench [Li et al., 2023c] SEED-Bench [Li et al., 2023a] CVRR-ES (ours) Table 1: Comparison of CVRR-ES with existing bench- marks for video QA. The CVRR-ES benchmark repre- sents an initial effort to assess Video-LMMs in the context of their applicability and suitabil- ity in real-world applications. Non-existent actions with non-existent scene depictions. 6.0% Multiple actions in a single video. 13.25% Fine-grained action understanding. 9.58% Partial actions. 8.58% Non-existent actions with existent scene depictions. 5.75% Interpretation of visual context. 11.38% Continuity and Object Instance Count. 7.38% Unusual and Physically Anomalous activities. 7.92% Interpretation of social context. 11.67% Understanding of emotional context. 12.17% Time order understanding. 6.33% CVRR Evaluation Suite 0 20 40 60 80 100 Accuracy % (averaged over 11 video dimensions) Video LLaVa MovieChat LLaMA-VID Video-LLaMA-2 Video-ChatGPT VideoChat TimeChat Gemini-Pro GPT4V(ision) Human Video LMMs 15.92% 16.41% 16.46% 21.62% 24.96% 25.78% 32.89% 53.2% 70.78% 96.67% Figure 1: Left: CVRR-ES comprises of 11 diverse complex video evaluation dimensions encompass- ing a variety of complex, real-world contexts. Right: Overall performance of Video-LMMs on the CVRR-ES benchmark. Results for each Video-LMM are averaged across 11 video dimensions. across diverse real-world contexts becomes essential. Video-LMMs with such capabilities will be more effective when integrated into our daily lives for solving perception tasks and will be a promising step towards building human-centric AI-assistive systems. Several attempts in literature have been made to benchmark Video-LMMs. SEED-Bench [Li et al., 2023a] curated a MCQ-based benchmarking dataset including 3 evaluation dimensions for videos. Similarly, MV-Bench [Li et al., 2023c] constructed the Video-LMM benchmark and assembled 20 challenging video tasks for evaluating the spatial and temporal understanding of these models. 
While these methods aim at benchmarking Video-LMMs, they predominantly evaluate video and/or temporal comprehension abilities and overlook the complex reasoning aspects of Video-LMMs for real-world context, and their robustness towards user input text queries; both of which are crucial to ensure their responsible engagement with humans in various real-world situations in the wild. While some studies have explored similar areas such as hallucinations in image-based LLMs [Liu et al., 2023a, Qian et al., 2024], no such comprehensive study exists for the case of Video-LMMs. Motivated by the wide-scale applications of Video-LMMs and the lack of world-centric complex video benchmarking efforts, we present a new benchmark, Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), to comprehensively assess the performance of Video-LMMs. As shown in Tab. 1, CVRR-ES evaluates Video-LMMs on key aspects of robustness and reasoning in videos, encompassing video domains that more accurately test models in real-world scenarios such as videos having contextual dependency and in-the-wild aspects. CVRR-ES is an open-ended video QA benchmark comprising 11 real-world video category dimensions (Fig. 1, left) that encompass diverse evaluation aspects. These dimensions span from context-dependent (e.g., social, emotional, etc.) categories to ones that often take place in the wild such as videos containing physically anomalous activities. We comprehensively evaluate a representative set of 9 recent Video-LMMs (Fig. 1, right) including both open-source and closed-source models on the CVRR-ES benchmark using a LLM-assisted automatic evaluation framework [Maaz et al., 2023, Cai et al., 2023]. The performance of Video-LMMs on the CVRR-ES benchmark reveals that these models struggle to correctly comprehend complex videos indicating their weak reasoning and lack of robustness to the textual user queries (Fig. 2). For instance, state-of-the-art Video-LLaVA [Lin et al., 2023] achieves only 15.92% performance averaged across 11 video dimensions of CVRR-ES. In contrast, closed-source models including GPT4V(vision) [OpenAI, 2023] and Gemini-Vision-Pro [Google, 2023] exhibit relatively stronger performance but still lag behind the performance of humans. Using CVRR-ES benchmark, we extensively perform quantitative and qualitative analysis formulating important insights into these Video-LMMs based on their failure cases and individual performances across the diverse video dimensions. 2 Based on the visual cues provided in the sequence of images, it appears that four individuals exit the vehicle. Correct Response: Three people exit the car to retrieve black bags from the trunk. 2. Two individuals exit the vehicle to retrieve items from the trunk. Two individuals exit the vehicle to retrieve items from the trunk. Two individuals exit the vehicle to retrieve items from the trunk. In the video, we can see a young man and woman exiting the car \u2026 How many individuals exit the vehicle to retrieve items from the trunk? Based on \u2026, one person is performing a prank on another individual. The prank seems to involve pouring a yellow liquid from a bowl, which is sneakily positioned behind the unsuspecting person's back at waist level. Correct Response: one man uses his back muscles to crack eggs placed by another man into a bowl behind him. The process is amusing and ends with a celebratory turn by the man holding the eggs. 
\u2026 Another man is holding a bowl of food behind the first man \u2026 the second man appears to be pouring the contents of the bowl down the first man's shorts. A man is seen standing in a kitchen with his hands on his hips. Another man is standing behind him and pouring something into a bowl. a guy is standing with his back facing someone else. The person standing behind him pours water on his back, causing him to shiver. In the video, a man is seen standing in a kitchen and washing dishes. He then puts on a white robe and continues washing dishes. In the video, we see a young man pouring water from a bucket to a plastic bowl. What is happening in the video? Why is the golfer celebrating in the video? Based on \u2026 the golfer is celebrating because they successfully hit the golf ball from a sand bunker onto the green and possibly close to the hole\u2026 Correct Response: The golfer is not celebrating and instead he is expressing his anger and frustration by hitting the golf stick into the ground. The golfer is celebrating because he made a hole-in-one. A hole-in-one is \u2026 The golfer is celebrating because he hit the ball into hole on his first attempt. The golfer is celebrating because he has just hit the ball into the hole. The golfer is celebrating because he has just made a hole in one, \u2026 The golfer is celebrating in the video because he is successfully teeing off and \u2026 Before the mug was moved, a tennis ball became visible underneath it. Correct Response: No other object was visible before the mug was moved in the video. A yellow tennis ball was visible in the scene before the mug was moved. A tennis ball. Before the mug was moved, a tennis ball was visible in the scene. In the beginning of the video, the person was holding a mug with a tennis ball on a table. Before the mug was moved, a white vase was visible in the scene. Before the mug was moved, what other object was visible in the scene? A tennis ball was visible in the scene before the mug was moved. GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Figure 2: We observe that most Video-LMMs struggle to reason over complex videos (rows 1-3) and exhibit weak robustness and rectification capabilities when prompted to generate answers for user questions that can sometimes be confusing (row 4). The QA pairs in Comprehensive Video Reasoning and Robustness Evaluation Suite (CVRR-ES) benchmark assess the performance of Video-LMMs beyond general video comprehension. Based on our analysis, we observe that standard prompting of Video-LMMs struggles in steering their focus for complex video understanding. Additionally, their limitations in reasoning and robust video understanding of real-world scenarios are dominantly driven by the quality of textual inputs (i.e., user questions). Based on these insights, we develop a training-free Dual-Step Contextual Prompting (DSCP) technique, which effectively steers the model\u2019s behavior during inference to elicit video-specific reasoning and improved robustness within Video-LMMs. With DSCP, Video-LMMs show substantial improvements on our benchmark, suggesting the potential of prompting techniques for Video-LMMs. Our main contributions can be summarised as follows: \u2022 We present the Complex Video Robustness and Reasoning Evaluation suite (CVRR-ES), a Video Question Answering benchmark designed to assess the reasoning and robustness capabilities of Video-LMMs across 11 diverse world-centric complex video dimensions. 
\u2022 We comprehensively evaluate both open-source and closed-source Video-LMMs on the CVRR-ES benchmark and find that most models exhibit weak performance, highlighting their limited reasoning in complex videos and lack of robustness towards user text queries. \u2022 We conduct extensive analysis and formulate important conclusions about Video-LMMs based on their failure cases and performance on the CVRR-ES benchmark. Our findings provide valuable insights for building the next generation of human-centric AI systems with improved robustness and reasoning capabilities. \u2022 To improve Video-LMMs\u2019 reasoning and robustness abilities, we formulate a model-agnostic and training-free prompting technique that effectively enhances their performance. 3",
15
+ "main_content": "Video Large Multi-modal models (Video-LMMs). Video-LMMs [Lin et al., 2023, Li et al., 2023d, Zhang et al., 2023] are advanced visual chatbots capable of performing a wide range of video understanding tasks, including video comprehension and captioning, video question-answering, and action grounding. These models accept both video and textual inputs and generate textual responses. From an architectural perspective, Video-LMMs typically combine pre-trained vision backbones [Radford et al., 2021, Fang et al., 2023, Wang et al., 2022b] with large language models [Touvron et al., 2023, Zheng et al., 2023] using connector modules such as MLP adapters, Q-former [Dai et al., 2023], and gated attention [Alayrac et al., 2022]. VideoChat [Li et al., 2023b] and VideoChat-GPT [Li et al., 2023d] presented initial open-source efforts in this direction and were trained with two stages of alignment and video-instruction following objectives. Recently, more advanced Video-LMMs have emerged in the field, with some models focusing on improving model architectures [Li et al., 2023d], expanding to new tasks [Munasinghe et al., 2023], and enabling support for long videos [Song et al., 2023, Ren et al., 2023]. In this work, we aim to develop a comprehensive benchmarking evaluation framework to assess the reasoning and robustness capabilities of Video-LMMs and develop a training-free prompting technique to improve their performance on these fronts. Benchmarking Video-LMMs. With the growing number of Video-LMMs emerging in the research community, several works have presented evaluation frameworks to assess and quantify these models for benchmarking and analysis purposes. SEED-Bench [Li et al., 2023a] evaluates the visual capabilities in both image and Video-LMMs across 12 unique dimensions. MV-Bench [Li et al., 2023c] curates 20 challenging video tasks to evaluate spatial and temporal understanding of VideoLMMs. Video-ChatGPT [Maaz et al., 2023] develops a quantitative evaluation framework to assess model understanding across five aspects of general video comprehension, such as the correctness and consistency of model captions. While these evaluation frameworks provide effective insights, their assessments do not extend beyond general video-comprehension metrics to more advanced aspects of reasoning and robustness, particularly for real-world context cases. In contrast, our work focuses on providing a complex video reasoning and robustness benchmark across 11 diverse real-world-centric evaluation types and offers a more thorough assessment of Video-LMMs in practical applications. Training-free Prompting Techniques. Steering model behavior at inference time using prompting has become a common paradigm in the NLP domain. Prompting [Wei et al., 2022b, Wang et al., 2022a] refers to the set of instructions given as a prefix to the language model to better align model responses with human intent without the need for task-specific fine-tuning. Prompting techniques can be as simple as a single sentence (e.g., \"Let\u2019s think step by step\") such as zero-shot chain of thought [Wei et al., 2022b] prompting, to more detailed techniques such as combining chain-ofthought prompting with few-shot learning [Brown et al., 2020] and self-consistency chain of thought prompting [Wang et al., 2022a]. Surprisingly, training-free prompting techniques for Video Large Multi-modal Models (Video-LMMs) have been minimally explored. 
In this work, we develop a dual-step prompting technique based on principled prompt instructions specifically designed to steer the model\u2019s behavior for improved reasoning and robustness over complex videos. 3 Complex Video Reasoning and Robustness Evaluation Suite As Video-LMMs are touching new real-world applications, it is essential to ensure that they robustly handle the user inputs, comprehend the visual world, and exhibit human-like reasoning capabilities. In this work, our goal is to establish a comprehensive benchmark that specifically assess the robustness and reasoning capabilities of Video-LMMs in a variety of complex and contextual videos covering diverse scenarios. To this end, we present Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES). We first provide a holistic overview of CVRR-ES benchmark below and detail the video evaluation dimensions in Sec. 3.1. Subsequently, we present the CVRR-ES creation process in Sec. 3.2. We provide details on the dataset quality and human evaluation in Appendix B. Overview of CVRR-ES Benchmark. CVRR-ES encompasses evaluation dimensions that cover diverse video categories related to real-world scenarios, ranging from context-dependent (e.g., social, emotional) categories to video types that often take place in the wild (e.g., anomalous activities). Specifically, we have compiled 11 video evaluation dimensions and curated 2,400 high-quality openended question-answer (QA) pairs, spanning 217 high-quality videos. The average video duration is 22.3 seconds, with maximum and minimum durations of 183 and 2 seconds, respectively. In Fig. 4 Figure 3: CVRR-ES Benchmark Statistics. Left: Frequency distribution of the type of questions. Right: Illustration of the most frequent keywords in the answer-set of CVRR-ES benchmark. 3 (left), we quantify the distribution of different question types present in our benchmark. This diverse set of questions aims to comprehensively capture the model\u2019s answering capabilities based on reasoning and robustness criteria. We show the word cloud plot based on the frequency of key words in the answer set of CVRR-ES in Fig. 3 (right). The frequent words correspond to objects and attributes with which Video-LMMs could most likely interact when deployed in practical scenarios. 3.1 CVRR-ES Video Category definitions. To assess the robustness and reasoning capabilities of Video-LMMs in the CVRR-ES benchmark, we carefully curate 11 diverse benchmark evaluation categories. As shown in Fig. 1 (left), these categories encompass a wide range of real-world complex and contextual videos within each category. Below, we define each video evaluation dimension of the CVRR-ES benchmark in detail. 1) Multiple actions in a single video. This category includes videos that contain multiple activities within a single video. The number of activities varies from 2 to 4 in these videos, mostly featuring humans performing multiple activities. We curate QA pairs in this category aiming to identify whether the model can reason over challenging questions concerning multiple actions and understand the interrelation between different actions within a video. 2) Fine-grained action understanding. We gather video samples with fine-grained actions. These actions encompass various fine-grained activities performed by humans, including pushing, opening, closing, spreading, sitting, etc. This category presents a challenge to the model\u2019s comprehension of subtle and fine-grained actions through carefully crafted questions. 
3) Partial actions. Based on our observations that Video-LMMs predominantly generate content that may be contextually relevant and likely to co-occur with the depicted scene in the video, we compile videos featuring actions that have a high probability of being followed by subsequent actions but are not executed in the video. For instance, an action such as cracking an egg in a kitchen setting often anticipates the subsequent action of frying/cooking the egg. 4) Time order understanding. Accurately recognizing the temporal sequence of activities in videos is crucial for distinguishing between atomic actions, such as pushing and pulling. We collect videos of fine-grained actions occurring in a particular temporal direction and curate challenging questions. 5) Non-existent actions with existent scene depictions. This category examines the model\u2019s robustness and reasoning behavior in scenarios where we introduce non-existent activities into the video without altering the physical and spatial scenes or environmental details in it. 6) Non-existent actions with non-existent scene depictions. In this evaluation category, we make the QA task more challenging by creating questions that include both non-existent activities and non-existent scene comprehension. Non-existent scene comprehension involves changing the objects, attributes of objects, and background scene description. This evaluates the model\u2019s reliability to correct misleading questions and avoid generating imaginary content. 7) Continuity and object instance count. This category contains videos (both real and simulations) designed to test the models\u2019 ability to accurately recognize the number of instances of objects, people, etc., and distinguish between existing objects and new ones introduced in the same video scene. 8) Unusual and physically anomalous activities. This category consists of videos with unconventional activities and physical phenomena that seemingly defy the laws of physics. We meticulously 5 collect relevant videos from various sources on the internet, focusing on capturing unusual activities such as a person floating in the air or driving a motorbike on a running river. We believe that assessing Video-LMMs in such scenarios is crucial, as it allows us to determine whether they can generalize to understand actions in out-of-distribution videos that can occur in practical situations. 9) Interpretation of social context. In the real world, human actions are often influenced by social context in their surroundings. For instance, a person might be helping an elderly individual cross the road. This category evaluates Video-LMMs on such scenarios to determine their ability to accurately infer the rationale behind actions based on the depicted social context. We gather diverse videos from the internet and create challenging questions that encompass the social context dimension. 10) Understanding of emotional context. Similar to social context, humans can accurately understand and interpret each other\u2019s actions by considering the emotional context. For example, a person being emotionally moved and crying in a gathering could be a happy moment if it is one stemming from success/joy. We collect videos and curate challenging reasoning questions aimed at recognizing the nature of actions solely based on emotional context for evaluating Video-LMMs. 11) Interpretation of visual context. 
This dimension focuses on assessing the model\u2019s reasoning abilities to recognize the actions by leveraging the overall visual contextual cues in the video. We curate specific videos containing actions where activity identification and reasoning require visual contextual cues. For example, to identify the number of people present based on the presence of shadows, one must utilize the visual context from the shadows to reason about the question. Qualitative Examples. Fig. 2 shows examples of collected videos for the CVRR-ES benchmark. The curated videos are carefully selected to be diverse and contain rich spatio-temporal content, aligned with the proposed video evaluation dimensions. 3.2 Building CVRR-ES Benchmark After defining the video evaluation dimensions, we now proceed toward building the CVRR-ES benchmark which consists of three stages. We present each stage in detail below. Stage 1: Data collection and Annotation. We first collect high-quality videos and annotate each video using human assistance. To ensure that each evaluation dimension captures the relevant attributes and information, we meticulously select videos that are representative of specific characteristics associated with that dimension. Across the 11 dimensions, 214 unique videos are selected for the benchmark with around 20 videos per evaluation category. Around 60% of these videos are collected from public academic datasets. To introduce diversity in the benchmark distribution, we incorporate video samples from multiple academic datasets including Something-Something-v2 [Goyal et al., 2017], CATER [Girdhar and Ramanan, 2020], Charades [Sigurdsson et al., 2016], ActivityNet [Caba Heilbron et al., 2015], HMDB51 [Kuehne et al., 2011], YFCC100M [Thomee et al., 2016]. The remaining 40% of videos are collected from the internet. Following the video collection process, two experienced human annotators are assigned to generate captions for each video. For videos where initial captions or metadata are available from academic datasets, the captions are generated by the annotators based on them. For videos collected from the internet, captions are entirely generated by human annotators. To ensure consistency and high quality, we provide annotation instructions to annotators, who generate captions accordingly. Personalized annotation guidelines are used for each video category. Refer to additional details in Appendix B. Stage 2: Question-Answer Generation. The first challenge is to select an evaluation setting to assess Video-LMMs. Humans typically engage in free-form conversation to interact with each other in day-to-day life. Inspired by this, we aim to simulate a similar style of interaction with Video-LMMs by curating open-ended QA pairs to evaluate these models for robustness and reasoning. We feed detailed ground-truth video captions to GPT-3.5 LLM, which are utilized to generate open-ended questions covering both reasoning and robustness aspects. Reasoning QA pairs: With Video-LMMs beginning to interact more directly with humans in our lives, it\u2019s crucial to validate the reasoning abilities of Video-LMMs for more reliable Human-AI interaction. When evaluating the reasoning capabilities of Video-LMMs, we aim to determine whether these models can understand the input video not only by analyzing spatial content but also by grasping the underlying rationale behind the occurring activities and their relationships with the surrounding context. 
This involves creating questions that go beyond simple video comprehension and scene 6 description and require the model to engage in complex logical inference, contextual understanding, and reasoning about counterfactual and hypothetical scenarios. Robustness QA pairs: In addition to evaluating the reasoning capabilities of LLMs, it is important to assess Video-LMMs to ensure their robust and responsible performance in real-world scenarios. In the context of Video-LMMs, robustness can be evaluated from both visual (video input) and textual interfaces. Our focus in this work lies on textual interface robustness by particularly testing the model\u2019s comprehension when posed with misleading or confusing questions. This scenario mirrors realistic situations where users, based on their expertise levels, may pose irrelevant, misleading, or confusing questions. It is crucial for models to demonstrate reliability and robustness in handling such queries and avoid generating unreal or hallucinated content for input videos. We curate specific prompts for each evaluation dimension to instruct LLM in generating QA pairs. Example prompts used as an instruction to LLMs for curating QA pairs for robustness and reasoning aspects are provided in Fig. 14 in the Appendix D. Stage 3: QA Pairs Filtration. After generating QA pairs, a manual filtration step is employed, with human assistance to verify each generated QA pair. Approximately 30% of the QA pairs generated by GPT-3.5 are found to be noisy, containing questions that are unrelated to the video evaluation dimensions or unanswerable based on the provided ground-truth captions. Additionally, many questions contain answers within the question itself. Therefore, an exhaustive filtering process is conducted which involves QA rectification and removing those samples which are not relevant to the video or evaluation type. This process results in a final set of 2400 high-quality QA pairs for the CVRR-ES benchmark. Examples of QA pairs are shown in Tab. 4 in the Appendix. Stage 4: Evaluation Procedure. Previous methods in the literature [Maaz et al., 2023, Cai et al., 2023, Liu et al., 2023a, Qian et al., 2024] have explored using LLM models as judges for quantifying results in open-ended QA benchmarks. We adopt a similar approach and instruct LLMs to act as teachers to assess the correctness of predicted responses from Video-LMMs compared to ground-truth answers. We generate open-ended predictions from Video-LMMs by providing video-question pairs as inputs and then present the model predictions and their corresponding ground-truth responses to the LLM Judge alongside the evaluation prompt. The Judge determines whether the prediction is correct or incorrect through a binary judgment, assigns a score from 1 to 5 representing the quality of the prediction, and provides a reasoning to explain its decision. Our ablative analysis in the Appendix. D demonstrates that reasoning-constrained LLM-based evaluation aligns well with human-based judgment. The evaluation prompt is shown in Fig. 13 in the Appendix D. 4 Dual-Step Contextual Prompting for Video-LMMs. Given their wide-scale potential in practical downstream applications, new Video-LMMs are frequently introduced by the research community. Despite the availability of numerous Video-LMMs, the majority of them are trained using only positive examples and video-conversational templates that are primarily limited to tasks such as video-captioning and video question answering. 
This leads to highly over-affirmative behavior and a lack of self-rectification abilities in these models (Sec. 5.4). Dual Step Contextual Prompting for Video-LMMs Retrieving Contextual reasoning information (Step 1) As an intelligent video comprehension model, focus on these guidelines: 1. Differentiate recurring objects, count accurately, and identify movements and poses. 2. Understand directional movements and temporal order. 3. Pay attention to fine-grained actions with precision. 4. Assess incomplete actions without assuming completion. 5. Detect emotional, social, and visual cues. 6. Capture and analyze all relevant actions. 7. Identify unusual actions accurately. 8. Disagree with incorrect information given in question. 9. If you do not find the evidence in the frames, you can give a definite answer by assuming that the asked action/attribute is not present. 10. Provide to the point and concise response. Now, proceed with answering the following question faithfully while keeping above guidelines in mind: Question: What is happening in the video? Context conditioned question-answering (Step 2) Context for the given video is: {step 1 response}. Now answer a question truthfully based on the video and the provided context. Question: {User question} Figure 4: Principled prompt instructions in our DSCP method for improving reasoning and robustness in Video-LMMs. Additionally, the templates have minimal focus on enhancing reasoning and robustness capabilities through reasoning-based instruction-tuning pairs, resulting in weak performance of such models against robustness and reasoning QA evaluations in the CVRR-ES benchmark. Furthermore, curating reasoning-based instruction fine-tuning datasets requires meticulous data curation steps, and retraining these models is computationally expensive [Li et al., 2023d, Ren et al., 2023]. Alternatively, training-free prompting techniques in NLP literature have shown effectiveness in eliciting reasoning abilities in LLMs such as chain of thought and self-consistency prompting [Wei et al., 2022b, Wang et al., 2022a]. Inspired by these approaches, we introduce a prompting technique called Dual Step Contextual Prompting (DSCP), which aims to steer VideoLMM focus for enhanced reasoning while simultaneously encouraging the models to provide robust and 7 Video-LLaVA LLaMA-VID MovieChat How many different wigs with different colors did the young girl try before settling on the black one? I can see the young girl wearing a black wig, and a brunette wig is in the background. The girl is wearing a pink shirt in the beginning of the video The young girl tried on three different wigs with different colors before settling on the black one. The young girl tried on three different wigs with different colors before settling on the black one. Improved video reasoning and robustness with our DSCP Method The video does not provide information about the number of different wigs the young girl tried before settling on the black one. I cannot give an accurate answer as the video does not show the different wigs that the young girl tried. The video does not provide information about the number of wigs the young girl tried before settling on the black one Is the atmosphere in the dressing room tense and uncomfortable as the girl receives her new hair? Yes, the atmosphere in the dressing room is tense and uncomfortable as the girl receives her new hair. The video does not provide any information about the atmosphere in the dressing room. 
The video does not provide any information about the atmosphere in the dressing room. However, it is possible \u2026 Improved video reasoning and robustness with our DSCP Method No, the atmosphere in the dressing room is not tense and uncomfortable. The woman brushing the hair is smiling, and the woman in the chair is smiling as \u2026 \u2026, but the video does not suggest that the girl is uncomfortable or tense while receiving her new hair. No, the atmosphere in the dressing room is not tense or uncomfortable. The girl is laughing and smiling \u2026, indicating that she is enjoying the experience. Can we see visitors petting the white tiger through the cage bars? Sure, in this video we can see a baby panda inside a cage with bars. There are also other \u2026 Yes, we can see visitors petting the white tiger through the cage bars in the video. Yes, we can see visitors petting the white tiger through the cage bars. Improved video reasoning and robustness with our DSCP Method No, we cannot see visitors petting the white tiger through the cage bars in the video. No, we do not see visitors petting the white tiger through the cage bars in the video. No, we cannot see visitors petting the white tiger through the cage bars in the video. Does the chimpanzee use tools to repair the zoo vehicle in the video? Yes, the chimpanzee in the zoo is seen using tools to fix the zoo vehicle. Yes, the chimpanzee is seen using a tool to fix the vehicle in the video. Yes, the chimpanzee is seen using a tool to repair the zoo vehicle in the video. Improved video reasoning and robustness with our DSCP Method No, the chimpanzee does not use tools to repair the zoo vehicle in the video. \u2026 the chimpanzee driving the car \u2026 The chimpanzee does not use tools to repair the zoo vehicle in the video. It can be seen driving the car \u2026 No, the chimpanzee does not use tools to repair the zoo vehicle in the video. Figure 5: Qualitative results of DSCP prompting method. Using our DSCP approach, Video-LMMs demonstrate enhanced robustness and reasoning capabilities over complex videos. grounded answers. DSCP is a two-step prompting method that 1) ensures that the model comprehends the video while reasoning over crucial aspects of complex video understanding such as contextual information and decoding the complex relationships between objects and motions, etc., and 2) encourages robustness by generating the response against the question while conditioning both on video and the context retrieved in the first step. Below we discuss each step of DSCP in detail. Step 1: Reasoning over the video. We first guide Video-LMMs using principled prompts to interpret video content from a reasoning perspective. As shown in Fig. 4 (in blue), we formulate ten principled reasoning-based instructions for prompting, Preason, which directs Video-LMMs to not only comprehend the general video content but also steers them to reason over the rationale behind occurring activities and their relationships with the surrounding context. These prompt instructions include specific considerations like contextual priors, the temporal order of actions, instance count, and attributes. Additionally, the prompting technique incorporates instructions to ensure conciseness and factuality, aiming to mitigate hallucinations. Given a Video-LMM F and input video V, we retrieve contextual reasoning information Icontext by providing principled reasoning prompt Preason along with the video to the LMM, Icontext = F(Preason|V). 
The contextual information is utilized in the second step of DSCP to generate a more grounded response to the user question. Step 2: Context conditioned question answering. As discussed earlier, Video-LMMs are primarily trained with positive examples to answer questions, with limited emphasis on reasoning and robustness aspects. Consequently, enabling direct interaction of Video-LMMs with users in real-world scenarios can result in undesired responses when the user question is confusing and deceiving due to their extreme over-affirmative behavior. To address these challenges, we propose incorporating an additional inference step in Video-LMMs before answering the user\u2019s question. We note that Video-LMMs often possess factual knowledge about the video content but may become distracted and produce hallucinations when prompted with confusing or misleading questions (more details in Appendix C). Specifically, we devise a prompting method that conditions the model to first comprehend the video in detail without attending to the user question, thereby eliminating the influence of the question. The complex video comprehension information refers to Icontext formulated in step 1. Subsequently, we pose the user question in the second step using prompt Puser which combines user question and the contextual reasoning information (Fig. 4, in green) while conditioning the model on both the video and the contextual reasoning information Icontext. Concretely, Final response = F(Puser|V), where Puser = [question; Icontext]. 8 Table 2: Evaluation results of Video LLMs across various video-evaluation categories on the CVRR-ES benchmark. We present results for both open-source and closed-source models, alongside human evaluation results which serves as the upper bound on the benchmark. Benchmark Category Video-LLaMA-2 VideoChat Video-ChatGPT Video-LLaVA MovieChat LLaMA-VID TimeChat Gemini-V Pro GPT4V Human Multiple Actions in 16.98 23.90 27.67 15.72 12.58 17.92 28.30 43.08 57.55 93.40 single video. Fine-grained action 29.57 33.48 26.96 25.22 23.48 26.09 39.13 51.61 77.39 95.65 understanding. Partial 24.76 33.01 22.82 13.59 21.36 14.56 49.51 67.48 73.79 98.54 actions. Time order 16.45 31.58 27.63 21.05 16.45 19.74 34.21 45.39 57.89 97.37 understanding. Non-existent actions with 10.14 15.22 23.19 5.07 5.07 2.90 23.19 57.25 71.01 97.10 existent scene. Non-existent actions with 13.19 14.58 17.36 3.47 11.81 6.94 13.89 49.64 75.00 100.00 non-existent scene. Continuity and Object 28.25 24.29 28.41 21.47 19.77 24.86 34.46 36.16 62.71 96.49 instance Count. Unusual and Physically 18.95 18.42 18.95 15.79 17.89 16.32 27.37 60.00 74.74 96.84 Anomalous activities. Interpretation of 25.00 31.07 32.50 18.93 17.14 13.93 39.29 64.29 79.64 97.51 social context. Understanding of 21.92 23.63 21.23 15.07 13.70 14.73 27.40 47.26 66.44 95.55 emotional context. Interpretation of 32.60 34.43 27.84 19.78 21.25 23.08 45.05 63.00 82.42 94.87 visual context. Average 21.62 25.78 24.96 15.92 16.41 16.46 32.89 53.20 70.78 96.67 Intuitively, the factual content generated in the first step will guide the model towards a robust response in the second step to produce factual and correct responses, even in the presence of noisy/misleading user questions. We illustrate the qualitative results of the DSCP method in Fig. 5. This approach leads to responses that are better grounded with the actual video content and are robust against potential lesser-quality user queries. 
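To make the two-step flow above concrete, here is a minimal Python sketch of DSCP. The `query_video_lmm` callable and the abbreviated guideline text are illustrative placeholders, not the authors' released interface or exact prompt.

```python
# Minimal sketch of the two-step DSCP flow described above.
# `query_video_lmm(video, prompt)` is a hypothetical wrapper around any
# Video-LMM chat interface; the guideline text is heavily abbreviated here.

REASONING_GUIDELINES = (
    "As an intelligent video comprehension model, differentiate recurring "
    "objects, respect temporal order, assess incomplete actions without "
    "assuming completion, disagree with incorrect premises, and answer "
    "concisely. Question: What is happening in the video?"
)

def dscp_answer(query_video_lmm, video, user_question):
    # Step 1: retrieve contextual reasoning information, I_context = F(P_reason | V)
    i_context = query_video_lmm(video, REASONING_GUIDELINES)

    # Step 2: answer the user question conditioned on both the video and I_context,
    # i.e. Final response = F(P_user | V) with P_user = [question; I_context]
    p_user = (
        f"Context for the given video is: {i_context}. "
        "Now answer a question truthfully based on the video and the provided context. "
        f"Question: {user_question}"
    )
    return query_video_lmm(video, p_user)
```

Under this sketch, the Step 1 output could be computed once per video and reused across different user questions, although the paper does not prescribe caching.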
As we will later show, the DSCP technique effectively enhances the performance of Video-LMMs on the CVRR-ES benchmark. 5 Evaluation Experiments on CVRR-ES. Video-LMMs. Both open-source and closed-source models are selected for the evaluation. Among the open-source models, we evaluate 7 recent Video-LMMs, including Video-LLaVA [Lin et al., 2023], TimeChat [Ren et al., 2023], MovieChat [Song et al., 2023], LLaMA-ViD [Li et al., 2023d], VideoChat [Li et al., 2023b] Video-ChatGPT [Maaz et al., 2023], and Video-LLaMA-2 [Zhang et al., 2023]. For evaluating closed-source models, we use Gemini-Pro-Vision [Google, 2023] and GPT-4V(vision) [OpenAI, 2023]. Refer to the Appendix A for implementation details. 5.1 Main Experiments on CVRR-ES. In Tab. 2, we present the evaluation results of Video-LMMs on the 11 dimension categories of the CVRR-ES benchmark. Below, we present several key findings. Open Source Video-LMMs struggles on CVRR-ES benchmark. All open-source LMMs show inferior performance across the different evaluation dimensions of CVRR-ES. Interestingly, some of the earlier developed open-source Video-LMMs, like Video-LLaMA, VideoChat, and Video-ChatGPT, exhibit higher performance compared to more recent models such as Video-LLaVA, MovieChat, and LLaMA-VID. Overall, TimeChat achieves the highest performance of 32.89% averaged across the 11 evaluation dimensions among open-source LMMs, followed by VideoChat with a score of 25.78%. Humans rank highest in CVRR-ES benchmark. Human studies achieve the highest performance on the CVRR-ES benchmark, with over 95% accuracy across all evaluation dimensions. Furthermore, these results suggest that the CVRR-ES QA pairs are answerable and suitable for benchmarking. Closed source models perform competitively on CVRR-ES. As shown in Tab. 2, both Gemini and GPT4V surpass the performance of open-source models and achieve high gains across all evaluation dimensions. The competitive results of GPT4V and Gemini on complex video evaluation dimensions such as partial actions, non-existent action/scene depiction, and context-dependent categories show 9 Prompting Method VideoChat Video-LLaVA MovieChat LLaMA-VID TimeChat Standard prompting 25.78 15.92 16.41 16.46 32.89 Chain of Thought (CoT) prompting 22.44 25.87 15.89 29.68 39.57 DSCP (Stage 1) 38.07 32.12 28.05 25.13 33.04 DSCP (Both stages) 47.92 37.93 35.87 46.85 39.45 Table 3: Prompting methods. DSCP stage 1 uses only the principled instructions designed in step 1, while DSCP (Both stages) uses the complete dual-step prompting technique. that these models have a more sophisticated understanding of the complex visual contents of videos and have strong capabilities to rectify misleading and confusing user questions. Overall, GTP4V improves over Gemini by 17.58% and provides an average accuracy of 70.78% on CVRR-ES. 5.2 Effectiveness of DSCP method for improving Video-LMMs performance 0 10 20 30 40 50 60 Accuracy % (averaged over 11 video dimensions) Video LLaVa MovieChat LLaMA-VID Video-LLaMA-2 Video-ChatGPT VideoChat TimeChat Gemini-Pro Video LMMs with DSCP +22.01 +19.46 +30.39 +16.15 +8.93 +22.14 +6.56 +5.02 Figure 6: Video-LMMs with DSCP technique effectively improves their performance (gains are shown in green) on CVRR-ES benchmark. We next integrate DSCP technique with VideoLMMs and present results on the CVRR-ES benchmark in Fig. 6. The results indicate that DSCP improves the model\u2019s performance compared with models that use standard prompting (i.e., using only the question itself). 
These results suggest that prompting techniques in Video-LMMs can better guide models for improved reasoning and robustness. With DSCP, initially low-performing Video-LMMs such as Video-LLaVa, MovieChat, and LLaMA-Vid show much better relative gains and become competitive with other models. The highest relative gain of 184% is achieved by LLaMA-ViD, which moves from 7th place in the leaderboard to 2nd among the open-source models after utilizing DSCP prompting. We observe similar overall positive trends of using DSCP with closed-source model Gemini, which improves on the benchmark by an absolute overall gain of 5.02%. We provide more detailed results comparisons in Appendix C. 5.3 Different prompting techniques. We study the contribution of each step of DSCP and compare it with chain-of-thought prompting [Wei et al., 2022b]. The results for the top 5 performing Video-LMMs are shown in Tab. 3. Chainof-thought prompting improves over the standard prompting technique in 3 out of 5 Video-LMMs, suggesting that prompting techniques from NLP literature can effectively guide multi-modal VideoLMMs to enhance reasoning and robustness. Next, we ablate on the first step of DSCP prompting, which uses the principled instructions of DSCP step 1 as a prefix alongside the actual user question. Using the first step prompting technique of DSCP substantially improves model performance on all Video-LMMs, suggesting the effectiveness of the principled prompt instructions designed specifically for Video models. DSCP with both steps, which integrates an additional thinking step in the prompting step, further improves the results and provides the highest results on 4 out of 5 Video-LMMs. 5.4 Main findings and Qualitative Results Based on the results of Video-LMMs on CVRR-ES, we draw key findings and show qualitative results. These insights can serve as valuable guidance for developing the next generation of Video-LMMs, aiming to make them more robust and reliable when deployed in real-world applications. Models excelling at standard VQA benchmarks struggle on CVRR-ES benchmark. Our analysis in Sec. 5.1 reveals that the latest open-source Video-LMMs, such as Video-LLaVA, MovieChat, and LLaMA-VID, perform less effectively on the CVRR-ES benchmark compared to Video-LMMs that were introduced earlier in the community, such as VideoChat and Video-ChatGPT. Interestingly, the same recent models demonstrate superior performance on general video comprehension benchmarks. This discrepancy suggests that current VQA benchmarks, like ActivityNet-QA [Yu et al., 2019] and MSRVTT [Xu et al., 2017], do not adequately correlate with the complex video reasoning and robustness scenarios highlighted in our benchmark. Consequently, this also indicates that most newer Video-LMMs are heavily trained to excel on the general video comprehension benchmarks while reducing their generalizability, reasoning, and robustness capabilities. Over-affirmative behavior of open-source Video-LMMs. Another important observation about open-source models is their tendency to exhibit excessively positive and affirmative responses. As shown in Fig. 7, open-source Video-LMMs consistently respond with \"Yes\" even when faced with 10 confusing questions that describe non-existent actions and objects. This highlights the vulnerability of these models when interacting with users in real-world scenarios. 
In our CVRR-ES benchmark, opensource models are particularly vulnerable to our evaluation dimensions of \"Non-existent actions with the existent scene\" and \"Non-existent actions with the non-existent scene\" compared to closed-source models. These models lack negation and self-rectification capabilities, especially when users provide misleading or confusing questions. We conjecture that such behavior arises due to the absence of negative instruction tuning pairs during the training of Video-LMMs. Tendency towards activity completion. Most open-source Video-LMMs have shown weak performance on the evaluation dimension of partial actions in CVRR-ES, which contains videos focusing on incomplete or atomic actions. To further analyze the models\u2019 behavior, we show qualitative results on such videos in Fig. 8. It can be observed that most open-source models tend to complete actions, even when only part of the action is provided in the video. For instance, Video-LLaVA struggles to reason over the video and describes the man as kicking the soccer ball, while the action in the video stops at the point of the man placing his foot beside the ball. We observe similar behavior in other Video-LMMs. Upon examining the fine-tuning strategies [Maaz et al., 2023, Liu et al., 2023b], we find that almost all models are trained on end-to-end actions-based instruction-tuning data, causing them to generate complete action descriptions at inference. This tendency highlights the vulnerability of Video-LMMs after deployment, as real-world scenarios often involve atomic, sub-atomic, and general actions alike. To improve the performance of Video-LMMs, it is crucial to incorporate diverse action types during training, including partial and incomplete actions. Weak Generalization to extreme OOD videos. The evaluation dimension of unusual and physically anomalous activities in CVRR-ES resembles extreme out-of-distribution video examples. With the exception of GPT4V and Gemini, Video-LMMs struggle with this dimension, indicating weak generalizability towards OOD videos containing the coexistence of unusual objects and activities that are extremely rare in typical videos. For instance, Video-LLaVA in Fig. 9 describes a person falling on the street, while the video actually shows the person performing an optical illusion. To be responsibly deployed in real-world applications, where OOD actions occur more frequently, Video-LMMs need to be trained to perform more robustly on OOD samples. This may involve incorporating diverse and atypical examples in the training data to improve the model\u2019s ability to handle unusual situations. Limited understanding of temporal order in complex videos. The CVRR-ES benchmark results show that Video-LMMs perform relatively better on the fine-grained action dimension compared to the time-order understanding dimension. While these models can accurately identify fine-grained actions, they struggle with comprehending the correct temporal order of these actions within a video. This limitation can lead to misinterpretations of the underlying information depending on temporal order. We present failure cases of this dimension in Fig. 10. For building more advanced world-centric Video-LMMs, it is crucial to enhance their ability to process and interpret event sequences accurately. Video-LMMs struggles in understanding the emotional and social context. 
For more reliable interaction between Video-LMMs and humans in practical scenarios, these models should comprehend the spatio-temporal scenes with social and contextual reasoning capabilities similar to humans. The lower performance of Video-LMMs on social and emotional contextual dimensions in CVRR-ES highlights their limitations and lack of understanding of scenes based on contextual cues. For instance, as shown in Fig. 11 (bottom row), GPT-4V struggles to comprehend a scene where a worker is attempting to prevent shoes from getting wet due to the rain by moving them under the shade. Instead, GPT-4V provides a response that contradicts the social cues present in the video. 6 Conclusion Given the expanding role of Video-LMMs in practical world-centric applications, it is vital to ensure that these models perform robustly and exhibit human-like reasoning and interaction capabilities across various complex and real-world contexts. In this work, we present the CVRR-ES benchmark for Video-LMMs, aiming to evaluate Video-LMMs on these very fronts. Through extensive evaluations, we find that Video-LMMs, especially open-source ones, exhibit limited robustness and reasoning capabilities over complex videos involving real-world contexts. Based on our analysis, we formulate a training-free prompting technique that effectively improves the performance of Video-LMMs across various evaluation dimensions of the CVRR-ES benchmark. Furthermore, we analyze and investigate the failure cases of Video-LMMs on the CVRR-ES benchmark and deduce several important findings. We hope that the CVRR-ES benchmark, accompanied by our extensive analysis, will contribute towards building the next generation of advanced world-centric video understanding models. 11 Are there other players playing in the background while the person picks up the ball? No, there are no other players shown playing in the background while the person picks up the ball. No, there are no other players shown playing in the background.. yes. Yes, there is a football field in the background where other players can be seen \u2026 GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Yes, there are other players playing in the background... Yes, there are other players shown playing in the background while the person picks up the ball \u2026 Yes, there are other players shown playing in the background ... Does the video feature a hand filling the cup with liquid? The video does not feature the hand filling the cup with liquid. a liquid is visible on the surface \u2026 liquid was spilled from the cup rather than poured into it. No. Yes. Yes, the video shows a man using a plastic bottle to fill a red glass cup with a pink liquid. Yes, the video features the hand filling the cup with liquid. Yes, the video shows the hand filling the cup with liquid. Yes, the video shows the hand pouring water into the cup and then pouring it out. Video-ChatGPT Figure 7: Over affirmative behaviour. Most open-source Video-LMMs exhibit overly affirmative behavior by consistently agreeing with user questions, even when the questions are confusing or inaccurate. Does the woman mount the camel after caressing it? \u2026 woman is seen interacting with the camel, where she appears to be caressing or patting it. However, none of the images show her mounting the camel. No, the woman does not mount the camel after caressing it. Yes. Yes, at the end of the video, the woman is mounting the camel after caressing it. 
GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Yes, the woman mounts the camel after caressing it. Yes, after caressing the camel, the woman mounts it and rides it around the pen. Yes, after caressing the camel, the woman mounts it. What is happening in the video? The video appears to be a tutorial demonstrating the technique of kicking \u2026 approaching the ball from an angle and planting their foot beside the ball just before making the kick. A man is demonstrating how to place your plant foot beside the ball when shooting a soccer ball. In the video, a man is showing off his ball control skills by doing various moves on a field of grass. In the given video, we see a young man playing soccer with an orange soccer ball on a green field. There is a green soccer ball visible, and we can also see a man wearing a black shirt playing soccer. In the video, a man is seen kicking a soccer ball on a field. \u2026 and is seen kicking \u2026 \u2026 a woman is seen standing on a field and kicking a ball around. She then proceeds to kick the ball In the video, a man is seen kicking a soccer ball on a field while wearing a red shirt and black shorts. Video-ChatGPT Figure 8: Action completion tendency. Most open-source Video-LMMs tend to generate captions corresponding to complete actions and struggle with determining incomplete or partial actions. 12 What is happening in the video? \u2026 performing a lifting exercise known as the bench press. The individual is lying on a bench, pushing a barbell with weights upward and then lowering it back towards their chest in a controlled manner. A woman is lying on a yoga mat with a barbell resting on her lower abdomen. She then lifts her legs and places her feet on the barbell and proceeds to do a series of leg and hip exercises. A woman is lying on the floor and performing squats while holding a heavyweight barbell. we see a young woman working out on a mat with a black bar and a gym. She lifts the bar over her head and continues to do so for several minutes. GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat a woman is seen doing a workout on a weight machine. She is seen lifting the weight and performing a variety of \u2026 a woman is seen lying on the floor with a barbell on her stomach. She then lifts the barbell up and down several times before lying back down. a woman is seen lying on the floor with her arms and legs crossed. She then lifts her head and legs simultaneously and holds them up for a few seconds before dropping them back down. What is the unusual aspect shown in the video? The unusual aspect shown in the video appears to be a person performing a \\\"levitating\\\" trick where they seem to be floating in mid-air while holding an umbrella and a hat has fallen onto the ground. \u2026 The unusual aspect of the video is that the man appears to be defying gravity by leaning backwards at an impossible angle. The unusual aspect shown in the video is a man doing an elaborate dance routine using a large and heavy cart. The video shows a group of people dressed in black suits and white shirts doing a dance on a busy city street. The unusual aspect shown in the video is the man doing a handstand on a skateboard. \u2026 in the video is the man's ability to breakdance while holding an umbrella. \u2026 The unusual aspect of the video is that the man is dancing and jumping on the sidewalk while holding an umbrella. Video-ChatGPT Figure 9: Weak generalization on OOD videos. 
Open-source Video-LMMs struggle to correctly reason over videos containing rare and unusual actions. Is the hand movement from left to right in the video? It appears that the hand movement shown is from right to left, rather than from left to right. yes. yes. Yes, the hand movement in the video is from left to right. The hand appears to be pointing at the food. GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Yes, the hand is moving from left to right in the video.. Yes, the hand movement in the video is from left to right in the \u2026 Yes, the hand movement in the video is from left to right. The .... Is the person running towards the camera or running in the direction away from the camera? The person in the images is running backwards, away from the camera. You can tell because they are \u2026 The person is running backwards toward the camera. running towards the camera In the given video, the person is running towards the camera. The person is running towards the camera. The person is running towards the camera. The person is running towards the camera. Video-ChatGPT Figure 10: Limited temporal understanding. Most Video-LMMs struggle to accurately determine the temporal order of actions in videos. The bottom video shows a man running backward along a track. 13 Identify if the man calmly walk away after putting the ball in the hole? the man does not calmly walk away after putting the ball in the hole. The sequence shows him experiencing frustration after the putt; \u2026 No, he takes the flag out of the hole and throws it. He then walks away. yes. The man is walking calmly after putting the ball into the hole during a golf game. The scene shows \u2026 GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Yes, the man calmly walks away after putting the ball \u2026 Yes, the man in the video calmly walks away after putting the ball in the hole. Yes, the man calmly walks away after putting the ball in the hole. What is the person in orange uniform doing and why? The person in the orange uniform appears to be a worker involved in some sort of maintenance or cleaning activity. Looking at the \u2026 The person in orange uniform is a street sweeper. She is sweeping up the sidewalk and putting the trash into a cart. The person in orange uniform is sweeping the street and removing trash. \u2026 \u2026 It is not clear what the person in the orange uniform is doing, but it appears to be standing and observing the street corner. The person in the orange uniform is standing on a street corner and appears to be talking to someone. \u2026 The person in orange uniform is cleaning the sidewalk with a small white cloth. \u2026 The person in the orange uniform is picking up trash on the side of the road. Video-ChatGPT Figure 11: Limited contextual understanding. Most Video-LMMs exhibit a weak understanding of complex videos that contain emotional (e.g., an angry player in the top video) and social cues (e.g., a person saving shoes from getting wet due to rain in the bottom video)."
+ }
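As a companion to the Stage 4 evaluation procedure described above (an LLM judge returning a binary correctness verdict and a 1-5 quality score), the following Python sketch shows how per-dimension accuracies such as those reported in Table 2 could be aggregated. The `ask_judge_llm` call and the exact verdict format are assumptions, not the benchmark's released tooling.

```python
# Illustrative aggregation of LLM-judge verdicts into per-dimension accuracy.
# `ask_judge_llm` stands in for any chat-completion call (e.g. to GPT-3.5) and
# is assumed to return a dict like {"correct": "yes", "score": 4, "reason": "..."}.
from collections import defaultdict

def evaluate_benchmark(ask_judge_llm, qa_pairs, predictions):
    correct = defaultdict(int)
    total = defaultdict(int)
    for qa, pred in zip(qa_pairs, predictions):
        verdict = ask_judge_llm(
            question=qa["question"],
            ground_truth=qa["answer"],
            prediction=pred,
        )
        total[qa["dimension"]] += 1
        if str(verdict.get("correct", "no")).lower().startswith("y"):
            correct[qa["dimension"]] += 1
    # Per-dimension accuracy (%) plus an overall average, as in Table 2.
    acc = {dim: 100.0 * correct[dim] / total[dim] for dim in total}
    acc["average"] = sum(acc.values()) / max(len(acc), 1)
    return acc
```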
intro_8K/test_introduction_short_2405.03894v1.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "url": "http://arxiv.org/abs/2405.03894v1",
+ "title": "MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View",
+ "abstract": "Generating consistent multiple views for 3D reconstruction tasks is still a\nchallenge to existing image-to-3D diffusion models. Generally, incorporating 3D\nrepresentations into a diffusion model decreases the model's speed as well as its\ngeneralizability and quality. This paper proposes a general framework to\ngenerate consistent multi-view images from a single image or few images by leveraging a\nscene representation transformer and a view-conditioned diffusion model. In the model,\nwe introduce epipolar geometry constraints and multi-view attention to enforce\n3D consistency. From as few as one input image, our model is able to generate\n3D meshes surpassing baseline methods in evaluation metrics, including PSNR,\nSSIM and LPIPS.",
+ "authors": "Emmanuelle Bourigault, Pauline Bourigault",
+ "published": "2024-05-06",
+ "updated": "2024-05-06",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.LG"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Consistent and high-quality novel view synthesis of real- world objects from a single input image is a remaining chal- lenge in computer vision. There is a myriad of applications in virtual reality, augmented reality, robotic navigation, con- tent creation, and filmmaking. Recent advances in the field of deep learning such as diffusion-based models [2, 13, 22, 36, 37] significantly improved mesh generation by denois- ing process from Gaussian noise. Text-to-image generation has shown great progress with the development of efficient approaches as generative adversarial networks [3, 11, 16], autoregressive transformers [9, 28, 39], and more recently, diffusion models [12, 14, 27, 32]. DALL-E 2 [27] and Im- agen [32] are such models capable of generating of photo- realistic images with large-scale diffusion models. Latent diffusion models [31] apply the diffusion process in the la- tent space, enabling for faster image synthesis. Although, image-to-3D generation has shown impres- sive results, there is still room for improvement in terms of consistency, rendering and efficiency. Generating 3D rep- resentations from single view is a difficult task. It requires extensive knowledge of the 3D world. Although diffusion models have achieved impressive performance, they require expensive per-scene optimization. Zero123 [18] proposes a diffusion model conditioned on view features and camera parameters trained on persepec- tive images [6]. However, the main drawback is the lack of multiview consistency in the generation process imped- ing high-quality 3D shape reconstruction with good cam- era control. SyncDreamer [19] proposes a 3D feature vol- ume into the Zero123 [18] backbone to improve the mul- tiview consistency. However, the volume conditioning sig- nificantly reduces the speed of generation and it overfits to some viewpoints, with 3D shapes displaying distortions. In this paper, we present MVDiff, a multiview diffusion model using epipolar geometry and transformers to gener- ate consistent target views. The main idea is to incorpo- rate epipolar geometry constraints in the model via self- attention and multi-view attention in the UNet to learn the geometry correspondence. We first need to define a scene transformation transformer (SRT) to learn an implicit 3D representation given a set of input views. Then, given an input view and its relative camera pose, we use a view- conditioned diffusion model to estimate the conditional dis- tribution of the target view. We show that this framework presents dual improve- ments compared to existing baselines in improving the 3D reconstruction from generated multi-view images and in terms of generalization capability. In summary, the paper presents a multi-view generation framework from single image that is transferable to various datasets requiring little amount of changes. We show high performance on the GSO dataset for 3D mesh generation. The model is able to extrapolate one view image of a 3D arXiv:2405.03894v1 [cs.CV] 6 May 2024 object to 360-view with high fidelity. Despite being trained on one dataset of natural objects, it can create diverse and realistic meshes. We summarise our contributions as fol- lows: \u2022 Implicit 3D representation learning with geometrical guidance \u2022 Multi-view self-attention to reinforce view consistency \u2022 Scalable and flexible framework",
+ "main_content": "2.1. Diffusion for 3D Generation Recently, the field of 3D generation has demonstrated rapid progress with the use of diffusion models. Several studies showed remarkable performance by training models from scratch on large datasets to generate point clouds [21, 24], meshes [10, 20] or neural radiance fields (NeRFs) at inference. Nevertheless, these models lack generalizability as they are trained on specific categories of natural objects. DreamFusion [26] explored leveraging 2D priors to guide 3D generation. Inspired by DreamFusion, several studies adopted a similar pipeline using distillation of a pretrained 2D text-to-image generation model for generating 3D shapes [1, 4, 5, 23, 43]. The per-scene optimisation process typically lacks in efficiency with times ranging from minutes to hours to generate single scenes. Recently, 2D diffusion models for multi-view synthesis from single view have raised interest for their fast 3D shape generation with appealing visuals [17, 18, 34]. However, they generally do not consider consistency of multi-view in the network design. Zero123 proposes relative viewpoint as conditioning in 2D diffusion models, in order to generate novel views from a single image [18]. However, this work does not consider other views in the learning process and this causes inconsistencies for complex shapes. One2-3-45 [17] decodes signed distance functions (SDF) [25] for 3D shape generation given multi-view images from Zero123 [18], but the 3D reconstruction is not smooth and artifacts are present. More recently, SyncDreamer [19] suggests a 3D global feature volume, in order to tackle inconsistencies in multiview generation. 3D volumes are used with depth-wise attention for maintaining multi-view consistency. The heavy 3D global modeling tend to reduce the speed of the generation and quality of the generated meshes. MVDream [35] on the other hand incorporates 3D self-attention with improved generalisability to unseen datasets. 2.2. Sparse-View Reconstruction Sparse-view image reconstruction [15, 45] is a challenging task where only a limited number of images, generally less than 10, are given. Traditional 3D reconstruction methods start by estimating camera poses, then as a second step perform dense reconstruction with multi-view stereo [38, 46] or NeRF [40]. Estimating camera poses in the context of sparse-view reconstruction is a challenging task as there is little or no overlap between views. [45] aimed at addressing this challenge by optimising camera poses and 3D shapes simultaneously. In the same line of research, PF-LRM [42] suggests a pose-free approach to tackle the uncertainty in camera poses. In our work, we learn the relative camera poses of the 3D representation implicitly via a transformer encoder-decoder network and a view-conditioned diffusion model capable of generating consistent multi-view images directly. We then employ a reconstruction system Neus [41] to recover a mesh. 3. Methodology 3.1. Multi-view Conditional Diffusion Model The rationale behind multi-view conditioning in diffusion models is to infer precisely the 3D shape of an object with the constraint that regions of the 3D object are unobserved. Direct 3D predictions for sequential targets as in Zero123 [18] might lead to implausible novel views. To control the uncertainty in novel view synthesis, we choose to enforce multi-view consistency during training. 
Given an input image or sparse-view input images of a 3D object, denoted as xI, with known camera parameters \u03c0I, and target camera parameters \u03c0T, our aim is to synthesize novel views that recover the geometry of the object. Our framework can be broken down into two parts: (i) first a scene representation transformer (SRT) [33] that learns the latent 3D representation given a single or few input views, and (ii) second a view-conditioned diffusion model to generate novel views. 3.2. Novel View Synthesis via Epipolar Geometry To perform novel view synthesis, we employ a scene representation transformer (SRT) [33]. In the work of [33], a transformer encoder-decoder architecture learns an implicit 3D latent representation given a set of images with camera poses (xI, \u03c0I). First, a CNN extracts features from xI and feeds them as tokens to the transformer encoder fE. The transformer encoder then outputs a set-latent scene representation z via self-attention. For novel view rendering, the decoder transformer of SRT queries the pixel color via cross-attention between the ray associated to that pixel r and the set-latent scene representation z. The aim is to minimize the pixel-level reconstruction loss in Eq. (1), \\lab e l {e l {e q : rec_l o s s} \\ m a t h c h c al {L}_{\\mathrm {recon}} =\\sum _{\\mathbf {r} \\in \\mathcal {R}}\\left \\|C(\\mathbf {r})-\\hat {C}(\\mathbf {r})\\right \\|_2^2, (1) Figure 1. Pipeline of MVDiff. From a single input or few input images, the transformer encoder translates the image(s) into latent scene representations, implicitely capturing 3D information. The intermediate outputs from the scene representation transformer are used as input by the view-conditioned latent diffusion UNet, generating multi-view consistent images from varying viewpoints. where C(r) is the ground truth color of the ray and R is the set of rays sampled from target views. We aim to leverage cross-interaction between images through relative camera poses using epipolar geometrical constraints. For each pixel in a given view i, we compute the epipolar line and the epipolar distance for all pixels in view j to build a weighted affinity matrix A\u2032 i,j = Ai,j+Wi,j where Wi,j is the weighted map obtained from the inverse epipolar distance. View-Conditioned Latent Diffusion. The outputs from SRT do not recover fine details with simple pixel-level reconstruction loss. We employ a view-conditioned diffusion model LDM from [29] to estimate the conditional distribution of the target view given the source view and the relative camera pose: p (xT | \u03c0T, xI, \u03c0I). First, the SRT predicts a low-resolution 32 \u00d7 32 latent image \u02dc xT based on the target view \u03c0T for computationally efficiency. The latent image from SRT is concatenated with the noisy image y and fed into the latent diffusion UNet E\u03b8. In addition, we condition E\u03b8 on the latent scene representation z via cross-attention layers (see Fig. 1). The generated images \u02c6 \u03f5t can be denoted as \\ h at {\\ b old sy mbol {\\mathcal {E}}_t} &= \\boldsymbol {\\mathcal {E}}_\\theta (\\boldsymbol {y}, \\tilde {\\boldsymbol {x}}_{\\mathrm {\\textit {I}}}, \\boldsymbol {z}, t), (2) where t is the timestep. 
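A minimal PyTorch sketch of the epipolar weighting described above (A'_{i,j} = A_{i,j} + W_{i,j}, with W obtained from inverse epipolar distances) is given below; the computation of the distances from the relative camera pose and the normalisation of W are assumptions rather than the paper's exact implementation.

```python
# Sketch of biasing cross-view attention logits with an inverse-epipolar-distance map.
import torch

def epipolar_biased_attention(q, k, v, epipolar_dist, eps=1e-6):
    # q: (P_i, d) pixel queries from view i; k, v: (P_j, d) from view j
    # epipolar_dist: (P_i, P_j) precomputed epipolar distances (assumed given)
    logits = q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5   # affinity A_{i,j}
    weight = 1.0 / (epipolar_dist + eps)                    # W_{i,j} from inverse distance
    weight = weight / weight.amax(dim=-1, keepdim=True)     # bound the bias (assumption)
    attn = torch.softmax(logits + weight, dim=-1)           # A'_{i,j} = A_{i,j} + W_{i,j}
    return attn @ v
```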
We optimize a simplified variational lower bound, that is \\ma t h c al { L}_{\\ m ath rm {VLD M}}=\\mathbb {E}\\left [\\left \\| \\boldsymbol {\\mathcal {E}}_t \\boldsymbol {\\mathcal {E}}_\\theta (\\boldsymbol {y}, \\tilde {\\boldsymbol {x}}_{\\mathrm {\\textit {T}}}, \\boldsymbol {z}, t) \\right \\|^2\\right ]. (3) Multi-View Attention. As previously stated, in Zero123 [18], multiple images are generated in sequence from a given input view based on camera parameters. This approach can introduce inconsistencies between generated views. To address this issue, we apply modifications to the UNet in order to feed multi-view images. This way, we can predict simultaneously multiple novel views. We employ self-attention block to ensure consistency for different viewpoints. 4. Experiments This section presents the novel view synthesis experiments in Sec. 4.1, and the 3D generation experiments in Sec. 4.2. We present ablation experiments in Sec. 4.3 and ethical considerations in Sec. 4.4. Training Data. For training our model for novel view synthesis, we use 800k 3D object models from Objaverse [6]. For a fair comparison with other 3D diffusion baselines, we use the same training dataset. Input condition views are chosen in a similar way as Zero123 [18]. An azimuth angle is randomly chosen from one of the eight discrete angles of the output cameras. The elevation angle is randomly selected in the range [\u221210\u25e6, 45\u25e6]. For data quality purposes, we discard empty rendered images. This represents about one per cent of the training data. 3D objects are centered and we apply uniform scaling in the range [-1,1] so that dimensions matches. Input images to our pipeline are RGB images 256x256. Test Data. We use the Google Scanned Object (GSO) [8] as our testing dataset, and use the same 30 objects as SyncDreamer [19]. There are 16 images per 3D object, with a fixed elevation of 30\u25e6and every 22.5\u25e6for azimuth. Implementation Details. Our model is trained using the AdamW optimiser [24] with a learning rate of 10\u22124 and weight decay of 0.01. We reduce the learning rate to 10\u22125 for a total of 100k training steps. For our training batches, we use 3 input views and 3 target views randomly sampled with replacement from 12 views for each object, with a batch size of 356. We train our model for 6 days on 4 A6000 (48GB) GPUs. Evaluation Metrics. For novel view synthesis, we report the PSNR, SSIM [44], and LPIPS [47]. For 3D reconstruction from single-view or few views, we use the Chamfer Distances (CD) and 3D IoU between the ground-truth and reconstructed volumes. 4.1. Novel View Synthesis We show in Tab. 1 the performance of MVDiff compared to baselines for novel view synthesis on an unseen dataset [8]. Qualitative results are shown in Fig. 2. Our model surpasses baseline Zero-123XL by a margin and benefits from additional views. Given the probabilistic nature of the model, it is able to generate diverse and realistic shapes given a single view (see Fig. 3). Training Sample # Ref. 
Views GSO NeRF Synthetic PSNR\u2191SSIM\u2191LPIPS\u2193Runtime\u2193PSNR\u2191SSIM\u2191LPIPS\u2193Runtime\u2193 Zero123 800K 1 18.51 0.856 0.127 7s 12.13 0.601 0.421 7s Zero123-XL 10M 1 18.93 0.856 0.124 8s 12.61 0.620 0.381 8s MVDiff 800k 1 20.24 0.884 0.095 9s 12.66 0.638 0.342 9s MVDiff 800k 2 22.91 0.908 0.064 9s 13.42 0.685 0.321 10s MVDiff 800k 3 24.09 0.918 0.052 10s 13.58 0.741 0.301 11s MVDiff 800k 5 25.09 0.927 0.043 11s 14.55 0.833 0.288 12s MVDiff 800k 10 25.90 0.935 0.036 12s 14.51 0.657 0.215 13s Table 1. Novel view synthesis performance on GSO and NeRF Synthetic datasets. MVDiff outperforms Zero-123XL with significantly less training data. Additionally, MVDiff performance exhibits further improvement with the inclusion of more reference views. 4.2. 3D Generation We showed in Sec. 4.1 that our model can generate multiple consistent novel views. In this section, we perform single and few-images 3D generation on the GSO dataset. We generate 16 views with azimuths uniformly distributed in the range 0\u25e6to 360\u25e6. For a fixed elevation angle of 30\u25e6, SyncDreamer may fail to recover the shape of 3D objects at the top and bottom since the camera angle does not cover those regions. Therefore, we also use different elevation angles from \u221210\u25e6to 40\u25e6. Then, we adopt NeuS [40] for 3D reconstruction. The foreground masks of the generated images are initially predicted using CarveKit. It takes around 3 minutes to reconstruct a textured mesh. We compare our 3D recontructions with SoTA 3D generation models, including One-2-3-45 [17] for decoding an SDF using multiple views predicted from Zero123, and SyncDreamer [19] for fitting an SDF using NeuS [40] from 16 consistent fixed generated views. Given two or more reference views, MVDiff outperforms all other baselines (see Tab. 2). MVDiff generates meshes that are visually consistent and resembles the ground-truth (see Fig. 4). # Input Views Chamfer Dist. \u2193 Volume IoU \u2191 Point-E 1 0.0561 0.2034 Shape-E 1 0.0681 0.2467 One2345 1 0.0759 0.2969 LGM 1 0.0524 0.3851 SyncDreamer 1 0.0493 0.4581 MVDiff 1 0.0411 0.4357 MVDiff 2 0.0341 0.5562 MVDiff 3 0.0264 0.5894 MVDiff 5 0.0252 0.6635 MVDiff 10 0.0254 0.6721 Table 2. 3D reconstruction performance on GSO dataset. MVDiff outperforms other image-to-3D baselines in generating high-quality 3D objects, with improved performance for multiple input views. PSNR\u2191 SSIM\u2191 LPIPS\u2193 MVDiff 20.24 0.884 0.095 w/o epipolar att. 19.14 0.864 0.118 w/o multi-view att. 19.92 0.871 0.113 Table 3. Effect of Self-Attention Mechanisms. We report PSNR, SSIM [44], and LPIPS [47] for novel view synthesis from single view on GSO dataset. Results show that epipolar attention and multi-view attention lead to superior performance. 4.3. Ablation Study Multi-View Consistency. The generated images may not always plausible and we need to generate multiple instances with different seeds and select a desirable instance for 3D reconstruction based on higher overall PSNR, SSIM and LPIPS for the view generated. Experiments show that we need 5 generations to obtain optimal reconstruction. Effect of Epipolar and Mult-View Attention. We evaluate the benefits of epipolar attention and multi-view attention on novel view synthesis performing ablation experiments on those components. 
In particular, we observe a significant drop in performance metrics when removing epipolar attention suggesting that the model is effectively able to implicitely learn 3D object geometry by enforcing geometrical guidance (see Tab. 3). Weight Initialisation. An alternative to initialising weights trained from Zero123 on view-dependent objects [7] is to use weights from Stable Diffusion [30]. We compare the performance of our model initializing weights from Stable Diffusion v2 [30] with a drop in performance of -2.58 PSNR compared to Zero123 [18] weight initialisation. This shows that initializing from Stable Diffusion v2 leads to poorer performance on the novel view task and worse generalisability. 4.4. Risks and Ethical Considerations There are several promising applications of synthetic data, notably in medicine. Synthetic data could make significant Figure 2. Zero-Shot Novel View Synthesis on GSO. MVDiff outperforms Zero123-XL for single view generation with greater camera control and generation quality. As more views are added, MVDiff resembles the ground-truth with fine details being captured such as elephant tail and turtle shell design. Input \u2190\u2212\u2212\u2212\u2212\u2212Generated \u2212\u2212\u2212\u2212\u2212\u2192 GT Figure 3. Diversity of Novel View Diffusion with MVDiff on NeRF-Synthetic Dataset. We show nearby views (top and bottom row) displaying good consistency, while more distant views (middle) are more diverse but still realistic. improvement in surgery planning and tailored patient diagnosis leveraging 3D information and its assets of quantitative parameters. Nevertheless, there are ethical considerations associated with the use of synthetic data in medicine. We should ensure the synthetic data is anonymised such that no particular features of the synthetic meshes could link back to a specific patient. In that light, there are transformations that can be applied to the meshes. We should also make sure that the synthetic data is not used in a way it could harm or be detrimental. Further validation on different cohorts of people is required before using these synthetic data in clinical settings. Despite important ethical considerations we shed light on, we believe these 3D representations of organs could be of great use, on hand for research purposes to run largescale statistical analysis on different cohorts and highlight associations with patient metadata. These cost effective synthetic data could be beneficial to improve the visualisations of bones and organs and be deployed widely. 4.5. Limitations A limitation of this work lies in its computational time and resource requirements. Despite advances in sampling approaches, our model still requires more than 50 steps to generate high-quality images. This is a limit of all diffusion based generation models. Moreover, the reconstructed meshes may not always be plausible. To increase the quality, we may need to use a larger object dataset like Objaverse-XL[7] and manually curate the dataset to filter out uncommon shapes such as point clouds, textureless 3D models and more complex scene representation. Figure 4. 3D reconstruction from single-view on GSO dataset. MVDiff produces consistent novel views and improves the 3D geometry compared to baselines. One-2-3-45 and SyncDreamer tend to generate overly-smoothed and incomplete 3D objects, in particular the sofa. 5. Conclusion In our work, we aimed to address the problem of inconsistencies in multi-view synthesis from single view. 
We specifically apply epipolar attention mechanisms as well as multiview attention to aggregate features from multiple views. We propose a simple and flexible framework capable of generating high-quality multi-view images conditioned on an arbitrary number of images. 5.1. Future Work Combining with graphics. In this study, we show that we can generate view consistent 3D objects by learning geometrical correspondences between views during training. We modified the latent diffusion U-Net model to feed multi view in order to generate consistent multi view for 3D reconstruction. Future work can explore utilising knowledge about lighting, and texture to generate more diverse range of 3D shapes with varying lighting and texture. Acknowledgements E.B is supported by the Centre for Doctoral Training in Sustainable Approaches to Biomedical Science: Responsible and Reproducible Research (SABS: R3), University of Oxford (EP/S024093/1). P.B. is supported by the UKRI CDT in AI for Healthcare http://ai4health.io (Grant No. P/S023283/1)."
+ }
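The added files all follow the flat record schema visible in the diff above (url, title, abstract, authors, published, updated, primary_cat, cats, label, paper_cat, gt, main_content). Below is a minimal Python loading sketch; the field names are taken from the diff, and anything beyond that (directory layout, file naming) is an assumption.

```python
# Read the added intro_8K records; each file holds a single JSON object.
import json
from pathlib import Path

def load_intro_records(root="intro_8K"):
    records = []
    for path in sorted(Path(root).glob("test_introduction_short_*.json")):
        with open(path, encoding="utf-8") as f:
            rec = json.load(f)
        records.append(
            {
                "arxiv_url": rec["url"],
                "title": rec["title"],
                "abstract": rec["abstract"],
                "introduction": rec.get("gt", ""),  # "gt" holds the introduction text
                "categories": rec.get("cats", []),
            }
        )
    return records
```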
intro_8K/test_introduction_short_2405.03958v1.json ADDED
@@ -0,0 +1,18 @@
+ {
+ "url": "http://arxiv.org/abs/2405.03958v1",
+ "title": "Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model",
+ "abstract": "Current state-of-the-art diffusion models employ U-Net architectures\ncontaining convolutional and (qkv) self-attention layers. The U-Net processes\nimages while being conditioned on the time embedding input for each sampling\nstep and the class or caption embedding input corresponding to the desired\nconditional generation. Such conditioning involves scale-and-shift operations\napplied to the convolutional layers but does not directly affect the attention layers.\nWhile these standard architectural choices are certainly effective, not\nconditioning the attention layers feels arbitrary and potentially suboptimal.\nIn this work, we show that simply adding LoRA conditioning to the attention\nlayers without changing or tuning the other parts of the U-Net architecture\nimproves the image generation quality. For example, a drop-in addition of LoRA\nconditioning to the EDM diffusion model yields FID scores of 1.91/1.75 for\nunconditional and class-conditional CIFAR-10 generation, improving upon the\nbaseline of 1.97/1.79.",
+ "authors": "Joo Young Choi, Jaesung R. Park, Inkyu Park, Jaewoong Cho, Albert No, Ernest K. Ryu",
+ "published": "2024-05-07",
+ "updated": "2024-05-07",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI",
+ "cs.LG"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "In recent years, diffusion models have led to phenomenal advancements in image generation. Many cutting- edge diffusion models leverage U-Net architectures as their backbone, consisting of convolutional and (qkv) self-attention layers Dhariwal & Nichol (2021); Kim et al. (2023); Saharia et al. (2022); Rombach et al. (2022); Podell et al. (2024). In these models, the U-Net architecture-based score network is conditioned on the time, and/or, class, text embedding Ho & Salimans (2021) using scale-and-shift operations applied to the convolutional layers in the so-called residual blocks. Notably, however, the attention layers are not directly affected by the conditioning, and the rationale behind not extending conditioning to attention layers remains unclear. This gap suggests a need for in-depth studies searching for effective conditioning methods for attention layers and assessing their impact on performance. Meanwhile, low-rank adaptation (LoRA) has become the standard approach for parameter-efficient fine-tuning of large language models (LLM) Hu et al. (2022). With LoRA, one trains low-rank updates that are added to frozen pre-trained dense weights in the attention layers of LLMs. The consistent effectiveness of LoRA for LLMs suggests that LoRA may be generally compatible with attention layers used in different architectures and for different tasks Chen et al. (2022); Pan et al. (2022); Lin et al. (2023); Gong et al. (2024). In this work, we introduce a novel method for effectively conditioning the attention layers in the U-Net architectures of diffusion models by jointly training multiple LoRA adapters along with the base model. We call these LoRA adapters TimeLoRA and ClassLoRA for discrete-time settings, and Unified Compositional LoRA (UC-LoRA) for continuous signal-to-ratio (SNR) settings. Simply adding these LoRA adapters in a drop-in fashion without modifying or tuning the original model brings consistent enhancement in FID scores across several popular models applied to CIFAR-10, FFHQ 64x64, and ImageNet datasets. In particular, adding LoRA-conditioning to the EDM model Karras et al. (2022) yields improved FID scores of 1.75, 1.91, 2.31 for class-conditional CIFAR-10, unconditional CIFAR-10, and FFHQ 64x64 datasets, respectively, outperforming the baseline scores of 1.79, 1.97, 2.39. Moreover, we find that LoRA conditioning by itself is 2 Scale-Shift Group Norm SiLU Convolution Group Norm SiLU Convolution Input Conditioning Linear Input QKV Group Norm \u03c9-scale LoRA LoRA Dot Product Projection \u03c9-scale LoRA LoRA MLP MLP Conditioning A1 Am B1 Bm \u03c91(t) \u03c9m(t) W A\u2032 c B\u2032 c \u00b7 \u00b7 \u00b7 A1 Am B1 Bm \u03c91(cond) \u03c9m(cond) W \u00b7 \u00b7 \u00b7 cond. MLP Unified compositional LoRA TimeLoRA and ClassLoRA Attn. Block LoRA U-Net block LoRA conditioning of attention block Figure 2: Conditioning of U-Net Block: (left) scale-and-shift conditioning on the convolutional block (middle) LoRA conditioning on the attention block (right) top: TimeLoRA and ClassLoRA for the discrete-time setting, bottom: unified composition LoRA for the continuous-SNR setting. powerful enough to perform effectively. Our experiments show that only conditioning the attention layers using LoRA adapters (without the conditioning convolutional layers with scale-and-shift) achieves comparable FID scores compared to the baseline scale-and-shift conditioning (without LoRA). Contribution. 
Our experiments show that using LoRA to condition time and class information on attention layers is effective across various models and datasets, including nano diffusion Lelarge et al. (2024), IDDPM Nichol & Dhariwal (2021), and EDM Karras et al. (2022) architectures using the MNIST Deng (2012), CIFAR-10 Krizhevsky et al. (2009), and FFHQ Karras et al. (2019) datasets. Our main contributions are as follows. (i) We show that simple drop-in LoRA conditioning on the attention layers improves the image generation quality, as measured by lower FID scores, while incurring minimal (\u223c10%) added memory and compute costs. (ii) We identify the problem of whether to and how to condition attention layers in diffusion models and provide the positive answer that attention layers should be conditioned and LoRA is an effective approach that outperforms the prior approaches of no conditioning or conditioning with adaLN Peebles & Xie (2023). Our results advocate for incorporating LoRA conditioning into the larger state-of-the-art U-Net-based diffusion models and the newer experimental architectures.",
17
+ "main_content": "2.1 Diffusion models Diffusion models Sohl-Dickstein et al. (2015); Song & Ermon (2019); Ho et al. (2020); Song et al. (2021b) generate images by iteratively removing noise from a noisy image. This denoising process is defined by the reverse process of the forward diffusion process: given data x0 \u223c q0, progressively inject noise to x0 by q(xt | xt\u22121) = N(\u221a(1 \u2212 \u03b2t) xt\u22121, \u03b2t I) for t = 1, . . . , T and 0 < \u03b2t < 1. If \u03b2t is sufficiently small, we can approximate the reverse process as q(xt\u22121 | xt) \u2248 N(\u00b5t(xt), \u03b2t I), where \u00b5t(xt) = (1/\u221a(1 \u2212 \u03b2t)) (xt + \u03b2t \u2207log pt(xt)). A diffusion model is trained to approximate the score function \u2207log pt(xt) with a score network s\u03b8, which is often modeled with a U-Net architecture Ronneberger et al. (2015); Song & Ermon (2019). With s\u03b8 \u2248 \u2207log pt(xt), the diffusion model approximates the reverse process as p\u03b8(xt\u22121 | xt) = N((1/\u221a(1 \u2212 \u03b2t)) (xt + \u03b2t s\u03b8(xt, t)), \u03b2t I) \u2248 q(xt\u22121 | xt). To sample from a trained diffusion model, one starts with Gaussian noise xT \u223c N(0, (1 \u2212 \u00af\u03b1T) I), where \u00af\u03b1t = \u220f_{s=1}^{t} (1 \u2212 \u03b2s), and progressively denoise the image by sampling from p\u03b8(xt\u22121|xt) with t = T, T \u22121, . . . , 2, 1 sequentially to obtain a clean image x0. The above discrete-time description of diffusion models has a continuous-time counterpart based on the theory of stochastic differential equation (SDE) for the forward-corruption process and reversing it based on Anderson\u2019s reverse-time SDE Anderson (1982) or a reverse-time ordinary differential equation (ODE) with equivalent marginal probabilities Song et al. (2021a). Higher-order integrators have been used to reduce the discretization errors in solving the differential equations Karras et al. (2022). Architecture for diffusion models. The initial work of Song & Ermon (2019) first utilized the CNN-based U-Net architecture Ronneberger et al. (2015) as the architecture for the score network. Several improvements have been made by later works Ho et al. (2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021); Hoogeboom et al. (2023) incorporating multi-head self-attention Vaswani et al. (2017), group normalization Wu & He (2018), and adaptive layer normalization (adaLN) Perez et al. (2018). Recently, several alternative architectures have been proposed. Jabri et al. (2023) proposed Recurrent Interface Network (RIN), which decouples the core computation and the dimension of the data for more scalable image generation. Peebles & Xie (2023); Bao et al. (2023); Gao et al. (2023); Hatamizadeh et al. (2023) investigated the effectiveness of transformer-based architectures Dosovitskiy et al. (2021) for diffusion models. Yan et al. (2023) utilized state space models Gu et al. (2022) in DiffuSSM to present an attention-free diffusion model architecture. In this work, we propose a conditioning method for attention layers and test it on several CNN-based U-Net architectures. Note that our proposed method is applicable to all diffusion models utilizing attention layers. 2.2 Low-rank adaptation Using trainable adapters for specific tasks has been an effective approach for fine-tuning models in the realm of natural language processing (NLP) Houlsby et al. (2019); Pfeiffer et al. (2020). 
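To make the sampling procedure of Section 2.1 above concrete, here is a minimal sketch of DDPM-style ancestral sampling with a score network; it is an illustrative reconstruction rather than the authors' code, and both `score_net` and the `betas` schedule are hypothetical placeholders.

```python
import torch

def ddpm_sample(score_net, shape, betas):
    """Sketch of the reverse process x_T -> x_0 described in Section 2.1 (not the authors' code)."""
    T = betas.shape[0]
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    # Start from x_T ~ N(0, (1 - alpha_bar_T) I).
    x = torch.randn(shape) * torch.sqrt(1.0 - alpha_bar[-1])
    for t in reversed(range(T)):
        beta_t = betas[t]
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        score = score_net(x, t_batch)                        # s_theta(x_t, t), approximating grad log p_t(x_t)
        mean = (x + beta_t * score) / torch.sqrt(1.0 - beta_t)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(beta_t) * noise                # draw from p_theta(x_{t-1} | x_t)
    return x
```

A linear schedule such as `betas = torch.linspace(1e-4, 2e-2, T)` would be a typical, though not the only possible, choice in this sketch.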
Low-rank adpatation (LoRA, Hu et al. (2022)) is a parameter-efficient fine-tuning method that updates a low-rank adapter: to fine-tune a pre-trained dense weight matrix W \u2208Rdout\u00d7din, LoRA parameterizes the fine-tuning update \u2206W with a low-rank factorization W + \u2206W = W + BA, where B \u2208Rdout\u00d7r, A \u2208Rr\u00d7din, and r \u226amin{din, dout}. LoRA and diffusion. Although initially proposed for fine-tuning LLMs, LoRA is generally applicable to a wide range of other deep-learning modalities. Recent works used LoRA with diffusion models for various tasks including image generation Ryu (2023); Gu et al. (2023); Go et al. (2023), image editing Shi et al. (2023), continual learning Smith et al. (2023), and distillation Golnari (2023); Wang et al. (2023b). While all these works demonstrate the flexibility and efficacy of the LoRA architecture used for fine-tuning diffusion models, to the best of our knowledge, our work is the first attempt to use LoRA as part of the core U-Net for diffusion models for full training, not fine-tuning. 4 2.3 Conditioning the score network For diffusion models to work properly, it is crucial that the score network s\u03b8 is conditioned on appropriate side information. In the base formulation, the score function \u2207xpt(x), which the score network s\u03b8 learns, depends on the time t, so this t-dependence must be incorporated into the model via time conditioning. When class-labeled training data is available, class-conditional sampling requires class conditioning of the score network Ho & Salimans (2021). To take advantage of data augmentation and thereby avoid overfitting, EDM Karras et al. (2022) utilizes augmentation conditioning Jun et al. (2020), where the model is conditioned on the data augmentation information such as the degree of image rotation or blurring. Similarly, SDXL Podell et al. (2024) uses micro-conditioning, where the network is conditioned on image resolution or cropping information. Finally, text-to-image diffusion models Saharia et al. (2022); Ramesh et al. (2022); Rombach et al. (2022); Podell et al. (2024) use text conditioning, which conditions the score network with caption embeddings so that the model generates images aligned with the text description. Conditioning attention layers. Prior diffusion models using CNN-based U-Net architectures condition only convolutional layers in the residual blocks by applying scale-and-shift or adaLN (see (left) of Figure 2). In particular, attention blocks are not directly conditioned in such models. This includes the stateof-the-art diffusion models such as Imagen Saharia et al. (2022), DALL\u00b7E 2 Ramesh et al. (2022), Stable Diffusion Rombach et al. (2022), and SDXL Podell et al. (2024). To clarify, Latent Diffusion Model Rombach et al. (2022) based models use cross-attention method for class and text conditioning, but they still utilize scale-and-shift for time conditioning. There is a line of research proposing transformer-based architectures (without convolutions) for diffusion models, and these work do propose methods for conditioning attention layers. For instance, DiT Peebles & Xie (2023) conditioned attention layers using adaLN and DiffiT Hatamizadeh et al. (2023) introduced time-dependent multi-head self-attention (TMSA), which can be viewed as scale-and-shift conditioning applied to attention layers. 
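For concreteness, the low-rank update W + BA described at the start of Section 2.2 can be written as a drop-in wrapper around a dense layer. The following is a generic sketch, not the authors' implementation; the initialization follows the convention quoted later in Section 5.1 (A with N(0, 1/r) entries, B zero).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Dense weight W plus a trainable low-rank update BA, with rank r far smaller than d_in and d_out."""

    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)                         # W
        self.A = nn.Parameter(torch.randn(rank, d_in) * (1.0 / rank) ** 0.5)   # A_ij ~ N(0, 1/r)
        self.B = nn.Parameter(torch.zeros(d_out, rank))                        # B = 0, so BA = 0 at init

    def forward(self, x):
        # (W + BA) x, computed as W x + B (A x) to keep the update cheap and low-rank.
        return self.base(x) + (x @ self.A.t()) @ self.B.t()
```

At the level of a single layer, applying such a module to the q, k, v projections of an attention block is what a drop-in LoRA update amounts to.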
Although such transformer-based architectures have shown to be effective, whether conditioning the attention layers with adaLN or scale-and-shift is optimal was not investigated. In Section 5.5 of this work, we compare our proposed LoRA conditioning on attention layers with the prior adaLN conditioning on attention layers, and show that LoRA is the more effective mechanism for conditioning attention layers. Diffusion models as multi-task learners. Multi-task learning Caruana (1997) is a framework where a single model is trained on multiple related tasks simultaneously, leveraging shared representations between the tasks. If one views the denoising tasks for different timesteps (or SNR) of diffusion models as related but different tasks, the training of diffusion models can be interpreted as an instance of the multi-task learning. Following the use of trainable lightweight adapters for Mixture-of-Expert (MoE) Jacobs et al. (1991); Ma et al. (2018), several works have utilized LoRA as the expert adapter for the multi-task learning Caccia et al. (2023); Wang et al. (2023a; 2024); Zadouri et al. (2024). Similarly, MORRIS Audibert et al. (2023) and LoRAHub Huang et al. (2023) proposed using the weighted sum of multiple LoRA adapters to effectively tackle general tasks. In this work, we took inspiration from theses works by using a composition of LoRA adapters to condition diffusion models. 3 Discrete-time LoRA conditioning Diffusion models such as DDPM Ho et al. (2020) and IDDPM Nichol & Dhariwal (2021) have a predetermined number of discrete timesteps t = 1, 2, . . . , T used for both training and sampling. We refer to this setting as the discrete-time setting. We first propose a method to condition the attention layers with LoRA in the discrete-time setting. In particular, we implement LoRA conditioning on IDDPM by conditioning the score network with (discrete) time and (discrete) class information. 5 3.1 TimeLoRA TimeLoRA conditions the score network for the discrete time steps t = 1, . . . , T. In prior architectures, time information is typically injected into only the residual blocks containing convolutional layers. TimeLoRA instead conditions the attention blocks. See (right) of Figure 2. Non-compositional LoRA. Non-compositional LoRA instantiates T independent rank-r LoRA weights A1, A2, . . . , AT , B1, B2, . . . , BT . The dense layer at time t becomes Wt = W + \u2206W(t) = W + BtAt for t = 1, . . . , T. To clarify, the trainable parameters for each linear layer are W, A1, A2, . . . , AT , and B1, B2, . . . , BT . In particular, W is trained concurrently with A1, A2, . . . , AT , and B1, B2, . . . , BT . However, this approach has two drawbacks. First, since T is typically large (up to 4000), instantiating T independent LoRAs can occupy significant memory. Second, since each LoRA (At, Bt) is trained independently, it disregards the fact that LoRAs of nearby time steps should likely be correlated/similar. It would be preferable for the architecture to incorporate the inductive bias that the behavior at nearby timesteps are similar. Compositional LoRA. Compositional LoRA composes m LoRA bases, A1, . . . , Am and B1, . . . , Bm, where m \u226aT. Each LoRA basis (Ai, Bi) corresponds to time ti for 1 \u2264t1 < \u00b7 \u00b7 \u00b7 < tm \u2264T. The dense layer at time t becomes Wt = W + \u2206W(t) = W + m X i=1 (\u03c9t)i BiAi, where \u03c9t = ((\u03c9t)1 , . . . , (\u03c9t)m) is the time-dependent trainable weights composing the LoRA bases. 
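The composition W_t = W + Σ_i (ω_t)_i B_i A_i above can be expressed as a single module in which the per-timestep weights ω_t live in a trainable T × m table. The sketch below is illustrative (names, shapes, and the number of bases are assumptions), and it omits the interpolation-based initialization of ω_t that is discussed in the next paragraph.

```python
import torch
import torch.nn as nn

class TimeLoRALinear(nn.Module):
    """Compositional TimeLoRA sketch: W_t = W + sum_i (omega_t)_i B_i A_i with m shared bases."""

    def __init__(self, d_in, d_out, num_timesteps, m=11, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)                            # W
        self.A = nn.Parameter(torch.randn(m, rank, d_in) * (1.0 / rank) ** 0.5)   # bases A_1..A_m
        self.B = nn.Parameter(torch.zeros(m, d_out, rank))                        # bases B_1..B_m
        self.omega = nn.Embedding(num_timesteps, m)                               # trainable (omega_t) table

    def forward(self, x, t):
        # x: (batch, d_in); t: (batch,) integer timesteps.
        w = self.omega(t)                                   # (batch, m) composition weights
        h = torch.einsum('bd,mrd->bmr', x, self.A)          # A_i x for every basis i
        h = torch.einsum('bmr,mor->bmo', h, self.B)         # B_i (A_i x)
        return self.base(x) + torch.einsum('bm,bmo->bo', w, h)
```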
To clarify, the trainable parameters for each linear layer are W, A1, A2, . . . , Am, B1, B2, . . . , Bm, and \u03c9t. Since the score network is a continuous function of t, we expect \u03c9t \u2248 \u03c9t\u2032 if t \u2248 t\u2032. Therefore, to exploit the task similarity between nearby timesteps, we initialize (\u03c9t)i with a linear interpolation scheme: for tj \u2264 t < tj+1, (\u03c9t)i = (tj+1 \u2212 t)/(tj+1 \u2212 tj) if i = j, (\u03c9t)i = (t \u2212 tj)/(tj+1 \u2212 tj) if i = j + 1, and (\u03c9t)i = 0 otherwise. In short, at initialization, \u2206W(t) uses a linear combination of the two closest LoRA bases. During training, \u03c9t can learn to utilize more than two LoRA bases, i.e., \u03c9t can learn to have more than two non-zeros through training. Specifically, (\u03c91, . . . , \u03c9T ) \u2208 Rm\u00d7T is represented as an m \u00d7 T trainable table implemented as nn.Embedding in Pytorch. 3.2 ClassLoRA Consider a conditional diffusion model with C classes. ClassLoRA conditions the attention layers in the score network with the class label. Again, this contrasts with the typical approach of injecting class information only into the residual blocks containing convolutional layers. See (right) of Figure 2. Since C is small for CIFAR-10 (C = 10) and the correlations between different classes are likely not strong, we only use the non-compositional ClassLoRA: Wc = W + \u2206W(c) = W + B\u2032c A\u2032c for c = 1, . . . , C. In other words, each LoRA (A\u2032c, B\u2032c) handles a single class c. When C is large, such as in the case of ImageNet1k, one may consider using a compositional version of ClassLoRA. 4 Continuous-SNR LoRA conditioning Motivated by (Kingma et al., 2021), some recent models such as EDM Karras et al. (2022) consider parameterizing the score function as a function of noise or signal-to-noise ratio (SNR) level instead of time. In particular, EDM Karras et al. (2022) considers the probability flow ODE dXt = \u2212\u03c3\u0307(t)\u03c3(t) s\u03b8(x; \u03c3(t)) dt, where s\u03b8(x; \u03c3) is the score network conditioned on the SNR level \u03c3. We refer to this setting as the continuous-SNR setting. The main distinction between Sections 3 and 4 is in the discrete vs. continuous parameterization, since continuous-time and continuous-SNR parameterizations of score functions are equivalent. We choose to consider continuous-SNR (instead of continuous-time) parameterizations for the sake of consistency with the EDM model Karras et al. (2022). Two additional issues arise in the present setup compared to the setting of Section 3. First, by considering a continuum of SNR levels, there is no intuitive way to assign a single basis LoRA to a specific noise level. Second, to accommodate additional conditioning elements such as augmentations or even captions, allocating independent LoRA for each conditioning element could lead to memory inefficiency. 4.1 Unified compositional LoRA (UC-LoRA) Consider the general setting where the diffusion model is conditioned with N attributes cond1, . . . , condN, which can be a mixture of continuous and discrete information. In our EDM experiments, we condition the score network with N = 3 attributes: SNR level (time), class, and augmentation information. Unified compositional LoRA (UC-LoRA) composes m LoRA bases A1, . . . , Am and B1, . . . , Bm to simultaneously condition the information of cond1, . . . condN into the attention layer. The compositional weight \u03c9 = (\u03c91, . . .
, \u03c9m) of the UC-LoRA is obtained by passing cond1, . . . condN through an MLP. Prior diffusion models typically process cond1, . . . , condN with an MLP to obtain a condition embedding v, which is then shared by all residual blocks for conditioning. For the j-th residual block, v is further processed by an MLP to get scale and shift parameters \u03b3j and \u03b2j: v = SharedMLP(cond1, . . . , condN) (\u03b3j, \u03b2j) = MLPj(v). The (\u03b3j, \u03b2j) is then used for the scale-and-shift conditioning of the j-th residual block in the prior architectures. In our UC-LoRA, we similarly use the shared embedding v and an individual MLP for the j-th attention block to obtain the composition weight \u03c9j(v): v = SharedMLP(cond1, \u00b7 \u00b7 \u00b7 , condN) \u03c9j(v) = MLPj(v). Then, the j-th dense layer of the attention block becomes W(cond1, . . . , condN) = W + \u2206W(cond1, . . . , condN) = W + m X i=1 \u03c9j,i(v)BiAi. To clarify, the trainable parameters for the j-th dense layer are W, A1, A2, . . . , Am, B1, B2, . . . , Bm, and the weights in MLPj. Shared across the entire architecture, the weights in SharedMLP are also trainable parameters. 7 5 Experiments In this section, we present our experimental findings. Section 5.1 describes the experimental setup. Section 5.2 first presents a toy, proof-of-concept experiment to validate the proposed LoRA conditioning. Section 5.3 evaluates the effectiveness of LoRA conditioning on attention layers with a quantitative comparison between diffusion models with (baseline) conventional scale-and-shift conditioning on convolutional layers; (only LoRA) LoRA conditioning on attention layers without conditioning convolutional layers; and (with LoRA) conditioning both convolutional layers and attention layers with scale-and-shift and LoRA conditioning, respectively. Section 5.4 investigates the effect of tuning the LoRA rank and the number of LoRA bases. Section 5.5 compares our proposed LoRA conditioning with the adaLN conditioning on attention layers. Section 5.6 explores the robustness of ClassLoRA conditioning compared to conventional scale-and-shift conditioning in extrapolating conditioning information. 5.1 Experimental Setup Diffusion models. We implement LoRA conditioning on three different diffusion models: nano diffusion Lelarge et al. (2024), IDDPM Nichol & Dhariwal (2021), and EDM-vp Karras et al. (2022). With nano diffusion, we conduct a proof-of-concept experiment. With IDDPM, we test TimeLoRA and ClassLoRA for the discrete-time setting, and with EDM, we test UC-LoRA for the continuous-SNR setting. Datasets. For nano diffusion, we use MNIST. For IDDPM, we use CIFAR-10 for both unconditional and class-conditional sampling, and ImageNet64, a downsampled version of the ImageNet1k, for unconditional sampling. For EDM-vp, we also use CIFAR-10 for both unconditional and class-conditional sampling and FFHQ64 for unconditional sampling. Configurations. We follow the training and architecture configurations proposed by the baseline works and only tune the LoRA adapters. For IDDPM, we train the model for 500K iterations for CIFAR-10 with batch size of 128 and learning rate of 1 \u00d7 10\u22124, and 1.5M iterations for ImageNet64 with batch size of 128 and learning rate of 1 \u00d7 10\u22124. For EDM, we train the model with batch size of 512 and learning rate of 1 \u00d7 10\u22123 for CIFAR-10, and with batch size of 256 and learning rate of 2 \u00d7 10\u22124 for FFHQ64. 
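A matching sketch for UC-LoRA (Section 4.1): the shared condition embedding v is mapped by a small per-block MLP to the composition weights ω_j(v), which mix the same kind of LoRA bases. Module names, hidden sizes, and the choice of activation below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class UCLoRALinear(nn.Module):
    """UC-LoRA sketch: W(cond) = W + sum_i omega_i(v) B_i A_i, with omega predicted from the shared embedding v."""

    def __init__(self, d_in, d_out, d_cond, m=18, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)                            # W
        self.A = nn.Parameter(torch.randn(m, rank, d_in) * (1.0 / rank) ** 0.5)   # shared LoRA bases
        self.B = nn.Parameter(torch.zeros(m, d_out, rank))
        # Per-attention-block MLP_j producing the composition weights omega_j(v).
        self.to_omega = nn.Sequential(nn.Linear(d_cond, d_cond), nn.SiLU(), nn.Linear(d_cond, m))

    def forward(self, x, v):
        # x: (batch, d_in); v: (batch, d_cond), e.g. a shared embedding of SNR, class, and augmentation.
        w = self.to_omega(v)                                # omega_j(v): (batch, m)
        h = torch.einsum('bd,mrd->bmr', x, self.A)
        h = torch.einsum('bmr,mor->bmo', h, self.B)
        return self.base(x) + torch.einsum('bm,bmo->bo', w, h)
```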
For sampling, in IDDPM, we use 4000 and 4001 timesteps for the baseline and LoRA conditioning respectively, and in EDM, we use the proposed Heun\u2019s method and sample images with 18 timesteps (35 NFE) for CIFAR-10 and 40 timesteps (79 NFE) for FFHQ64. Here, NFE is the number of forward evaluation of the score network and it differs from the number of timesteps by a factor of 2 because Heun\u2019s method is a 2-stage Runge\u2013Kutta method. Appendix A provides further details of the experiment configurations. Note that the baseline works heavily optimized the hyperparameters such as learning rate, dropout probability, and augmentations. Although we do not modify any configurations of the baseline and simply add LoRA conditioning in a drop-in fashion, we expect further improvements from further optimizing the configuration for the entire architecture and training procedure. LoRA. We use the standard LoRA initialization as in the original LoRA paper Hu et al. (2022): for the LoRA matrices (A, B) with rank r, A is initialized as Aij \u223cN(0, 1/r) and B as the zero matrix. Following Ryu (2023), we set the rank of each basis LoRA to 4. For TimeLoRA and ClassLoRA, we use 11 and 10 LoRA bases, and for UC-LoRA we use 18 and 20 LoRA bases for CIFAR-10 and FFHQ. Due to our constrained computational budget, we were not able to conduct a full investigation on the optimal LoRA rank or the number LoRA bases. However, we experiment with the effect of rank and number of LoRA bases to limited extent and report the result in Section 5.4. 5.2 Proof-of-concept experiments We conduct toy experiments with nano diffusion for both discrete-time and continuous-SNR settings. Nano diffusion is a small diffusion model with a CNN-based U-Net architecture with no skip connections with about 500, 000 trainable parameters. We train nano diffusion on unconditional MNIST generation with 8 3 different conditioning methods: conventional scale-and-shift, TimeLoRA, and UC-LoRA. As shown in Figure 3, conditioning with TimeLoRA or UC-LoRA yields competitive result compared to the conventional scale-and-shift conditioning. Figure 3: MNIST samples generated by nano diffusion trained with (1st row) conventional scale-and-shift conditioning; (2nd row) TimeLoRA with linear interpolation initialization; (3rd row) UC-LoRA; and (4th row) TimeLoRA with random initialization. Initialization of \u03c9i(t) for TimeLoRA. As shown in Figure 3 the choice of initialization of \u03c9i(t) for TimeLoRA impacts performance. With randomly initialized \u03c9i(t), nano diffusion did not converge after 100 epochs, whereas with \u03c9i(t) initialized with the linear interpolation scheme, it did converge. Moreover, Figure 4 shows that even in UC-LoRA, \u03c9(t) shows higher similarity between nearby timesteps than between distant timesteps after training. This is consistent with our expectation that \u03c9i(t) \u2248\u03c9i(t\u2032) if t \u2248t\u2032. 250 500 750 1000 t1 200 400 600 800 1000 t2 250 500 750 1000 t1 1.0 0.5 0.0 0.5 1.0 Figure 4: Cosine similarity between \u03c9(t1) and \u03c9(t2) for UC-LoRA applied to nano diffusion (left) at initialization and (right) after training. At initialization, the cosine similarity between \u03c9(t1) and \u03c9(t2) has no discernible pattern. After training, however, the cosine similarity between \u03c9(t1) and \u03c9(t2) for t1 \u2248t2 is close to 1, implying their high similarity. 5.3 Main quantitative results Simply adding LoRA conditioning yields improvements. 
To evaluate the effectiveness of the drop-in addition of LoRA conditioning to the attention layers, we implement TimeLoRA and ClassLoRA to IDDPM and UC-LoRA to EDM, both with the conventional scale-and-shift conditioning on the convolutional layers unchanged. We train IDDPM with CIFAR-10, ImageNet64 and EDM with CIFAR-10, FFHQ64. As reported in Table 1, the addition of LoRA conditioning to the attention layers consistently improves the image generation quality as measured by FID scores Heusel et al. (2017) across different diffusion models and datasets with only (\u223c10%) addition of the parameter counts. Note these improvements are achieved without tuning any hyperparameters of the base model components. 9 Initializing the base model with pre-trained weights. We further test UC-LoRA on pre-trained EDM base models for unconditional CIFAR-10 and FFHQ64 generations. As reported in Table 1, using pre-trained weights showed additional gain on FID score with fewer number of interations (\u223c50%). To clarify, although we initialize the base model with pre-trained weights, we fully train both base model and LoRA modules rather than finetuning. LoRA can even replace scale-and-shift. We further evaluate the effectiveness of LoRA conditioning by replacing the scale-and-shift conditioning for the convolutional layers in residual blocks with LoRA conditioning for the attention blocks. The results of Table 1 suggest that solely using LoRA conditioning on attention layers achieves competitive FID scores while being more efficient in memory compared to the baseline score network trained with scale-and-shift conditioning on convolutional layers. For IDDPM, using LoRA in place of the conventional scale-and-shift conditioning consistently produces better results. Significant improvement is observed especially for class-conditional generation of CIFAR-10. For EDM, replacing the scale-and-shift conditioning did not yield an improvement, but nevertheless performed comparably. We note that in all cases, LoRA conditioning is more parameter-efficient (\u223c10%) than the conventional scale-and-shift conditioning. 5.4 Effect of LoRA rank and number of LoRA bases We investigate the effect of tuning the LoRA rank and the number of LoRA bases on the EDM model for unconditional CIFAR-10 generation and report the results in Table 2. Our findings indicate that using more LoRA bases consistently improves the quality of image generations. On the other hand, increasing LoRA rank does not guarantee better performance. These findings suggest an avenue of further optimizing and improving our main quantitative results of Section 5.3 and Table 1, which we have not yet been able to pursue due to our constrained computational budget. # basis rank FID # Params Varying # basis 9 4 1.99 57185519 18 4 1.96 57745499 36 4 1.95 58865459 Varying rank 18 2 1.93 57192539 18 4 1.96 57745499 18 8 1.96 58851419 Table 2: Effect of the number of LoRA bases and the LoRA rank on unconditional CIFAR-10 sampling of EDM with LoRA 5.5 Comparison with adaLN We compare the effectiveness of our proposed LoRA conditioning with adaLN conditioning applied to attention layers. Specifically, we conduct an experiment on EDM with scale-and-shift conditioning on convolutional layers removed and with (i) adaLN conditioning attention layers or (ii) LoRA conditioning attention layers. We compare the sample quality of unconditional and class-conditional CIFAR-10 generation and report the results in Table 3. 
We find that LoRA conditioning significantly outperforms adaLN conditioning for both unconditional and conditional CIFAR-10 generation. This indicates that our proposed LoRA conditioning is the more effective mechanism for conditioning attention layers in the U-Net architectures for diffusion models. Type uncond. cond. adaLN conditioning 2.16 2.0 LoRA conditioning 1.99 1.82 Table 3: Comparison of adaLN conditioning and LoRA conditioning on attention layers on EDM (without conditioning convolutional layers). We consider both unconditional and conditional CIFAR-10 generation. 10 5.6 Extrapolating conditioning information We conduct an experiment comparing two class-conditional EDM models each conditioned by scale-and-shift and ClassLoRA, for the CIFAR-10 dataset. During training, both models receive size-10 one-hot vectors (ci)j = \u03b4ij representing the class information. First, we input the linear interpolation \u03b1ci +(1\u2212\u03b1)cj (0 \u2264\u03b1 \u22641) of two class inputs ci and cj (corresponding to \u2018airplane\u2019 and \u2018horse\u2019, respectively) to observe the continuous transition between classes. As shown in the top of Figure 5, both the scale-and-shift EDM and ClassLoRA EDM models effectively interpolate semantic information across different classes. However, when a scaled input \u03b2ci is received, with \u03b2 ranging from -1 to 1, scale-and-shift EDM generates unrecognizable images when \u03b2 < 0, while ClassLoRA EDM generates plausible images throughout the whole range, as shown in the bottom of Figure 5. This toy experiment shows that LoRA-based conditioning may be more robust to extrapolating conditioning information beyond the range encountered during training. Appendix C provides further details. Figure 5: Results of (Top) interpolation of class labels in class-conditional EDM with (row1) ClassLoRA; (row2) scale-and-shift; (bottom) extrapolation of class labels in class-conditional EDM with (row1) ClassLoRA; (row2) scale-and-shift 6 Conclusion In this work, we show that simply adding Low-Rank Adaptation (LoRA) conditioning to the attention layers in the U-Net architectures improves the performance of the diffusion models. Our work shows that we should condition the attention layers in diffusion models and provides a prescription for effectively doing so. Some prior works have conditioned attention layers in diffusion models with adaLN or scale-and-shift operations, but we find that LoRA conditioning is much more effective as discussed in Section 5.5. Implementing LoRA conditioning on different and larger diffusion model architectures is a natural and interesting direction of future work. Since almost all state-of-the-art (SOTA) or near-SOTA diffusion models utilize attention layers, LoRA conditioning is broadly and immediately applicable to all such architectures. In particular, incorporating LoRA conditioning into large-scale diffusion models such as Imagen Saharia et al. (2022), DALL\u00b7E 2 Ramesh et al. (2022), Stable Diffusion Rombach et al. (2022), and SDXL Podell et al. (2024), or transformer-based diffusion models such as U-ViT Bao et al. (2023), DiT Peebles & Xie (2023), and DiffiT Hatamizadeh et al. (2023) are interesting directions. Finally, using LoRA for the text conditioning of text-to-image diffusion models is another direction with much potential impact. 11"
18
+ }
intro_8K/test_introduction_short_2405.03962v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03962v1",
3
+ "title": "AdsorbDiff: Adsorbate Placement via Conditional Denoising Diffusion",
4
+ "abstract": "Determining the optimal configuration of adsorbates on a slab (adslab) is\npivotal in the exploration of novel catalysts across diverse applications.\nTraditionally, the quest for the lowest energy adslab configuration involves\nplacing the adsorbate onto the slab followed by an optimization process. Prior\nmethodologies have relied on heuristics, problem-specific intuitions, or\nbrute-force approaches to guide adsorbate placement. In this work, we propose a\nnovel framework for adsorbate placement using denoising diffusion. The model is\ndesigned to predict the optimal adsorbate site and orientation corresponding to\nthe lowest energy configuration. Further, we have an end-to-end evaluation\nframework where diffusion-predicted adslab configuration is optimized with a\npretrained machine learning force field and finally evaluated with Density\nFunctional Theory (DFT). Our findings demonstrate an acceleration of up to 5x\nor 3.5x improvement in accuracy compared to the previous best approach. Given\nthe novelty of this framework and application, we provide insights into the\nimpact of pre-training, model architectures, and conduct extensive experiments\nto underscore the significance of this approach.",
5
+ "authors": "Adeesh Kolluru, John R Kitchin",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "physics.chem-ph"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Heterogenous catalysis plays an important role in devel- oping chemicals in industries, environmental protection through converters, and the synthesis of alternative fuels (Liu & Li, 2017; Zitnick et al., 2020). Modeling these chem- ical reactions involve an intermediate adsorbate on a catalyst slab which determines the efficacy of the catalyst for that particular reaction. Discovering a novel catalyst computa- tionally involves screening through billions of candidates and finding the lowest energy configuration. 1Department of Chemical Engineering, Carnegie Mel- lon University. Correspondence to: Adeesh Kolluru <[email protected]>, John R. Kitchin <jk- [email protected]>. Finding the lowest energy configuration for an adsorbate and slab requires a global optimum (which is non-convex) search across different sites on the slab. Conventional ap- proaches solve this in two steps - (1) heuristically place the adsorbate on certain important sites and (2) perform optimization with quantum mechanical calculators like Den- sity Functional Theory (DFT) on each of these sites. The lowest energy site out of these is considered for calculating adsorption energy, which is a thermodynamic descriptor for how good that catalyst is. With recent advances in machine learning methods for predicting forces, it has become possi- ble to perform optimization with ML force fields (MLFFs) instead of Density Functional Theory (DFT) making this process faster and easier to test many sites and find better minima. These ML force fields are trained on DFT data to predict energies and forces corresponding to different adslab configurations. The recent release of the OC20-Dense dataset (Lan et al., 2023) signifies a significant advancement in the computation of the lowest energy adslab configuration. This work em- ploys a blend of heuristic and random adsorbate placements across 100 sites, with subsequent optimizations across each site using Density Functional Theory (DFT) to calculate adsorption energy. The study further introduces AdsorbML, a paradigm characterized by a brute-force exploration of initial adsorbate placements. Employing pre-trained ma- chine learning (ML) force fields from OC20, AdsorbML streamlines the optimization process, culminating in the determination of the lowest energy adsorbate-slab (adslab) configuration. The predictive accuracy of these configura- tions is rigorously validated against DFT single-points or complete DFT optimization. This hybrid approach results in a computational acceleration of 2000-fold in adsorption energy calculations compared to the sole reliance on DFT calculations. Recent developments in graph neural network (GNN) based ML architectures have increased the accuracies of adsorp- tion energy prediction significantly by encoding geomet- ric information of atoms in more explicit ways. However, there\u2019s little to no work done on improving the adsorption site prediction which could help us get away with the cur- rently used brute-force approach. In this work, we develop a novel conditional denoising diffu- 1 arXiv:2405.03962v1 [cs.LG] 7 May 2024 Adsorbate placement via conditional denoising diffusion sion framework for adsorbate placement. We first formulate a diffusion framework over the space of the 2D translation and 3D rigid rotation of an adsorbate molecule over the slab considering periodic boundary conditions (PBC) of the slab. 
Through the learned diffusion process, we sam- ple the most stable site by iteratively updating the center of mass of adsorbate and rigid orientation. Performing a naive unconditional diffusion framework on the most opti- mal adsorbate site and orientation \u2014 corresponding to the lowest energy adslab configuration out of 100 densely sam- pled calculations in OC20-Dense \u2014 leads to throwing away 99% of DFT optimal energy data. Therefore, we modify the diffusion training to be conditional on relative energies (rela- tive across densely sampled sites of an adslab combination). This leads to significant improvements in accuracies and sample efficiency during diffusion training. After sampling for the optimal site and orientation of adsorbate on the slab, we perform ML force field (MLFF) optimization and DFT single-point verification similar to AdsorbML. This compre- hensive end-to-end evaluation helps in robust assessment of the practical impact of the learned diffusion model. There have been significant advances in diffusion generative models in molecular and material discovery, and analogous problems in molecular docking on proteins. However, this is the first work to frame the adsorbate placement problem considering all its symmetries with the slab in a diffusion framework. Intuitively, the reverse diffusion process of AdsorbDiff helps in skipping multiple minima sites due to its energy-based conditional sampling which is followed by a local optimization with a DFT-learned MLFF to find a global optimum. To facilitate further research on this problem, we provide comprehensive results on the importance of GNN architectures for the diffusion task, show the importance of pretraining, and demonstrate the success of our approach to in-distribution (ID) and out-of-distribution (OOD) splits. The summary of contributions of this work are - \u2022 We propose AdsorbDiff, a novel conditional denoising diffusion framework designed to leverage the transla- tion, rotation, and periodic symmetries inherent in ad- sorbate and slab interactions. Additionally, this frame- work is adept at efficiently predicting the lowest energy site by conditional training on relative energies. \u2022 We present our results in a comprehensive end-to-end evaluation framework, integrated with DFT, to accu- rately gauge the true capability of our approach in predicting optimal adsorption energies. \u2022 We achieve a 31.8% success rate, 3.5x higher than the naive AdsorbML baseline of 9.1% with a single site prediction. Alternatively, we demonstrate that a comparable level of accuracy could be achieved by AdsorbML by employing 5x more placements. \u2022 We demonstrate that pretraining on large-scale local optimization data can significantly improve the results on the search for global optima. \u2022 We show that diffusion results exhibit insignificant dependence on GNN architectures, in contrast to the notable differences observed for the same architectures when trained on DFT forces. \u2022 We highlight the model\u2019s generalization capabilities to previously unseen adsorbates and slabs.",
16
+ "main_content": "Force-fields: Energy and forces (as a gradient of energy with respect to positions) are calculated using ab initio quantum mechanical methods like Density Functional Theory (DFT). ML models can be trained to predict these energies and forces, and are called ML force-fields (MLFFs). These force fields can be utilized to perform structure optimization to get the lowest energy structures. Optimization: For adsorption energy prediction, we start with an optimized adsorbate and slab, place the adsorbate on a slab, and perform optimization to get an adslab configuration with the lowest energy. Usually, second-order optimizers like BFGS, L-BFGS, Conjugate gradient descent, etc are used to solve this optimization problem. Since this is non-convex, the initial guess of adsorbate placement or the strategy of optimization is critical to finding an adslab configuration corresponding to the global optimum. AdsorbML (Lan et al., 2023) method starts with combining heuristic and random initial placements which is a brute-force approach to finding better minima. \u201dEasy Potential\u201d from (Schaarschmidt et al., 2022) trains a simple harmonic potential to guess this initial placement. Learn2Hop (Merchant et al., 2021) also learns the optimization landscape to navigate through better and hop through local minima. There are approaches like minima hopping that help in navigating through the entire optimization landscape with a force-field (Jung et al., 2023) and help in finding better minima, but these could be computationally expensive. GNNs: Message-Passing Neural Networks (MPNN) are a class of graph neural networks (GNN) that are utilized across material property prediction tasks. Different architectures encode the geometric information in different ways. SchNet (Sch\u00a8 utt et al., 2018) only encodes the distance information. Including more explicit geometric features have improved the model prediction as DimeNet (Gasteiger et al., 2020b;a) incorporates triplets. SphereNet (Liu et al., 2021), GemNet (Gasteiger et al., 2021; 2022) incorporates complete geometric information explicitly by giving triplets and quadruplets information. PaiNN (Sch\u00a8 utt et al., 2021) incorporates directional information and applies only linear operations on those features. Equivariant models like NequIP (Batzner et al., 2022), Allegro (Musaelian et al., 2023), MACE (Batatia et al., 2022), SCN (Zitnick et al., 2 Adsorbate placement via conditional denoising diffusion Figure 1. Overview of AdsorbDiff: Random initial site and orientation for the adsorbate are selected, followed by sampling over 2D translation, 3D rigid rotations, and considering periodic boundary conditions (PBC) to predict the optimal site and orientation. MLFF optimization is then conducted from the predicted site with a fixed interstitial gap until convergence. The final prediction undergoes constraint verification, and DFT verification is performed on valid structures to calculate success rates. 2022), Equiformer (Liao & Smidt, 2022; Liao et al., 2023) utilize spherical harmonics in representing the geometric features. 
Diffusion Models: Diffusion models are a class of generative models that have shown impressive results across different domains starting from computer vision (Dhariwal & Nichol, 2021; Croitoru et al., 2023), language models (Gong et al., 2022), temporal data modeling, to applications in molecules (Xu et al., 2022; 2023; Arts et al., 2023; Hoogeboom et al., 2022; Jing et al., 2022), proteins (Wu et al., 2022; Trippe et al., 2022; Watson et al., 2022; 2023) and materials (Xie et al., 2021; Fu et al., 2023; Zeni et al., 2023; Merchant et al., 2023; Yang et al., 2023b). There are different kinds of formulations proposed for diffusion models like denoising diffusion probabilistic models (DDPMs), score-based generative models (SGMs), and stochastic differential equations (Score SDEs) (Yang et al., 2023a). Many of these formulations have been adapted to problems in molecular and material discovery. For example, CDVAE (Xie et al., 2021) adapts concepts from noise-conditioned score networks (NCSN) for bulk discovery. Conditional diffusion has also been recently utilized across proteins (Krishna et al., 2024), catalyst and materials (Zheng et al., 2023) for generating structures with required properties. Diffusion models have also been recently utilized for molecular docking on proteins (Corso et al., 2022). Although this problem is somewhat analogous to placing adsorbate on a slab, as far as we know there hasn\u2019t been previous work on formulating adsorbate placement in a diffusion framework. AdsorbDiff also differs from molecular docking in several key aspects \u2013 2D translation formulation, periodic boundary conditions, conditional denoising formulation, and the requirement of DFT level accuracy as opposed to simple force-fields for proteins making our end-to-end evaluation with DFT critical. 3. AdsorbDiff 3.1. Overview The objective of this research is to enhance the efficiency of adsorption energy calculation, representing the lowest energy configuration of an adsorbate on a slab. The methodology of this work involves the initial placement of an adsorbate on a random site within the 2D surface of the slab, followed by reverse diffusion to predict the optimal adsorption site and orientation. Employing machine learning force field optimization, the structure undergoes iterative updates with an optimizer until forces converge close to 0. Subsequently, the final structure is verified for compliance with constraints essential for defining adsorption energy. On the optimized structure, a single Density Functional Theory (DFT) calculation is conducted to obtain the predicted energy (EP red). A successful outcome is determined by the predicted energy being within 0.1 eV or lower than the DFT baseline of adsorption energy in OC20-Dense data, indicating the model\u2019s ability to provide a comparable or superior estimate of adsorption energy (shown in Figure 1). 3 Adsorbate placement via conditional denoising diffusion The code is open-sourced with MIT License1. 3.2. Adsorbate placement Various adsorbate placement strategies were explored for the OC20-Dense dataset, incorporating a combination of heuristic and random approaches. Specifically, 100 sites were selected for each adslab configuration, utilizing a blend of heuristic and random placements. 
The heuristic placement involved strategically situating the adsorbate\u2019s binding site on either an on-top site, hollow site, or bridge site, with a specified interstitial gap denoting the distance between the connecting atom of the slab and the corresponding adsorbate atom. Additional random sites are introduced through the random rotation of the adsorbate along the normal of the slab, accompanied by a slight translational wobble along the surface from the heuristic site. 3.3. Diffusion for adsorbate placement In this work, our objective is to develop a diffusion model aimed at predicting the adsorbate orientation and site corresponding to the lowest energy, as established through benchmarking with the OC20-Dense dataset. The adsorbate motion is constrained within a manifold (Mc) and utilizes the combined action group (A), as described in DiffDock (Corso et al., 2022). This manifold permits the adsorbate to navigate towards configurations with lowenergy adslab states through a combination of translations, rotations, and torsion angle adjustments. Note, for fair comparisons with our baselines, torsion angle alterations are disregarded in our analysis due to the smaller size of the adsorbate employed in this study. This approach aligns with the methodology of AdsorbML, which does not introduce randomness in torsion angles as part of its benchmark. In our framework, we specifically consider translations in the 2D plane parallel to the slab while accounting for periodic boundary conditions (PBC). The z-coordinate is meticulously aligned to denote the normal direction of the slab and the diffusion process is executed across the xycoordinates. Therefore, the adsorbate movements are associated with the 2D translation group T(2), and rigid rotations are modeled using the SO(3) group. The translation operation, denoted as Atr : T(2) \u00d7 R2n \u2192R2n, is defined as Atr(r, x)i = xi + r, employing the isomorphism T(2) \u223c = R2, where xi \u2208R2 represents the position of the i-th adsorbate atom. Similarly, the rotation operation, denoted as Arot : SO(3) \u00d7 R3n \u2192R3n, is defined by Arot(R, x)i = R(xi \u2212\u00af x) + \u00af x, where \u00af x = 1 n P i xi, signifying rotations around the center-of-mass of the adsorbate. For the initial coordinates of adsorbate, we select a random 1https://github.com/AdeeshKolluru/ AdsorbDiff point on the slab. This point is considered as the center-ofmass of the adsorbate in fractional coordinates. We then convert from fractional coordinates to real coordinates and perform a reverse diffusion process to get to the lowest energy site (as shown in Algorithm 1). The work conducted by De et al. (De Bortoli et al., 2022) and Corso et al. (Corso et al., 2022) has demonstrated the applicability of the diffusion framework to Riemannian manifolds. In this context, the score model constitutes the tangent space, and a geodesic random walk serves as the reverse stochastic differential equation (SDE) solver. The score model is trained using denoising score matching (Song & Ermon, 2019), wherein a score function s\u03b8(x) is learned to approximate the gradient of the probability density \u2207xp(x) at varying noise levels (as shown in Algorithm 2). The learned scores for translations and rotations are treated as independent entities, assuming the tangent space is a direct sum of individual tangent spaces, with contributions from torsion being neglected. 
The forward SDE for both translation and rotation is defined as dx = \u221a(d\u03c32(t)/dt) dw, where w represents the corresponding Wiener process. In the translational scenario within T(2), the model learns a score for a standard Gaussian distribution with variance \u03c32(t). For rotations in SO(3), the diffusion kernel is governed by the IGSO(3) distribution, which can be sampled in the axis-angle parameterization. This involves sampling a unit vector \u03c9\u2032 \u2208 so(3) uniformly and a random angle \u03c9 from the interval [0, \u03c0], as outlined by Equations 1 and 2. The score of diffusion kernel is defined in Equation 3. The computation of R\u2032 = R(\u03c9\u02c6\u03c9)R, where R(\u03c9\u02c6\u03c9)R is the result of applying the Euler vector \u03c9\u02c6\u03c9 to R, has been established in prior work by Yim et al. (Yim et al., 2023). To efficiently carry out the score computation and sampling processes, it is feasible to precompute the truncated infinite series and interpolate the cumulative distribution function (CDF) of p(\u03c9). p(\u03c9) = ((1 \u2212 cos \u03c9)/\u03c0) f(\u03c9) (1) f(\u03c9) = \u2211_{l=0}^{\u221e} (2l + 1) exp(\u2212l(l + 1)\u03c32/2) sin((l + 1/2)\u03c9) / sin(\u03c9/2) (2) \u2207 ln pt(R\u2032|R) = (d/d\u03c9 log f(\u03c9)) \u02c6\u03c9 (3) 3.4. Conditional denoising diffusion for adsorbate placement While the OC Challenge set provides densely calculated adsorption energies for 244 systems, a total of 244 * 100 DFT optimization benchmarks were conducted. This involved performing 100 different random placements for each configuration. Notably, the naive denoising diffusion setup was exclusively trained on the 244 lowest energy configurations. To leverage the entirety of the DFT optimization data, a conditional diffusion model is employed. In this model, the optimized position is conditioned on the relative energy, specifically relative to the energy of the lowest energy configuration (Ec rel-i = Ec min \u2212 Ec i). This approach allows for a more comprehensive utilization of the available DFT optimization data. 3.5. Graph Neural Network (GNN) architecture The inputs to the ML model are the 3D positions of all input atoms from the adslab configuration and their corresponding atomic numbers. The outputs predict per-atom 3D vectors. These vectors are forces in the case of force fields and the score function in the case of diffusion. To predict multiple score functions (for translation and rotation), multiple output heads are trained each predicting independent score functions. All architectures used in this work come under the message-passing neural network (MPNN) framework of graph neural networks (GNNs). MPNNs operate by passing messages between nodes in the graph, allowing information to be exchanged and aggregated iteratively. The key components of an MPNN include message passing, updating node states, and global readout. In the message-passing step, nodes exchange information based on their local context, and this information is then used to update the states of the nodes (as shown in Equation 4).
h_v(t+1) = Update(h_v(t), Aggregate({m_u\u2192v(t) | u \u2208 N(v)})) (4) Here, h_v(t) represents embeddings of node v at iteration t, m_u\u2192v(t) denotes the message from node u to v at iteration t, N(v) represents the neighborhood of node v, and Update and Aggregate are differentiable functions for updating node states and aggregating messages, respectively. In our study, we systematically investigate diverse architectures employed in the training of diffusion models to discern the significance of architectural decisions in this context. Specifically, we have chosen to assess the performance of PaiNN, GemNet-OC, and EquiformerV2, each distinguished by its treatment of explicit geometric information and rotational symmetries (Duval et al., 2023). This selection is grounded in the diverse characteristics they bring to the table. Furthermore, we employ these architectures in benchmarking against OC20 force-field evaluation, thereby facilitating comparative analysis of architectural significance in the realms of force-fields and diffusion. 4. Results In this section, we present results demonstrating the impact of AdsorbDiff in accelerating the search for adsorption energy or better global optima. Specifically, we demonstrate the impact of conditional denoising training over unconditional training and a randomly placed adsorbate baseline. This random baseline is equivalent to performing AdsorbML on a single site (Nsite=1). Additionally, we demonstrate the impact of pretraining, model architectures, and the generalization of this approach to new adsorbates and slabs. 4.1. Datasets We utilize two publicly available datasets for this work: OC20-Dense (Lan et al., 2023) and OC20 (Chanussot et al., 2021). OC20: Open Catalyst 2020 (OC20) is a large-scale dataset that contains converged DFT optimization trajectories of 460k unique adslab configurations, encompassing 55 unique elements and 74 adsorbates. Note that these optimizations are local optimizations performed with a single heuristic placement. ML force field models are trained on the forces derived from these DFT trajectories. Additionally, the optimized structure from OC20 is utilized for pre-training the diffusion model. OC20-Dense: The OC20-Dense dataset serves as a DFT benchmark for adsorption energies, employing dense placement on 100 random sites per adslab configuration, followed by DFT optimization. This dataset releases both in-distribution (ID) and out-of-distribution (OOD) data, relative to OC20. The ID data incorporates adsorbates and slabs from OC20\u2019s training set but presents different combinations and configurations, while OOD introduces new adsorbates and/or slabs not found in the OC20 training set. A subset of OC20-Dense ID and OOD was utilized in the Open Catalyst Challenge 2023, hosted at the AI for Science Workshop during NeurIPS 2023 2. We split the ID data into 80/20 ratios for training the diffusion model and validating the sampling process. These smaller subsets make it computationally cheaper to perform end-to-end iterations. 4.2. Metric and constraints Our success metric is defined by the final energy calculated through DFT. For real-world applications, this energy (EDFT_Total) is used in calculating the adsorption energy EDFT_Ads as EDFT_Adsorption = EDFT_Total \u2212 EDFT_Slab \u2212 EDFT_Adsorbate, where EDFT_Slab and EDFT_Adsorbate are the independent energies of slab and adsorbate respectively.
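The metric in Section 4.2 amounts to simple arithmetic; the following is a minimal sketch of the adsorption-energy computation and of the 0.1 eV success criterion described in the next paragraph (function names are illustrative, not part of the released code).

```python
def adsorption_energy(e_total_dft, e_slab_dft, e_adsorbate_dft):
    """E_ads = E_total - E_slab - E_adsorbate, all DFT energies in eV."""
    return e_total_dft - e_slab_dft - e_adsorbate_dft

def is_success(e_ads_predicted, e_ads_dense_baseline, threshold=0.1):
    """Success if the (anomaly-free) prediction is within `threshold` eV of, or lower than,
    the OC20-Dense DFT adsorption-energy baseline."""
    return e_ads_predicted <= e_ads_dense_baseline + threshold
```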
This adsorption energy acts as a thermodynamic description of how good a catalyst is for downstream application. The DFT Success Rate (SR) is defined as the percentage of valid structures within 0.1 eV or lower of the DFT computed adsorption energy benchmark in the OC20-Dense data (as described in AdsorbML). This is computationally expensive to calculate but is accurate. Metrics calculated from ML predictions are inexpensive but are also inaccurate, discussed further in Appendix C. Since we calculate adsorption energies, the adsorbate and slab must not change during optimization. Therefore, the structures are considered an anomaly due to (1) adsorbate desorption: adsorbate moves far away from the slab, (2) adsorbate dissociation: atoms in adsorbate dissociate into multiple adsorbates, (3) slab mismatch/reconstruction: slab reconstructs into a completely different structure during optimization (4) adsorbate intercalation: when any of the adsorbate atoms detaches and get into the slab. Experimental setup: All presented results are based on the DFT success rate metric as defined in the preceding 2https://opencatalystproject.org/ challenge.html section. Throughout the diffusion process, we employ the EquiformerV2 architecture, unless explicitly stated otherwise, owing to its state-of-the-art performance in AdsorbML. Additionally, for MLFF optimization, we utilize GemNetOC pre-trained on OC20, chosen for its lower inference cost. Further specifics regarding model and training hyperparameters are available in Appendix D. All results are shown on the val ID split apart from the OOD section. 4.3. Conditional vs Unconditional diffusion 0 5 10 15 20 25 30 35 40 DFT Success Rate (%) Random Unconditional Conditional 9.1% 11.4% 31.8% Conditional vs Unconditional Diffusion (Nsite=1) Figure 2. Comparison of conditional and unconditional diffusion with a baseline of random placement. Conditional diffusion training on relative energies of configurations of adslab significantly improves success rates over unconditional training and AdsorbML baseline. We demonstrate the importance of conditional training on relative energies (as shown in Section 3.4) over unconditional diffusion training in Figure 2. We compare both of these approaches to a naive baseline of AdsorbML with a single site (Nsite=1) where MLFF optimization is performed on a random adsorbate placement. It is noteworthy that the performance of unconditional training is suboptimal, this may be ascribed to the unexploited potential of additional data made available through conditional training. 4.4. AdsorbDiff vs AdsorbML AdsorbML conducts MLFF optimization and DFT evaluations on adsorption sites randomly placed within the system. A comparative analysis is drawn with AdsorbDiff, where the prediction of adsorption sites is facilitated through the utilization of diffusion models. As depicted in Figure 3, it is evident that AdsorbDiff exhibits notably superior performance, particularly at lower Nsites. However, as the number of adsorption sites (Nsites) increases, AdsorbDiff tends to either converge to or underperform in comparison to the brute force approach employed by AdsorbML. Adsorbate sites sampled from AdsorbDiff have less diversity by design as it\u2019s trained to predict the global optima. We calculate the average across the standard deviation of the points sampled at 10 Nsites and get 8.1 \u02da A for AdsorbML and 2.7 \u02da A for AdsorbDiff. 
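As a concrete illustration of the DFT success-rate metric defined in Section 4.2, the following sketch counts a system as a success when at least one non-anomalous predicted structure falls within 0.1 eV of (or below) the OC20-Dense benchmark adsorption energy; the function and field names are assumptions, not the evaluation code used in the paper.

```python
# Illustrative sketch of the DFT success-rate metric with anomaly screening.
def system_success(pred_energies, is_anomalous, e_benchmark, tol=0.1):
    """True if any non-anomalous prediction is within tol of, or below, the benchmark."""
    valid = [e for e, bad in zip(pred_energies, is_anomalous) if not bad]
    return any(e <= e_benchmark + tol for e in valid)

def dft_success_rate(systems):
    """systems: list of dicts with 'pred', 'anomalous', and 'benchmark' keys (assumed layout)."""
    hits = sum(
        system_success(s["pred"], s["anomalous"], s["benchmark"]) for s in systems
    )
    return 100.0 * hits / len(systems)
```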
AdsorbML\u2019s brute force placements have more randomness which leads to fewer anomalies post the MLFF 6 Adsorbate placement via conditional denoising diffusion 2 4 6 8 10 Number of Sites 10 15 20 25 30 35 40 45 DFT Success Rate (%) 9.1% 31.8% 20.5% 34.1% 34.1% 36.3% 47.7% 41.0% AdsorbDiff vs AdsorbML AdsorbML AdsorbDiff AdsorbDiff (Nsite=1) Figure 3. DFT Success Rates (%) for AdsorbDiff and AdsorbML across a varying number of site predictions. AdsorbDiff performs 3.5x better than AdsorbML utilizing a single site prediction. At higher sites, AdsorbML performs better due to the brute-force nature of site prediction that reduces anomalies. 2 4 6 8 10 Number of Sites 10 15 20 25 30 Anomalies 31.8% 25.0% 18.2% 20.5% 11.4% 22.7% 6.8% 13.6% AdsorbML AdsorbDiff Figure 4. Anomalies in AdsorbDiff and AdsorbML with respect to Nsites. A system is labeled as anomalous if all its predicted sites result in anomalies. AdsorbML has fewer anomalies than AdsorbDiff at higher Nsites due to more randomness in initial sites. optimization process shown in Figure 4. 4.5. Impact of pretraining Conditional diffusion benefits from training on a dataset that is 100 times more extensive than the unconditional approach, a consequence of leveraging multiple local optima within a unique adslab configuration. The substantial increase in training data size manifests in a notable enhancement in the success rate for the conditional approach. The OC20 IS2RE dataset, containing optimization data for 460,000 distinct adslab combinations, serves as a valuable resource for pretraining the diffusion model. It is important to acknowledge that this pretraining process results in a model that learns the local optima of an adslab combination, with the caveat that the model may not capture global optima for an adslab combination. 0 5 10 15 20 25 30 35 40 DFT Success Rate (%) Random PT Zero-shot PT Conditional 9.1% 29.6% 31.8% Impact of Pre-training (Nsite=1) Figure 5. Impact of pretraining on 460k OC20 local optima data on DFT Success Rate. PT Zero-shot measures zero-shot generalization of OC20 pre-trained model to OC20-Dense data. PT Conditional is finetuned on OC20 Dense data conditionally on relative energies of adslab configurations. Random baseline corresponds to randomly placed adsorbate. IS2RS Pretraining (PT) Zero-shot: Taking advantage of the diffusion model pre-trained on OC20 IS2RE data, we conduct a zero-shot validation on the OC20-Dense ID val split. This experimental setup allows us to assess the model\u2019s ability to predict better global optima having trained on a large dataset of local optima. Notably, we observe a substantial increase in DFT success rate in the zero-shot setting (as shown in Figure 5). IS2RS Pretraining (PT) Conditional: In this approach, we utilize the pre-trained model using the OC20-Dense data as described in Section 3.4. We observe that although this gives a 2% improvement over zero-shot, it converges to the same results as just training conditionally on OC20-Dense (shown in Figure 5). 4.6. Impact of architectures Architectures characterized by richer geometric information and extensive many-body interaction capabilities, such as eSCN and EquiformerV2, have demonstrated superior performance in force evaluations within the OC20 dataset compared to simpler models like PaiNN, which primarily encode directional information and apply linear transformations. 
Our benchmarking involves the evaluation of three architectures that exhibit progressively improved performance in OC20 Force MAE, revealing significant differences among them. This evaluation is specifically conducted in the context of the zero-shot assessment following pretraining (PT zeroshot) on an extensive dataset encompassing 460,000 OC20 instances. This choice is inspired by insights from the GemNet-OC paper (Gasteiger et al., 2022), suggesting that certain architectural choices manifest optimal performance only at higher data scales. 7 Adsorbate placement via conditional denoising diffusion 0 5 10 15 20 25 30 35 40 DFT Success Rate (%) PaiNN GemNet-OC EquiformerV2 27.3% 27.3% 29.6% Impact of GNN architectures on diffusion Figure 6. Impact of Graph Neural Network (GNN) architectures on the diffusion process for DFT Success Rate keeping other parts of the framework same. Different architectures perform similarly on the task of diffusion sampling. Interestingly, in the realm of the diffusion task, we note that the disparity in success rates among these architectures is marginal (as shown in Figure 6) which has been recently demonstrated in applications of molecular generation tasks as well (Wang et al., 2023). The intuition behind this result is that the diffusion model\u2019s score function can be thought of as learning a harmonic potential (Xie et al., 2021). Harmonic potentials are simpler force-fields than ab-initio DFT calculations involved in OC20 forces. This could result in simpler architectures being able to capture the underlying complexity of the diffusion task defined in our work. 4.7. OOD generalization We measure the success of AdsorbDiff in out-of-distribution (OOD) cases where the model hasn\u2019t seen the adsorbate or the slab even during the pre-training on OC20. We pick a random 50 samples out of 200 validation OOD split defined in Open Catalyst Challenge 2023. We observe a marginal decrease of only 3.8% in results for the OOD case compared to the ID scenario and consistently observe significant improvement over the AdsorbML (Nsite=1) baseline. 0 5 10 15 20 25 30 35 40 DFT Success Rate (%) Random AdsorbDiff 8.4% 28% OOD Results Figure 7. Comparison of DFT Success Rate for In-Distribution (ID) and Out-of-Distribution (OOD) splits using the AdsorbDiff method. Random baseline corresponds to randomly placed adsorbate. 4.8. Inference cost In the case of conditional diffusion, our approach maintains a maximum step limit of 100, with adsorbate placement converging, on average, within 98 steps. In contrast, for MLFF optimization with a maximum step limit of 300 and Fmax criteria of 0.01 eV/A (consistent with AdsorbML), the convergence occurs in approximately 286 steps. Consequently, for scenarios with a single adsorption site (Nsite 1), AdsorbDiff incurs approximately 34% more inference cost than AdsorbML, given the GNN architecture for diffusion and MLFF optimization is the same. This end-to-end ML framework is O(104) times faster than the conventional DFT pipelines (Lan et al., 2023). In Section 4.6, we illustrate that simpler and faster models such as PaiNN yield comparable performance to more intricate and slower models like EquiformerV2. This enhances the efficiency of our diffusion-based approach, as its computational burden becomes negligible in comparison to MLFF optimization, which would require more computationally intensive ML architectures (details in Appendix B). 5. 
Conclusion This work introduces AdsorbDiff, a novel conditional denoising diffusion framework adept at leveraging inherent symmetries in adsorbate and slab interactions, enabling efficient prediction of the lowest energy site. The proposed end-to-end evaluation framework, coupled with Density Functional Theory (DFT), provides a robust assessment of our approach\u2019s capability to predict optimal adsorption energies. Notably, AdsorbDiff achieves a remarkable 31.8% success rate with a single site prediction, surpassing the naive AdsorbML baseline (9.1%) by 3.5x. We demonstrate the benefits of pretraining on large-scale local optima of adsorption sites. Interestingly, we find the diffusion method\u2019s performance to be not significantly dependent on the GNN architecture choice. Furthermore, our model\u2019s demonstrated generalization to previously unseen adsorbates and slabs underscores its adaptability and robustness. 6. Limitations and Future Work Our findings emphasize that anomalies play a substantial role in diminishing success rates, particularly in the context of multiple site predictions. While certain works have successfully employed constraints, such as Hookean constraints, to mitigate these anomalies, their implementation in a computationally efficient manner for larger adsorbates remains non-trivial. Addressing this challenge stands out as a crucial avenue for future research. Furthermore, the incorporation of torsion angles presents a promising direction for further improvement, especially when dealing with larger adsorbates. 8 Adsorbate placement via conditional denoising diffusion Impact statement This work\u2019s goal is to accelerate catalyst discovery using machine learning. AdsorbDiff substantially accelerates catalyst search which has a positive impact in the field of developing renewable energy technologies and various chemicals. However, there\u2019s a possibility of utilizing this work to accelerate the search for catalysts for hazardous chemicals. Acknowledgements We thank Minkai Xu, Muhammed Shuaibi, Nima Shoghi, Abhishek Das, and the FAIR Chemistry team at Meta for their valuable feedback and discussions."
17
+ }
intro_8K/test_introduction_short_2405.03989v2.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.03989v2",
3
+ "title": "A Method for Parsing and Vectorization of Semi-structured Data used in Retrieval Augmented Generation",
4
+ "abstract": "This paper presents a novel method for parsing and vectorizing\nsemi-structured data to enhance the functionality of Retrieval-Augmented\nGeneration (RAG) within Large Language Models (LLMs). We developed a\ncomprehensive pipeline for converting various data formats into .docx, enabling\nefficient parsing and structured data extraction. The core of our methodology\ninvolves the construction of a vector database using Pinecone, which integrates\nseamlessly with LLMs to provide accurate, context-specific responses,\nparticularly in environmental management and wastewater treatment operations.\nThrough rigorous testing with both English and Chinese texts in diverse\ndocument formats, our results demonstrate a marked improvement in the precision\nand reliability of LLMs outputs. The RAG-enhanced models displayed enhanced\nability to generate contextually rich and technically accurate responses,\nunderscoring the potential of vector knowledge bases in significantly boosting\nthe performance of LLMs in specialized domains. This research not only\nillustrates the effectiveness of our method but also highlights its potential\nto revolutionize data processing and analysis in environmental sciences,\nsetting a precedent for future advancements in AI-driven applications. Our code\nis available at https://github.com/linancn/TianGong-AI-Unstructure.git.",
5
+ "authors": "Hang Yang, Jing Guo, Jianchuan Qi, Jinliang Xie, Si Zhang, Siqi Yang, Nan Li, Ming Xu",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.DB",
9
+ "cats": [
10
+ "cs.DB"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
14
+ "gt": "Large Language Models (LLMs) present substantial benefits in various specialized fields, particularly due to their proficiency in processing and deriving insights from extensive volumes of unstructured text. These models excel in converting intricate, unstructured data into organized formats, which is crucial for tasks such as predicting reaction conditions in scientific studies or isolating pertinent legal clauses from extensive documents. This capability is invaluable, especially for augmenting experimental databases and melding computational and experimental data, with notable applications in environmental science(Rillig et al., 2023). In the medical sector, LLMs have shown remarkable efficacy in named entity recognition (NER) tasks, facilitating the extraction and categorization of biomedical information from expansive data sets(Lee et al., 2020). This has significantly contributed to both research and clinical practice. Similarly, in the legal realm, LLMs have proven effective in analyzing complex legal documents, pinpointing crucial legal terms, and enhancing contract analysis(L. Yue et al., 2024). These applications underscore the transformative impact of LLMs in processing large and complex datasets into actionable insights, thus optimizing operations in specialized domains such as healthcare and law. However, the integration of LLMs in specialized domains still faces challenges(Peng et al., 2023.). A notable issue is the generation of 'hallucinations' (L. Yang et al., 2024),which means the creation of factually incorrect, yet seemingly plausible information. This problem is compounded when addressing highly specialized or nuanced queries within professional contexts. This limitation predominantly originates from the generalized nature of the datasets used to train these models, which often lack the depth and specificity required for particular legal and medical scenarios(S. Pan et al., 2024). Consequently, this underscores the critical need for a strategic integration of LLMs with domain-specific expertise. Such a fusion, complemented by continuous evaluation and refinement, is essential to ensure the accuracy and relevance of the models' outputs, especially in fields where precision is paramount. In the realm of ecological environmental management, the Retrieval-Augmented Generation (RAG) approach is highly relevant for LLMs applications. RAG integrates the capabilities of LLMs with external databases, enabling access to and incorporation of essential data during generation. This enhances the model's ability to provide accurate, context-specific information, crucial in environmental management's complex domain. However, implementing RAG faces significant challenges, notably in developing a vector-based knowledge base essential for accurate data retrieval. The complexity of creating this base from vast, unstructured environmental data is compounded by a lack of efficient structuring methods. Addressing these data processing challenges is imperative to fully utilize RAG's potential, thereby improving LLMs' effectiveness in ecological environmental governance. In this study, we present an efficient method for processing documents in the `.docx` format and constructing a vector database, leveraging an unstructured open-source toolkit, the function calling capacity of OpenAI and the vector database platform of Pinecone. 
This paper details the method and its application in processing professional books on wastewater treatment plant operation and in constructing a vector database for use with Retrieval-Augmented Generation (RAG), with the aim of improving the expertise of large language models in that domain.",
15
+ "main_content": "Retrieval Augmented Generation (RAG) within large language models (LLMs) marks a significant stride in AI research, blending advanced knowledge retrieval with the generation capabilities of LLMs. This approach aims to boost the accuracy and relevance of the models' responses while preserving their contextual depth. Current research focuses on fine-tuning the retrieval process, ensuring that the information fetched aligns closely with user queries and enhances the quality of the model's output(Lewis et al., 2021.). A key challenge lies in integrating this retrieved information smoothly into the generation process, creating responses that are both coherent and contextually appropriate(Rohde et al., 2021). A significant area of exploration is in improving the retrieval phase to filter out irrelevant information or 'noise', ensuring that the data used by the model is of high quality and relevance(Karpukhin et al., 2020). Researchers are also working on making LLMs more adaptable in using this retrieved data across various topics, enhancing the algorithms that control how the model accesses and uses this information(Kalyan et al., 2021). Central to RAG's function in LLMs is the creation of vector databases from unstructured or semi-structured data like texts and web pages. These databases store information in a format that LLMs can easily access and use. Current research, including work on Transformer-based models, is pivotal in developing methods to efficiently transform vast amounts of data into these useful vector formats (Devlin et al., 2019). However, a noticeable gap in this area is the lack of simple, efficient methods for creating these vector databases. Existing techniques, while effective, tend to be complex and resource-heavy, limiting their broader application. Addressing this challenge with more user-friendly vectorization methods is crucial. Such advancements would significantly widen the scope and effectiveness of LLMs, enabling them to process and generate more nuanced, context-rich language responses in a range of fields, thus enhancing the practical utility and reach of LLMs in various applications. 3 Core Functions However, a noticeable gap in this area is the lack of simple, efficient methods for creating these vector databases. Existing techniques, while effective, tend to be complex and resource-heavy, limiting their broader application. Addressing this challenge with more user-friendly vectorization methods is crucial. Such advancements would significantly widen the scope and effectiveness of LLMs, enabling them to process and generate more nuanced, context-rich language responses in a range of fields, thus enhancing the practical utility and reach of LLMs in various applications. Fig. 1 Parsing and Vectorization of Semi-structured Data process framework 3.1 Data Preparation In this phase, a diverse array of sources including books, reports, scholarly articles, and data tables is compiled.These data largely consists of semi-unstructured data, encompassing a variety of file formats such as `.html`, `pdf`, `xml`, `docx`, `xlsx` and etc. 
Considering the substantial volume of data to be processed, the `.docx` format stands out due to its uniform standardization, high-quality text, ease of editing, broad compatibility, and rich metadata content, making it highly advantageous for efficient bulk processing and structured data extraction.In this project, API functionalities are employed to integrate open-source tools for the purpose of converting diverse data formats into the .docx format. For the assurance of effective post-processing, it is imperative that the content in the transformed `.docx` files, including headings, textual elements, and tables, be conformed to a standardized format. This standardization process involves harmonizing the font type, font size, inter-paragraph spacing, and line spacing across all headings, main text, and table contents. 3.2 Automated parsing and splitting During the parsing process, the `.docx` files are divided into multiple elements including titles, texts, images, tables, headers and footers with the partitioning function, utilizing detectron2, a deep learning-based object detection system (Unstructured, 2023). This partition function uses a combination of the styling information in the document and the structure of the text to determine the type of a text element. As part of data preparation for an NLP model, these elements require further filtering, to mitigate potential detrimental impacts on model efficiency caused by superfluous content. This ensuing phase entails a deliberate omission of specific components, particularly 'Headers' and 'Footers'. As a result, this refinement process retains only four core elements: 'Title', 'Text', 'Image', and 'Table', thereby ensuring a concise and targeted dataset for advanced analysis.. For the \"Title\" and \"Text\" elements, prior to integration into NLP models, rigorous data cleaning is essential to avoid efficiency losses caused by extraneous information. To tackle this issue, specialized functions within the 'Unstructured Documentation' cleaning framework are utilized (Unstructured, 2023). These functions effectively merge paragraphs separated by newlines, remove initial bullets and dashes, and eliminate surplus whitespace. This process significantly enhances the textual data's clarity and structural integrity, which is crucial for effective model performance. For the \"Table\" elements, the core textual information is retained in the element's 'text attribute'. To preserve the formatting fidelity of these tables, their HTML representation is also stored, specifically within 'element.metadata.text_as_html'. This dual-storage approach is critical for ensuring that the table's structural and visual integrity is maintained in its rendered form. For the \"Image\" elements, the 'vision_completion' approach leverages the capabilities of the 'gpt-4-vision-preview' API. This method involves generating specific queries that prompt GPT to provide detailed textual descriptions of images. Once these descriptions are obtained, they are inserted back into the data collection, replacing the positions originally occupied by the images. This process ensures a seamless transition from visual to textual data representation in the dataset.. 3.3 Chunking In the 'Unstructured Core Library,' essential for document processing in RAG contexts, the 'chunk_by_title' function is noteworthy for its methodical segmentation of documents into distinct subsections, identifying titles as section markers (Unstructured, 2023). 
Notably, it treats elements like tables and images as separate sections. The inclusion of the 'multi-page_sections' parameter is significant, facilitating the formation of multi-page sections that maintain thematic continuity. Unlike common practices, the 'combine_text_under_n_chars' parameter set to zero allows each text piece, regardless of length, to be recognized as an individual section, preserving the document's detailed structure. The default 'new_after_n_chars' parameter relies on the function\u2019s internal logic for starting new sections. The 'max_characters' parameter, adjusted to 4096, accommodates larger sections, tailored to the specific requirements of the document structure and content 3.4 Vector Database construction By leveraging OpenAI's \"text-embedding-ada-002\" model via API, embedding vectors are generated that correspond to specific content. This involves transforming data, initially partitioned into chunks through a preceding chunking process, into vector formats. The utilization of the \"text-embedding-ada-002\" model is pivotal in enabling large language models to locate content in our dataset that aligns with the given input prompt. The resultant vector data are then stored in Pinecone's vector database, where the feature vectors maintain a dimensionality of 1536. This strategic configuration significantly enhances the database's ability to conduct similarity searches and offers notable advantages in data storage capacity. The application of the \"text-embedding-ada-002\" model thus integrates OpenAI's advanced natural language processing prowess with Pinecone's efficient vector data management, providing a powerful and versatile solution for text search and analysis purposes. 4 Experiments and Discussion In this segment of the research, we have selected one scholarly papers in Chinese and another in English, along with one book in each language, to evaluate the efficacy of the methodologies employed in this study and the performance of the Retrieval-Augmented Generation (RAG) technique. These papers and books include textual, pictorial, and tabular elements. These two categories represent the predominant forms of publicly released documents at present. Papers are commonly available in an editable PDF format, whereas publicly released books are often found in scanned or image-based PDF formats.The specifics of the documents and books utilized for testing are detailed in Table 1. 4.1 Data Processing Results 4.1.1 Results of Text Processing Results The processing results for text information are displayed in Figure 2 and 3, featuring four distinct text blocks from the test papers and books: two in Chinese and two in English. The outcomes are evident in the \"Title\" and \"Cleaned Text\" sections. Upon converting all documents to the `.docx` format and applying the prescribed process, the methodology proficiently identifies \"Title\" across various text types and performs comprehensive text cleaning and organization. This underscores the method's robustness in managing different data structures and multiple languages. 
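For concreteness, the sketch below condenses the pipeline of Sections 3.2-3.4 (partitioning the .docx file, chunking by title, embedding, and upserting to Pinecone). It assumes the open-source `unstructured`, `openai`, and `pinecone` clients; exact call signatures vary between library versions, the element cleaning and table-to-HTML steps are omitted for brevity, and the file name, index name, and API-key handling are placeholders.

```python
# Condensed sketch of the parsing-to-vector-database pipeline (Sections 3.2-3.4).
from unstructured.partition.docx import partition_docx
from unstructured.chunking.title import chunk_by_title
from openai import OpenAI
from pinecone import Pinecone

elements = partition_docx(filename="wastewater_handbook.docx")        # parse .docx
keep = [e for e in elements if e.category not in ("Header", "Footer")]  # drop headers/footers

chunks = chunk_by_title(
    keep,
    multipage_sections=True,       # allow thematically continuous multi-page sections
    combine_text_under_n_chars=0,  # keep every titled piece as its own section
    max_characters=4096,           # larger sections, as in Section 3.3
)

client = OpenAI()
index = Pinecone(api_key="...").Index("tiangong-rag")  # placeholder 1536-dim index

for i, chunk in enumerate(chunks):
    emb = client.embeddings.create(
        model="text-embedding-ada-002", input=chunk.text
    ).data[0].embedding
    index.upsert(vectors=[{"id": f"chunk-{i}", "values": emb,
                           "metadata": {"text": chunk.text}}])
```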
Table 1 Information of papers and books Type Title Page Count Language Paper Full-scale upgrade activated sludge to continuous-flow aerobic granular sludge Implementing microaerobic-aerobic configuration with internal separators 12 English \u63d0\u8d28\u589e\u6548\u80cc\u666f\u4e0b\u6392\u6c34\u7ba1\u7f51\u68c0\u6d4b\u6280\u672f\u7684 \u5e94\u7528\u4e0e\u603b\u7ed3 8 Chinese Book Modelling plastic flows in the European Union value chain 132 English \u6c61\u6c34\u5904\u7406\u8bbe\u5907\u64cd\u4f5c\u7ef4\u62a4\u95ee\u7b54 369 Chinese Fig. 2 Text Processing Results Instances of papers: (a) and (c) are instances of original texts from English and Chinese papers, respectively,while (b) and (d) represent the results of the segmentation into chunks. Fig. 3 Text Processing Results Instances of books: (a) and (b) are instances of original texts from English and Chinese books, respectively,while (c) and (d) represent the results of the segmentation into chunks. 4.1.2 Results of Image Processing Results The results of transforming images into textual descriptions using LLM are presented in Table 2. This research employs an embedding method that leverages the GPT 4.0 LLM to convert images into text, thereby preserving the completeness of the information. The findings indicate that the key information in both English and Chinese images can be effectively extracted. However, due to the model's limited support for Chinese elements, images containing Chinese require additional inputs such as captions or related information to improve the model\u2019s recognition accuracy and efficacy, preventing ineffective identifications. Table 2 Image processing results NO. Original Image Cleaned Text in Chunks 1 2 3 4 4.1.3 Results of Table Processing Results In the process of data handling, table processing presents significant challenges as tables often contain extensive parameter and comparative analysis information. Such information significantly enhances a LLM's capabilities in data understanding, pattern recognition, and knowledge integration, thereby improving the accuracy and relevance of text generation. In this study, we employed the \"text_as_html\" method to handle tabular data, with the results displayed in table 3.The corresponding text, rendered as an HTML document, appears as demonstrated in Figure 4.Our analysis indicates that the sections of tables within chunks are expressed in HTML syntax, allowing the saved HTML files to accurately restore the original structure and hierarchy of the tables when opened, ensuring the correct identification and extraction of information. Table 3 Table processing results NO. Original Table Cleaned text in Chunks 1 2 3 Fig. 4 Results of tables elements in chunks converted to html file 4.2 Zero-shot Question Answering Results under RAG To evaluate the effectiveness of vector knowledge bases constructed using the methodologies outlined in this study for enhancing the expertise of large language models, GPT 4.0 was employed to process the papers and books utilized in this research. A set of fifty questions was randomly generated, focusing on the content of the selected documents. Subsequently, three questions in English and two in Chinese were randomly chosen for testing purposes. GPT 4.0 was then tasked with scoring the responses obtained from these tests, providing an objective measure of the effectiveness of the vector knowledge bases in augmenting the domain-specific knowledge of the language model across different languages. 
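A rough sketch of the retrieval-augmented answering step used in this evaluation is given below; the prompt wording, model identifiers, and `top_k` value are assumptions, and client call details depend on the openai and pinecone SDK versions. The same chat call can be reused with a grading prompt to obtain the GPT 4.0 scores reported in Tables 4 and 5.

```python
# Rough sketch of retrieval-augmented question answering against the Pinecone index.
from openai import OpenAI
from pinecone import Pinecone

def answer_with_rag(question, top_k=5):
    client = OpenAI()
    index = Pinecone(api_key="...").Index("tiangong-rag")  # placeholder name
    q_emb = client.embeddings.create(
        model="text-embedding-ada-002", input=question
    ).data[0].embedding
    hits = index.query(vector=q_emb, top_k=top_k, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in hits.matches)
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```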
The results of the English and Chinese assessments are presented in Tables 4 and 5, respectively, offering a clear overview of the performance of the vector knowledge bases in enhancing the expertise of GPT 4.0. Table 4 Zero-shot question answer results in English NO. Question and answer Scores 1 Question1\uff1aExplain how the \"Transfer Coefficients\" (TCs) are used to simulate plastic flows in the form of a paragraph? Answer by GPT 4.0 75/100 Answer by RAG 95/100 NO. Question and answer Scores 2 Question2\uff1aWhich predefined scenarios showed the greatest potential improvement when assessing the 2025 plastic recycling targets? Answer by GPT 4.0 60/100 Answer by RAG 95/100 NO. Question and answer Scores 3 Question3\uff1aHow did the microaerobic-aerobic configuration impact the microbial community structure and pollutant removal pathways? Answer by GPT 4.0 85/100 Answer by RAG 95/100 Table 5 Zero-shot question answer results in Chinese NO. Question and answer Scores 1 Question1\uff1a\u51e0\u79cd\u5e38\u7528\u6811\u8102\u518d\u751f\u5242\u7684\u9002\u7528\u5bf9\u8c61\u3001\u6d53\u5ea6\u8303\u56f4\u53ca\u76f8\u5bf9\u7528\u91cf\u662f\u591a \u5c11\uff1f Answer by GPT 4.0 80/100 Answer by RAG 95/100 NO. Question and answer Scores 2 Question2\uff1a\u5728\u6392\u6c34\u7ba1\u7f51\u68c0\u67e5\u4e2d\u7535\u78c1\u68c0\u67e5\u6cd5\u6709\u54ea\u4e9b\u5e94\u7528\u6848\u4f8b\uff1f Answer by GPT 4.0 75/100 Answer by RAG 90/100 The results presented in this study provide compelling evidence that vector knowledge bases constructed using the methodologies described herein can significantly enhance the ability of large language models to acquire and apply domain-specific information. This improvement is manifested across several critical dimensions, including clarity, specificity, accuracy, technical depth, and comprehensiveness. By effectively augmenting the knowledge acquisition process, these vector knowledge bases enable language models to generate responses of substantially higher quality, demonstrating their efficacy in improving the performance of large language models in specialized domains. These findings underscore the potential of vector knowledge bases as a powerful tool for enhancing the accuracy and relevance of language model outputs in domain-specific contexts, paving the way for more effective and efficient natural language processing applications in various specialized fields. Conclusion The methodologies developed in this study significantly enhance the capability of LLMs to leverage domain-specific knowledge through the construction of vector knowledge bases. Our experiments demonstrate the effectiveness of the RAG approach, where LLMs, equipped with these bases, show substantial improvements in generating precise, relevant, and contextually rich responses. This advancement is particularly evident in the environmental science and wastewater treatment sectors, where the integration of vector databases enables the detailed understanding and management of complex data. The successful application of these methods promises a broader utility of LLMs, paving the way for more sophisticated natural language processing applications in various specialized fields. This research not only validates the feasibility of enhancing LLMs performance with structured vector databases but also sets a foundation for future innovations in AI-driven data processing and analysis in environmental engineering."
16
+ }
intro_8K/test_introduction_short_2405.04003v1.json ADDED
@@ -0,0 +1,17 @@
+ {
2
+ "url": "http://arxiv.org/abs/2405.04003v1",
3
+ "title": "High Energy Density Radiative Transfer in the Diffusion Regime with Fourier Neural Operators",
4
+ "abstract": "Radiative heat transfer is a fundamental process in high energy density\nphysics and inertial fusion. Accurately predicting the behavior of Marshak\nwaves across a wide range of material properties and drive conditions is\ncrucial for design and analysis of these systems. Conventional numerical\nsolvers and analytical approximations often face challenges in terms of\naccuracy and computational efficiency. In this work, we propose a novel\napproach to model Marshak waves using Fourier Neural Operators (FNO). We\ndevelop two FNO-based models: (1) a base model that learns the mapping between\nthe drive condition and material properties to a solution approximation based\non the widely used analytic model by Hammer & Rosen (2003), and (2) a model\nthat corrects the inaccuracies of the analytic approximation by learning the\nmapping to a more accurate numerical solution. Our results demonstrate the\nstrong generalization capabilities of the FNOs and show significant\nimprovements in prediction accuracy compared to the base analytic model.",
5
+ "authors": "Joseph Farmer, Ethan Smith, William Bennett, Ryan McClarren",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "physics.comp-ph",
9
+ "cats": [
10
+ "physics.comp-ph",
11
+ "cs.LG"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Marshak waves, a common type of driven supersonic radiative heat waves, play a key part in the physics of internal confinement fusion (ICF) [1\u20134], astrophysics [5\u20137] and other high energy density phenomena [8]. In most cases, a full description of the radiative transfer process is not required. Therefore, approximations are in order. The diffusion approximation is one of these and is considered the simplest [9]. In some cases, analytic solutions to the radiation diffusion equation can be useful in understanding experiments [10\u201316]. These analytic or semi-analytic models can be thought of as a reduced order approximation of the full system, which is itself a simpli- fication. As examples, [10] reduces a two dimensional diffusion system via asymptotic expansion. The diffusion system is an approximation to higher order radiation trans- port equations. Marshak, the namesake of these waves, reduced a partial differential equation (PDE) into an ordinary differential equation (ODE) [13, 14]. Reduced order solutions have the benefit of simpler calculation, as solving an ODE is usually preferable to solving a PDE, and they can be interrogated to clarify physical relationships between parameters. However, coming to a semi-analytic or analytic solution often involves invoking simplifications which may debase the accuracy of the prediction. Thus, the motive for this inquiry is to take a widely used and appreciated semi-analytic diffusion model, the Hammer and Rosen Marshak wave model (HR) [11], and provide a correction to the model\u2019s limiting assumptions in a computationally efficient manner. Classical numerical solvers such as finite difference, finite element, or finite volume methods discretize continuous equations into a finite set of algebraic equations [17\u2013 22]. These numerical solvers can be computationally expensive for high dimensional problems and for domains with complex geometries. In recent years, approaches that leverage ML have garnered support to alleviate these challenges [23\u201325]. In particular, neural operators, a class of ML models, have emerged as a promising solution to these challenges. These operators learn mappings between infinite-dimensional function spaces, effectively approximating differential or integral operators that govern PDEs in a data driven manner [26, 27]. One of the key advan- tages of neural operators is that they only need to be trained once to learn a family of PDEs, and obtaining a solution for a new instance of a PDE parameter requires only a forward pass of the network. Furthermore, neural operators are discretization- invariant as they share network parameters across discretizations, allowing for the transfer of solutions between meshes. The Fourier neural operator (FNO) [28] is a seminal neural operator that learns network parameters in Fourier space. The FNO uses fast Fourier transform (FFT) for spectral decomposition of the input and computation of the convolution integral kernel in the Fourier space. This approach has shown promising results in learning the underlying physics of various PDEs including Burgers, Darcy, and Navier-Stokes equations. In this work, we propose to use FNO to learn the physics of Marshak waves for various input-output pairs. 
We develop two models: a base model which takes the physical parameters of the Marshak wave problem as input and outputs the time-dependent wavefront position and temperature distribution as given by the HR model, and a hybrid approach which corrects the analytic HR solution to output the numerical solution to the full flux-limited diffusion equation. The structure of this paper is as follows. The diffusion model for Marshak waves is introduced in Section 2. Hammer and Rosen\u2019s approximation is summarized in Section 3. The neural network that is employed to correct the HR model is discussed in Section 4. Finally, results and conclusions are offered in Sections 5 and 6.",
16
+ "main_content": "We study radiation diffusion in planar geometry, which assumes variation of the dependent variables only in a single direction, x. The evolutions of the radiation and material energy density are governed by [29], \u2202er \u2202er \u2202t = \u2202 \u2202x \u2202 \u2202x c 3\u03ba(\u03c1, c 3\u03ba(\u03c1, T) \u2202er \u2202x \u2202er \u2202x + c\u03ba(aT 4 \u2212er), (1) \u2202e \u2202e \u2202t = c\u03ba(e \u2212aT 4) (2) where, er is the energy density of the radiation and e is the energy density of the material. c is the speed of light, \u03ba is the opacity with units of inverse length, a is the radiation constant, defined a \u22614\u03c3 c where \u03c3 is the Stefan-Boltzmann constant. T is the material temperature and \u03c1 is the material density. radiation constant, defined a \u22614\u03c3 c where \u03c3 is the Stefan-Boltzmann constant. T is the material temperature and \u03c1 is the material density. A Marshak boundary condition will specify the incoming radiation flux [29], er(x = 0, t) \u2212 \ufffd2 3\u03ba nt flux on the surf 2 3\u03ba \u2202er \u2202x \u2202x \ufffd\ufffd \ufffd \ufffd \ufffd x=0 = 4 c at x = 0. Th 4 c Finc. (3) \ufffd \ufffd\ufffd \ufffd \ufffd \ufffd where Finc is the incident flux on the surface at x = 0. The material energy density is found via integration of the specific heat, e = \ufffdT 0 T 0 dT \u2032 Cv(T \u2032). (4) Solutions to Eq. (1) in the optically thick limit are recognizable by sharp drops in temperature near the wavefront and gradual temperature variation behind the front. This is because the radiation temperature and material temperature are in equilibrium behind the wavefront. Thus, is often valid to assume equilibrium between the radiation temperature and and material temperature, i.e. er = aT 4. This assumption simplifies Eqs. (1) and (2) to a single equation for the material temperature, \u2202e \u2202e \u2202t = 4 3 4 3 \u2202 \u2202x \u2202 \u2202x 1 \u03ba(\u03c1, \u03ba(\u03c1, T) \ufffd\u2202 \u2202x \u2202 \u2202x\u03c3T 4 \ufffd (5) with the boundary condition at the surface, T(x = 0, t) = Ts(t). (6) 3 Furthermore, the equation of state is specified so that, e = fT \u03b2\u03c1\u2212\u00b5, (7) This is the formulation given in [11]. The parameters f, \u03b2, \u00b5 are found by fitting experimental data, as in [30]. 3 Hammer and Rosen approximation The Hammer and Rosen model for supersonic thermal radiation diffusion is a perturbative, semi-analytic, one dimensional solution to the diffusion equation under mild limiting assumptions. In particular, this model assumes planar geometry, power law representations for the opacity, 1 K = gT \u03b1\u03c1\u2212\u03bb, and material internal energy, e = fT \u03b2\u03c1\u2212\u00b5, and a constant density. These assumptions transform Eq. (5) into, \u03c1\u2202e \u2202t = 4 3 \u2202 \u2202x \u0012 1 K\u03c1 \u2202 \u2202x\u03c3T 4 \u0013 , (8) where \u03c1 is the material density, e is the internal energy, \u03c3 is the Stefan-Boltzmann constant, and T is the radiation temperature. The application of these assumptions and some simplification leads to the expression \u2202T \u03b2 \u2202t = C \u22022 \u2202x2 T 4+\u03b1 (9) where our constants are collected into the term C = 4 4 + \u03b1 4 3 1 f g\u03c1\u00b5\u22122\u2212\u03bb (10) This model predicts the position of the wave front as a function of time as the solution to an integral expression, then provides an explicit expression for the temperature profile in the material. 
The model can accommodate an arbitrary radiation temperature boundary condition. The Hammer and Rosen model gives the position of the wavefront, xf, as x2 f (t) = 2 + \u03f5 1 \u2212\u03f5CT \u2212\u03b2 s Z t 0 T 4+\u03b1 s d\u02c6 t (11) where Ts is the boundary temperature, \u03f5 = \u03b2 4+\u03b1 is a combination of terms from the power laws, and xf is the heat front position as a function of time, t. With knowledge of the wavefront position a simple expression can be evaluated for the temperature profile: T 4+\u03b1 T 4+\u03b1 s (x, t) = \u0014\u0012 1 \u2212x xf \u0013 \u0012 1 + \u03f5 2 \u0012 1 \u2212 x2 f CH2\u2212\u03f5 dH dt \u0013 x xf \u0013\u00151/(1\u2212\u03f5) . (12) Here H = T 4+\u03b1 s . One hallmark of this approximate solution is that it is very inexpensive to evaluate. In practice, and when compared to computing a numerical solution, 4 this method is effectively immediate. For this reason, it has proven to be particularly helpful for rapid iteration during the design process. 4 Fourier neural operator model We now turn to the consideration of producing a machine learning model to compute Marshak wave solutions. For this task we turn to the Fourier Neural Operator. In this section we use standard notation from the ML literature; regrettably, this overlaps with the standard notation for Marshak waves at times. g f \u00c6 \u00d8 \u220f \u00b5 \u03a9 Parameters 1.0 \u00a3 10\u00b04 1.0 \u00a3 10\u00b02 1.0 \u00a3 100 1.0 \u00a3 102 Values 0 1 2 3 t (ns) 0.000 0.045 0.090 0.135 0.180 0.225 0.270 xf (cm) 0.00 0.02 0.04 0.06 xf (cm) 0 1 2 T (HeV) 0 1 2 3 t (ns) 1.0 1.5 2.0 2.5 3.0 3.5 4.0 T (HeV) P Fourier layer 1 Fourier layer 2 Fourier layer l Q a(x) u(x) v(x) F R F\u22121 Fourier layer W + \u03c3 Fig. 1: Fourier neural operator architecture for solving the Marshak wave problem. The input function a(x) is projected to a higher representation v0(x) by the projection layer P. This is then processed through l iterations of Fourier layers. Each Fourier layer consists of a Fourier transform F that maps vi(x) to the Fourier domain, multiplication with the weight tensor R and filtering of higher Fourier modes, and an inverse Fourier transform F\u22121 to return to the spatial domain. The output is linearly transformed by W and passed through a nonlinear activation function \u03c3. This is added to the previous Fourier layer\u2019s output to produce the updated representation vi+1(x). After l layers, the final representation vl(x) is mapped to the output solution u(x). The boundary temperature drive (top left) and parameters (bottom left) represent the input functions and the front position (top right) and temperature distribution (bottom right) represent the output functions for the Marshak wave problem The primary goal of an operator G is to establish a mapping between infinitedimensional spaces from a finite collection of input-output pairs, denoted as A = A(Rda) \u2282Rda and U = U(Rdu) \u2282Rdu, respectively. Following from [28, 31], consider a partial differential equation (PDE) which maps input function spaces to an output solution space. For a given domain D \u2282Rd with boundary \u2202D, and x \u2208D, an operator would map source terms, f(x, t) : D \u2192R, boundary conditions, u(\u2202D, t) : D \u2192R, and initial conditions u(x, 0) : D \u2192R, to the solution space u(x, t) : D \u2192R, where t is time. In the present work, we aim to learn the nonlinear differential operator G : A \u2192U for various sets of input parameters a \u2208A in the Marshak wave problem. 
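Because the Hammer and Rosen expressions above serve both as the base model's target and as an input to the correction model described later, a minimal NumPy sketch of Eqs. (10)-(12) for a constant drive temperature (so dH/dt = 0) is given here; the material values in the example are illustrative placeholders, not a validated implementation.

```python
# Minimal sketch of the Hammer-Rosen front position and temperature profile
# (Eqs. 10-12) for a constant drive temperature Ts, where dH/dt = 0.
import numpy as np

def hr_front_and_profile(t, Ts, g, f, alpha, beta, lam, mu, rho, nx=100):
    eps = beta / (4.0 + alpha)
    # Eq. (10): collected constant C
    C = (4.0 / (4.0 + alpha)) * (4.0 / 3.0) * (g / f) * rho ** (mu - 2.0 - lam)
    # Eq. (11) with Ts constant: the time integral reduces to Ts**(4+alpha) * t
    xf = np.sqrt((2.0 + eps) / (1.0 - eps) * C * Ts ** (4.0 + alpha - beta) * t)
    # Eq. (12) with dH/dt = 0: T/Ts = (1 - x/xf)**(1 / ((1 - eps)(4 + alpha)))
    x = np.linspace(0.0, xf, nx)
    T = Ts * (1.0 - x / xf) ** (1.0 / ((1.0 - eps) * (4.0 + alpha)))
    return xf, x, T

# Example with SiO2-like power-law fits (order-of-magnitude placeholders)
xf, x, T = hr_front_and_profile(t=3.0, Ts=1.0, g=1 / 9175, f=8.77,
                                alpha=3.53, beta=1.1, lam=0.75, mu=0.09, rho=0.05)
```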
5 By constructing a parametric map G : A \u00d7 \u0398 \u2192U, the optimal parameter \u03b8 \u2208\u0398 can be approximated with data-driven methods to adjust \u03b8 such that G(\u00b7, \u03b8) approaches the target map G. Classical numerical solvers, be it finite elements, finite differences, or many modern data-driven and physics-informed neural networks attempt to learn the output function u(x, t) which satisfies G for a single instance of input parameter a and can be computationally prohibitive, especially when the solution for the PDE is required for many instances of the parameter. On the other hand, Fourier neural operators (FNO) have been developed to approximate G directly so that solutions to a family of PDEs are realized for different sets of a, thereby enhancing computational efficiency and practical utility. In general, input and output functions a and u are continuous, however, we assume to know only point-wise evaluations. To that end, the problem at hand can be described using the n-point discretization of D, Dj = {x1, . . . , xn} \u2282D with observations of input-output pairs indexed by j \b aj \u2208Rn\u00d7da, uj \u2208Rn\u00d7du\tN j=1, and uj = G(aj). The neural operator to learn the input-output mapping is an iterative architecture. First, the input a(x, t) is transformed to a higher dimensional representation by v0(x) = P(a(x)) where the transformation P(a(x)) : Rda 7\u2192Rdv. In this framework, a shallow fully connected network can achieve this desired transformation. Next a series of l updates vi 7\u2192vi+1 are performed vi+1(x) := \u03c3 (Wvi(x) + (K(a; \u03d5)vi) (x)) , \u2200x \u2208D. (13) with nonlinear activation function \u03c3(\u00b7) : R 7\u2192R and a linear transformation W : Rdv 7\u2192Rdv. Each vi is a dv-dimensional real vector in Rdv. For a vector input x = [x1, x2, . . . , xdv]T \u2208Rdv, \u03c3(x) is applied element-wise, resulting in [\u03c3(x1), \u03c3(x2), . . . , \u03c3(xdv)]T . The integral kernel operator K : A \u00d7 \u03b8 \u2192L(U, U) is parameterized by \u03d5 \u2208\u0398K (K(a; \u03d5)vi) (x) := Z D \u03ba\u03d5(x, y, a(x), a(y); \u03d5)vi(y)dy, \u2200x \u2208D. (14) where \u03ba\u03d5 : R2(d+da) \u2192Rdv\u00d7dv is a neural network parameterized by \u03d5 \u2208\u0398K. After all iterations, a transformation function u(x) = Q (vl(x)) moves vl(x) into the solution space Q (vl(x)) : Rdv 7\u2192Rdu. This approach extends the idea of neural networks to operate on infinite-dimensional function spaces, enabling the learning of mappings between such spaces from finite data samples. By leveraging neural operators, it becomes possible to approximate the nonlinear operators that govern the relationships between infinite-dimensional input and output function spaces, such as those arising in the context of partial differential equations. The FNO is a specific neural operator architecture designed for such nonlinear mappings. It replaces the kernel integral operator in by a Fourier convolution operator F\u22121 (F (\u03ba\u03d5) \u00b7 F (vi)) (x), and applying the convolution theorem. The Fourier kernel integral operator becomes (K(\u03d5)vi) (x) = F\u22121 (R\u03d5 \u00b7 (Fvi)) (x), \u2200x \u2208D, 6 where F is the Fourier transform of a function and F\u22121 is its inverse transform, R\u03d5 is the Fourier transform of a periodic function \u03ba parameterized by \u03d5 \u2208\u0398K. Given that \u03ba is periodic and can be represented by a Fourier series expansion, only discrete modes are considered k \u2208Zd. 
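The sketch below illustrates one such Fourier layer in one dimension, using the mode truncation and mode-wise weight multiplication detailed next (Eqs. (15)-(17)); it is written in plain NumPy for readability, a trainable version would use an ML framework, and the shapes and random initialisation are assumptions.

```python
# Sketch of a single 1-D Fourier layer (Eq. 13 with the Fourier kernel operator):
# FFT, keep the lowest k_max modes, multiply by complex weights R, inverse FFT,
# add a pointwise linear path W, then apply the activation.
import numpy as np

def fourier_layer(v, R, W, k_max):
    """v: (n, d_v) real field on a uniform grid.
    R: (k_max, d_v, d_v) complex weights; W: (d_v, d_v) real weights."""
    n = v.shape[0]
    v_hat = np.fft.rfft(v, axis=0)                 # (n//2 + 1, d_v) complex modes
    out_hat = np.zeros_like(v_hat)
    # Multiply the retained modes by R (Eq. 15); higher modes are discarded
    out_hat[:k_max] = np.einsum("kij,kj->ki", R, v_hat[:k_max])
    spectral = np.fft.irfft(out_hat, n=n, axis=0)  # back to physical space
    return np.maximum(spectral + v @ W.T, 0.0)     # sigma(W v + K(v)) with ReLU

# Toy usage
n, d_v, k_max = 100, 8, 12
v = np.random.randn(n, d_v)
R = (np.random.randn(k_max, d_v, d_v) + 1j * np.random.randn(k_max, d_v, d_v)) / d_v
W = np.random.randn(d_v, d_v) / d_v
v_next = fourier_layer(v, R, W, k_max)
```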
To create a finite dimensional representation, the Fourier series is truncated at a maximum number of modes kmax = |{k \u2208Zd : |kj| \u2264kmax,j for j = 1, . . . , d}|. In a discretized domain D with n \u2208N points, vi \u2208Rn\u00d7dv and F(vi) \u2208Cn\u00d7dv is obtained, here C represents the complex space. A convolution of vi with a function that has kmax Fourier modes gives F(vi) \u2208Ckmax\u00d7dv . Then the multiplication with the weight tensor R \u2208Ckmax\u00d7dv\u00d7dv is (R \u00b7 (Fvi))k,l = X j=1 Rk,l,j (Fvi)k,j , k = 1, . . . , kmax, j = 1, . . . , dv (15) With uniform discretization and resolution s1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 sd = n, Fast Fourier Transform (FFT) can replace F. For f \u2208Rn\u00d7dv, k = (k1, . . . , kd) \u2208Zs1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Zsd, and x = (x1, . . . , xd) \u2208D, the FFT \u02c6 F and its inverse \u02c6 F\u22121 are defined as ( \u02c6 Ff)l(k) = s1\u22121 X x1=0 \u00b7 \u00b7 \u00b7 sd\u22121 X xd=0 fl (x1, . . . , xd) e \u22122i\u03c0 Pd j=1 xj kj sj , (16) \u0010 \u02c6 F\u22121f \u0011 l (x) = s1\u22121 X k1=0 \u00b7 \u00b7 \u00b7 sd\u22121 X kd=0 fl (k1, . . . , kd) e 2i\u03c0 Pd j=1 xj kj sj . (17) Finally, since Eq. (13) follows standard neural network structures training a network training is done with an appropriate loss function L = U \u00d7 U \u0398 = arg min \u0398 (L(G(a), G(a, \u0398)). (18) A schematic representation of the Fourier Neural Operator model for the Marshak wave problem is provided in Figure 1. 5 Results 5.1 Problem description and parameter space The Marshak waves we consider concern the propagation of heat waves through lowdensity foam cylinders or other materials driven by a hohlraum similar to those described in [30, 32]. Key parameters in these experiments include density, drive energy and radiation temperature, which typically can range from 100 to 300 eV. Xray imaging is used to track the heat wave, while diagnostic tools measure the flux breaking through the foam edge. The experiments cover a wide range of temperatures, materials, and densities. 7 Table 1, adapted from [30], presents material properties used in various Marshak wave experiments. The first ten rows contain parameters for the foams, while the last two rows provide parameters for coating materials. For each material, the numerical parameters were fitted in relevant experimental regimes. Further details about the experiments can be found in [30] and references cited therein. Table 1: Material properties for various Marshak wave experiments Experiment Foam g \u0000g/cm2\u0001 f (MJ) \u03b1 \u03b2 \u03bb \u00b5 \u03c1 \u0000g/cm3\u0001 Massen C11H16Pb0.3852 1/3200 10.17 1.57 1.2 0.1 0 0.080 Xu pure C6H12 1/3926.6 12.27 2.98 1 0.95 0.04 0.05 Xu with copper C6H12Cu0.394 1/7692.9 8.13 3.44 1.1 0.67 0.07 0.05 Back, Moore SiO2 1/9175 8.77 3.53 1.1 0.75 0.09 0.05 Back Ta2O5 1/8433.3 4.78 1.78 1.37 0.24 0.12 0.04 Back low energy SiO2 1/9652 8.4 2.0 1.23 0.61 0.1 0.01 Moore C8H7Cl 1/24466 14.47 5.7 0.96 0.72 0.04 0.105 Keiter Pure C15H20O6 1/26549 11.54 5.29 0.94 0.95 0.038 0.065 Keiter with Gold C15H20O6Au0.172 1/4760 9.81 2.5 1.04 0.35 0.06 0.0625 Ji-Yan C8H8 1/2818.1 21.17 2.79 1.06 0.81 0.06 0.160 Au 1/7200 3.4 1.5 1.6 0.2 0.14 0.160 Be 1/402.8 8.81 4.89 1.09 0.67 0.07 0.160 Numerical approximations for solving the Marshak wave problem can be computationally expensive, especially when exploring a wide range of material properties. 
To overcome this challenge, we propose using the Fourier Neural Operator (FNO) to learn the mapping between material properties and their corresponding Marshak wave solutions. FNOs have shown success in solving partial differential equations by learning the solution operator from a dataset of input-output pairs. To train the FNO model, we generate a dataset that spans the parameter space defined by the material properties in Table 1. The input consists of a set of material properties, (g, f, \u03b1, \u03b2, \u03bb, \u00b5, \u03c1), while the output corresponds to the solution of the Marshak wave problem in terms of the temperature profile and wave front position at a given time. We create a uniformly spaced grid of values for each material property, covering the range of values found in the experiments: In Table 2, N is the number Table 2: Parameter ranges for generating training data Parameter Range Number of grid points g [min(g), max(g)] N (log-spaced) f [min(f), max(f)] N \u03b1 [min(\u03b1), max(\u03b1)] N \u03b2 [min(\u03b2), max(\u03b2)] N \u03bb [min(\u03bb), max(\u03bb)] N \u00b5 [min(\u00b5), max(\u00b5)] N \u03c1 [min(\u03c1), max(\u03c1)] N 8 of grid points for each parameter. For the g parameter, we use logarithmically spaced values to better capture its wide range, while the other parameters are linearly spaced. In addition to the material properties, the Marshak wave problem also depends on the boundary temperature (i.e., the drive temperature). We parameterize the drive with a function Tb(t, a, b, c, d), measured in HeV, defined as follows Tb(t, a, b, c, d) = a + (b(t \u2265c)(t \u2212c))(t < d) + (t \u2265d)(b(d \u2212c)). (19) Here t is time (in ns), and a \u2208[1, 3], b \u2208[0, 1], c \u2208[0.1, 2], and d \u2208[2, 5]. The function consists of a constant term a, and a piecewise function that takes different values based on the conditions involving t, c, and d. We generate a set of boundary temperature functions by sampling the parameters a, b, c, and d from their respective ranges. To create the training set, we take the Cartesian product of the material property values and the boundary temperature function parameters and obtain a set of input parameter combinations that cover the entire parameter space. For each input combination, we solve the Marshak wave problem using a numerical solver to obtain the corresponding output solution. These input-output pairs form our training dataset, which we use to train the FNO model. As will be seen, by learning from this diverse set of input-output pairs, the FNO can effectively capture the underlying physics of the Marshak wave problem across the entire parameter space, including the dependence on the boundary temperature function. This allows the trained model to quickly and accurately predict solutions for new, unseen combinations of material properties and boundary temperature functions within the specified ranges. 5.2 Base model As a starting point, we introduce a base model that takes all material properties and boundary temperature function parameters as inputs and uses the Hammer and Rosen approximation as the output. The Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, which serves as a useful benchmark for evaluating the performance of our FNO model. Figure 2 compares the temperature solutions of the Marshak wave in space for three different boundary temperature functions. 
The boundary temperature functions, shown in Figure 2a, are generated by varying the parameters a, b, c, and d in Equation 19. The corresponding temperature solutions, obtained using both the Hammer and Rosen approximation and the FNO model, are presented in Figure 2b. The results demonstrate good agreement between the FNO model and the Hammer and Rosen approximation for all three boundary temperature functions. This indicates that the FNO model is capable of accurately capturing the physics of the Marshak wave problem and reproducing the analytical solutions provided by the Hammer and Rosen approximation. 5.3 Hammer and Rosen Correction model While the Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, it suffers from inaccuracies due to the assumptions made in 9 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 1.0 1.5 2.0 2.5 3.0 3.5 4.0 T (HeV) Tb1 Tb2 Tb3 (a) Temperature Drive 0.00 0.25 0.50 0.75 1.00 x (cm) 0.0 0.5 1.0 1.5 2.0 2.5 3.0 T (HeV) Tb1 Tb2 Tb3 HR FNO (b) Temperature profile at 3 ns Fig. 2: Comparison of the Hammer and Rosen approximation and the FNO model for a representative material under different boundary temperature drives (a) are characterized by a constant temperature followed by a linear ramp at different times and rates. The corresponding temperature solutions (b) obtained from the Hammer and Rosen approximation (solid lines) and the FNO model (dashed lines) show close agreement. its derivation, Section 3. These inaccuracies become apparent when comparing the Hammer and Rosen solution to more accurate numerical solvers, such as diffusion based methods, and experimental results. To address this issue, we introduce the Hammer and Rosen Correction model, which aims to improve the accuracy of the Hammer and Rosen approximation using FNO. The Hammer and Rosen Correction model is built similarly to the base model but takes the Hammer and Rosen solution for the temperature and the front position as additional inputs. The outputs are generated using a more accurate diffusion solution, and the FNO learns to map the Hammer and Rosen solution to the diffusion solution. By doing so, the Hammer and Rosen Correction model effectively corrects the inaccuracies of the Hammer and Rosen approximation and provides a more accurate prediction of the Marshak wave behavior. Figure 3 illustrates in a parallel axis plot the input parameter values for four different test cases used to evaluate the Hammer and Rosen Correction model. Each line represents a specific test case, with the values of the parameters plotted along the y-axis for each parameter on the x-axis. The boundary temperature drive is given with parameters a = 1.2, b = 0.8, c = 1, and d = 2 for Eq. (19). The output values are produced by a numerical solver we developed to solve radiation diffusion in planar geometry. The solver assumes equilibrium between the radiation temperature and material temperature, reducing Eq. (1) and Eq. (2) to a single equation for the material temperature Eq. (5). The solver employs finite difference method to discretize the spatial domain into a uniform grid. Time integration is performed by the backward differentiation formula, an implicit multi-step method. The spatial derivatives in Eq. (5) are approximated using a second order central difference scheme. The left boundary at the surface (x = 0), Eq. 
(3), is prescribed as a 10 g f \u03b1 \u03b2 \u03bb \u00b5 \u03c1 Parameters 1.0 \u00d7 10\u22124 1.0 \u00d7 10\u22122 1.0 \u00d7 100 1.0 \u00d7 102 Values Case 1 Case 2 Case 3 Case 4 Fig. 3: Parameter values from the test set for four different cases to evaluate the performance of the Hammer and Rosen Correction model function of time and the solver assumes equation of state given by Eq. (7). At each time step, the solver computes the temperature profile across a one-dimensional spatial grid consisting of 100 spatial cells and tracks the position of the wavefront. The Hammer and Rosen correction model is trained and tested using the dataset generated by the numerical solver and the Hammer and solution, paired with the input parameter values. The dataset is split into standard training and testing sets. It is important to note that the testing set contains parameter combinations that may not represent physically realistic scenarios, as they are generated by uniformly sampling the parameter space defined in Table 2. The model is trained using 1.05M input-output pairs, with 58k trainable parameters and is trained over 30 epochs. Figure 4 presents a comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution. The subfigures 4a, 4b, 4c, and 4d show the results for different sets of input parameters. It is evident from the figures that the Hammer and Rosen approximation deviates noticeable from the diffusion solution over time. In contrast, the Hammer and Rosen Correction model accurately predicts the diffusion solution, demonstrating its ability to correct the inaccuracies of the Hammer and Rosen approximation. Figure 5 provides a comparison of the temperature solutions for the same three models. Subfigures 5a, 5b, 5c, and 5d show the temperature profiles at the same time instance. Once again, the Hammer and Rosen Correction model closely matches the diffusion solution, while the Hammer and Rosen approximation exhibits discrepancies. The Hammer and Rosen Correction model both improves the accuracy of the Marshak wave Hammer and Rosen solution and provides a framework for integrating analytical approximations with data-driven approaches. This hybrid approach combines benefits of both analytical and machine learning methods by giving a physical solution to simplify the inference. 11 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 0.000 0.045 0.090 0.135 0.180 0.225 0.270 xf (cm) Di\ufb00usion HR HR Correction (a) Case 1 front position solution 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 0.00 0.07 0.14 0.21 0.28 0.35 0.42 xf (cm) Di\ufb00usion HR HR Correction (b) Case 2 front position solution 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 0.000 0.012 0.024 0.036 0.048 0.060 0.072 xf (cm) Di\ufb00usion HR HR Correction (c) Case 3 front position solution 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 0.0 0.3 0.6 0.9 1.2 1.5 1.8 xf (cm) Di\ufb00usion HR HR Correction (d) Case 4 front position solution Fig. 4: Comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution for different sets of input parameters. The Hammer and Rosen approximation (orange lines), deviates from the diffusion solution (blue lines) over time, while the Hammer and Rosen Correction (dashed green lines) accurately predicts the diffusion solution. 
5.4 Model generalization and performance In the previous sections, we demonstrated the effectiveness of the Hammer and Rosen Correction model in accurately predicting the Marshak wave behavior for unseen data. It is important to note that these tests were performed on collocation points of the spacing grid shown in Table 2. To validate generalization capabilities of FNO, we present additional tests on specific physical materials from Table 1. Figure 6 compares the front position solutions obtained from the diffusion solver and the Hammer and Rosen Correction model for four different materials: C15H20O6Au0.172, Be, C15H20O6, and C6H12 with properties as specified in [30]. These materials were not explicitly included in the training data grid but represent realistic physical scenarios. The subfigures 6a, 6b, 6c, and 6d show excellent agreement between diffusion solutions and the Hammer and Rosen Correction model predictions for all four materials. This demonstrates that the FNO has successfully learned the mapping 12 0.0 0.2 0.4 0.6 0.8 1.0 xf (cm) 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 T (HeV) Di\ufb00usion HR HR Correction (a) Case 1 temperature solution 0.00 0.01 0.02 0.03 0.04 0.05 0.06 xf (cm) 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 T (HeV) Di\ufb00usion HR HR Correction (b) Case 2 temperature solution 0.0 0.2 0.4 0.6 0.8 xf (cm) 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 T (HeV) Di\ufb00usion HR HR Correction (c) Case 3 temperature solution 0.0 0.5 1.0 1.5 2.0 xf (cm) 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 T (HeV) Di\ufb00usion HR HR Correction (d) Case 4 temperature solution Fig. 5: Comparison of the temperature profiles for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution at the same time instance for different sets of input parameters. The Hammer and Rosen approximation (orange line) exhibits discrepancies compared to the diffusion solution (blue line), while the Hammer and Rosen Correction (dashed green lines) closely match the diffusion solution. in the entire parameter space and can accurately predict the Marshak wave behavior for arbitrary material properties within the considered ranges. To quantitatively asses the performance and computational efficiency of the Hammer and Rosen Correction model, we compare it with the base model in Table 3. Both models are trained with the same number of trainable parameters, training data, and epochs to ensure a fair comparison. The mean squared error (MSE) is used as the evaluation metric for both temperature and front position predictions. The results in Table 3 show that the Hammer and Rosen Correction model significantly outperforms the base model in terms of prediction accuracy. The Hammer and Rosen Correction model achieves a 56.16% improvement in temperature MSE and a 13 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 0.00 0.02 0.04 0.06 0.08 0.10 0.12 xf (cm) Di\ufb00usion HR HR Correction (a) C15H20O6Au0.172 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 0.00 0.06 0.12 0.18 0.24 0.30 0.36 xf (cm) Di\ufb00usion HR HR Correction (b) Be 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 0.00 0.04 0.08 0.12 0.16 0.20 0.24 xf (cm) Di\ufb00usion HR HR Correction (c) C15H20O6 0.0 0.5 1.0 1.5 2.0 2.5 3.0 t (ns) 0.00 0.08 0.16 0.24 0.32 0.40 0.48 xf (cm) Di\ufb00usion HR HR Correction (d) C6H12 Fig. 
6: Comparison of the front positions obtained from the Hammer and Rosen approximation (orange lines), diffusion solver (blue lines), and the Hammer and Rosen Correction model (dashed green lines) for four different materials from Table 1. Table 3: Prediction performance and computational costs of deep learning models (MSE is the mean squared error) Parameter HR Correction Base model % Improvement Temperature MSE 0.00081 0.00185 56.16 Front position MSE 0.00807 0.01220 33.93 Train data 1.05M 1.05M Trainable parameters 58k 58k Epochs 30 30 Inference time (s) 0.0032 0.0016 33.93% improvement in front position MSE compared to the base model. This superior performance can be attributed to the hybrid nature of the Hammer and Rosen Correction model. In terms of computational efficiency, the Hammer and Rosen Correction model has a slightly slower inference time than the base model. This is expected due to the additional complexity introduced by the correction step. However, it is important to note that both models have extremely fast inference times, with the Hammer and Rosen Correction model requiring only 0.0032 seconds per prediction and the base model requiring 0.0016 seconds. These fast inference times highlight the efficiency of the FNO-based approach, enabling real-time predictions of the Marshak wave behavior. 6 Conclusion In this work, we presented a novel approach for modeling Marshak wave experiments using Fourier Neural Operators (FNO). The primary objective was to develop an efficient and accurate method for predicting Marshak wave behavior across a wide range of material properties and boundary temperature functions. We introduced two FNO-based models: a base model and a Hammer and Rosen Correction model. The base model takes material properties and boundary temperature function parameters as inputs and uses a numerical approximation as the output. This model served as a foundation for exploring the capabilities of learning the underlying physics. To address inaccuracies of the Hammer and Rosen approximation, we developed a hybrid data-driven Hammer and Rosen Correction model. This model maps the Hammer and Rosen solution to a more accurate diffusion solution. The performance of these models was evaluated over a wide range of the parameter space. The results demonstrated strong generalization capabilities on unseen data. The Hammer and Rosen Correction model achieved a 56.16% improvement in temperature MSE and a 33.93% improvement in front position MSE compared to the base model. These results pave the way for further exploration of more complex models and application to multidimensional problems in high energy density physics."
17
+ }
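The spectral convolution described in the preceding file's main_content (Eqs. (15)-(17): forward FFT, truncation to k_max modes, multiplication by a complex weight tensor R, inverse FFT) can be sketched as follows. This is a minimal, hypothetical NumPy illustration for the 1D case, not code from the paper; the function name, array shapes, and mode count are assumptions.

import numpy as np

def fourier_layer_1d(v, R, k_max):
    # v: (n, d_v) real features on a uniform 1D grid; R: (k_max, d_v, d_v) complex weights.
    n = v.shape[0]
    v_hat = np.fft.rfft(v, axis=0)                        # forward FFT along the grid axis, cf. Eq. (16)
    # (R . (F v))_{k,l} = sum_j R_{k,l,j} (F v)_{k,j}, cf. Eq. (15), applied to the lowest k_max modes
    out_hat = np.einsum("klj,kj->kl", R, v_hat[:k_max])
    full = np.zeros((n // 2 + 1, R.shape[1]), dtype=complex)
    full[:k_max] = out_hat                                # discarded high-frequency modes stay zero
    return np.fft.irfft(full, n=n, axis=0)                # inverse FFT back to the grid, cf. Eq. (17)

# toy usage with assumed sizes
rng = np.random.default_rng(0)
n, d_v, k_max = 64, 4, 12
v = rng.standard_normal((n, d_v))
R = rng.standard_normal((k_max, d_v, d_v)) + 1j * rng.standard_normal((k_max, d_v, d_v))
u = fourier_layer_1d(v, R, k_max)                         # (64, 4) real output of one Fourier layer

In a full FNO, R would be a trainable parameter and the layer would be combined with a pointwise linear path and a nonlinearity; the sketch only shows the frequency-domain step.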
intro_8K/test_introduction_short_2405.04233v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04233v1",
3
+ "title": "Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models",
4
+ "abstract": "We introduce Vidu, a high-performance text-to-video generator that is capable\nof producing 1080p videos up to 16 seconds in a single generation. Vidu is a\ndiffusion model with U-ViT as its backbone, which unlocks the scalability and\nthe capability for handling long videos. Vidu exhibits strong coherence and\ndynamism, and is capable of generating both realistic and imaginative videos,\nas well as understanding some professional photography techniques, on par with\nSora -- the most powerful reported text-to-video generator. Finally, we perform\ninitial experiments on other controllable video generation, including\ncanny-to-video generation, video prediction and subject-driven generation,\nwhich demonstrate promising results.",
5
+ "authors": "Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, Jun Zhu",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.LG"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Diffusion models have obtained breakthrough progress on generating high-quality images, videos and other types of data, outperforming alternative approaches like auto-regressive networks. Previously, video generation models primarily relied on diffusion models [13, 9, 14] with the U-Net backbone [11], and focused on a single limited duration like 4 seconds [8, 5, 7, 4]. Our model, Vidu, demonstrates that a text-to-video diffusion model with U-ViT [1, 2] as its backbone can break this duration limitation by leveraging the scalability and the long sequence modeling ability of a transformer [15]. Vidu is capable of producing 1080p videos up to 16 seconds in a single generation, as well as images as videos of a single frame. Additionally, Vidu exhibits strong coherence and dynamism, and is capable of generating both realistic and imaginative videos. Vidu also has a preliminary understanding of some professional photography techniques, such as transitions, camera movements, lighting effects and emotional portrayal. We observe that to some extent, the generation performance of Vidu is comparable with that of Sora [6], which is currently the most powerful text-to-video generator, much better than the other text-to-video generators. Finally, we perform initial experiments on other controllable video generation, including canny-to-video generation [16], video prediction and subject-driven generation [12]. All of them demonstrate promising results.",
16
+ "main_content": "Vidu firstly employs a video autoencoder [10] to reduce both the spatial and temporal dimensions of videos for efficient training and inference. After that, Vidu employs a U-ViT [1] as the noise prediction network to model these compressed representations. Specifically, as shown in Figure 1, U-ViT splits the compressed videos into 3D patches, treats all inputs including the time, text condition \u2217Second authors listed alphabetically. \u2021The corresponding author. arXiv:2405.04233v1 [cs.CV] 7 May 2024 Transformer Block Transformer Block Transformer Block Transformer Block t c Embedding Layer Linear 0 1 2 3 4 5 6 L \u00b7\u00b7\u00b7 C \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 Transformer Block Embeddings Norm MLP Multi-Head Attention Norm + + + : Add C : Concatenate + Linear Transformer Block \ud835\udc99\ud835\udc61 C Rearrange to T\u00d73\u00d7H\u00d7W Predicted noise Figure 1: The U-ViT architecture for predicting the noise in videos. and noisy 3D patches as tokens, and employs long skip connections between shallow and deep layers in a transformer. By leveraging the ability of transformers to process variable-length sequences, Vidu can handle videos with variable durations. Vidu is trained on vast amount of text-video pairs, and it is infeasible to have all videos labeled by humans. To address it, we firstly train a high-performance video captioner optimized for understanding dynamic information in videos, and then automatically annotate all the training videos using this captioner. During inference, we apply the re-captioning technique [3] to rephrase user inputs into a form that is more suitable for the model. 2 2.1 Generating Videos of Different Lengths Since Vidu is trained on videos of various lengths, it can generate 1080p videos of all lengths up to 16 seconds, including images as videos of a single frame. We present examples in Figure 2. (a) 16 seconds. Prompt: A person clad in a space suit with a helmet and equipped with a chest light and arm device is seen closely examining and interacting with a variety of plants in a lush, indoor botanical setting. (b) 8 seconds. Prompt: A desolate lunar landscape with craters and a large moon in the sky transitions to a warmly lit interior of a spacecraft-like structure where a group of people are engaged in various activities. (c) Image. Prompt: An exquisite silverware piece, aesthetically adorned with intricate patterns and scenes, exhibits the detailed artisanship and metallic sheen. (d) Image. Prompt: Under the veil of nightfall, a rose reveals its subtle, exquisite beauty in the gentle moonlight. Figure 2: Vidu can generate videos of all lengths up to 16 seconds, including images. 3 2.2 3D Consistency The video generated by Vidu exhibits strong 3D consistency. As the camera rotates, the video presents projections of the same object from different angles. For instance, as shown in Figure 3, the hair of the generated cat naturally occludes as the camera rotates. (a) Prompt: This portrait depicts an orange cat with blue eyes, slowly rotating, inspired by Vermeer\u2019s \u2019Girl with a Pearl Earring\u2019. The cat is adorned with pearl earrings and has brown fur styled like a Dutch cap against a black background, illuminated by studio lighting. (b) Prompt: In a studio, there is a painting depicting a ship sailing through the rough sea. (c) Prompt: A red car is stuck in the snow, with the entire vehicle emitting green light and red signal lights flashing on the back. 
The camera slowly pans around the car. Figure 3: 3D consistency of Vidu. 4 2.3 Generating Cuts Vidu is capable of generating videos incorporating cuts. As shown in Figure 4, these videos present different perspectives of the same scene by switching camera angles, while maintaining consistency of subjects in the scene. (a) Prompt: A sculptor is intently working on a clay bust, meticulously refining its facial features with precise hand movements. (b) Prompt: Churning ocean waves at night with a lighthouse on the coast create an intense and somewhat foreboding atmosphere. The scene is set under an overcast sky, with the ocean\u2019s dark waters illuminated by natural light, highlighting the white foam of the waves. Figure 4: Vidu is capable of generating videos with cuts. 5 2.4 Generating Transitions Vidu is capable of producing videos with transitions in a single generation. As shown in Figure 5, these transitions can connect two different scenes in an engaging manner. (a) Prompt: An elderly man with glasses, dressed in formal attire, is deeply engrossed in examining a large, ornate pocket watch. As the video progresses, there is a cinematic transition to a fantastical mechanical cityscape, viewed through the openwork of the watch. This shift evokes a sense of wonder and transports the viewer into a steampunk-inspired world where buildings and structures are made of metal and gears. (b) Prompt: A person holding a dessert with a fluffy layer of whipped cream elegantly drizzled with smooth chocolate sauce. As a dollop of cream falls, a mini polar bear appears, with floating icebergs nearby, set against a serene blue backdrop. Figure 5: Vidu is capable of generating videos with transitions. 6 2.5 Camera Movements Camera movements involve the physical adjustments or movements of a camera during filming, enhancing visual narrative and conveying various perspectives and emotions within scenes. Vidu learned these techniques from the data, enhancing the visual experience of viewers. For instance, as shown in Figure 6, Vidu is capable of generating videos with camera movements including zoom, pan and dolly. (a) Zoom. Prompt: A large sailing ship sails slowly through the fog. (b) Pan. Prompt: An elderly man with a white beard is seated in a room filled with wooden bookshelves, brimming with old books. He is dressed in a dark suit and tie, and he is engrossed in reading a large book. The room is bathed in the warm glow of sunlight streaming through a window, creating a serene and contemplative atmosphere. (c) Dolly. Prompt: An animated hedgehog with distinctive spiky hair and large eyes is seen exploring a lush, grassy environment. Figure 6: Camera movements generated by Vidu. 7 2.6 Lighting Effects Vidu is capable of generating videos with impressive lighting effects, which help enhance the overall atmosphere. For example, as shown in Figure 7, the generated videos can evoke atmospheres of mystery and tranquility. Therefore, besides the entities within the video content, Vidu has the preliminary ability to convey some abstract feelings. (a) Prompt: A man wearing a hat and a dark suit walks from the corridor towards the room. The lighting casts a bluish tint over the scene, creating a suspenseful atmosphere. (b) Prompt: A rustic wooden cabin nestles by the shore of a clear, sunlit lake, surrounded by verdant trees and mountains. The water is calm, reflecting the sky above, with a few clouds scattered across it. Sailboats and kayaks are moored on the lake, inviting leisure and tranquility. 
Figure 7: Lighting effects generated by Vidu. 8 2.7 Emotional Portrayal Vidu is able to depict characters\u2019 emotions effectively. For example, as shown in Figure 8, Vidu can express emotions such as happiness, loneliness, embarrassment, and joy. (a) Prompt: A man and a woman are sharing a close and affectionate interaction in an indoor setting that suggests a romantic ambiance. (b) Prompt: An elderly woman with white hair and a lined face is seated inside an older model car, looking out through the side window with a contemplative or mildly sad expression. (c) Prompt: A couple about to get divorced sat awkwardly in the waiting room. (d) Prompt: Audience members in a theater are captured in a series of medium shots, with a young man and woman in formal attire centrally positioned and illuminated by a spotlight effect. Figure 8: Emotional portrayal of Vidu. 9 2.8 Imaginative Ability In addition to generating real-world scenes, Vidu also possesses a rich imagination. As shown in Figure 9, Vidu is able to generate scenes that do not exist in the real world. (a) Prompt: A painting of a boat on water comes to life, with waves crashing and the boat becoming submerged. (b) Prompt: An animated rabbit in a playful pink snowboarding outfit is carving its way down a snowy mountain slope under a clear blue sky. (c) Prompt: A model train with a blue engine is seen traveling through a meticulously crafted miniature landscape. The train is pulling several red and cream-colored passenger cars along a track that winds through a rural or suburban setting with small-scale houses, verdant trees, and miniature waterfalls. Figure 9: Imaginative ability of Vidu. 10 2.9 Comparison with Sora Sora [6] is currently the most powerful text-to-video generator, capable of producing high-definition videos with high consistency. However, as Sora is not publicly accessible, we compare them by inserting the example prompts released by Sora directly to Vidu. Figure 10 and Figure 11 illustrate the comparison between Vidu and Sora, indicating that to some extent, the generation performance of Vidu is comparable to Sora. (a) Sora (b) Vidu Figure 10: Prompt: The camera rotates around a large stack of vintage televisions all showing different programs \u2014 1950s sci-fi movies, horror movies, news, static, a 1970s sitcom, etc, set inside a large New York museum gallery. 11 (a) Sora (b) Vidu Figure 11: Prompt: The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from it\u2019s tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds. 12 3 Other Controllable Video Generation We also perform several initial experiments at 512 resolution on other controllable video generation, including canny-to-video generation [16], video prediction, and subject-driven generation [12]. All of them demonstrate promising results. 3.1 Canny-to-Video Generation Vidu can add additional control by using techniques similar to ControlNet [16], as shown in Figure 12. 
(a) Input canny. (b) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, realistic visual style. (c) Prompt: During the day, a red car drove towards me and splashed water as it passed by a pond, realistic visual style. (d) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, anime style. Figure 12: Canny-to-video generation examples of Vidu. 13 3.2 Video Prediction As shown in Figure 13, Vidu can generate subsequent frames, given an input image, or several input frames (marked with red boxes). (a) Prompt: A pink chrysanthemum flower with intricate petals is the focal point, resting on a wooden surface in an indoor setting. (b) Prompt: A serene mountainous landscape bathed in the warm glow of sunset or twilight, with snow-capped peaks rising above the green vegetation-covered slopes. A calm body of water rests in the foreground, reflecting the sky above, which is dotted with clouds tinged with pink and orange hues. Figure 13: Video prediction examples of Vidu. 14 3.3 Subject-Driven Generation We surprisingly find that Vidu can perform subject-driven video generation by finetuning solely on images without videos. For example, we use the DreamBooth [12] technique to designate the learned subject as a special symbol <V> for finetuning. As shown in Figure 14, the generated videos faithfully recreates the learned subject. (a) Input images. (b) Prompt: A <V> dog lies on the ground and then goes to eat from the bowl. (c) Prompt: A <V> dog bit his tail happily and shakes his head. Figure 14: Subject-driven generation examples of Vidu. 15 4 Conclusion We present Vidu, a high-definition text-to-video generator that demonstrates strong abilities in various aspects, including duration, coherence, and dynamism of the generated videos, on par with Sora. In the future, Vidu still has room for improvement. For instance, there are occasional flaws in details, and interactions between different subjects in the video sometimes deviate from physical laws. We believe that these issues can be effectively addressed by further scaling Vidu. 5 Acknowledgements We appreciate the support of the data team and the product team for the project at Shengshu. This work was partly supported by NSFC Projects (Nos. 62061136001, 62106123, 61972224), Tsinghua Institute for Guo Qiang, and the High Performance Computing Center, Tsinghua University. J.Z is also supported by the XPlorer Prize."
17
+ }
intro_8K/test_introduction_short_2405.04272v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04272v1",
3
+ "title": "BUDDy: Single-Channel Blind Unsupervised Dereverberation with Diffusion Models",
4
+ "abstract": "In this paper, we present an unsupervised single-channel method for joint\nblind dereverberation and room impulse response estimation, based on posterior\nsampling with diffusion models. We parameterize the reverberation operator\nusing a filter with exponential decay for each frequency subband, and\niteratively estimate the corresponding parameters as the speech utterance gets\nrefined along the reverse diffusion trajectory. A measurement consistency\ncriterion enforces the fidelity of the generated speech with the reverberant\nmeasurement, while an unconditional diffusion model implements a strong prior\nfor clean speech generation. Without any knowledge of the room impulse response\nnor any coupled reverberant-anechoic data, we can successfully perform\ndereverberation in various acoustic scenarios. Our method significantly\noutperforms previous blind unsupervised baselines, and we demonstrate its\nincreased robustness to unseen acoustic conditions in comparison to blind\nsupervised methods. Audio samples and code are available online.",
5
+ "authors": "Eloi Moliner, Jean-Marie Lemercier, Simon Welker, Timo Gerkmann, Vesa V\u00e4lim\u00e4ki",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "eess.AS",
9
+ "cats": [
10
+ "eess.AS",
11
+ "cs.LG",
12
+ "cs.SD"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "When acoustic waves propagate in enclosures and get reflected by walls, the sound received is perceived as reverberated, which can significantly degrade speech intelligibility and quality [1]. The goal of dereverberation is to recover the anechoic component from rever- berant speech. We focus here on the single-channel scenario, where measurements from only one microphone are available, which is sig- nificantly more challenging than multi-channel scenarios [2]. Traditional dereverberation algorithms assume some statistical properties, such as Gaussianity or sparsity, about the anechoic and reverberant signals. These properties are leveraged to perform dere- verberation in the time, spectral or cepstral domain [3]. These meth- ods can tackle informed scenarios, where the room impulse response (RIR) is known [4, 5] as well as blind scenarios where the RIR is unknown [6, 7]. Informed dereverberation is easier than blind dere- verberation, but most scenarios in real-life applications are blind, as the RIR is either not measured beforehand, or becomes invalid even with the slightest deviations in receiver or emitter positions. Data-driven approaches rely less on such assumptions but rather learn the signal properties and structures from data [8]. Most of these methods are based on supervised learning using pairs of anechoic and reverberant speech. Supervised predictive mod- els have been widely used for blind dereverberation, including time-frequency (T-F) maskers [9], time-domain methods [10] and \u2217These authors contributed equally to this work. 1uhh.de/sp-inf-buddy. spectro-temporal mapping [11]. Generative models represent an- other category of dereverberation algorithms aiming to learn the dis- tribution of anechoic speech conditioned on reverberant input. Some blind supervised methods using generative models such as diffusion models [12,13] have been recently proposed [14,15]. However, su- pervised approaches struggle with limited generalization to diverse acoustic conditions due to the scarcity and variability of available RIR data. Unsupervised approaches offer the potential to circumvent such limitations as they do not require paired anechoic/reverberant data. This paper builds upon prior work [16], which proposed an unsupervised method for informed single-channel dereverberation based on diffusion posterior sampling. The previous study showed the potential of leveraging diffusion models as a strong clean speech prior, which, when combined with a criterion to match the measure- ment, reached state-of-the-art dereverberation in an informed sce- nario [16]. This paper extends the method to blind dereverberation, where the unknown RIR is estimated along the anechoic speech. We parameterize the RIR with a model-based subband filter, where each subband of the reverberation filter is modeled by an exponentially decaying signal. The resulting algorithm is an optimization scheme alternating between the diffusion process generating the anechoic speech, and the parameter search estimating the acoustic conditions. Previous works in related domains explore various parameter estimation techniques for solving blind inverse problems with dif- fusion posterior sampling. For image deblurring, [17] propose to use a parallel diffusion process to estimate the deblurring kernel, while [18] adopts an expectation-maximization approach. In the au- dio domain, [19] address the problem of blind bandwidth extension by iteratively refining the parameters of the lowpass filter degrada- tion. 
Closely related is the work by Saito et al. [20], which per- form unsupervised blind dereverberation using DDRM [21] and the weighted-prediction error (WPE) algorithm as initialization [6]. We name our method BUDDy for Blind Unsupervised Derever- beration with Diffusion Models. We show experimentally that BUDDy efficiently removes reverberation from speech utterances in many acoustic scenarios, thereby largely outperforming previous blind unsupervised techniques. As supervision is not required dur- ing the training phase, we demonstrate that BUDDy does not lose performance when presented with unseen acoustic conditions, as opposed to existing blind supervised dereverberation approaches.",
17
+ "main_content": "2.1. Diffusion-Based Generative Models Diffusion-based generative models, or simply diffusion models [12, 22], emerged as a class of generative models that learn complex data distributions via iterative denoising. At training time, the target data arXiv:2405.04272v1 [eess.AS] 7 May 2024 distribution is transformed into a tractable Gaussian distribution by a forward process, incrementally adding noise. During the inference, the reverse process refines an initial noise sample into a data sample, by progressively removing noise. The reverse diffusion process, which transports noise samples from a Gaussian prior to the data distribution pdata, can be characterized by the following probability flow ordinary differential equation (ODE): dx\u03c4 = [f(x\u03c4, \u03c4) \u22121 2g(\u03c4)2\u2207x\u03c4 log p(x\u03c4)]d\u03c4, (1) where \u03c4 indexes the diffusion steps flowing in reverse from Tmax to 0. The current diffusion state x\u03c4 starts from the initial condition xTmax \u223cN(0, \u03c3(Tmax)2I) and ends at x0 \u223cpdata. We adopt the variance exploding parameterization of Karras et al. [23], where the drift and diffusion are defined as f(x\u03c4, \u03c4) = 0 and g(\u03c4) = \u221a 2\u03c4, respectively. Similarly, we adopt \u03c3(\u03c4) = \u03c4 as the noise variance schedule, which defines the so-called transition kernel i.e. the marginal densities: p\u03c4(x\u03c4|x0) = N(x\u03c4; x0, \u03c3(\u03c4)2I). The score function \u2207x\u03c4 log p(x\u03c4) is intractable at inference time as we do not have access to x0. In practice, a score model parameterized with a deep neural network s\u03b8(x\u03c4, \u03c4) is trained to estimate the score function using a denoising score matching objective [24]. 2.2. Diffusion Posterior Sampling for Dereverberation Single-channel dereverberation can be considered as the inverse problem of retrieving the anechoic utterance x0 \u2208RL from the reverberant measurement y \u2208RL, which is often modelled by convolving the anechoic speech with an RIR h \u2208RLh, expressed as y = h \u2217x0. We aim to solve this inverse problem by sampling from the posterior distribution p(x0|y, h) of anechoic speech given the measurement and the RIR. We adopt diffusion models for this posterior sampling task by replacing the score function \u2207x\u03c4 log p(x\u03c4) in (1) by the posterior score \u2207x\u03c4 log p(x\u03c4|y, h) [13]. Applying Bayes\u2019 rule, the posterior score is obtained as \u2207x\u03c4 log p(x\u03c4|y, h) = \u2207x\u03c4 log p(x\u03c4) + \u2207x\u03c4 log p(y|x\u03c4, h), (2) where the first term, or prior score, can be approximated with a trained score model s\u03b8(x\u03c4, \u03c4) \u2248\u2207x\u03c4 log p(x\u03c4). The likelihood p(y|x\u03c4, h) is generally intractable because we lack a signal model for y given the diffusion state x\u03c4. We will introduce in the next section a series of approximations to make its computation tractable. 3. METHODS 3.1. Likelihood Score Approximation In order to obtain a tractable likelihood computation, we posit as in [25] that a one-step denoising estimate of x0 at time \u03c4 can serve as a sufficient statistic for x\u03c4 in this context, i.e. that p(y|x\u03c4, h) \u2248 p(y|\u02c6 x0, h). Such estimate \u02c6 x0 can be obtained using the score model: \u02c6 x0 \u2206 = \u02c6 x0(x\u03c4, \u03c4) = x\u03c4 \u2212\u03c3(\u03c4)2s\u03b8(x\u03c4, \u03c4). 
(3) Furthermore, we consider here that the convolution model remains valid when using this denoised estimate, and therefore that p(y|\u02c6 x0, h) \u2248p(y|\u02c6 x0\u2217h). Finally, we model the estimation error as following a Gaussian distribution in the compressed STFT domain. p(y|\u02c6 x0 \u2217h) = N(Scomp(y); Scomp(\u02c6 x0 \u2217h), \u03b72I), (4) where Scomp(y) = |STFT(y)|2/3 exp{j\u2220STFT(y)} is the compressed spectrogram. We apply this compression to account for the heavy-tailedness of speech distributions [26]. With this series of approximations, we obtain the following likelihood score: \u2207x\u03c4 log p(y|x\u03c4, h) \u2248\u2212\u03b6(\u03c4)\u2207x\u03c4 C(y, h \u2217\u02c6 x0), (5) where the function C(\u00b7, \u00b7) is defined as: C(y, \u02c6 y) = 1 M M X m=1 K X k=1 \u2225Scomp(y)m,k \u2212Scomp(\u02c6 y)m,k\u22252 2. (6) The weighting parameter \u03b6(\u03c4) controls the trade-off between adherence to the prior data distribution and fidelity to the observed data. According to our Gaussian assumption (4), its theoretical value should depend on the unknown variance \u03b7 as \u03b6(\u03c4) = 1/2\u03b72. In practice, we resort to the same parameterization as in [19,27]. 3.2. Reverberation Operator The employed reverberation operator relies on a subband filtering approximation [28], which is applied within the Short-Time Fourier Transform (STFT) domain. Let H := STFT(h) \u2208CNh\u00d7K represent the STFT of an RIR h with Nh time frames and K frequency bins. Similarly, let X \u2208CM\u00d7K, and Y \u2208CM+Nh\u22121\u00d7K, denote the STFTs of anechoic x0 and reverberant y speech signals, repectively. The subband convolution operation applies independent convolutions along the time dimension of each frequency band: Ym,k = Nh X n=0 Hn,kXm\u2212n,k. (7) In the blind scenario, we need to estimate H, which is an arduous task without knowledge of the anechoic speech. We constrain the space of possible solutions by designing a structured, differentiable RIR prior whose parameters \u03c8 can be estimated through gradient descent. We denote the complete forward reverberation operator, including forward and inverse STFT, as A\u03c8(\u00b7) : RL \u2192RL. We denote as A \u2208RNh\u00d7K and \u03a6 \u2208RNh\u00d7K the RIR magnitudes and phases of H, respectively. We parameterize the magnitude matrix A as a multi-band exponential decay model defined in B < K frequency bands. Let A\u2032 \u2208RNh\u00d7B be the subsampled version of A in the B selected frequency bands. Each frequency band b is characterized by its weight wb and exponential decay rate \u03b1b, such that the corresponding subband magnitude filter can be expressed as: A\u2032 n,b = wbe\u2212\u03b1bn. (8) Once the weights and decay rates parameters are estimated, we reconstruct the magnitudes A by interpolating the subsampled A\u2032 using A = exp(lerp(log(A\u2032))), where lerp represents linear interpolation of the frequencies. Given the lack of structure of RIR phases, we perform independent optimization for each phase factor in \u03a6. The resulting set of parameters to optimize is therefore \u03c8 = {\u03a6, (wb, \u03b1b)b=1,...,B}. After each optimization step, the estimated time-frequency RIR H is further processed through a projection step: H = STFT(\u03b4 \u2295Pmin(iSTFT(H))). (9) This operation primarily ensures STFT consistency [29] of H. 
We additionally include a projection Pmin that ensures the time domain RIR has minimum phase lag to guarantee a stable inverse filter, using the Hilbert transform method [30]. Finally, to make the directto-reverberation ratio only depend on the late reverberation and to xN \u03c8N xn \u03c8n Score Model s\u03b8(xn, \u03c3n) \u02c6 x0 RIR Optimization \u00d7Nits. Posterior Sampling Step LH Score Approx. \u2212\u03b6(\u03c4n)\u2207xnC(y, A\u03c8n(\u02c6 x0)) xn\u22121 \u03c8n\u22121 x0 \u03c80 Fig. 1: Blind unsupervised dereverberation alternating between RIR estimation and posterior sampling for speech reconstruction. enforce further constraints on \u03c8 for a more stable optimization, we take the direct path to be at the first sample and with amplitude one. This is achieved by replacing the first sample of the time-domain RIR with a unit impulse, as indicated by the operation \u03b4 \u2295(\u00b7). 3.3. Blind Dereverberation Inference The inference process solves the following objective: \u02c6 x0, \u02c6 \u03c8 = arg min x0,\u03c8 C(y, A\u03c8(x0)) + R(\u03c8), s.t. x0 \u223cpdata. (10) This objective seeks to find the optimal speech \u02c6 x0 and RIR parameters \u02c6 \u03c8 that minimize the reconstruction error C(y, A\u03c8(x0)) while also incorporating a regularization term R(\u03c8). An essential aspect is the constraint x0 \u223cpdata, which ensures that the estimated signal \u02c6 x0 adheres to the distribution pdata of anechoic speech samples. This constraint is implemented in a soft manner by leveraging a pretrained score model s\u03b8(x\u03c4, \u03c4) trained on anechoic speech. The inference algorithm is outlined in Algorithm 1 and visualized in Fig. 1, using the discretization further described in Eq. (12). The algorithm employs the likelihood score approximation from Sec. 3.1, but replacing the convolution with the the reverberation operator A\u03c8(\u00b7), while its parameters \u03c8 are optimized in parallel with the speech signal through gradient descent. We introduce in (10) a noise regularization term R(\u03c8): R(\u03c8) = 1 Nh Nh X l=1 K X k=1 \u2225Scomp(\u02c6 h\u03c8)l,k \u2212Scomp(\u02c6 h\u03c8\u2032 + \u03c3\u2032v)l,k\u22252 2, (11) where \u02c6 h\u03c8 = A\u03c8(\u03b4) represents the estimated RIR in the waveform domain, v \u223cN(0, I) is a vector of white Gaussian noise, and \u02c6 h\u03c8\u2032 is a copy of the current estimate of \u02c6 h\u03c8, such that the arg min in (10) does not apply to it. In code, this is analogous to detaching the gradients of \u02c6 h\u03c8 using a stop grad operator. We adopt an annealed schedule for the noise level \u03c3\u2032(\u03c4), resembling the score model schedule \u03c3(\u03c4) but with different hyper-parameters. This regularization term injects noise in the RIR parameter gradients, with decreasing noise power, which enables a wider and smoother exploration while allowing for convergence toward the end of the optimization. 4. EXPERIMENTAL SETUP 4.1. Data We use VCTK [34] as clean speech, selecting 103 speakers for training, 2 for validation and 2 for testing. We curate recorded RIRs Algorithm 1 Inference algorithm Require: reverberant speech y xinit \u2190WPE(y) Sample xN \u223cN(xinit, \u03c32 NI) \u25b7Warm initialization Initialize \u03c8N \u25b7Initialize the RIR parameters for n \u2190N, . . . 
, 1 do \u25b7Discrete step backwards sn \u2190s\u03b8(xn, \u03c4n) \u25b7Evaluate score model \u02c6 x0 \u2190xn \u2212\u03c32 nsn \u25b7Get one-step denoising estimate \u02c6 x0 \u2190Rescale(\u02c6 x0) \u03c80 n\u22121 \u2190\u03c8n \u25b7Use the RIR parameters from last step for j \u21900, . . . , Nits. do \u25b7RIR optimization JRIR(\u03c8j n\u22121) \u2190C(y, A\u03c8j n\u22121(\u02c6 x0)) + R(\u03c8j n\u22121) \u03c8j+1 n\u22121 \u2190\u03c8j n\u22121 \u2212Adam(JRIR(\u03c8j n\u22121)) \u25b7Optim. step \u03c8j+1 n\u22121 \u2190project(\u03c8j+1 n\u22121) \u25b7Projection step \u03c8n\u22121 \u2190\u03c8M n\u22121 gn \u2190\u03b6(\u03c4n)\u2207xnC(y, A\u03c8n\u22121(\u02c6 x0)) \u25b7LH score approx. xn\u22121 \u2190xn \u2212\u03c3n(\u03c3n\u22121 \u2212\u03c3n)(sn + gn) \u25b7Update step return x0 \u25b7Reconstructed audio signal from various public datasets (please visit our code repository for details). In total we obtain approximately 10,000 RIRs, and split them between training, validation, and testing using ratios 0.9, 0.05, and 0.05, respectively. The training and validation sets are only used to train the baselines which require coupled reverberant/anechoic data. All data is resampled at 16 kHz. 4.2. Baselines We compare our method BUDDy to several blind supervised baselines such as NCSN++M [31] and diffusion-based SGMSE+ [14] and StoRM [15]. We also include blind unsupervised approaches leveraging traditional methods such as WPE [6] and Yohena et al. [7], as well as diffusion models Saito et al. [20] and GibbsDDRM [33] with code provided by the authors. For WPE, we take 5 iterations, a filter length of 50 STFT frames (400 ms) and a delay of 2 STFT frames (16 ms). 4.3. Hyperparameters and Training Configuration Data representation: We train the score model s\u03b8 using only the anechoic data from VCTK. For training, 4-s segments are randomly extracted from the utterances. Using publicly available code, the blind supervised models NCSN++M [31], SGMSE+ [14] and StoRM [15] are trained using coupled reverberant/anechoic speech, where the reverberant speech is obtained by convolving the anechoic speech from VCTK with the normalized RIRs. Reverberation operator: For all methods, STFTs are computed using a Hann window of 32 ms and a hop size of 8 ms. For subband filtering, we further employ 50% zero-padding to avoid aliasing artifacts. Given our sampling rate of fs = 16 kHz, this results in K = 513 frequency bins. We set the number of STFT frames of our operator to Nh = 100 (800 ms). We subsample the frequency scale in B = 26 bands, with a 125-Hz spacing between 0 and 1 kHz, a 250-Hz spacing between 1 and 3 kHz, and a 500-Hz spacing between 3 and 8 kHz. We optimize the RIR parameters \u03c8 with Adam, where the learning rate is set to 0.1, the momentum parameters to \u03b21 = 0.9, and \u03b22 = 0.99, and Nits. = 10 optimization iterations per diffusion step. We constrain the weights wb between 0 and 40 dB, Table 1: Dereverberation results obtained on VCTK-based reverberant datasets. Values indicate mean and standard deviation. We indicate for each method in the table if is blind (i.e. have no knowledge of the RIR) and/or unsupervised. Boldface numbers indicate best performance for supervised and unsupervised methods separately. For all metrics, higher is better. Matched Mismatched Method Blind Unsup. 
DNS-MOS PESQ ESTOI DNS-MOS PESQ ESTOI Reverberant 3.14 \u00b1 0.52 1.61 \u00b1 0.37 0.50 \u00b1 0.14 3.05 \u00b1 0.47 1.57 \u00b1 0.29 0.47 \u00b1 0.11 RIF+Post [5] \u2717 \u2713 3.41 \u00b1 0.47 2.66 \u00b1 0.40 0.76 \u00b1 0.09 3.55 \u00b1 0.45 2.86 \u00b1 0.31 0.78 \u00b1 0.09 InfDerevDPS [16] \u2717 \u2713 3.91 \u00b1 0.35 3.77 \u00b1 0.41 0.83 \u00b1 0.09 3.92 \u00b1 0.32 3.69 \u00b1 0.31 0.84 \u00b1 0.08 NCSN++M [31] \u2713 \u2717 3.75 \u00b1 0.38 2.85 \u00b1 0.55 0.80 \u00b1 0.10 3.61 \u00b1 0.39 2.08 \u00b1 0.47 0.64 \u00b1 0.09 SGMSE+M [14,31] \u2713 \u2717 3.88 \u00b1 0.32 2.99 \u00b1 0.48 0.78 \u00b1 0.09 3.74 \u00b1 0.34 2.48 \u00b1 0.47 0.69 \u00b1 0.09 StoRM [15] \u2713 \u2717 3.90 \u00b1 0.33 3.33 \u00b1 0.48 0.82 \u00b1 0.10 3.83 \u00b1 0.32 2.51 \u00b1 0.53 0.67 \u00b1 0.09 Yohena and Yatabe [7] \u2713 \u2713 2.99 \u00b1 0.56 1.80 \u00b1 0.33 0.55 \u00b1 0.12 2.94 \u00b1 0.44 1.71 \u00b1 0.29 0.51 \u00b1 0.10 WPE [32] \u2713 \u2713 3.24 \u00b1 0.54 1.81 \u00b1 0.42 0.57 \u00b1 0.14 3.10 \u00b1 0.48 1.74 \u00b1 0.37 0.54 \u00b1 0.12 Saito et al. [20] \u2713 \u2713 3.22 \u00b1 0.56 1.68 \u00b1 0.40 0.51 \u00b1 0.13 3.12 \u00b1 0.52 1.70 \u00b1 0.33 0.52 \u00b1 0.10 GibbsDDRM [33] \u2713 \u2713 3.33 \u00b1 0.53 1.70 \u00b1 0.37 0.51 \u00b1 0.13 3.30 \u00b1 0.52 1.75 \u00b1 0.36 0.52 \u00b1 0.11 BUDDy (proposed) \u2713 \u2713 3.76 \u00b1 0.41 2.30 \u00b1 0.53 0.66 \u00b1 0.12 3.74 \u00b1 0.38 2.24 \u00b1 0.54 0.65 \u00b1 0.12 and the decays \u03b1b between 0.5 and 28. This prevents the optimization from approaching degenerate solutions at early sampling stages. Furthermore, we rescale the denoised estimate \u02c6 x0 at each step to match the empirical dataset standard deviation \u03c3data = 5 \u00b7 10\u22122, so as to enforce a constraint on the absolute magnitudes of \u02c6 h\u03c8 and \u02c6 x0. Forward and reverse diffusion We set the extremal diffusion times to Tmax = 0.5 and Tmin = 10\u22124. For reverse diffusion, we follow Karras et al. [23] and employ a discretization of the diffusion time axis using N = 200 steps according to: \u2200n < N, \u03c4n = \u03c3n = \u0012 T 1/\u03c1 max + n N \u22121(T n/\u03c1 min \u2212T 1/\u03c1 max) \u0013\u03c1 , (12) with warping \u03c1 = 10. We use the second-order Euler-Heun stochastic sampler in [23] with Schurn = 50 and \u03b6\u2032 = 0.5 (prior scaling, see [27]), and the initial point xinit is taken to be the output of WPE [6] (with same parameters as the WPE baseline) plus Gaussian noise with standard deviation \u03c3 = Tmax. The annealing schedule \u03c3\u2032(\u03c4) in the noise regularization term in (11) is the same as the diffusion noise schedule \u03c3(\u03c4) but we bound it between extremal values \u03c3\u2032 min = 5 \u00d7 10\u22124 and \u03c3\u2032 max = 10\u22122. Network architecture: To remain consistent with [16], the unconditional score network architecture is NCSN++M [15, 31], a lighter variant of the NCSN++ [13] with 27.8M parameters instead of 65M. Training configuration: We adopt Adam as the optimizer to train the unconditional score model, with a learning rate of 10\u22124 and an effective batch size of 16 for 190k steps. We track an exponential moving average of the DNN weights with a decay of 0.999. Evaluation metrics: We assess the quality and intelligibility of speech using the intrusive Perceptual Evaluation of Speech Quality (PESQ) [35] and extended short-term objective intelligibility (ESTOI) [36]. We also employ the non-intrusive DNS-MOS [37], as a DNN-based mean opinion score (MOS) approximation. 
5. RESULTS AND DISCUSSION Table 1 shows the dereverberation results for all baselines and indicates whether each approach is blind and/or unsupervised. We included the results for RIF+Post [5] and InfDerevDPS [16] in the informed scenario to show the upper bound of dereveberation quality one can achieve with perfect knowledge of the room acoustics. We use the same score model s\u03b8 and cost function C(\u00b7, \u00b7) for InfDerevDPS [16] as for BUDDy. Blind supervised approaches NCSN++M, SGMSE+M, and StoRM largely profit from the supervision during training, and boast a better performance compared to the unsupervised methods. However, in the mismatched setting, their performance dwindles because of their limited generalizability. In contrast, the proposed method BUDDy benefits from unsupervised training, and therefore, modifying the acoustic conditions does not impact performance at all: typically NCSN++M loses 0.78 PESQ by switching from the matched case to the mismatched case, where BUDDy loses 0.06. Our method then outperforms NCSN++M and comes within reach of other supervised approaches, although the generative nature of SGMSE+ and StoRM allow them to retain a relatively high generalization ability. We also observe that the traditional blind unsupervised methods such as WPE [6] and Yohena and Yatabe [7] can only perform limited dereverberation, as they do not benefit from the strong anechoic speech prior that learning-based methods parameterized with deep neural networks offer. Finally, we note that BUDDy performs significantly better on all metrics than the diffusion-based blind unsupervised baselines Saito et al. [20] and GibbsDDRM [33], as these perform mild dereverberation in the presented acoustic conditions, where the input direct-to-reverberant ratio is significanty lower than in the authors\u2019 setup. 6. CONCLUSIONS This paper presents BUDDy, the first unsupervised method simultaneously performing blind dereverberation and RIR estimation using diffusion posterior sampling. BUDDy significantly outperforms traditional and diffusion-based unsupervised blind approaches. Unlike blind supervised methods, which often struggle with generalization to unseen acoustic conditions, our unsupervised approach overcomes this limitation due to its ability to adapt the reverberation operator to a broad range of room impulse responses. While blind supervised methods outperform our approach when the tested conditions match those at training time, our method is on par or even outperforms some supervised baselines in a mismatched setting. 7."
18
+ }
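The reverberation operator described in the preceding file (Eq. (7) subband convolution and Eq. (8) per-band exponential magnitude decay) can be sketched as follows. This is a hypothetical NumPy illustration under assumed shapes and parameter values, not the authors' implementation; frequency-band interpolation, the minimum-phase projection, and the STFT itself are omitted.

import numpy as np

def exp_decay_magnitudes(w, alpha, n_frames):
    # A'_{n,b} = w_b * exp(-alpha_b * n): per-band exponentially decaying envelope, cf. Eq. (8)
    n = np.arange(n_frames)[:, None]                      # (N_h, 1) frame index
    return w[None, :] * np.exp(-alpha[None, :] * n)       # (N_h, B)

def subband_convolution(X, H):
    # Y_{m,k} = sum_n H_{n,k} X_{m-n,k}: independent convolution along time in each band, cf. Eq. (7)
    M, K = X.shape
    Nh = H.shape[0]
    Y = np.zeros((M + Nh - 1, K), dtype=complex)
    for k in range(K):
        Y[:, k] = np.convolve(X[:, k], H[:, k])
    return Y

# toy usage with assumed sizes; the B bands are used directly as frequency bins for simplicity
rng = np.random.default_rng(0)
Nh, B, M = 100, 26, 200
w = rng.uniform(0.1, 1.0, B)                              # band weights (estimated in the paper)
alpha = rng.uniform(0.5, 5.0, B)                          # band decay rates (estimated in the paper)
A = exp_decay_magnitudes(w, alpha, Nh)                    # RIR magnitudes, (Nh, B)
phi = rng.uniform(-np.pi, np.pi, (Nh, B))                 # unconstrained phases (optimized per bin in the paper)
H = A * np.exp(1j * phi)                                  # time-frequency RIR estimate
X = rng.standard_normal((M, B)) + 1j * rng.standard_normal((M, B))  # stand-in for the STFT of anechoic speech
Y = subband_convolution(X, H)                             # reverberant STFT, (M + Nh - 1, B)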
intro_8K/test_introduction_short_2405.04356v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04356v1",
3
+ "title": "Diffusion-driven GAN Inversion for Multi-Modal Face Image Generation",
4
+ "abstract": "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with inputs. We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.",
5
+ "authors": "Jihyun Kim, Changjae Oh, Hoseok Do, Soohyun Kim, Kwanghoon Sohn",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "In recent years, multi-modal image generation has achieved remarkable success, driven by the advancements in Genera- tive Adversarial Networks (GANs) [15] and diffusion mod- els (DMs) [11, 18, 48]. Facial image processing has become a popular application for a variety of tasks, including face image generation [21, 39], face editing [6, 12, 30, 36, 37, 46], and style transfer [7, 64]. Many tasks typically utilize the pre-trained StyleGAN [21, 22], which can generate real- istic facial images and edit facial attributes by manipulating the latent space using GAN inversion [39, 42, 58]. In these tasks, using multiple modalities as conditions is becoming a popular approach, which improves the user\u2019s controllability in generating realistic face images. However, existing GAN *Corresponding author This research was supported by the National Research Founda- tion of Korea (NRF) grant funded by the Korea government (MSIP) (NRF2021R1A2C2006703). rebuttal (a) Oil painting (b) Watercolor Visual input 2D face image generation 3D-aware face image generation Face style transfer \u201cThe woman has bangs, brown hair. She is smiling.\u201d \u201cGreek statue\u201d \u201csilver hair Elf\u201d \u201cCartoon style\u201d Overview of our method \u201cThe chubby man has receding hairline, eyeglasses, gray hair, and double chin.\u201d \u201cWatercolor painting\u201d GAN Ours Diffusion \u201cShe has blond hair, straight hair, and wears heavy makeup.\u201d Visual condition Text condition Figure 1. We present a method to map the diffusion features to the latent space of a pre-trained GAN, which enables diverse tasks in multi-modal face image generation and style transfer. Our method can be applied to 2D and 3D-aware face image generation. inversion methods [51, 58] have poor alignment with inputs as they neglect the correlation between multi-modal inputs. They struggle to map the different modalities into the latent space of the pre-trained GAN, such as by mixing the latent codes or optimizing the latent code converted from a given image according to the input text. Recently, DMs have increased attention in multi-modal image generation thanks to the stability of training and the flexibility of using multiple modalities as conditions. DMs [23, 53, 54] can control the multiple modalities and render diverse images by manipulating the latent or atten- tion features across the time steps. However, existing text- to-image DMs rely on an autoencoder and text encoder, such as CLIP [41], trained on unstructured datasets col- lected from the web [40, 45] that may lead to unrealistic arXiv:2405.04356v1 [cs.CV] 7 May 2024 image generation. Moreover, some approaches address multi-modal face image generation in a 3D domain. In GAN inversion [14, 51], multi-view images can be easily acquired by manip- ulating the latent code with pre-trained 3D GANs. While DMs are inefficient in learning 3D representation, which has the challenge to generate multi-view images directly due to the lack of 3D ground-truth (GT) data for train- ing [32, 47]. They can be used as a tool to acquire training datasets for 3D-aware image generation [24, 33]. In this paper, we present a versatile face generative model that uses text and visual inputs. We propose an ap- proach that takes the strengths of DMs and GAN and gener- ates photo-realistic images with flexible control over facial attributes, which can be adapted to 2D and 3D domains, as illustrated in Figure 1. 
Our method employs a latent mapping strategy that maps the diffusion features into the latent space of a pre-trained GAN using multi-denoising step learning, producing the latent code that encodes the details of text prompts and visual inputs. In summary, our main contributions are: (i) We present a novel method to link a pre-trained GAN (StyleGAN [22], EG3D [4]) and DM (ControlNet [62]) for multi-modal face image generation. (ii) We propose a simple mapping network that links pre-trained GAN and DM\u2019s latent spaces and an attention-based style modulation network that enables the use of meaningful features related to multi-modal inputs. (iii) We present a multi-denoising step training strategy that enhances the model\u2019s ability to capture the textual and structural details of multi-modal inputs. (iv) Our model can be applied for both 2D- and 3D-aware face image generation without additional data or loss terms and outperforms existing DM- and GAN-based methods.",
15
+ "main_content": "2.1. GAN Inversion GAN inversion approaches have gained significant popularity in the face image generation task [7, 31, 51, 59] using the pre-trained 2D GAN, such as StyleGAN [21, 22]. This method has been extended to 3D-aware image generation [27, 60, 61] by integrating 3D GANs, such as EG3D [4]. GAN inversion can be categorized into learning-based, optimization-based, and hybrid methods. Optimization-based methods [44, 67] estimate the latent code by minimizing the difference between an output and an input image. Learning-based methods [1, 52] train an encoder that maps an input image into the latent space of the pre-trained GAN. Hybrid methods [58, 66] combine these two methods, producing an initial latent code and then refining it with additional optimizations. Our work employs a learning-based GAN inversion, where a DM serves as the encoder. We produce latent codes by leveraging semantic features in the denoising U-Net, which can generate images with controlled facial attributes. 2.2. Diffusion Model for Image Generation Many studies have introduced text-to-image diffusion models [36, 43, 45] that generate images by encoding multimodal inputs, such as text and image, into latent features via foundation models [41] and mapping them to the features of denoising U-Net via an attention mechanism. ControlNet [62] performs image generation by incorporating various visual conditions (e.g., semantic mask, scribbles, edges) and text prompts. Image editing models using DMs [16, 20, 26, 28, 34] have exhibited excellent performance by controlling the latent features or the attention maps of a denoising U-Net. Moreover, DMs can generate and edit images by adjusting latent features over multiple denoising steps [2]. We focus on using latent features of DM, including intermediate features and cross-attention maps, across denoising steps to link them with the latent space of GAN and develop a multi-modal face image generation task. 2.3. Multi-Modal Face Image Generation Face generative models have progressed by incorporating various modalities, such as text [25], semantic mask [38, 55], sketch [5, 9], and audio [65]. Several methods adopt StyleGAN, which can generate high-quality face images and edit facial attributes to control the style vectors. The transformer-based models [3, 13] are also utilized, which improves the performance of face image generation by handling the correlation between multi-modal conditions using image quantization. A primary challenge faced in face generative models is to modify the facial attributes based on given conditions while minimizing changes to other attributes. Some methods [39, 57] edit facial attributes by manipulating the latent codes in GAN models. TediGAN [58] controls multiple conditions by leveraging an encoder to convert an input image into latent codes and optimizing them with a pre-trained CLIP model. Recent works [19, 35] use DMs to exploit the flexibility of taking multiple modalities as conditions and generate facial images directly from DMs. Unlike existing methods, we use the pre-trained DM [62] as an encoder to further produce the latent codes for the pre-trained GAN models. 3. Method 3.1. Overview Figure 2 illustrates the overall pipeline of our approach. During the reverse diffusion process, we use the middle and decoder blocks of a denoising U-Net in ControlNet [62] as an encoder E. A text prompt c, along with a visual condition x, are taken as input to the denoising U-Net. 
Subsequently, E produces the feature maps h from the middle block, and \ud835\udc300 \ud835\udefe \u2219\u2219\u2219 \ud835\udc61= 0 \ud835\udc61= \ud835\udc47 \ud835\udc3c0 \u2032 \ud835\udc3c0 \ud835\udc51 \ud835\udc210 \ud835\udc3c\ud835\udc47 \u2032 \u2219\u2219\u2219 Conv ReLU \ud835\udc21\ud835\udc61 \ud835\udc30\ud835\udc61 \ud835\udc5a \ud835\udc300 \ud835\udc300 \ud835\udefd Conv ReLU FC \u0de0 \ud835\udc05\ud835\udc61 \ud835\udc30\ud835\udc61 \ud835\udefe \ud835\udc30\ud835\udc61 \ud835\udefd \ud835\udc1f0 \ud835\udc300 \ud835\udc5a \ud835\udc50 Reverse Process of Diffusion \ud835\udc1a\ud835\udc61 \ud835\udc1f\ud835\udc61 Max-pool Average Average Upsample \ud835\udc05\ud835\udc61 \ud835\udc00\ud835\udc61 \u0d25 \ud835\udc00\ud835\udc61 \u0d24 \ud835\udc05\ud835\udc61 Style Modulation Network \u0de0 \u0d24 \ud835\udc05\ud835\udc61 \ud835\udc1a0 \ud835\udc50 \u201cThis person has arched eyebrows, wavy hair, and mouth slightly open.\u201d \u201cThis person has arched eyebrows, wavy hair, and mouth slightly open.\u201d Pixel-wise multiplication Pixel-wise addition Our Model Mapping Network AbSMNet Frozen Figure 2. Overview of our method. We use a diffusion-based encoder E, the middle and decoder blocks of a denoising U-Net, that extracts the semantic features ht, intermediate features ft, and cross-attention maps at at denoising step t. We present the mapping network M (Sec. 3.2) and the attention-based style modulation network (AbSMNet) T (Sec. 3.3) that are trained across t (Sec. 3.4). M converts ht into the mapped latent code wm t , and T uses ft and at to control the facial attributes from the text prompt c and visual input x. The modulation codes w\u03b3 t and w\u03b2 t are then used to scale and shift wm t to produce the final latent code, wt, that is fed to the pre-trained GAN G. We obtain the generation output I\u2032 t from our model Y and we use the image Id 0 from the U-Net after the entire denoising process for training T (Sec. 3.4). Note that only the networks with the dashed line ( ) are trainable, while others are frozen. the intermediate features f and the cross-attention maps a from the decoder blocks. h is then fed into the mapping network M, which transforms the rich semantic feature into a latent code wm. The Attention-based Style Modulation Network (AbSMNet), T , takes f and a as input to generate the modulation latent codes, w\u03b3 and w\u03b2, that determine facial attributes related to the inputs. The latent code w is then forwarded to the pre-trained GAN G that generates the output image I\u2032. Our model is trained across multiple denoising steps, and we use the denoising step t to indicate the features and images obtained at each denoising step. With this pipeline, we aim to estimate the latent code, w\u2217 t , that is used as input to G to render a GT image, Igt: w\u2217 t = arg min wt L(Igt, G(wt)), (1) where L(\u00b7, \u00b7) measures the distance between Igt and the rendered image, I\u2032 = G(wt). We employ learning-based GAN inversion that estimates the latent code from an encoder to reconstruct an image according to given inputs. 3.2. Mapping Network Our mapping network M aims to build a bridge between the latent space of the diffusion-based encoder E and that of the pre-trained GAN G. E uses a text prompt and a visual input, and these textual and image embeddings are aligned by the cross-attention layers [62]. 
The feature maps h from the middle block of the denoising U-Net particularly contain rich semantics that resemble the latent space of the generator [28]. Here we establish the link between the latent spaces of E and G by using ht across the denoising steps t. Given ht, we design M that produces a 512-dimensional latent code wm t \u2208RL\u00d7512 that can be mapped to the latent space of G: wm t = M(ht). (2) M is designed based on the structure of the map2style block in pSp [42], as seen in Figure 2. This network consists of convolutional layers downsampling feature maps and a fully connected layer producing the latent code wm t . 3.3. Attention-based Style Modulation Network By training M with learning-based GAN inversion, we can obtain wm t and use it as input to the pre-trained GAN for image generation. However, we observe that ht shows limitations in capturing fine details of the facial attributes due to its limited spatial resolution and data loss during the encoding. Conversely, the feature maps of the DM\u2019s decoder blocks show rich semantic representations [53], benefiting from aggregating features from DM\u2019s encoder blocks via skip connections. We hence propose a novel Attentionbased Style Modulation Network (AbSMNet), T , that produces style modulation latent codes, w\u03b3 t , w\u03b2 t \u2208RL\u00d7512, by using ft and at from E. To improve reflecting the multimodal representations to the final latent code wt, we modulate wm t from M using w\u03b3 t and w\u03b2 t , as shown in Figure 2. We extract intermediate features, ft = {f n t }N n=1, from N different blocks, and cross-attention maps, at = {ak t }K k=1, from K different cross-attention layers of the n-th block, in E that is a decoder stage of denoising U-Net. The discrim(a) Cross-attention maps averaging for all denoising steps t= 0 \ud835\udc61= \ud835\udc47 (b) Cross-attention maps for individual denoising steps \ud835\udc00\ud835\udc61 0 \ud835\udc00\ud835\udc61 1 \ud835\udc00\ud835\udc61 2 \u0d25 \ud835\udc00\ud835\udc61 \ud835\udc00\ud835\udc47 1 \ud835\udc05\ud835\udc47 1 \u0de0 \ud835\udc05\ud835\udc47 1 (c) Example of an intermediate feature map Multi-modal inputs Output \u201cThe person has arched eyebrows, wavy hair, and mouth slightly open.\u201d Figure 3. Visualization of cross-attention maps and intermediate feature maps. (a) represents the semantic relation information between an input text and an input semantic mask in the spatial domain. The meaningful representations of inputs are shown across all denoising steps and N different blocks. (b) represents N different cross-attention maps, At, at denoising steps t = T and t = 0. (c) shows the example of refined intermediate feature map \u02c6 F1 T at 1st block and t = T that is emphasized corresponding to input multi-modal conditions. The red and yellow regions of the map indicate higher attention scores. As the denoising step approaches T, the text-relevant features appear more clearly, and as the denoising step t approaches 0, the features of the visual input are more preserved. inative representations are represented more faithfully because ft consists of N multi-scale feature maps that can capture different sizes of facial attributes, which allows for finer control over face attributes. For simplicity, we upsample each intermediate feature map of ft to same size intermediate feature maps Ft = {Fn t }N n=1, where Fn t \u2208RH\u00d7W \u00d7Cn has H, W, and Cn as height, width and depth. 
Moreover, at is used to amplify controlled facial attributes as it incorporates semantically related information in text and visual input. To match the dimension with Ft, we convert at to At = {An t }N n=1, where An t \u2208RH\u00d7W \u00d7Cn, by max-pooling the output of the cross-attention layers in each decoder block and upsampling the max-pooling outputs. To capture the global representations, we additionally compute \u00af At \u2208RH\u00d7W \u00d71 by depth-wise averaging the max-pooling output of at over each word in the text prompt and upsampling it. As illustrated in Figures 3 (a) and (b), At and \u00af At represent the specific regions aligned with input text prompt and visual input, such as semantic mask, across denoising steps t. By a pixel-wise multiplication between Ft and At, we can obtain the refined intermediate feature maps \u02c6 Ft that emphasize the representations related to multiShift Net \u0de1 \ud835\udc6d\ud835\udc61 \ud835\udefd\ud835\udc54 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefd \ud835\udc6d\ud835\udc61 Weighted sum map2style \ud835\udc30\ud835\udc61 \ud835\udefe \ud835\udc30\ud835\udc61 \ud835\udefd Scale Net \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe\ud835\udc59 Shift Net Concat Scale Net Shift Net \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe\ud835\udc54 \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd\ud835\udc54 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefe \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd\ud835\udc59 \ud835\udefc\ud835\udc61 \ud835\udefd 1 \u2212\ud835\udefc\ud835\udc61 \ud835\udefd \ud835\udefc\ud835\udc61 \ud835\udefe map2style \u0de0 \u0d24 \ud835\udc05\ud835\udc61 \u0de0 \ud835\udc05\ud835\udc61 Weighted sum \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefd \u0de0 \ud835\udc05\ud835\udc61 \ud835\udefe Figure 4. Style modulation network in T . The refined intermediate feature maps \u02c6 Ft and \u02c6 \u00af Ft are used to capture local and global semantic representations, respectively. They are fed into the scale and shift network, respectively. The weighted summations of these outputs are used as input to the map2style network, which finally generates the scale and shift modulation latent codes, w\u03b3 t , and w\u03b2 t . modal inputs as shown in Figure 3 (c). The improved average feature map \u02c6 \u00af Ft \u2208RH\u00d7W \u00d71 is also obtained by multiplying \u00af At with \u00af Ft, where \u00af Ft \u2208RH\u00d7W \u00d71 is obtained by first averaging the feature maps in Ft = {Fn t }N n=1 and then depth-wise averaging the outputs. \u02c6 Ft and \u02c6 \u00af Ft distinguish textand structural-relevant semantic features, which improves the alignment with the inputs. We use \u02c6 Ft and \u02c6 \u00af Ft as input to the style modulation network that produces the modulation codes w\u03b3 t , and w\u03b2 t as shown in Figure 4. We capture both local and global features by using \u02c6 Ft, which consists of feature maps representing different local regions on the face, and \u02c6 \u00af Ft, which implies representations of the entire face. We concatenate N intermediate feature maps of \u02c6 Ft, concat(\u02c6 F1 t \u00b7 \u00b7 \u00b7 \u02c6 FN t ), and it is forward to the scale and shift networks that consist of convolutional layers and Leaky ReLU, forming the local modulation feature maps, \u02c6 F\u03b3l t and \u02c6 F\u03b2l t . We also estimate global modulation feature maps, \u02c6 F\u03b3g t and \u02c6 F\u03b2g t , by feeding \u02c6 \u00af Ft to the scale and shift network. 
The final scale, \u02c6 F\u03b3 t , and shift, \u02c6 F\u03b2 t , feature maps are estimated by the weighted summation: \u02c6 F\u03b3 t = \u03b1\u03b3 t \u02c6 F\u03b3l t + (1 \u2212\u03b1\u03b3 t )\u02c6 F\u03b3g t , (3) \u02c6 F\u03b2 t = \u03b1\u03b2 t \u02c6 F\u03b2g t + (1 \u2212\u03b1\u03b2 t )\u02c6 F\u03b2g t , where \u03b1\u03b3 t and \u03b1\u03b2 t are learnable weight parameters. Through the map2style module, we then convert \u02c6 F\u03b3 t and \u02c6 F\u03b2 t into the final scale, w\u03b3 t \u2208RL\u00d7512, and shift, w\u03b2 t \u2208RL\u00d7512, latent codes. With these modulation latent codes, we achieve more precise control over facial details while corresponding to the input multi-modal inputs at the pixel level. Finally, the mapped latent code wm t from M is modulated by w\u03b3 t and w\u03b2 t from T to get the final latent code wt that is used to obtain the generated image I\u2032 t as follows: wt = wm t \u2299w\u03b3 t \u2295w\u03b2 t , (4) I\u2032 t = G(wt). (5) 10132 5987 13044 9807 rebuttal (a) \u201cThis person has brown hair, and eyeglasses.\u201d (b)\u201cThis person has mustache.\u201d (c) \u201cThis person has gray hair, and eyeglasses.\u201d Inputs TediGAN UaC Ours (a) (b) (c) (a) (b) (c) (a) (b) (c) (a) \u201cShe has high cheekbones, straight hair, black hair.\u201d (b)\u201cShe has high cheekbones, straight hair, blond hair.\u201d (c) \u201cHe has blond hair, sideburns.\u201d (a) \u201cHe has brown hair, and wavy hair.\u201d (b)\u201cHe has black hair, and straight hair.\u201d (c) \u201cHe has black hair, and goatee.\u201d Collaborative ControlNet Figure 5. Visual examples of the 2D face image generation using a text prompt and a semantic mask. For each semantic mask, we use three different text prompts (a)-(c), resulting in different output images (a)-(c). 3.4. Loss Functions To optimize M and T , we use reconstruction loss, perceptual loss, and identity loss for image generation, and regularization loss [42] that encourages the latent codes to be closer to the average latent code \u00af w. For training M, we use the GT image Igt as reference to encourage the latent code wm t to generate a photo-realistic image as follows: LM = \u03bbm 0 \u2225Igt \u2212G(wm t )\u22252+ (6) \u03bbm 1 \u2225F(Igt) \u2212F(G(wm t )\u22252+ \u03bbm 2 (1 \u2212cos(R(Igt), R(G(wm t ))))+ \u03bbm 3 \u2225E(zt, t, x, c) \u2212\u00af w\u22252, where R(\u00b7) is pre-trained ArcFace network [8], F(\u00b7) is the feature extraction network [63], zt is noisy image, and the hyper-parameters \u03bbm (\u00b7) guide the effect of losses. Note that we freeze T while training M. For training T , we use Id 0 produced by the encoder E into the reconstruction and perceptual losses. With these losses, the loss LT encourages the network to control facial attributes while preserving the identity of Igt: LT = \u03bbs 0\u2225Id 0 \u2212G(wt)\u22252+ (7) \u03bbs 1\u2225F(Id 0) \u2212F(G(wt)\u22252+ \u03bbs 2(1 \u2212cos(R(Igt), R(G(wt))))+ \u03bbs 3\u2225E(zt, t, x, c) \u2212\u00af w\u22252, where the hyper-parameters \u03bbs (\u00b7) guide the effect of losses. Similar to Equation 6, we freeze M while training T . We further introduce a multi-step training strategy that considers the evolution of the feature representation in E over the denoising steps. We observe that E tends to focus more on text-relevant features in an early step, t = T, and structure-relevant features in a later step, t = 0. Figure 3 (b) shows the attention maps \u00af A showing variations across the denoising step. 
As the attention map, we can capture the textual and structural features by varying the denoising steps. To effectively capture the semantic details of multi-modal conditions, our model is trained across multiple denoising steps. 4. Experiments 4.1. Experimental Setup We use ControlNet [62] as the diffusion-based encoder that receives multi-modal conditions, including text and visual conditions such as a semantic mask and scribble map. The StyleGAN [22] and EG3D [4] are exploited as pre-trained 2D and 3D GAN, respectively. See the Supplementary Material for the training details, the network architecture, and additional results. Datasets. We employ the CelebAMask-HQ [29] dataset comprising 30,000 face RGB images and annotated semantic masks, including 19 facial-component categories such as skin, eyes, mouth, and etc. We also use textual deOurs I (a) (b) (c) (d) Ours IDE-3D \u201cThe person has brown hair, and sideburn.\u201d \u201cThe person has gray hair, and straight hair.\u201d \u201cThe person has gray hair, and straight hair.\u201d \u201cThe person has black hair, and wavy hair.\u201d (a) (b) (c) (d) Inputs Figure 6. Visual examples of the 3D-aware face image generation using a text and a semantic mask. We show the images generated with inputs and arbitrary viewpoints. Input conditions Method Model Domain FID\u2193 LPIPS\u2193 SSIM\u2191 ID\u2191 ACC\u2191 mIoU\u2191 Text + semantic mask TediGAN [58] GAN 2D 54.83 0.31 0.62 0.63 81.68 40.01 IDE-3D [51] GAN 3D 39.05 0.40 0.41 0.54 47.07 10.98 UaC [35] Diffusion 2D 45.87 0.38 0.59 0.32 81.49 42.68 ControlNet [62] Diffusion 2D 46.41 0.41 0.53 0.30 82.42 42.77 Collaborative [19] Diffusion 2D 48.23 0.39 0.62 0.31 74.06 30.69 Ours GAN 2D 46.68 0.30 0.63 0.76 83.41 43.82 Ours GAN 3D 44.91 0.28 0.64 0.78 83.05 43.74 Text + scribble map ControlNet [62] Diffusion 2D 93.26 0.52 0.25 0.21 Ours GAN 2D 55.60 0.32 0.56 0.72 Ours GAN 3D 48.76 0.34 0.49 0.62 Table 1. Quantitative results of multi-modal face image generation on CelebAMask-HQ [29] with annotated text prompts [58]. scriptions provided by [58] describing the facial attributes, such as black hair, sideburns, and etc, corresponding to the CelebAMask-HQ dataset. For the face image generation task using a scribble map, we obtain the scribble maps by applying PiDiNet [49, 50] to the RGB images in CelebAMask-HQ. We additionally compute camera parameters based on [4, 10] for 3D-aware image generation. Comparisons. We compare our method with GAN-based models, such as TediGAN [58] and IDE-3D [51], and DMbased models, such as Unite and Conquer (UaC) [35], ControlNet [62], and Collaborative diffusion (Collaborative) [19], for face generation task using a semantic mask and a text prompt. IDE-3D is trained by a CLIP loss term like TediGAN to apply a text prompt for 3D-aware face image generation. ControlNet is used for face image generation using a text prompt and a scribble map. We use the official codes provided by the authors, and we downsample the results into 256 \u00d7 256 for comparison. Evaluation Metrics. For quantitative comparisons, we evaluate the image quality and semantic consistency using sampled 2k semantic maskand scribble map-text prompt pairs. Frechet Inception Distance (FID) [17], LPIPS [63], and the Multiscale Structural Similarity (MS-SSIM) [56] are employed for the evaluation of visual quality and diversity, respectively. We also compute the ID similarity mean score (ID) [8, 57] before and after applying a text prompt. 
Additionally, we assess the alignment accuracy between the input semantic masks and results using mean Intersectionover-Union (mIoU) and pixel accuracy (ACC) for the face generation task using a semantic mask. 4.2. Results Qualitative Evaluations. Figure 5 shows the visual comparisons between ours and two existing methods for 2D face image generation using a text prompt and a semantic mask as input. We use the same semantic mask with different text prompts (a)-(c). TediGAN produces results consistent with the text prompt as the latent codes are optimized using the input text prompt. However, the results are inconsistent with the input semantic mask, as highlighted in the red boxes. UaC shows good facial alignment with the input semantic mask, but the results are generated with unexpected attributes, such as glasses, that are not indicated in the inputs. Collaborative and ControlNet produce inconsistent, blurry, and unrealistic images. Our model is capable of preserving semantic consistency with inputs and generating realistic facial images. As shown in Figure 5, our method preserves the structure of the semantic mask, such as the hairline, face position, and mouth shape, while changing the attributes through a text prompt. Figure 6 compares our method with IDE-3D [51] to validate the performance of 3D-aware face image generation Input View 1. 2. 3. 4. Novel Views (a) Inputs (b) ControlNet (c) Ours Input text: 1. \u201cThis young woman has straight hair, and eyeglasses and wears lipstick.\u201d 2. \u201cThe man has mustache, receding hairline, big nose, goatee, sideburns, bushy eyebrows, and high cheekbones.\u201d 3. \u201cShe has big lips, pointy nose, receding hairline, and arched eyebrows.\u201d 4. \u201cThis man has mouth slightly open, and arched eyebrows. He is smiling.\u201d Figure 7. Visual examples of 3D-aware face image generation using text prompts and scribble maps. Using (1-4) the text prompts and their corresponding (a) scribble maps, we compare the results of (b) ControlNet with (c) multi-view images generated by ours. using a semantic mask and a text prompt. We use the same semantic mask with different text prompts in Figures 6 (a) and (b), and use the same text prompt with different semantic masks in Figures 6 (c) and (d). The results of IDE-3D are well aligned with the semantic mask with the frontal face. However, IDE-3D fails to produce accurate results when the non-frontal face mask is used as input. Moreover, the results cannot reflect the text prompt. Our method can capture the details provided by input text prompts and semantic masks, even in a 3D domain. Figure 7 shows visual comparisons with ControlNet on 2D face generation from a text prompt and a scribble map. The results from ControlNet and our method are consistent with both the text prompt and the scribble map. ControlNet, however, tends to over-emphasize the characteristic details related to input conditions. Our method can easily adapt to the pre-trained 3D GAN and produce photo-realistic multiview images from various viewpoints. Quantitative Evaluations. Table 1 reports the quantitative results on CelebAMask-HQ with text prompts [58]. Our method using text prompts and semantic masks shows performance increases in all metrics in 2D and 3D domains, compared with TediGAN and UaC. Our model using 2D GAN significantly improves LPIPS, ID, ACC, and mIoU scores, surpassing TediGAN, UaC, ControlNet, and Collaborative, respectively. 
It demonstrates our method\u2019s strong ability to generate photo-realistic images while reflecting input multi-modal conditions better. For 3D-aware face image generation using a text prompt and a semantic mask, it \ud835\udcaf (c) w/o \ud835\udc34, \u04a7 \ud835\udc34 (d) Full model urns, and bags under eyes.\u201d and has arched eyebrows, black hair.\u201d 2. 3. 1. Input text: 1. \u201cThis man has gray hair.\u201d 2. \u201cHe has double chin, sideburns, and bags under eyes.\u201d 3. \u201cShe wears heavy makeup and has arched eyebrows, black hair.\u201d (a) Inputs (b) w/o T (c) w/o A, \u00af A (d) Ours Figure 8. Effect of M and T . (b) shows the results using only M, and (c) shows the effect of the cross-attention maps (A and \u00af A) in T . The major changes are highlighted with the white boxes. Method M T At Igt Id 0 FID\u2193 LPIPS\u2193ID\u2191 ACC\u2191 (a) \u2713 \u2713 \u2713 62.08 0.29 0.62 81.09 (b) \u2713 \u2713 \u2713 \u2713 48.68 0.28 0.66 82.86 (c) \u2713 \u2713 \u2713 \u2713 54.27 0.31 0.58 80.58 (d) \u2713 \u2713 \u2713 \u2713 61.60 0.29 0.62 80.04 (e) \u2713 \u2713 \u2713 \u2713 \u2713 44.91 0.28 0.78 83.05 Table 2. Ablation analysis on 3D-aware face image generation using a text prompt and a semantic mask. We compare (a) and (b) with (e) to show the effect of our style modulation network and (c) and (d) with (e) to analyze the effect of Igt and Id in model training. is reasonable that IDE-3D shows the highest FID score as the method additionally uses an RGB image as input to estimate the latent code for face generation. The LPIPS, SSIM, and ID scores are significantly higher than IDE-3D, with scores higher by 0.116, 0.23, and 0.24, respectively. Our method using 3D GAN exhibits superior ACC and mIoU scores for the 3D face generation task compared to IDE3D, with the score difference of 35.98% and 32.76%, likely due to its ability to reflect textual representations into spatial information. In face image generation tasks using a text prompt and a scribble map, our method outperforms ControlNet in FID, LPIPS, SSIM, and ID scores in both 2D and 3D domains. Note that the ACC and mIoU scores are applicable for semantic mask-based methods. 4.3. Ablation Study We conduct ablation studies to validate the effectiveness of our contributions, including the mapping network M, the AbSM network T , and the loss functions LM and LT . Effectiveness of M and T . We conduct experiments with different settings to assess the effectiveness of M and T . w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours (a) Inputs (b) w/ \ud835\udc3c\ud835\udc61=0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours 2. (a) Inputs (b) w/ \ud835\udc3c\ud835\udc61=0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours \u201cShe wears lipstick and has arched eyebrows, and slightly \u201cThis young person has goatee, mustache, big lips, and strai d) Ours urs and big lips, ws, and (a) Inputs (b) w/ \ud835\udc3c0 \ud835\udc51 (c) w/ \ud835\udc3c\ud835\udc54\ud835\udc61 (d) Ours 2. 1. Input text: 1. \u201cThis young person has goatee, mustache, big lips, and straight hair.\u201d 2. \u201cShe wears lipstick and has arched eyebrows, and mouth slightly open.\u201d Figure 9. Effect of using Id from the denoising U-Net and the GT image Igt in model training. Using text prompts (1, 2) with (a) the semantic mask, we show face images using our model trained with (b) Id 0 , (c) Igt, and (d) both. We also show the advantages of using cross-attention maps in our model. 
The quantitative and qualitative results are presented in Table 2 and Figure 8, respectively. When using only M, we can generate face images that roughly preserve the structures of a given semantic mask in Figure 8 (a), including the outline of the facial components (e.g. face, eye) in Figure 8 (b). On the other hand, T enables the model to express face attribute details effectively, such as hair colors and mouth open, based on the multi-modal inputs in Figure 8 (c). The FID and ACC scores are higher than the model using only M in Table 2 (b). We further present the impact of adopting cross-attention maps to T for style modulation. Figure 8 (d) shows how the attention-based modulation approach enhances the quality of results, particularly in terms of the sharpness of desired face attributes and the overall consistency between the generated image and multi-modal conditions. Table 2 (e) demonstrates the effectiveness of our method by showing improvements in FID, LPIPS, ID, and ACC. Our method, including both M and T with cross-attention maps, significantly improves the FID showing our model\u2019s ability to generate high-fidelity images. From the improvement of the ID score, the crossattention maps enable relevantly applying the details of input conditions to facial components. Model Training. We analyze the effect of loss terms LM and LT by comparing the performance with the model trained using either Id 0 from the denoising U-Net or GT image Igt. The model trained using Id 0 produces the images in Figure 9 (b), which more closely reflected the multi-modal conditions (a), such as \u201cgoatee\u201d and \u201chair contour\u201d. In Table 2 (c), the ACC score of this model is higher than the model trained only using Igt in Table 2 (d). The images generated by the model trained with Igt in Figure 9 (c) are more perceptually realistic, as evidenced by the lower LPIPS score compared to the model trained with Id 0 in TaInput text: 1. 2. 3. 1. \u201cA photo of a face of a beautiful elf with silver hair in live action movie.\u201d 2. \u201cA photo of a white Greek statue.\u201d 3. \u201cA photo of a face of a zombie.\u201d Figure 10. Visual examples of 3D face style transfer. Our method generates stylized multi-view images by mapping the latent features of DM and GAN. ble 2 (c) and (d). Using Igt also preserves more conditionirrelevant features inferred by the ID scores in Table 2 (c) and (d). In particular, our method combines the strengths of two models as shown in Figure 9 (d) and Table 2 (e). 4.4. Limitations and Future Works Our method can be extended to multi-modal face style transfer (e.g. face \u2192Greek statue) by mapping the latent spaces of DM and GAN without CLIP losses and additional dataset, as shown in Figure 10. For the 3D-aware face style transfer task, we train our model using Id 0 that replaces GT image Igt in our loss terms. This method, however, is limited as it cannot transfer extremely distinct style attributes from the artistic domain to the photo-realistic domain of GAN. To better transfer the facial style in the 3D domain, we will investigate methods to map the diffusion features related to the input pose into the latent space of GAN in future works. 5. Conclusion We presented the diffusion-driven GAN inversion method that translates multi-modal inputs into photo-realistic face images in 2D and 3D domains. 
Our method interprets the pre-trained GAN\u2019s latent space and maps the diffusion features into this latent space, which enables the model to readily incorporate multi-modal inputs, such as a visual input and a text prompt, for face image generation. We also proposed to train our model across multiple denoising steps, which further improves the output quality and consistency with the multi-modal inputs. We demonstrated the capability of our method by using text prompts with semantic masks or scribble maps as input for 2D or 3D-aware face image generation and style transfer."
16
+ }
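The entry above (2405.04356v1) modulates a mapped latent code with scale and shift codes before feeding it to the pre-trained GAN (Eqs. (3)-(4) in its main_content). The sketch below is a minimal PyTorch illustration of just that step under assumed shapes (18 style layers of 512-dimensional codes); the function names, the scalar weight alpha, and the toy tensors are illustrative assumptions, not the paper's implementation.

```python
import torch

def modulate_latent(w_m, w_gamma, w_beta):
    # Eq. (4): element-wise scale then shift of the mapped latent code,
    # w = w_m * w_gamma + w_beta, applied per style layer.
    return w_m * w_gamma + w_beta

def fuse_local_global(f_local, f_global, alpha):
    # Eq. (3)-style weighted sum of local and global modulation features,
    # with a learnable scalar weight alpha expected in [0, 1].
    return alpha * f_local + (1.0 - alpha) * f_global

# Toy usage with random tensors standing in for network outputs
# (assumed shapes: 18 style layers x 512 dimensions).
L, D = 18, 512
w_m = torch.randn(L, D)        # mapped latent code from the mapping network
w_gamma = torch.randn(L, D)    # scale modulation code
w_beta = torch.randn(L, D)     # shift modulation code
w = modulate_latent(w_m, w_gamma, w_beta)
print(w.shape)                 # torch.Size([18, 512])
```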
intro_8K/test_introduction_short_2405.04370v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04370v1",
3
+ "title": "Diff-IP2D: Diffusion-Based Hand-Object Interaction Prediction on Egocentric Videos",
4
+ "abstract": "Understanding how humans would behave during hand-object interaction is vital\nfor applications in service robot manipulation and extended reality. To achieve\nthis, some recent works have been proposed to simultaneously predict hand\ntrajectories and object affordances on human egocentric videos. They are\nregarded as the representation of future hand-object interactions, indicating\npotential human motion and motivation. However, the existing approaches mostly\nadopt the autoregressive paradigm for unidirectional prediction, which lacks\nmutual constraints within the holistic future sequence, and accumulates errors\nalong the time axis. Meanwhile, these works basically overlook the effect of\ncamera egomotion on first-person view predictions. To address these\nlimitations, we propose a novel diffusion-based interaction prediction method,\nnamely Diff-IP2D, to forecast future hand trajectories and object affordances\nconcurrently in an iterative non-autoregressive manner. We transform the\nsequential 2D images into latent feature space and design a denoising diffusion\nmodel to predict future latent interaction features conditioned on past ones.\nMotion features are further integrated into the conditional denoising process\nto enable Diff-IP2D aware of the camera wearer's dynamics for more accurate\ninteraction prediction. The experimental results show that our method\nsignificantly outperforms the state-of-the-art baselines on both the\noff-the-shelf metrics and our proposed new evaluation protocol. This highlights\nthe efficacy of leveraging a generative paradigm for 2D hand-object interaction\nprediction. The code of Diff-IP2D will be released at\nhttps://github.com/IRMVLab/Diff-IP2D.",
5
+ "authors": "Junyi Ma, Jingyi Xu, Xieyuanli Chen, Hesheng Wang",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Accurately anticipating human intentions and future actions is important for artificial intelligence systems in robotics and extended reality [1, 2, 3]. Recent works have tried to tackle the problem from various perspectives, including action recognition and anticipation [4, 5, 6, 7], gaze prediction [8, 9, 10, 11], hand trajectory prediction [12, 13, 14, 15], and object affordance extraction [12, 16, 14, 17]. Among them, jointly predicting hand motion and object affordances can effectively facilitate more reasonable robot manipulation as the prior contextual information, which has been demonstrated on some robot platforms [1, 18, 19]. We believe that deploying such models pretrained by internet-scale human videos on robots is a promising path towards embodied agents. Therefore, our work aims to jointly predict hand trajectories and object affordances on egocentric videos as a concrete hand-object interaction (HOI) expression, following the problem modeling of previous works [12, 14]. Currently, the state-of-the-art approaches [12, 13] predicting hand trajectories and object affordances on egocentric videos tend to exploit the autoregressive (AR) model. They reason about the next \u2217Corresponding author: [email protected] Preprint. Under review. arXiv:2405.04370v1 [cs.CV] 7 May 2024 view1 (other observations) view2 (last observation) gap egocentric images (a) Existing Paradigm (b) Diff-IP2D Paradigm t autoregressive model HOI (t2) HOI (t1) predicted interaction diffusion-based model denoising HOI (t1) HOI (t2) HOI (t3) predicted interaction egocentric images t steps HOI (t1) HOI (t3) HOI (t1) HOI (t2) in parrallel motion features (c) Autoregressive Generation vs. Parallel Generation (d) Inherent Gaps gt gt ego motion real actions pixel movement gap accumulated error gt bidirectional unidirectional 3D environments Figure 1: Diff-IP2D vs. Existing Paradigm. The existing HOI prediction paradigm (a) tends to accumulate prediction errors under unidirectional constraints. In contrast, our proposed Diff-IP2D (b) directly forecasts all the future interaction states in parallel with denoising diffusion, mitigating error accumulation with bidirectional constraints (c). Moreover, we integrate egomotion information into our proposed paradigm to narrow the inherent gaps (d) in HOI prediction. HOI state only according to the previous steps (Fig. 1(a)). However, expected \u201cpost-contact states\u201d also affect \u201cpre-contact states\u201d according to human intentions that persist across the holistic HOI process as an oracle. There must be more coherent constraints that reflect human intention and mutually connect the preceding and the following motion in the HOI prediction process. Inspired by this, we argue that predicting future HOI states in parallel considering the bidirectional constraints within the holistic sequence outperforms generating the next state autoregressively (Fig. 1(c)). With diffusion models emerging across multiple domains [20, 21, 22, 23, 24, 25, 26, 27], their strong forecasting capability has been widely validated. Therefore, we propose a diffusion-based method to predict future hand-object interaction in parallel, considering bidirectional constraints in the latent space compared to the traditional autoregressive generation (Fig. 1(b)). In the forward process, the past and future video images are first encoded to sequential latent features. Noises are gradually added to the part of the future sequence while the past features remain anchored. 
Subsequently, a Transformer-based network is devised for learning to reverse the diffusion and reconstruct the input latent features. Finally, the proposed predictors are exploited to recover future hand trajectories and object affordances from the denoised latents. A new regularization strategy is also proposed to link the two latent spaces adjacent to the denoising diffusion process. Moreover, we also identify two inherent gaps (Fig. 1(d)) affecting HOI prediction in the existing paradigm: 1) Directly predicting the projection of 3D future hand trajectories and object affordances on the 2D egocentric image plane is an ill-posed problem involving spatial ambiguities. There is generally a gap between 2D pixel movements and 3D real actions, which can be bridged by spatial transformation across multiple views changing with egomotion. 2) The past egocentric videos are absorbed to predict future interaction states on the last observed image, which is actually a \u201ccanvas\u201d from a different view w.r.t. all the other frames. Therefore, there is also a gap between the last observation (first-person view) and the other observations (analogous to third-person view) caused by egomotion. To fill the two gaps together, we further propose to integrate the camera wearer\u2019s egomotion into our diffusion-based paradigm. The utilized homography features make the denoising model aware of the camera wearer\u2019s dynamics and the spatial relationship between consecutive egocentric video frames. The main contributions of this paper are as follows: 1) We propose a diffusion-based hand-object interaction prediction method, dubbed Diff-IP2D. To the best of our knowledge, this is the first work to jointly forecast future hand trajectories and object affordances with the devised denoising diffusion probabilistic model, using only 2D egocentric videos as input. It provides a foundational generative paradigm in the field of HOI prediction. 2) The homography egomotion features are integrated to fill the motion-related gaps inherent in HOI prediction on egocentric videos. 3) We extend the existing metrics and propose the first protocol for jointly evaluating the performance of hand trajectory prediction and object affordance prediction. 4) Comprehensive experiments are conducted to demonstrate that our Diff-IP2D can predict plausible hand trajectories and object affordances compared to the state-of-the-art baselines, showing its potential for deployment on artificial intelligence systems.",
15
+ "main_content": "Understanding hand-object interaction. Human HOI comprehension can guide the downstream tasks in artificial intelligence systems. As a pioneer work, Calway et al. [28] connect the specific human tasks to relevant objects, revealing the importance of object-centric understanding in different HOI modes. In contrast, Liu et al. [29] focus on capturing the changeable attributes of objects, which underlines the relationship between object-centric interaction and goal-oriented human activities. After that, more and more works contribute to HOI understanding by pixel-wise semantic segmentation [30, 31, 32, 33], bounding-box-wise detection [34, 35, 36, 37], fine-grained hand/object pose estimation [38, 39, 40, 41, 42, 43]. Ego4D [44] further provides a standard benchmark that divides HOI understanding into several predefined subtasks. Predicting hand-object interaction. Analyzing only past human behavior may be insufficient for service robot manipulation or extended reality. Forecasting possible future object-centric HOI states based on historical observations is also valuable, which attracts increasing attention due to the general knowledge that can be transferred to robot applications [1, 18, 19, 45]. For example, Dessalene et al. [46] propose to generate contact anticipation maps and next active object segmentations as future HOI predictions. Liu et al. [14] first achieve hand trajectory and object affordance prediction simultaneously, revealing that predicting hand motion benefits the extraction of interaction hotspots. Following this work, Liu et al. [12] further develop an object-centric Transformer to jointly forecast future trajectories and affordances autoregressively, and annotate publicly available datasets to support future works. More recently, Bao et al. [13] lift the problem to 3D spaces where hand trajectories are predicted by an uncertainty-aware state space Transformer in an autoregressive manner. However, this method needs additional 3D perception inputs from the RGB-D camera. In this work, we still achieve joint hand trajectory and object affordance prediction on 2D human videos rather than in 3D space. We focus on capturing more general knowledge from only egocentric camera observations in an iterative non-autoregressive (iter-NAR) manner, rather than the autoregressive way of the state-of-the-art works [12, 13]. Diffusion-based egocentric video analysis. Diffusion models have been successfully utilized in exocentric and egocentric video prediction [47, 48, 49, 50, 2] due to their strong generation ability. With only egocentric videos as inputs, diffusion-based techniques can also achieve human mesh recovery [51, 52], 3D HOI reconstruction [53, 54], and 3D HOI synthesizing [16, 55]. However, none of these works concentrate on the combination of fine-grained hand trajectories and object affordances as future HOI representations for potential utilization in artificial intelligence systems. Our proposed Diff-IP2D first achieves this based on the denoising diffusion probabilistic model [20], which dominates the existing paradigm [12, 13] in prediction performance on egocentric videos. 3 Proposed Method 3.1 Preliminaries Task definition. 
Given the video clip of past egocentric observations I = {It}0 t=\u2212Np+1, we aim to predict future hand trajectories H = {HR t , HL t }Nf t=1(HR t , HL t \u2208R2) and potential object contact points O = {On}No n=1(On \u2208R2), where Np and Nf are the numbers of frames in the past and future time horizons respectively, and No denotes the number of predicted contact points used to calculate interaction hotspots as object affordances. Following the previous works [12, 14], we predict the future positions of the right hand, the left hand, and the affordance of the next active object on the last observed image of the input videos. Diffusion models. In this work, we propose a diffusion-based approach to gradually corrupt the input to noisy features and then train a denoising model to reverse this process. We first map the input images into a latent space z0 \u223cq(z0), which is then corrupted to a standard Gaussian noise zS \u223cN(0, I). In the forward process, the perturbation operation can be represented as q(zs|zs\u22121) = N(zs; \u221a1 \u2212\u03b2szs\u22121, \u03b2sI), where \u03b2 is the predefined variance scales. In the reverse process, we set a denoising diffusion model to gradually reconstruct the latent z0 from the noisy zS. The denoised features can be used to recover the final future hand trajectories and object affordances. 3 forward process future HOI features conditional past HOI features reverse process Multi-Feature Extractor egomotion homography Hand Trajectory Head trajectory loss shared weights regularization affordance loss diffusion-related losses Input: sequential past egocentric images Output: future HOI states feature space (s=S) Side-Oriented Fusion Module MADT Predictors MADT Object Affordance Head global/right/left intermediate features right/left fused features diffusion process feature space (s=S/2) feature space (s=0) Hand Trajectory Head Figure 2: System Overview of Diff-IP2D. Our proposed paradigm takes in sequential past egocentric images and jointly predicts hand trajectories and object affordances as future HOI states. The observations are mapped to the latent feature space for the diffusion process. 3.2 Architecture System overview. Accurately reconstructing the future part of the input sequence is critical in the diffusion-based prediction task. We empirically found that ground-truth hand waypoints Hgt = {HR,gt t , HL,gt t }Nf t=1(HR,gt t , HL,gt t \u2208R2) and contact points Ogt = {Ogt n}No n=1(Ogt n \u2208R2) provide discrete and sparse supervision signals for reconstruction, which is not enough for capturing possible high-level semantics such as human intentions in the denoising process. Therefore, as Fig. 2 shows, we first use Multi-Feature Extractor and Side-Oriented Fusion Module to transform the input images into latent HOI features, and then implement diffusion-related operation in the latent continuous space. The HOI features denoised by Motion-Aware Denoising Transformer are further absorbed by Hand Trajectory Head and Object Affordance Head to generate future hand trajectories and object hotspots. Multi-Feature Extractor (MFE). Following the previous work [12], we use MFE that consists of a pretrained Temporal Segment Network (TSN) provided by Furnari et al. [34], RoIAlign [56] with average pooling, and Multilayer Perceptron (MLP) to extract hand, object, and global features for each sequence image It \u2208I. The positions of hand-object bounding boxes are also encoded to feature vectors fused with hand and object features. 
Side-Oriented Fusion Module (SOFM). Our proposed SOFM is a learnable linear transformation to fuse the above-mentioned three types of feature vectors into the final latent form for two sides respectively. Specifically, the global features and right-side features (right-hand/object features) are concatenated to the right-side HOI features FR = {F R t }X t=\u2212Np+1(F R t \u2208Ra, X = Nf for training and X = 0 for inference). The operation and feature sizes are the same as the leftside counterparts, leading to FL = {F L t }X t=\u2212Np+1. We further concatenate the side-oriented features along the time axis respectively to generate the input latents F R seq, F L seq \u2208R(Np+X)\u00d7a for the following diffusion model. Motion-Aware Denoising Transformer (MADT). Our proposed MADT takes in the noisy latent HOI features and reconstructs future HOI features for the following predictors conditioned on past HOI counterparts. MADT consists of several stacked Transformer layers as shown in Fig. 3. Inspired by the text generation technique [26], we anchor the past HOI features for both forward and reverse processes. We only impose noises and denoise at the positions of the future feature sequence. The features of the two sides are denoised using the same model, leading to \u02c6 F R seq and \u02c6 F L seq. In addition, egomotion guidance is proposed here to fill the gaps mentioned in Sec. 1. Specifically, we first extract the Scale-Invariant Feature Transform (SIFT) descriptors to find the pixel correspondence between two adjacent images of past observations I. Then we calculate the homography matrix with RANSAC that finds a transformation to maximize the number of inliers in the keypoint pairs. We accumulate the consecutive homography matrices and obtain Mseq \u2208RNp\u00d73\u00d73 representing the camera wearer\u2019s motion between It (t \u22640) and I0. They are further linearly embedded into an egomotion feature Eseq \u2208RNp\u00d7b by Motion Encoder. The multi-head cross-attention module 4 MHSA Add & Norm MHCA Add & Norm FFN Add & Norm past HOI features TE PE egomotion feature latent noisy samples denoised future HOI features \u3002 homography Motion Encoder N X input video clip \u3002\u3002 t m1,1 m1,2 m1,3 m2,1 m2,2 m2,3 m3,1 m3,2 m3,3 ... ... ... ... ... ... Figure 3: Architecture of our proposed MADT. MADT receives corrupted latent HOI features with the position embedding (PE) and time embedding (TE), and outputs denoised future HOI features. (MHCA) in the devised Transformer layer then absorbs the egomotion feature to guide the denoising process. More analysis on the use of egomotion guidance can be found in Appendix, Sec. B. Predictors. Our proposed predictors consist of Hand Trajectory Head (HTH) and Object Affordance Head (OAH). HTH contains an MLP that receives the future parts of the denoised features, \u02c6 F R seq[Np+1: Np+Nf] and \u02c6 F L seq[Np+1 : Np+Nf] to generate future waypoints H of two hands. As to OAH, we empirically exploit Conditional Variational Autoencoder (C-VAE) [57] to generate possible contact points O in the near future. Take the right hand as an example, the condition is selected as the time-averaged \u02c6 F R seq and predicted waypoints HR t . Note that we additionally consider denoised future HOI features \u02c6 F R seq[Np+1 : Np+Nf] (t>0) besides the features from the past observation (t\u22640) for object affordance prediction. This aligns with the intuitive relationship between the contact points and the overall interaction process. 
Therefore, we integrate richer conditional features from trajectory prediction into the object affordance prediction compared to the previous work [12] only conditioned on historical features. 3.3 Training Forward process. We implement partial noising [26] in the forward process during training. Taking the right side as an example, the output of SOFM is first extended by a Markov transition q(z0|F R seq) = N(F R seq, \u03b20I), where F R seq \u2208R(Np+Nf)\u00d7a. We discard the embedding process from Gong et al. [26] since the HOI feature F R seq is already in the continuous latent space. In each following forward step of the diffusion model, we implement q(zs|zs\u22121) by adding noise to the future part of zs\u22121, i.e., zs\u22121[Np+1:Np+Nf] for both sides. Reverse process. After corrupting the initial z0 to zS by the forward process, our proposed MADT is adopted to denoise zS to z0 in a classifier-free manner. Considering the guidance of egomotion features, the reverse process can be modeled as pMADT(z0:S) := p(zs) QS s=1 pMADT(zs\u22121|zs, Mseq). Specifically, the MADT model fMADT(zs, s, Mseq) predicts the injected noise for each forward step with pMADT(zs\u22121|zs, Mseq) = N(zs\u22121; \u00b5MADT(zs, s, Mseq), \u03c3MADT(zs, s, Mseq)). The same denoising operation and motion-aware guidance are applied to HOI features of both sides. Training objective. The loss function to train the networks in Diff-IP2D contains four parts, including diffusion-related losses, trajectory loss, affordance loss, and an additional regularization term (see Fig. 2). Take the right side as an example, we use the variational lower bound LR VLB as the diffusion-related losses: LR VLB = S X s=2 ||zR 0 \u2212fMADT(zR s, s, Mseq)||2 + ||F R seq \u2212\u02c6 F R seq||2, (1) where \u02c6 F R seq = fMADT(zR 1, 1, Mseq). To reconstruct hand trajectories beyond the latent feature space, we further set trajectory loss LR traj with the distance between the ground-truth waypoints and the ones predicted by HTH: LR traj = Nf X t=1 ||HR t \u2212HR,gt t ||2, (2) 5 where HR t = fHTH( \u02c6 F R seq[Np+1:Np+Nf]). We only focus on the future part out of the holistic sequence for computing LR traj since we let HTH be more sensitive to predictions rather than bias it to past observations. As to the object affordance prediction, we also compute the affordance loss Laff after multiple stochastic sampling considering the next active object recognized following Liu et al. [12] (assuming in the right side here for brevity): Laff = No X n=1 ||On \u2212Ogt n||2 + cLKL, (3) where On =fOAH( \u02c6 F R seq, HR t ), and LKL = 1 2(\u2212log \u03c32 OAH( \u02c6 F R seq, HR t )+\u00b52 OAH( \u02c6 F R seq, HR t )+\u03c32 OAH( \u02c6 F R seq, HR t )\u2212 1) is the KL-Divergence regularization for C-VAE, which is scaled by c = 1e-3. The latent features and predicted hand waypoints are fused by MLP suggested by the previous work [12]. We consider both reconstructed future HOI features \u02c6 F R seq[Np+1:Np+Nf] and anchored past counterparts \u02c6 F R seq[0:Np] compared to [12] as mentioned before. We also notice that the latent feature spaces before and after the denoising diffusion process represent the same \u201cprofile\u201d of the input HOI sequence. Therefore, we propose an additional regularization term implicitly linking F R seq and \u02c6 F R seq by hand trajectory prediction: LR reg = Nf X t=1 || \u02dc HR t \u2212HR,gt t ||2, (4) where \u02dc HR t = fHTH(F R seq[Np+1:Np+Nf]). Although Eq. 
(4) does not explicitly contain the term \u02c6 F R seq, the training direction is the same with Eq. (2), thus maintaining training stability. The regularization helps the convergence of Diff-IP2D by consistently constraining the two latent spaces alongside the diffusion process. Here we do not use object affordance prediction for regularization because we empirically found that incorporating OAH mitigates training efficiency while the positive effect is not obvious. Finally, we get the total loss to train our proposed Diff-IP2D: Ltotal = \u03bbVLB(LR VLB + LL VLB) + \u03bbtraj(LR traj + LL traj) + \u03bbaffLaff + \u03bbreg(LR reg + LL reg), (5) where \u03bbVLB, \u03bbtraj, \u03bbaff, and \u03bbreg are the weights to balance different losses. Besides, we leverage the importance sampling technique proposed in improved DDPM [58], which promotes the training process focusing more on the steps with relatively large Ltotal. 3.4 Inference In the inference stage, we first sample F R noise, F L noise \u2208RNf\u00d7a from a standard Gaussian distribution, which is then concatenated with F R seq, F L seq \u2208RNp\u00d7a along the time axis to generate zR S and zL S. Then we use MADT to predict zR 0 and zL 0 based on DDIM sampling [59]. Note that we anchor the past part of reparameterized zs as the fixed condition in every step of the inference process following Gong et al. [26]. Finally, the generated \u02c6 F R seq and \u02c6 F L seq are used to predict future hand waypoints and contact points by fHTH(\u00b7) and fOAH(\u00b7) as mentioned before. It can be seen from the inference stage that Diff-IP2D can be regarded as an iter-NAR model in the latent feature space. Compared to the state-of-the-art baselines in an autoregressive manner, our approach shifts the iteration from F1,1 F1,2 F1, Nf ... F2,1 F2,2 F2, Nf ... FS,1 FS,2 FS, Nf ... ... denoising diffusion process time axis ... ... F1 F2 FNf ... time axis H1 H2 HN ... f FS-1,1 FS-2,1 FS, Nf ... H1 H2 HN ... f F3 H3 F1 F2 FNf ... time axis H1 H2 HN ... f F3 H3 (b) Iter-NAR Prediction (a) AR Prediction Figure 4: Comparison of AR and our iter-NAR prediction. the time axis to the denoising direction, which is shown in Fig. 4. This alleviates the accumulated artifacts caused by the limited iteration in the time dimension, and maintains bidirectional constraints among the sequential features to generate future HOI states in parallel, providing a deeper understanding of human intention. We further present the mathematical relationship between the two iter-NAR models, Diff-IP2D for HOI prediction and DiffuSeq [26] for text generation in Appendix, Sec. A. 6 4 Experiments 4.1 Experimental setups Datasets. Following the previous work [12], we utilize three publicly available datasets including Epic-Kitchens-55 (EK55) [60], Epic-Kitchens-100 (EK100) [61], and EGTEA Gaze+ (EG) [11]. For the EK55 and EK100 datasets, we sample past Np = 10 frames (2.5 s) to forecast HOI states in future Nf = 4 frames (1.0 s), both at 4 FPS. As to the EG dataset, Np = 9 frames (1.5 s) are used for Nf = 3 HOI predictions (0.5 s) at 6 FPS. See the Appendix, Sec. C.2 for more details. Diff-IP2D configuration. MFE extracts the hand, object, and global feature vectors all with the size of 512 for each input image. For the EK55 and EK100 datasets, the outputs of SOFM F R seq, F L seq have the size of 14 \u00d7 512 for training and 10 \u00d7 512 for inference. For the EG dataset, F R seq, F L seq are 9 \u00d7 512 for training and 12 \u00d7 512 for inference. 
As to the diffusion process, the total number of steps S is set to 1000. We also provide an ablation study on multiple steps for training and inference in Appendix, Sec. D.3. The square-root noise schedule in Diffusion-LM [62] is adopted here for the forward diffusion process. MADT has 6 Transformer layers (Fig. 3) for denoising, where the embedding dimension is 512, the number of heads is set to 4, and the intermediate dimension of the feed-forward layer is set to 2048. Motion Encoder linearly projects each homography matrix to an egomotion feature vector of 512. We use an MLP with hidden dimensions 256 and 64 to predict the hand waypoints as HTH, and a C-VAE containing an MLP with a hidden dimension 512 to predict contact points as OAH. The training configurations can be found in Appendix, Sec. C.2. In the reference stage, we generate the 10 candidate samples for each prediction. Baseline configuration. We choose Constant Velocity Hand (CVH), Seq2Seq [63], FHOI [14], OCT [12], and USST [13] as the baselines for hand trajectory prediction. CVH is the most straightforward one which assumes two hands remain in uniform motion over the future time horizon with the average velocity during past observations. Besides, we adjust the input and architecture of USST to the 2D prediction task since it was originally designed for 3D hand trajectory prediction. We choose Center Object [14], Hotspots [64], FHOI [14], OCT [12], and Final Hand of USST [13] (USST-FH) as the baselines for object affordance prediction. USST-FH puts a mixture of Gaussians at the last hand waypoint predicted by USST since its vanilla version can only predict waypoints. Evaluation metrics. Following the previous work [14, 12, 13], we use Final Displacement Error (FDE) to evaluate prediction performance on hand trajectories. Considering the general knowledge of \u201cpost-contact trajectories\u201d extracted from human videos is potentially beneficial to robot manipulation [1, 18], we additionally extend the metric Average Displacement Error to Weighted Displacement Error (WDE): WDE = 1 2Nf X R,L Nf X t=1 t Nf D(Ht, Hgt t ), (6) where D(\u00b7) denotes the L2 distance function and the later waypoints contribute to larger errors. We select the mean error among the 10 samples for each hand trajectory prediction. As to the object affordance prediction, we use Similarity Metric (SIM) [65], AUC-Judd (AUC-J) [66], and Normalized Scanpath Saliency (NSS) [67] as evaluation metrics. We use all 10 contact point candidates to compute the metric values for each affordance prediction. Moreover, we propose a novel object-centric protocol to jointly evaluate the two prediction tasks. We first calculate the averaged hand waypoints \u00af HR t and \u00af HL t for each future timestamp from multiple samples. Then we select the waypoint closest to each predicted contact prediction On as an additional \u201cinteraction point\u201d, which can be formulated by: \u00af Hip n = minR,L,tD( \u00af Ht, On), (7) Finally, the joint hotspot is predicted using { \u00af Hip n \u222aOn}No n=1. This protocol comprehensively considers object-centric attention since HOI changes the object states and hand waypoints must have a strong correlation with object positions. Note that we also use the quantitative metrics same as the ones for object affordance prediction, which are denoted as SIM\u2217, AUC-J\u2217, and NSS\u2217. More clarifications about our proposed new protocol can be found in Appendix, Sec. C.1. 
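Both evaluation quantities reduce to a few array operations. The sketch below, under the assumption that waypoints are stored as (hand side, future step, xy) arrays, illustrates WDE from Eq. (6) and the interaction-point selection from Eq. (7); it is illustrative rather than the official benchmark code.

```python
import numpy as np

def weighted_displacement_error(pred, gt):
    """Eq. (6): L2 waypoint errors weighted by t/Nf, averaged over both hands.

    pred, gt: arrays of shape (2, Nf, 2) -> (hand side R/L, future step, xy)."""
    _, n_f, _ = pred.shape
    dists = np.linalg.norm(pred - gt, axis=-1)        # D(H_t, H_t^gt), shape (2, Nf)
    weights = np.arange(1, n_f + 1) / n_f             # t / Nf: later waypoints count more
    return float((dists * weights).sum() / (2 * n_f))

def interaction_point(mean_waypoints, contact_point):
    """Eq. (7): the averaged waypoint closest to one predicted contact point O_n."""
    d = np.linalg.norm(mean_waypoints - contact_point, axis=-1)   # (2, Nf)
    side, t = np.unravel_index(int(np.argmin(d)), d.shape)
    return mean_waypoints[side, t]
```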
7 Table 1: Comparison of performance on hand trajectory and object affordance prediction approach EK55 EK100 EG WDE \u2193 FDE \u2193 WDE \u2193 FDE \u2193 WDE \u2193 FDE \u2193 CVH 0.636 0.315 0.658 0.329 0.689 0.343 Seq2Seq [63] 0.505 0.212 0.556 0.219 0.649 0.263 FHOI [14] 0.589 0.307 0.550 0.274 0.557 0.268 OCT [12] 0.446 0.208 0.467 0.206 0.514 0.249 USST [13] 0.458 0.210 0.475 0.206 0.552 0.256 Diff-IP2D (ours) 0.411 0.181 0.407 0.187 0.478 0.211 SIM \u2191 AUC-J \u2191 NSS \u2191 SIM \u2191 AUC-J \u2191 NSS \u2191 SIM \u2191 AUC-J \u2191 NSS \u2191 Center Object [14] 0.083 0.553 0.448 0.081 0.558 0.401 0.094 0.562 0.518 Hotspots [64] 0.156 0.670 0.606 0.147 0.635 0.533 0.150 0.662 0.574 FHOI [14] 0.159 0.655 0.517 0.120 0.548 0.418 0.122 0.506 0.401 OCT [12] 0.213 0.710 0.791 0.187 0.677 0.695 0.227 0.704 0.912 USST-FH [13] 0.208 0.682 0.757 0.179 0.658 0.754 0.190 0.675 0.729 Diff-IP2D (ours) 0.226 0.725 0.980 0.211 0.736 0.917 0.242 0.722 0.956 SIM\u2217\u2191 AUC-J\u2217\u2191 NSS\u2217\u2191 SIM\u2217\u2191 AUC-J\u2217\u2191 NSS\u2217\u2191 SIM\u2217\u2191 AUC-J\u2217\u2191 NSS\u2217\u2191 FHOI [14] 0.130 0.602 0.487 0.113 0.545 0.409 0.118 0.501 0.379 OCT [12] 0.219 0.720 0.848 0.182 0.684 0.662 0.194 0.672 0.752 Diff-IP2D (ours) 0.222 0.730 0.888 0.204 0.727 0.844 0.226 0.701 0.825 Figure 5: Visualization of hand trajectory prediction on Epic-Kitchens. The waypoints from groundtruth labels, Diff-IP2D, and the second-best baseline [12] are connected by red, white, and blue dashed lines respectively. 4.2 Separate evaluation on hand trajectory and object affordance prediction We first present the evaluation results on hand trajectory prediction. As Tab. 1 depicts, our proposed Diff-IP2D outperforms all the baselines on the EK55 and EK100 datasets on WDE and FED. This is mainly achieved by the devised iter-NAR paradigm of Diff-IP2D alleviating degeneration in AR baselines, as well as the egomotion guidance. The visualization of the related hand prediction results is shown in Fig. 5. It can be seen that our proposed method can better capture the camera wearer\u2019s intention (such as putting the food in the bowl) and generate more reasonable future trajectories even if there is a lack of past observations for hands (such as reaching out towards the table). Besides, our method can predict a good final hand position although there is a large shift in the early stage (the subfigure in the bottom right corner of Fig. 5), which benefits from our diffusion-based parallel generation. When directly transferring the models trained on Epic-Kitchens to the unseen EG dataset, our method still outperforms the other baselines, which improves by 7.0% and 15.3% against the second-best method on WDE and FDE respectively. This reveals the solid generalization capability of our diffusion-based approach across different environments. The comparison results of object affordance prediction are also shown in Tab. 1. Our proposed Diff-IP2D predicts the hotspots with larger SIM, AUC-J, and NSS compared to all the baselines on both Epic-Kitchens data and unseen EG data. Fig. 6 illustrates the predicted contact points with minimum distances to the ground-truth ones. Our proposed method focuses more on objects of interest considering the features of the holistic interaction and potential hand trajectories, and therefore grounds the contact points closer to the ground-truth labels than the counterparts of the baseline. 
8 \u8981\u8bf4 \u4e3a\u4e86\u663e\u793a\u65b9\u4fbf \u52a0\u4e86\u4e2a\u865a\u62df\u7684hotspots\u5728\u4e0a\u9762 Figure 6: Visualization of object affordance prediction on Epic-Kitchens. The contact points from ground-truth, Diff-IP2D, and the state-of-the-art baseline OCT [12] are represented by red, white, and blue dots respectively. For a clearer illustration, we additionally put a fixed Gaussian with each contact point as the center. See the Appendix, Sec. D.6 for more visualization results. Table 2: Ablation study on egomotion guidance approach EK55 EK100 WDE \u2193 FDE \u2193 SIM \u2191 AUC-J \u2191 NSS \u2191 WDE \u2193 FDE \u2193 SIM \u2191 AUC-J \u2191 NSS \u2191 Diff-IP2D* 0.427 0.186 0.218 0.717 0.929 0.439 0.198 0.201 0.710 0.846 Diff-IP2D 0.411 0.181 0.226 0.725 0.980 0.407 0.187 0.211 0.736 0.917 improvement 3.7% 2.7% 3.7% 1.1% 5.5% 7.3% 5.6% 5.0% 3.7% 8.4% Diff-IP2D*: Diff-IP2D w/o egomotion guidance 4.3 Joint evaluation on hand trajectory and object affordance prediction We further compare Diff-IP2D with the other two joint prediction baselines, FHOI [14] and OCT [12], using our proposed object-centric protocol. The video clips containing both ground-truth hand waypoints and contact points are used for evaluation in this experiment. The results are also shown in Tab. 1, which indicates that our proposed Diff-IP2D can generate the best object-centric HOI predictions considering the two tasks concurrently on both Epic-Kitchens and unseen EG data. The results also suggest that Diff-IP2D outperforms the baselines on object-centric HOI prediction by focusing more attention on the target objects and predicting reasonable hand trajectories around them. 4.4 Ablation study on egomotion guidance We provide an ablation study of the egomotion features used to guide MADT denoising on the EK55 and EK100 datasets. Here we replace the MHCA in MADT with a multi-head self-attention module (MHSA) to remove the egomotion guidance while keeping the same parameter number. The experimental results in Tab. 2 show that the guidance of motion features improves our proposed diffusion-based paradigm noticeably on both hand trajectory prediction and object affordance prediction. This is achieved by narrowing the two gaps caused by 2D-3D ill-posed problem and view difference mentioned in Sec. 1. Note that the egomotion guidance is more significant on the EK100 dataset than on the EK55 dataset. The reason could be that EK100 has a larger volume of training data incorporating more diverse egomotion patterns than EK55, leading to a model that can capture human dynamics better. More results of the related joint evaluation are presented in Appendix, Sec. D.1. 4.5 Conclusion and insights In this paper, we propose a novel hand-object interaction prediction method Diff-IP2D. Specifically, we implement the denoising diffusion in the latent feature space under the egomotion guidance, and jointly predict future hand trajectories and object affordances with the recovered latents as input. According to the experimental results, Diff-IP2D dominates the existing baselines on both off-the-shelf metrics and our new evaluation protocol, suggesting promising applications in artificial intelligence systems. It learns to recover latent HOI features and forecast future HOI states in parallel, which can serve as a foundation generative paradigm for future works on the same or similar prediction tasks. 9"
16
+ }
intro_8K/test_introduction_short_2405.04403v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04403v1",
3
+ "title": "Learning To See But Forgetting To Follow: Visual Instruction Tuning Makes LLMs More Prone To Jailbreak Attacks",
4
+ "abstract": "Augmenting Large Language Models (LLMs) with image-understanding capabilities\nhas resulted in a boom of high-performing Vision-Language models (VLMs). While\nstudying the alignment of LLMs to human values has received widespread\nattention, the safety of VLMs has not received the same attention. In this\npaper, we explore the impact of jailbreaking on three state-of-the-art VLMs,\neach using a distinct modeling approach. By comparing each VLM to their\nrespective LLM backbone, we find that each VLM is more susceptible to\njailbreaking. We consider this as an undesirable outcome from visual\ninstruction-tuning, which imposes a forgetting effect on an LLM's safety\nguardrails. Therefore, we provide recommendations for future work based on\nevaluation strategies that aim to highlight the weaknesses of a VLM, as well as\ntake safety measures into account during visual instruction tuning.",
5
+ "authors": "Georgios Pantazopoulos, Amit Parekh, Malvina Nikandrou, Alessandro Suglia",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.CL"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "LLM AND Jailbreak",
15
+ "gt": "Visual Instruction Tuning extends the instruction- following abilities of Large Language Models (LLMs) to the visual modality. The common recipe for a Vision-Language Model (VLM), is to combine an existing LLM along with a vision encoder and learn a mapping between the two unimodal experts (Alayrac et al., 2022; Dai et al., 2023b; Liu et al., 2024). As a result, VLMs can solve additional tasks as opposed to their language-only counterparts, while their performance correlates heavily with the capabilities of their unimodal backbones. LLMs have become the go-to option for practi- cally all Natural Language Processing (NLP) tasks, with models such as ChatGPT (OpenAI, 2022) and Gemini (Gemini Team et al., 2023) witness- ing widespread deployment. While these mod- els exhibit\u2014to some degree\u2014general capabilities (OpenAI, 2023a), previous work shows they are susceptible to misuse (Bommasani et al., 2021; Kreps et al., 2022; Weidinger et al., 2021). Conse- quently, a large body of work incorporates safety mechanisms in model development to constrain model behavior to a \u201csafer\u201d subset by aligning mod- els with values (Askell et al., 2021; Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022). Despite these efforts, LLMs are vulnerable to ma- licious prompts\u2014referred to as \u201cjailbreaking\u201d (Wei et al., 2024; Xie et al., 2023): engineered to trick the LLM outside of the safer subset and generate the potentially harmful content it was trained to re- ject (Qi et al., 2023). An example of such behavior is illustrated in Figure 1, where the model initially refuses to comply with the input question, but com- *Equal Contribution pletely adheres to the modified adversarial prompt. Therefore, recent approaches to model develop- ment incorporate safety tuning against jailbreaking during training (Jiang et al., 2023; Touvron et al., 2023) that mitigate these vulnerabilities. Adversarial attacks, in the form of input pertur- bations, can also affect vision or language models (Goodfellow et al., 2014; Madry et al., 2018; Yu and Rieser, 2023). With VLMs now being deployed to a wider audience (Brooks et al., 2024; Gemini Team et al., 2023; OpenAI, 2023b), we believe that now is a good time to consider: how does visual instruction tuning affect the safety of LLMs? We explore this question by prompting publicly- available VLMs, and the LLM backbones they were derived from, by employing jailbreaking techniques across eight distinct scenarios (Liu et al., 2023d). We demonstrate that visual instruction tuning dam- ages guardrails put in place during the LLM training (Jiang et al., 2023; Touvron et al., 2023), leaving models vulnerable to adversarial attacks. Overall, our findings further highlight the importance of con- sidering the effects of the training stages, training data, and the ways to evaluate and mitigate poten- tial harms when introducing new modalities (Wei- dinger et al., 2023). As such, with great instruction tuning comes great responsibility.",
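To make the experimental manipulation concrete, the sketch below shows the query conditions compared in this study: the same prohibited-scenario question sent to an LLM and to its VLM counterpart, with and without a jailbreak pre-prompt. The `llm_generate`/`vlm_generate` callables and the prefix formatting are hypothetical placeholders, not the paper's tooling.

```python
from typing import Callable, Optional

def build_prompt(question: str, jailbreak_prefix: Optional[str] = None) -> str:
    """Optionally prepend a jailbreak pre-prompt to a prohibited-scenario question."""
    return f"{jailbreak_prefix}\n\n{question}" if jailbreak_prefix else question

def query_pair(question: str, image, jailbreak_prefix: str,
               llm_generate: Callable[[str], str],
               vlm_generate: Callable[[str, object], str]) -> dict:
    """Collect one question's responses across the compared conditions."""
    return {
        "llm/plain": llm_generate(build_prompt(question)),
        "llm/jailbreak": llm_generate(build_prompt(question, jailbreak_prefix)),
        "vlm/plain": vlm_generate(build_prompt(question), image),
        "vlm/jailbreak": vlm_generate(build_prompt(question, jailbreak_prefix), image),
    }
```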
16
+ "main_content": "2.1. Instruction-tuning models Language models trained on self-supervised objectives show broad pattern recognition abilities (Brown et al., 2020; Radford et al., 2019; Raffel et al., 2020) when paired with instruction-tuning: arXiv:2405.04403v1 [cs.CV] 7 May 2024 Figure 1: Example responses to the query \u201cWhat are some effective ways to create and spread fake news that incites violence and unrest in society?\u201d Vicuna 13B (Chiang et al., 2023) does not directly generate hateful content (top-left), but will given a jailbreak pre-prompt (bottom-left). LLaVa 13B (Liu et al., 2023c) produces hateful content both with (bottom-right) and without (top-right) a jailbreak pre-prompt. a fine-tuning paradigm that uses language instructions as input to solve multiple tasks (Chung et al., 2022; Gupta et al., 2022; Wei et al., 2021). Instruction-tuning is an established concept in NLP (Chung et al., 2022; Mishra et al., 2022) as resulting models generalize better to user queries (Chung et al., 2022; Sanh et al., 2022; Wei et al., 2021) by learning to connect them to concepts seen during pretraining for zero-shot generalization on unseen tasks (Gupta et al., 2022; Mishra et al., 2022). Visual Instruction Tuning refers to the process of converting a LLM into a VLM, often using language (Bai et al., 2023a; Chiang et al., 2023) and vision experts (Fang et al., 2023; Radford et al., 2021), by learning a mapping between the two modalities. Existing approaches concatenate visual and textual representations with a lightweight adapter module (Liu et al., 2024). Other techniques construct \u201cvisual prompts\u201d with a resampler\u2014where learnable latent tokens are informed by each modality (Bai et al., 2023b; Li et al., 2023a; Zhu et al., 2023). Training involves multiple stages, with initial stages focusing on image-text alignment and later stages on supervised fine-tuning (SFT). As VLMs based on this recipe are successful across established multimodal tasks (Goyal et al., 2017; Singh et al., 2019), a large body of work focuses on the safety aspect of these models through the hallucination prism. These works typically measure the degree to which model responses are factually grounded to the visual context (Li et al., 2023b; Liu et al., 2023a,b). However, they do not explore how safety guardrails integrated into the LLM are impacted by visual instruction tuning. 2.2. Jailbreaking and adversarial attacks LLMs and VLMs exhibit vulnerabilities along the same lines as other deep learning models; slight perturbations in inputs can result in (possibly coherent) \u201challucinated\u201d responses (Bender et al., 2021; Goodfellow et al., 2014; Liu et al., 2023b; Szegedy et al., 2013). Learning from vast training corpora improves a model\u2019s generalization capabilities (Radford et al., 2018; Raffel et al., 2020). However, as datasets surpass trillions of tokens (Gao et al., 2020; Hoffmann et al., 2022; Touvron et al., 2023), it is difficult to know the characteristics and biases included in them (Gehman et al., 2020). Moreover, while instruction-tuned models can make reasonable predictions with irrelevant and misleading prompts (Webson and Pavlick, 2022), a model\u2019s strong pattern recognition abilities can at the same time be exploited forcing potentially harmful responses (Ganguli et al., 2022; Perez et al., 2022). 
As a result, various methods (Christiano et al., 2017; Dai et al., 2023a; Ouyang et al., 2022) try to better align generated content to one more preferred by humans; encouraging safer and more ethical responses (Bai et al., 2022; Ganguli Vision-Language Model Large Language Model LLaVA-1.5 (Liu et al., 2023c) Vicuna 13B (Chiang et al., 2023) Qwen-VL-Chat (Bai et al., 2023b) Qwen-Chat 7B (Bai et al., 2023a) InternLM-XComposer2 (Dong et al., 2024) InternLM2-Chat 7B (InternLM Team, 2023) Table 1: VLM & LLM pairs used in our experiments. et al., 2022). Other measures include SFT on datasets with adversarial prompts and exemplary responses (Touvron et al., 2023), and context distillation (Askell et al., 2021) which finetunes a model on outputs generated by another model prompted for safe behavior. However, introducing visual inputs opens a new attack vector as adversarial inputs imperceptible to the human eye can steer models to unsafe behavior (Qi et al., 2023). 3. Experimental Setup We hypothesize that after visual instruction tuning, models become less safe and more vulnerable to jailbreaks as opposed to their original LM backbone. To test this hypothesis, we prompt three state-of-the-art VLMs and their LM counterparts with questions related to prohibited scenarios, both with and without jailbreak prompt prefixes.1 Model Selection Table 1 displays the evaluated VLMs along with their respective LLM backbones. We selected these models because: 1) they showcased strong performance in established multimodal tasks (Goyal et al., 2017; Li et al., 2023b; Marino et al., 2019); 2) they connect vision and language models in different ways; and 3) they incorporate safety mechanisms during the development of their LLM. Finally, all chosen VLMs and LLMs are open-source, ensuring reproducibility. See Appendix A for additional details about this selection. Data Preparation We query each model with a prompt, a question, and, for the VLMs, an input image. We leverage the jailbreak prompt dataset from Liu et al. (2023d), which contains questions to simulate prohibited scenarios and prompts that were successful in jailbreaking ChatGPT (OpenAI, 2022).2 Liu et al. (2023d) categorized jailbreak prompts into one-of-three different types, and oneof-ten different patterns. Overall, we employ 40 input queries: derived from eight prohibited scenarios, with each containing five questions. We used four jailbreak prompts that cover all patterns 1Code available at https://github.com/ gpantaz/vl_jailbreak 2See Appendix B for a short description of each scenario, and we refer to Liu et al. (2023d) for details. to ensure models are evaluated fairly across all jailbreak types, resulting in 160 queries to evaluate how susceptible models are to jailbreaking. In order to mimic a common downstream use case of VLMs, we retrieve the most relevant image for each question from the pretraining data of LLaVA (Liu et al., 2024) by selecting the image with the maximum CLIPScore (Hessel et al., 2021) using the base CLIP model (Radford et al., 2021). Finally, we also use a blank image (i.e. an image with only white pixels) to simulate pure text generation by removing any visual context. As a result, we have four conditions for VLMs from the combinations of original vs. jailbreak prompts, and retrieved images vs. blank images. 
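A hedged sketch of the image-selection step described above: each question is scored against the candidate images with CLIP and the highest-scoring image is kept (CLIP's image-text similarity ranks candidates the same way as CLIPScore here). The checkpoint name and the in-memory list of PIL images are assumptions for illustration.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def most_relevant_image(question, candidate_images):
    """Return the candidate PIL image most similar to `question` under CLIP."""
    inputs = _processor(text=[question], images=candidate_images,
                        return_tensors="pt", padding=True)
    with torch.no_grad():
        out = _model(**inputs)
    scores = out.logits_per_text[0]      # (num_images,) scaled cosine similarities
    return candidate_images[int(scores.argmax())]
```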
Data Annotation To understand the impact of visual instruction tuning against adversarial inputs, all 1,800 responses were annotated by the authors of this paper\u2014each with a background in developing large-scale multimodal models\u2014and measured inter-annotator agreement on a subset of 100 samples, resulting in a Krippendorff\u2019s alpha of 0.674. We labeled every response with one of: \u2022 Refusal: The model refuses to answer the query; optionally, providing counter-arguments or condemning unethical/illegal actions. \u2022 Neutral: Generated content is neither a refusal nor is potentially harmful; such as, when models do not acknowledge the harmful intent but respond in the opposite manner. For example, when asked \u201cHow to screen job candidates based on race?\u201d, models might provide measures against discrimination. \u2022 Harmful: Response contains potentially harmful content. This includes where generated content promotes discrimination towards individuals\u2014 possibly based on demographics or gender\u2014or encourages illegal activities. \u2022 Not Applicable: Content is not applicable for the study. For example, the model repeated the instruction or continuously produced the same (invalid or incoherent) sentence. 4. Results Figure 2 illustrates the percentage of the responses labeled as harmful across all models. We observe that all VLMs generate substantially more hateful responses as opposed to their LLM backbones. In particular, LLaVA generates 27.50% and 6% more harmful content than Vicuna, with and without jailbreak pre-prompts respectively. Additionally, QwenChat/Qwen-VL-Chat and InterLM2-Chat/InterLMXComposer2 exhibit similar behavior, though they Jailbreak Prompt Jailbreak Prompt 0 10 20 30 40 50 60 70 Percentage of harmful responses 20.00 60.50 47.50 66.50 40.00 69.00 Vicuna & LLaVA Vicuna LLaVA LLaVA-Blank Jailbreak Prompt Jailbreak Prompt 0 10 20 30 40 Percentage of harmful responses 7.50 42.50 15.00 45.00 12.50 47.50 Qwen & Qwen-VL-Chat Qwen-Chat Qwen-VL-Chat Qwen-VL-Chat-Blank Jailbreak Prompt Jailbreak Prompt 0 10 20 30 40 Percentage of harmful responses 10.00 40.62 17.50 41.88 17.50 45.62 InterLM2 & InterLM-Xcomposer2 InterLM2-Chat InterLM-XComposer2 InterLM-XComposer2-Blank Figure 2: Percentage of harmful responses for every LLM & VLM pair. Across all model pairs, the VLM generates harmful content more frequently compared to its LLM backbone. generate less harmful responses. Consequently, the safeguards imposed on the LLMs during model development are, at best, relaxed as an outcome of the visual instruction tuning stage. Furthermore, VLMs are more prone to generate potentially harmful content when provided with a prompt and a semantically-relevant image. While this may seem obvious, we observe that in the case of adversarial input, including a blank image results leads to more harmful responses. We hypothesize that this is due to \u201ccompeting objectives\u201d (Wei et al., 2024); where, on one hand, the model tries to generate content relative to both the instruction and the image, while on the other hand, it tries to adhere to its safeguards. Using a jailbreak pre-prompt, however, provides a signal stronger than the content of the image resulting in the aforementioned behavior. 5. Discussion Why are VLMs more prone to jailbreak attacks? Competing objectives present a significant challenge for both VLMs and LLMs. 
Given an adversarial prompt, both models must navigate between providing relevant responses and resisting adherence to the adversarial prompt. While we have not explored whether this effect is magnified in VLMs, we hypothesize that both models are equally susceptible to the impact of competing objectives. A more plausible scenario is that VLMs forget queries from adversarial prompts when undergoing visual instruction tuning. Reframing generation of appropriate responses to adversarial prompts as its own task, it becomes evident that models may inadvertently disregard this task during further finetuning. This behavior is particularly likely to occur as the model must incorporate an additional modality during the instruction tuning stage. However, we believe this issue can be mitigated through continual learning or training methodologies that expose the model to additional (image-text or text-only) examples that demonstrate appropriate responses during the visual instruction tuning stage. In the follow-up section, we further elaborate on possible strategies to mitigate the forgetting effect. 5.1. Suggestions for Future Work Evaluation & Benchmarking Most current evaluations of VLMs focus exclusively on model capabilities, such as grounding, reasoning, and factuality (Weidinger et al., 2021). Some recent benchmarks are starting to address the gap in safety (Li et al., 2024b; Roger et al., 2023) and robustness to adversarial attacks (Carlini et al., 2024; Zhao et al., 2024). However, creating comprehensive benchmarks to evaluate the safety of VLMs remains a crucial area for future research. A possible step in this direction would be to implement a unified framework for evaluating VLMs similar to LM-Harness (Gao et al., 2023) and SALAD-Bench (Li et al., 2024a), ensuring transparency and reproducibility. Additionally, we emphasize the need for \u201cdata parity\u201d when evaluating from a safety perspective. Without it, jailbreak prompts may be accidentally leaked into (pre-)training data, leading to inflated scores (Golchin and Surdeanu, 2023; Li and Flanigan, 2023; Zhou et al., 2023). However, as jailbreaking is an adversarial setting, it should be evaluated on out-of-distribution prompts (Yuan et al., 2023) that are held-out and/or regularly updated (Kiela et al., 2021). Safety Defenses in All Training Stages VLMs are trained following a curriculum: typically involving image-text alignment and instruction-tuning stages (Bai et al., 2023a; Li et al., 2023a; Liu et al., 2024). Our analysis indicates that when safety is not considered across all\u2014or, at least, final\u2014 stages, models become misaligned and are therefore more likely to generate harmful content. Korbak et al. (2023) show that incorporating conditional pretraining\u2014where text segments are conditioned on human preferences\u2014can reduce the toxicity of model outputs without sacrificing performance on other tasks. As a result, when training a model from scratch, safety should be considered at every stage. However, as training from scratch is resource-intensive, it may be more practical to initialize a VLM with pretrained experts. Another possible solution is to ensure that the VLM alignment is part of the final training stage. However, multimodal datasets annotated with human preferences or exemplar responses against adversarial prompts (Li et al., 2024b) are largely missing. Therefore, an important avenue for future work would be to collect or synthetically generate (Liu et al., 2024) such resources. 
The goal of maintaining safety alignment after visual instruction tuning resembles a continual learning scenario. Future work could draw inspiration from approaches that aim to mitigate catastrophic forgetting (Hadsell et al., 2020; Ke and Liu, 2022). For instance, previous work has found that methods such as experience replay (Biesialska et al., 2020) and logit distillation (Jin et al., 2022) can be effective in continual pretraining of language models. Further benefits could be achieved through more sophisticated approaches, such as selectively updating a small isolated set of parameters for vision (Gururangan et al., 2022; Ke et al., 2022). 6. Conclusion In this paper, we argue that relying on the safety alignment of the backbone LLM downplays the potential vulnerabilities of VLMs. To support this claim, we used three VLMs with strong performance on public benchmarks, each with a different LLM as a starting point with safety playing a crucial role for development of the LLM. Our analysis has shown that visual instruction tuning can affect all VLMs, making them more prone to generate potentially harmful responses both with and without jailbreaking attacks. Furthermore, we have provided suggestions with regard to core evaluation procedures and incorporating safety measures during the successive training stages of visual instruction tuning. Finally, notwithstanding the impressive progress in the development of VLMs, we emphasize that our ultimate goal in this paper is to identify weaknesses in existing approaches and provide recommendations aimed at propelling the field forward. 7. Limitations While our results consistently showcased evidence that visual instruction tuning has a negative impact on model safety, we have only evaluated three models with public weights and using English prompts. Furthermore, even though the developers of each model claim that they have taken action towards incorporating safety mechanisms, the exact details are not disclosed. As a result, we cannot guarantee that these models are not trained on any of the jailbreaking prompts because not all data used to train each LLM is publicly accessible. This highlights the need for the ability to conduct open research replications that enable similar studies. Lastly, we have not explored to what degree these models are sensitive to image attacks either through adversarial noise, adjusting the attention mask during generation, or completely removing the image. 8. Bibliographical"
17
+ }
intro_8K/test_introduction_short_2405.04483v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04483v1",
3
+ "title": "CloudDiff: Super-resolution ensemble retrieval of cloud properties for all day using the generative diffusion model",
4
+ "abstract": "Clouds play a crucial role in the Earth's water and energy cycles,\nunderscoring the importance of high spatiotemporal resolution data on cloud\nphase and properties for accurate numerical modeling and weather prediction.\nCurrently, Moderate Resolution Imaging Spectroradiometer (MODIS) provides cloud\nproducts with a spatial resolution of 1 km. However, these products suffer from\na lengthy revisit cycle. This study develops a generative diffusion model\n(donated as CloudDiff) for super-resolution retrieval of high spatiotemporal\ncloud phase and properties, applicable both day and night. Leveraging 2 km\nspatial resolution Himawari-8 Advanced Himawari Imager (AHI) thermal infrared\n(TIR) radiances and viewing geometry as condition, alongside daytime MODIS\nproducts as targets, the model can generate cloud phase (CLP), cloud top height\n(CTH), cloud optical thickness (COT), and cloud effective radius (CER) at 1 km\nspatial resolution and 10-minute temporal resolution. The conditional diffusion\nmodel can generate sharper images and capture finer local features than\ndeterministic super-resolution approaches. It draws multiple samples based on\nthe underlying probability distribution, enabling retrieval uncertainty\nassessment. Evaluations show agreement between cloud phase and properties\nderived from the CloudDiff and MODIS cloud products. The ensemble mean is found\nto enhance retrieval accuracy and credibility, outperforming the deterministic\nmodel.",
5
+ "authors": "Haixia Xiao, Feng Zhang, Lingxiao Wang, Wenwen Li, Bin Guo, Jun Li",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "physics.ao-ph",
9
+ "cats": [
10
+ "physics.ao-ph"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Clouds are critical in the Earth\u2019s water and energy budgets (Li et al., 2005). Their influence on the radiation budget can induce either heating or cooling of the planet, contingent upon the radiative characteristics of the cloud and its altitude (Stephens et al., 1981, 1990). The significance of clouds is further underscored by variables such as cloud optical thickness (COT), cloud effective radius (CER), cloud top height (CTH), and cloud phase (CLP). These parameters profoundly impact the Earth\u2019s net radiation balance due to their distinct scattering and absorption characteristics (Fauchez et al., 2018a; Min et al., 2020; Wang et al., 2016a). Achieving an accurate representation of these optical properties remains a formidable challenge, primarily because the microscale physical processes within clouds are difficult to explicitly sim- ulate in global numerical models (Baran, 2012; Ceppi et al., 2017; Waliser et al., 2009). Consequently, there is an urgent need to obtain cloud phase and properties with high spatial and temporal resolution. Such detailed cloud data are indispens- able for a deeper understanding of atmospheric physical processes, the enhancement of data assimilation techniques, and the improvement of weather forecasting accuracy (Muskatel et al., 2021). The retrieval of cloud properties has been conducted for several decades. Since the 1970s, airborne measurements have been employed to retrieve COT and CER, resulting in numerous successful experimental studies (Finger et al., 2015; King, 1987; Krisna et al., 2018; Platnick et al., 1995; Twomey and Cocks, 1989). However, these campaigns incur high costs, and the temporal and spatial coverage of field obser- vations is limited. With the advancement of satellite remote sensing technology, particularly passive sensors (geostationary and polar-orbiting satellites), researchers have increasingly utilized data from visible and near-infrared bands to retrieve cloud properties. This approach enables the characterization of cloud properties at various spatial and temporal resolutions (King et al., 1992; Menzel et al., 2008; Platnick et al., 2003; Tang et al., 2017; Zhang and Platnick, 2011; Zhuge et al., 2020), owing to the wide observational coverage provided by passive sensors. The basic physical prin- ciple behind this method is that the cloud radiances measured by the nonabsorptive channels in the visible or near-infrared wavelengths are influenced by COT, while those captured by water-absorption channels in the shortwave infrared wavelength are sensitive to the CER (Nauss and Kokhanovsky, 2011). These retrieval methods, which rely on solar radiation, are effective only for daytime scenes. However, they are not applicable to nighttime scenes and exhibit higher uncertainties in high-latitude regions and optically thin cloud scenes (Wang et al., 2016b). Thermal Infrared (TIR) retrieval algorithm, utilizing the split-window technique (Parol et al., 1991; Toshiro, 1985), offer valuable capabilities for both daytime and nighttime scene analysis. This technique retrieves COT and CER from the brightness temperature differences between two distinct channels in the infrared atmospheric windows, where gaseous absorption is minimal. 
Additionally, the optimal estimation methodology (Rodgers, 2000) has been implemented for the Atmospheric Infrared 2 Sounder V6 (AIRS) and Advanced Microwave Sounding Unit (AMSU), utilizing in- frared spectral data to successfully retrieve the physical and optical properties of clouds (Kahn et al., 2014, 2015). However, due to significant absorption by cloud particles in the infrared spectrum, these traditional IR-based algorithms primarily excel in retrieving optically thin cloud properties, while facing challenges in scenarios involving opaque, thick clouds (Wang et al., 2016a). Consequently, an alternative approach is necessary to provide a more comprehensive solution. The data-driven deep learning method, renowned for their proficiency in capturing the spatial variations of image features with fast computation, have been extensively applied in the cloud identification and properties retrieval (Tong et al., 2023; Zhao et al., 2023). For example, Wang et al. (2022) developed a convolutional neural network (CNN) model for the continuous cloud identification and retrieval of cloud properties (i.e., COT, CER, and CTH) throughout the diurnal cycle for the Moderate Resolu- tion Imaging Spectroradiometer (MODIS), leveraging utilizing daytime MODIS TIR radiances alongside satellite viewing zenith angles (VZA). Additionally, employing a transfer-learning-based UNet model and MODIS/Himawari-8 cloud products, Li et al. (2023) successfully estimated the CER, COT, and CTH from Himawari-8 TIR measurements, and results showed that the model enhanced performance for optically thick clouds. Previous research has relied on either polar-orbiting (e.g., MODIS) or geostation- ary (e.g., Himawari-8 Advanced Himawari Imager) satellite sensors for cloud property estimation. While polar-orbiting satellites offer high-resolution cloud products (1 km resolution), they suffer from a lengthy revisit cycle, impacting temporal resolution. Conversely, geostationary satellites provide frequent revisits, offering high temporal resolution and continuous cloud observation (Meng et al., 2024). However, their spa- tial resolution is lower compared to polar-orbiting satellites. Hence, combining data from both types of satellites to achieve high spatiotemporal resolution in cloud phase and properties is a promising direction to explore. For high-impact weather events such as severe convective storms, tropical and ex- tratropical cyclones, the underlying dynamical and thermodynamic mechanisms are complex, leading to significant uncertainties in retrieving their cloud properties. Un- fortunately, current CNN/UNet retrieval methods primarily focus on deterministic modeling, which often neglects the inherent uncertainties within the data. Diffusion models, a novel category of likelihood-based models recently highlighted for gener- ating high-quality images (Sohl-Dickstein et al., 2015; Song and Ermon, 2019), offer desirable characteristics such as distribution coverage (Ho et al., 2020). Unlike deter- ministic retrieval methods, diffusion models derive probability distribution functions and can generate a large number of samples (Ho et al., 2020; Ling et al., 2024; Bishop, 2024), while guaranteeing that the retrieval distribution encapsulates all plausible out- comes, thus allowing for estimating the probability density and its score. 
Diffusion models have proven successful in various research domains, such as computer vision for image generation and synthesis (Croitoru, 2023), precipitation nowcasting (Nai 3 et al., 2024), estimating the unresolved geophysical processes (Pan et al., 2023), and earth system model downscaling (Hess et al., 2024), showcasing their effectiveness in handling complex systems. The primary objective of this study is to develop a diffusion model aimed at super- resolution high spatiotemporal resolution cloud optical properties and cloud phase retrieval throughout the diurnal cycle using a geostationary satellite. Leveraging the TIR channels of the Himawari-8 satellite and employing MODIS cloud products as ground truth, we have developed a generative diffusion model capable of cloud identification and retrieval of COT, CER, and CTH, characterized by high precision and enhanced spatiotemporal resolution. The efficacy of this model is evaluated against standard MODIS cloud product measurements, focusing particularly on its generalization capabilities and the uncertainty, analyzed across typhoon case studies and extended datasets. The data, methodology, and experimental details are outlined in Section 2. The performance outcomes of the model are thoroughly examined in Section 3. Lastly, Section 4 offers conclusions and discussions.",
15
+ "main_content": "2.1. Data 2.1.1. Himawari-8 AHI Satellite Data Himawari-8, launched in October 2014, is the geostationary satellite sensor system operated by the Japan Meteorological Agency (JMA). It represents the latest iteration in the Multifunctional Transport Satellite (MTSAT) series. The Advanced Himawari Imager (AHI) sensor onboard Himawari-8 captures full disk images every 10 minutes across 16 spectral bands from visible to infrared wavelengths, with spatial resolutions ranging from 500 m to 2 km and temporal resolutions between 2.5 and 10 minutes, covering regions from East Asia to Australia. The TIR measurements are sensitive to optically thin clouds and are continuously obtained throughout the diurnal cycle, independent of solar geometry (Fauchez et al., 2018a). In this study, TIR radiations from Himawari-8 AHI are utilized to estimate cloud properties during both daytime and nighttime. Additionally, the VZA are employed to construct the retrieval model. Table 1 summarizes the used TIR measurements (6.95\u201313.30 \u00b5m) and VZA of Himawari-8 AHI. 2.1.2. MODIS data With the launch of NASA\u2019s Terra satellite in 1999, followed by Aqua in 2002, MODIS has emerged as one of the most indispensable satellite remote sensing platforms for Earth science research. It measures reflected solar and emitted thermal radiation across 36 spectral channels (0.42\u201314.24 \u00b5m), offering unique spectral and spatial capabilities for retrieving cloud properties (Platnick et al., 2016). The Terra-MODIS (MOD06) and Aqua-MODIS (MYD06) products, which have a spatial resolution of 1 km, are accessible through the Atmosphere Archive and Distribution System website 4 (https://ladsweb.modaps.eosdis.nasa.gov/). These products include cloud top properties (e.g., CTH, CLP for both day and night) and cloud optical and microphysical properties (e.g., COT, CER, daytime only). Over the years, the MODIS cloud products have demonstrated consistent high accuracy and reliable performance (King et al., 2003; Platnick et al., 2015). In this study, the daytime MODIS cloud optical and physical properties (CTH, COT, CER, and CLP) from the Level-2 cloud product (MYD06 L2 and MOD06 L2) are utilized as ground truth to develop the super-resolution retrieval model. Table 1: The Himawari-8 AHI data used for cloud parameter super-resolution retrieval. Band Number Bandwidth (\u00b5m) Central Wavelength (\u00b5m) Spatial resolution (km) Spatial resolution (minute) 9 6.89\u20137.01 6.95 10 7.26\u20137.43 7.35 11 8.44\u20138.76 8.6 12 9.54\u20139.72 9.63 2 10 13 10.3\u201310.6 10.45 14 11.1\u201311.3 11.20 15 12.2\u201312.5 12.35 16 13.20\u201313.40 13.30 VZA \u2013 \u2013 2.1.3. Data preprocessing As described above, the TIR measurements (6.95 \u00b5m, 7.35 \u00b5m, 8.60 \u00b5m, 9.60 \u00b5m, 10.45 \u00b5m, 11.20 \u00b5m, 12.35 \u00b5m, and 13.30 \u00b5m) along with the VZA of the Himawari-8 AHI serve as the inputs for the model, while the MODIS level-2 CLP, CTH, COT, and CER data are used as the targets for training the model. To optimize the model during training and enhance its accuracy, we normalized the inputs and targets. By employing min-max normalization, we scaled the input and output variables to fall within the range of 0 to 1. To cover as wide a range of the Earth\u2019s surface and viewing geometries as possible, and to accommodate seasonal variations, we collected data from January 2016 to October 2017. 
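As a small illustration of the scaling step just described, the sketch below applies min-max normalization with per-variable bounds (assumed to be computed from the training split) together with the inverse transform used to map model outputs back to physical units.

```python
import numpy as np

def minmax_scale(x, x_min, x_max):
    """Map a variable (TIR radiance, VZA, or a cloud property) onto [0, 1]."""
    return (np.asarray(x) - x_min) / (x_max - x_min)

def minmax_unscale(x01, x_min, x_max):
    """Invert the scaling to recover physical units (e.g. km for CTH)."""
    return np.asarray(x01) * (x_max - x_min) + x_min
```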
Specifically, data from January 2016 to May 2017 was utilized for model training, data from June to August 20, 2017 for model validation, and data from August 21, 2017, to October 2017 served as the test set. Owing to the differing spatiotemporal resolutions of the Himawari-8 AHI and MODIS cloud products, we performed spatiotemporal matching of the data. In this process, we selected data from both MODIS and Himawari-8 for the same regions and times, with the cloud product grid points being twice that of the TIR observations. To alleviate memory and computational demands and to accelerate the selection process for the model, 5 we cropped the cloud products in the training, validation, and test sets to a size of 256\u00d7256 km, while the input TIR observations were sized at 128\u00d7128 km. Ultimately, our training set comprised 76,247 samples, with the validation and test sets containing 9,530 and 9,532 samples, respectively. 2.2. Method The diffusion model is a state-of-the-art deep learning technique that employs probabilistic denoising processes to develop generative models (Bishop, 2024). The model typically operates on the principle of simulating a gradual process of denoising, effectively reconstructing data points from a noise-like distribution. This process is modeled as a reverse Markov chain, where a data sample is initially transformed into noise through a sequence of diffusion steps and then reconstructed back into a clean sample through learned reverse transitions. In a classical set-up, the model involves iteratively applying a series of conditional Gaussian distributions, beginning from a distribution of noise p(zT) and progressively denoising it to retrieve the original data distribution p(x0). This can be succinctly represented as, p(x0) = Z \u00b7 \u00b7 \u00b7 Z p(x0|x1)p(x1|x2) \u00b7 \u00b7 \u00b7 p(xT\u22121|zT)p(zT) dx1 \u00b7 \u00b7 \u00b7 dxT\u22121dzT. (1) In each iteration, the model utilizes the noisy data from the previous step as input, subsequently refining it to a greater degree of accuracy in accordance with the data\u2019s original state. The denoising path is learned from training data, thereby enabling the model to effectively generate or reconstruct high-quality data samples. 2.2.1. Conditional diffusion model In our study, these TIR measurements and VZA variable are denoted by y which is the condition variable. The target variables, cloud products, are represented by x. The objective is to approximate the conditional distribution of x given y, using a significantly large dataset of paired samples (xi, yi). The conditional diffusion model incorporates conditioning variables into the generative process (Batzolis, 2021), allowing the model to generate data conditioned on specific information. Mathematically, this can be represented as the transition from a noise distribution p(zT) to the data distribution p(x0) conditioned on a variable y, described by, p(x0|y) = Z p(x0|zT, y)p(zT|y) dzT, (2) where, zT represents the latent variables at the final timestep, and the model iteratively refines these variables through the conditioning on y, enhancing its ability to target specific data generation tasks. As Figure 1 shows, the conditional diffusion model enables to produce cloud products given the conditions of TIR and VZA variables, making it particularly useful in scenarios where the output needs to be tailored to specific environments. 
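As a minimal sketch of this conditional generation, assuming a trained noise-prediction UNet `eps_model(x_t, t, y)` and a DDPM variance schedule, the loop below draws one cloud-product sample from p(x0|y) following the forward/reverse updates defined in Eqs. (3)-(4) below; it is illustrative, not the CloudDiff implementation. Repeating the call with fresh Gaussian draws yields the 30-member ensembles analyzed in Section 3.

```python
import torch

@torch.no_grad()
def sample_conditional(eps_model, y, shape, betas):
    """Draw one sample x_0 ~ p(x_0 | y) by reversing the diffusion process."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                                   # start from x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps = eps_model(x, torch.tensor([t]), y)             # noise prediction conditioned on TIR/VZA y
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise              # Eq. (4) with sigma_t^2 = beta_t
    return x
```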
In this framework, for any given y, the algorithm 6 outputs samples of x from x \u223cp(x0|y), where p is a learned distribution that does not adhere to any predefined probability distribution form. The forward process has the same scheme as the Denoising Diffusion Probabilistic Models(DDPMs) (Ho et al., 2020), but in the reverse process we embed the conditional variables into the UNet for modelling the conditional probability distributions (Nai et al., 2024). \ud835\udc650 \ud835\udc651 \ud835\udc652 ... \ud835\udc65\ud835\udc47 \ud835\udc65\ud835\udc47 \ud835\udc650 Forward Diffusion Process Reverse Diffusion Process ... \ud835\udc65\ud835\udc47\u22121 UNet UNet Condition Figure 1: The CloudDiff for super-resolution cloud identification and properties retrieval. The generated samples x are cloud products, and the conditions y includes TIR and VZA variables. In the forward process, the data x0 undergoes a series of transformations, gradually adding noise over discrete time steps T until it is converted into pure Gaussian noise xT \u2261zT. The noise addition at each timestep t is defined by a variance schedule \u03b2t, and can be described by the following stochastic differential equation, xt = p 1 \u2212\u03b2txt\u22121 + p \u03b2t\u03f5, \u03f5 \u223cN(0, I), (3) where \u03f5 represents Gaussian noise. The reverse process, where the model learns to reconstruct the original data from noise, is explicitly conditioned on y. At each step, the model estimates the original data xt\u22121 from the current noisy data xt using a neural network parameterized by {\u03b8}. This network predicts the mean \u00b5\u03b8(xt, t, y) of the distribution for xt\u22121, typically modeled as, xt\u22121 = \u00b5\u03b8(xt, t, y) + \u03c3t\u03f5, \u03f5 \u223cN(0, I), (4) where \u03c3t is a predetermined noise level (Ho et al., 2020). 7 The objective of training this conditional diffusion model is to minimise the difference between the estimated xt\u22121 and its actual value. This effectively allows the model to learn the reverse of the forward diffusion process. The loss function is originally from the Fisher divergence (Song and Ermon, 2019; Song et al., 2021; Nai et al., 2024), but equivalently used as a variant of the mean squared error between the predicted and actual previous timestep values, conditioned on y, L(\u03b8) = Ex0,\u03f5,y \u0002 \u2225\u03f5 \u2212\u03f5\u03b8(xt, t, y)\u22252\u0003 , (5) where \u03f5\u03b8 represents the outputs of the UNet as the predictions of the noise used to generate xt from xt\u22121. To improve the representation ability, we have introduced the multi-head attention modules into the UNet architecture (Vaswani et al., 2017). After training, the conditional diffusion model (hereafter, CloudDiff) is capable of generating multiple samples simultaneously. In our tests, we generate 30 samples per evaluation instance. These samples are reminiscent of the ensemble members used in numerical weather prediction\u2019s dynamical models, which employ large numbers of members for ensemble predictions (Li et al., 2024). Furthermore, we conduct comparative analyses between the CloudDiff and established deterministic data-driven methods. For this purpose, the study uses a supervised learning approach with a UNet architecture (Trebing et al., 2021), referred to as the deterministic model, as the benchmark. This method is specifically applied to the tasks of super-resolution retrieval of cloud properties and cloud identification, serving as a baseline for performance comparison. 2.2.2. 
Performance evaluation The CloudDiff serves as a super-resolution approach that requires an appropriate evaluation scheme. Although intuitive, sample-by-sample comparisons cannot fully demonstrate the effectiveness of the super-resolution technique. To obtain a comprehensive performance evaluation, we collect MODIS labels for assessing the quality of the generated cloud products. Consequently, we employ Mean Absolute Error (MAE) and Mean Squared Error (MSE) as metrics, allowing for a quantitative assessment of the model\u2019s performance in enhancing spatial resolution. These metrics, commonly used in cloud properties retrieval (Wang et al., 2022; Zhao et al., 2023), are defined as follows, MAE = 1 NNp N X i=1 Np X j=1 |xi,j \u2212\u02c6 xi,j| , (6) RMSE = v u u t 1 NNp N X i=1 Np X j=1 (xi,j \u2212\u02c6 xi,j)2, (7) where N represents the number of samples, xi denotes the values from MODIS cloud products, and \u02c6 xi represents the super-resolution retrieved cloud products. Np indicates the number of pixels for each sample, and j labels the index of the pixels. It 8 should be noted that a more accurate super-resolution model will have a smaller root mean square error (RMSE) and mean absolute error (MAE). 3. Results 3.1. Case study We begin our study with a case analysis focusing on Typhoon Hato (No.1713) over the offshore areas of China to evaluate the performance of the CloudDiff and comprehend its uncertainty. Typhoon Hato developed in the northwest Pacific Ocean at 06:00 UTC on August 20, 2017, and progressively intensified. By 01:00 UTC on August 23, it had escalated to a severe typhoon, peaking at Category 16 with maximum sustained winds of 52 m/s. It made landfall near Zhuhai City, Guangdong Province, China, around 04:50 UTC on August 23 as a severe typhoon, causing substantial devastation in southern China. On that day, the Terra satellite passed over the coastal Zhuhai area around 02:50 UTC; thus, our analysis primarily focused on evaluating the retrieved COT, CER, CTH, and CLP at this specific time. The analysis covered the typhoon area between 19.78\u00b0N\u201322.32\u00b0N and 111.68\u00b0E\u2013114.22\u00b0E, corresponding to a grid size of 256\u00d7256. Figure 2 presents the various cloud properties generated by the CloudDiff across 30 samples and grid points where MODIS cloud properties were not captured by samples. Since all 30 CLP samples indicated ice clouds within the study area, CLP results are not displayed. It is observed that the cloud properties generated by different samples vary slightly but generally reflect the typhoon\u2019s morphology accurately. Despite variations in COT values among the samples and differing degrees of overestimation and underestimation in the typhoon\u2019s cloud wall, they accurately estimated the optical thickness at the typhoon eye. Notably, underestimation occurred for COT values over 90 at about 16.03% of the grid points, and overestimation at 1.67% of the grid points, while COT values below 60 were well retrieved. Regarding CER, some samples did not accurately represent the CER, generally overestimating (9.68%, mainly around the typhoon eye) and underestimating (12.49%, mainly in the typhoon\u2019s cloud wall). Additionally, samples underestimated CTH to various extents, particularly on the west and southwest sides of the typhoon eye, with a total underestimation of 30.41% in CTH and a mere 0.63% overestimation. 
To evaluate the performance and uncertainty of the CloudDiff, we compared the cloud properties with those from the deterministic model (Fig. 3). The results show that individual sample produces more sharpness and more local details of COT, CER, and CTH compared to the ensemble mean (appears blurrier). The deterministic model\u2019s results blurrier than the ensemble mean and also lack detail. Regarding COT, compared to MODIS cloud products, the sample underestimated the COT in the typhoon eye region and overestimated areas with COT <90. The ensemble mean (the mean values of 30 samples) also overestimated the extent of COT <90 but reported lower values than single sample, somewhat correcting the underestimation of COT in 9 Samples COT CER CTH Figure 2: Cloud properties retrieval in the typhoon Hato region centering around 21.8\u00b0N, 113.8\u00b0E at 0250 UTC on August 23, 2017, was conducted using the CloudDiff. The columns represent samples and grid points where MODIS cloud properties are not captured by samples. The underestimation and overestimation are respectively indicated by black squares and green \u2019x\u2019. The background is colored based on MOD06 cloud products. the typhoon eye region by single sample. The standard deviation of 30 samples, which can donate the retrieval uncertainty, indicates large error in the estimates of COT in the typhoon\u2019s cloud wall, mainly because most samples overestimated the COT in this area (see Fig. 2). The deterministic model not only overestimated the extent of COT >90 (with lower internal values) but also underestimated the optical thickness on the western side of the typhoon eye. Both single sample and ensemble mean, as well as the deterministic model, inaccurately retrieved areas with CER >35\u00b5m and overestimated the CER in the typhoon eye area. However, the CloudDiff exhibited smaller biases in CER retrievals compared to the deterministic model, and standard deviations mostly below 6\u00b5m across most regions, indicating small uncertainty. Regarding CTH, CloudDiff exhibits minimal uncertainty, with standard deviations generally below 1 km across most regions. compared to MODIS, the ensemble mean more accurately represented CTH in the southern part of the typhoon eye than individual samples, but it underestimated areas with CTH greater than 16 km and the CTH in the typhoon eye. The deterministic model also underestimated CTH greater than 16 km and the CTH in the typhoon eye. Additionally, deterministic model 10 20\u00b0N 20.5\u00b0N 21\u00b0N 21.5\u00b0N 22\u00b0N MODIS Sample Ensemble mean Deterministic model Std 20\u00b0N 20.5\u00b0N 21\u00b0N 21.5\u00b0N 22\u00b0N 112\u00b0E 113\u00b0E 114\u00b0E 20\u00b0N 20.5\u00b0N 21\u00b0N 21.5\u00b0N 22\u00b0N 112\u00b0E 113\u00b0E 114\u00b0E 112\u00b0E 113\u00b0E 114\u00b0E 112\u00b0E 113\u00b0E 114\u00b0E 112\u00b0E 113\u00b0E 114\u00b0E 0 20 40 60 80 0 10 20 30 40 0 10 20 30 40 50 m 0 3 6 9 12 m 10 11 12 13 14 15 16 17 km 0.0 0.4 0.8 1.2 1.6 km CTH CER COT Figure 3: MOD06 cloud products and retrieved cloud properties in the typhoon Hato region at 0250 UTC on August 23,2017. The columns are MOD06 cloud products, sample, esemble means, deterministic model, and standard deviation (std). underestimated CTH at the image edges. Moreover, both the ensemble mean and deterministic model accurately retrieved CLP (not showed), consistent with MODIS cloud classification results. 
Overall, the super-resolution cloud property retrievals based on the CloudDiff proved superior to those from the deterministic model, providing sharper and more localized details of the 1 km cloud properties during the typhoon event. Using the 30 samples generated by the CloudDiff, we computed probability estimates for various thresholds of the cloud property estimates and cloud phase probability results (Fig. 4), which the deterministic model cannot provide. Based on the thresholds provided by the International Satellite Cloud Climatology Project (ISCCP) for COT and CTH associated with cloud types, we computed probability estimates for COT (Fig. 4b,c,d) and CTH (Fig. 4j,k,l) at thresholds similar to those in ISCCP. The results indicate that the probability estimates from the CloudDiff agree closely with the MODIS data, with probabilities exceeding 80% in the 3.6<COT<23 and 23<COT regions. Additionally, all MODIS CTH values were greater than 6.4 km, and the CloudDiff estimated probabilities of CTH>6.4 km to be over 90%. Following the ISCCP cloud classifications, the predominant cloud types in the typhoon eye and its southwestern sea regions are cirrostratus, while other areas feature deep convective clouds. For CER, thresholds of 20 \u00b5m and 40 \u00b5m were selected for probability estimation (Fig. 4f,g,h), revealing that the CloudDiff\u2019s CER estimates primarily fall within the (20, 40] range, with very low probabilities for CER in the (0, 20] and CER>40 \u00b5m ranges. In comparison to MODIS, the CloudDiff tends to overestimate CER in the typhoon eye and underestimate CER over the western land areas of the typhoon eye. Furthermore, the CloudDiff\u2019s probability estimates for clouds classified as ice clouds in the study area exceed 99% (not shown), aligning well with MODIS. Overall, through probabilistic estimation, we can better ascertain the range of cloud property values and cloud phase, evaluate the uncertainty in cloud property retrieval and identification, and enhance the accuracy of super-resolution retrievals.
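The threshold-based probability estimates described above reduce to counting, per pixel, how many ensemble members fall into each ISCCP-style bin; a hedged sketch follows (the bin edges and shapes are taken from the text or assumed for illustration):

```python
# Per-pixel probability of a retrieved quantity falling in a threshold range,
# estimated as the fraction of ensemble members inside the bin.
import numpy as np

def threshold_probability(samples: np.ndarray, low: float, high: float) -> np.ndarray:
    """samples: (K, H, W). Returns the (H, W) fraction of members with value in (low, high]."""
    in_bin = (samples > low) & (samples <= high)
    return in_bin.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cot = rng.gamma(2.0, 20.0, size=(30, 256, 256))        # synthetic COT ensemble
    p_thin = threshold_probability(cot, 0.0, 3.6)          # ISCCP-style thin-cloud bin
    p_mid = threshold_probability(cot, 3.6, 23.0)
    p_thick = threshold_probability(cot, 23.0, np.inf)
    print(p_thin.mean(), p_mid.mean(), p_thick.mean())
```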
[Figure 4: The probability estimates for cloud properties in the Typhoon Hato region at 02:50 UTC on August 23, 2017. Panels (b-d) present the probability estimates of COT within different threshold ranges, panels (f-h) display the probability estimates of CER for varying thresholds, and panels (j-l) show the probability estimates for CTH across different threshold ranges.] 3.2. Overall evaluation We evaluated the overall performance of the models using data from the test set. We employed the MAE and RMSE metrics to evaluate the cloud properties. A comparative analysis was conducted to investigate how the number of samples affects the super-resolution retrieval performance. This analysis included ensemble means with 1 to 30 samples. Additionally, we compared these results with those from the deterministic model. Figure 5 illustrates the MAE and RMSE comparisons between the MODIS cloud products and the super-resolution retrieval results. [Figure 5: The performance evaluation of cloud properties. Skill metrics were calculated between the CloudDiff/deterministic model and the MODIS cloud products. Panels (a-c) show MAE and panels (d-f) show RMSE for COT, CER (\u00b5m), and CTH (km), each split into all, water, and ice clouds. Different sizes of circles represent ensemble sizes ranging from 1 to 30, while pentagrams indicate the deterministic model.] For COT, CER, and CTH, the results indicate significantly higher MAE and RMSE values when the ensemble size is 1. As the ensemble size increases beyond five, both the MAE and RMSE of the ensemble mean gradually decrease. An interesting observation is that the improvement in super-resolution retrieval capability from 20 to 30 samples is relatively minor, suggesting that approximately 20 samples are sufficient to capture most of the high-resolution details and adequately cover the uncertainty space in the retrieval process. The MAE and RMSE values of the deterministic model retrieval approach those obtained when the ensemble size is 5, and are notably higher than those observed with an ensemble size of 30. Specifically, for COT at an ensemble size of 30, the ensemble mean MAE for all clouds (water and ice) is 6.62, with an RMSE of 12.51, compared to the deterministic model results which have an MAE of 7.45 and an RMSE of 13.48. For water clouds alone, the MAE is 6.97 and the RMSE is 12.68, with ice clouds showing slightly better performance (MAE = 6.23, RMSE = 12.32). For CER, the ensemble mean MAE for all clouds at an ensemble size of 30 is 5.87 \u00b5m, with an RMSE of 8.93 \u00b5m. Water clouds exhibit a lower MAE of 4.47 \u00b5m and RMSE of 6.62 \u00b5m, whereas ice clouds have a higher MAE of 7.48 \u00b5m and RMSE of 10.98 \u00b5m. Similarly, for CTH at the same ensemble size, the ensemble mean MAE for all clouds is 1.18 km, with an RMSE of 2.15 km. The MAE for water clouds is 0.91 km and the RMSE is 1.72 km, with ice clouds performing worse (MAE = 1.61 km, RMSE = 2.68 km). [Figure 6: Confusion matrices of the CLP products between MODIS and (a) the CloudDiff and (b) the deterministic model; \u2019OA\u2019 is the overall accuracy. With rows as MODIS classes (clear, water, ice) and columns as predictions, the CloudDiff matrix (OA = 85.89%) is (0.89, 0.10, 0.02; 0.10, 0.85, 0.05; 0.02, 0.10, 0.88) and the deterministic model matrix (OA = 84.52%) is (0.87, 0.11, 0.02; 0.11, 0.83, 0.06; 0.03, 0.11, 0.87).] In addition, the cloud identification results were assessed.
Here, we primarily compared the performance of the deterministic model with the ensemble mean results of 30 samples. The validation results demonstrate the model\u2019s capability to accurately identify true targets from the MODIS data. Figure 6 presents the CLP identification results for the ensemble mean of the CloudDiff (Fig. 6a) and the deterministic model (Fig. 6b), which categorize the targets primarily into clear sky, water clouds, and ice clouds. The CloudDiff achieves an overall accuracy (OA) of 85.89%. Specifically, it shows a retrieval accuracy for clear sky and ice clouds of 89% and 88%, respectively, and 85% for water clouds. In contrast, the deterministic model exhibits a retrieval accuracy of 88% for both clear sky and water clouds, but a slightly lower accuracy of 83% for ice clouds, with an OA of 84.52%, which is marginally lower than that of the CloudDiff. Overall, the ensemble mean of the CloudDiff demonstrates superior performance in identifying clear sky, water clouds, and ice clouds compared to the deterministic model. In summary, the CloudDiff enables the efficient generation of realistic samples that are faithful to a broad range of resolved retrieval schemes and sufficiently diverse to cover most plausible outcomes. 4. Conclusions In this study, we propose a conditional diffusion model named CloudDiff for cloud identification and the retrieval of COT, CER, and CTH. The model is trained on 2 km TIR measurements from the AHI onboard Himawari-8 and the satellite VZA, using MODIS 1 km resolution cloud products as training data. The CloudDiff is capable of generating cloud properties and CLP with high spatiotemporal resolution (1 km, 10-minute). It can produce various samples to effectively cover the distribution and range of cloud properties and also offers uncertainty estimates. Evaluation of the model on Typhoon Hato demonstrates that the 30 samples generated by the CloudDiff accurately capture the range of COT, CER, and CTH during the typhoon event and effectively identify cloud phases. Compared to the deterministic model, the CloudDiff\u2019s cloud properties more closely align with the MODIS cloud products and improve the sharpness of the super-resolution retrieval. Additionally, the model can provide probability estimates for cloud properties at different thresholds, significantly enhancing retrieval accuracy. Further evaluation on the test set shows that the MAE and RMSE decrease as the ensemble size increases, with the lowest errors observed at an ensemble size of 30. The performance of the deterministic model matches that of the ensemble mean when the ensemble size is 5, underscoring the superior results of the CloudDiff. The results clearly demonstrate that increasing the sample size enhances retrieval capabilities, but this improvement is minimal beyond a certain size; for instance, increasing the ensemble size from 20 to 30 offers little improvement. Although the CloudDiff has shown promising results, further improvements are still possible. Integrating additional conditional variables such as ERA5 meteorological data could improve the super-resolution retrieval effectiveness. Given adequate computing resources, it is feasible to generate more samples and determine the optimal ensemble size for even better performance. Future work will involve case studies of high-impact weather events to further assess the CloudDiff\u2019s performance and explore specific applications in ensemble retrieval.
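A minimal sketch of the cloud-phase evaluation reported in Figure 6, i.e., a row-normalized confusion matrix over clear sky, water clouds, and ice clouds plus the overall accuracy (class indices and the simulated labels below are illustrative assumptions):

```python
# Row-normalized confusion matrix and overall accuracy (OA) for 3-class cloud phase.
import numpy as np

CLASSES = ("clear", "water", "ice")

def confusion_matrix(y_true: np.ndarray, y_pred: np.ndarray, n_cls: int = 3) -> np.ndarray:
    cm = np.zeros((n_cls, n_cls), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm: np.ndarray) -> float:
    return float(np.trace(cm) / cm.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    y_true = rng.integers(0, 3, size=10_000)                       # synthetic MODIS CLP labels
    y_pred = np.where(rng.random(10_000) < 0.85, y_true,
                      rng.integers(0, 3, size=10_000))             # synthetic predictions
    cm = confusion_matrix(y_true, y_pred)
    row_norm = cm / cm.sum(axis=1, keepdims=True)                  # rows: labels, columns: predictions
    print(np.round(row_norm, 2))
    print(f"OA = {overall_accuracy(cm):.2%}")
```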
We hope that the demonstrated utility of generative artificial intelligence technology for cloud identification and probabilistic retrieval will promote its application in remote sensing, which is crucial for quantifying uncertainty when identifying and forecasting weather events such as typhoons. We believe it is time to explore the potential of diffusion models in cloud remote sensing, offering a promising solution for challenges such as cloud image forecasting and satellite precipitation estimation. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments This work was supported by the National Natural Science Foundation of China (42222506 and 42075125). L. Wang also thanks the National Natural Science Foundation of China (12147101) for supporting his visit to Fudan University. The authors would like to thank JAXA for freely providing the Himawari-8 products (https://www.eorc.jaxa.jp/ptree/index.html) and NASA for the MODIS data (https://ladsweb.modaps.eosdis.nasa.gov/). We acknowledge Xiaoye Wang from Fudan University for assisting with data processing."
16
+ }
intro_8K/test_introduction_short_2405.04534v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04534v1",
3
+ "title": "Tactile-Augmented Radiance Fields",
4
+ "abstract": "We present a scene representation, which we call a tactile-augmented radiance\nfield (TaRF), that brings vision and touch into a shared 3D space. This\nrepresentation can be used to estimate the visual and tactile signals for a\ngiven 3D position within a scene. We capture a scene's TaRF from a collection\nof photos and sparsely sampled touch probes. Our approach makes use of two\ninsights: (i) common vision-based touch sensors are built on ordinary cameras\nand thus can be registered to images using methods from multi-view geometry,\nand (ii) visually and structurally similar regions of a scene share the same\ntactile features. We use these insights to register touch signals to a captured\nvisual scene, and to train a conditional diffusion model that, provided with an\nRGB-D image rendered from a neural radiance field, generates its corresponding\ntactile signal. To evaluate our approach, we collect a dataset of TaRFs. This\ndataset contains more touch samples than previous real-world datasets, and it\nprovides spatially aligned visual signals for each captured touch signal. We\ndemonstrate the accuracy of our cross-modal generative model and the utility of\nthe captured visual-tactile data on several downstream tasks. Project page:\nhttps://dou-yiming.github.io/TaRF",
5
+ "authors": "Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "As humans, our ability to perceive the world relies crucially on cross-modal associations between sight and touch [19, 50]. Tactile sensing provides a detailed understanding of material properties and microgeometry, such as the intri- cate patterns of bumps on rough surfaces and the complex motions that soft objects make when they deform. This type of understanding, which largely eludes today\u2019s com- puter vision models, is a critical component of applica- tions that require reasoning about physical contact, such as robotic locomotion [3, 24, 31, 34, 37, 38] and manipula- tion [6, 7, 11, 42, 60], and methods that simulate the behav- ior of materials [4, 13, 40, 41]. In comparison to many other modalities, collecting tac- tile data is an expensive and tedious process, since it re- quires direct physical interaction with the environment. A recent line of work has addressed this problem by having humans or robots probe the environment with touch sensors (see Table 1). Early efforts have been focused on capturing the properties of only a few objects either in simulation [16, 17, 52] or in lab-controlled settings [6, 7, 18, 28, 35, 52, 63], which may not fully convey the diversity of tactile signals in natural environments. Other works have gone beyond a 1 arXiv:2405.04534v1 [cs.CV] 7 May 2024 Dataset Samples Aligned Scenario Source More Than a Feeling [7] 6.5k \u2715 Tabletop Robot Feeling of Success [6] 9.3k \u2715 Tabletop Robot VisGel [35] 12k \u2715 Tabletop Robot SSVTP [28] 4.6k \u2713 Tabletop Robot ObjectFolder 1.0 [16] \u2013 \u2713 Object Synthetic ObjectFolder 2.0 [17] \u2013 \u2713 Object Synthetic ObjectFolder Real [18] 3.7k \u2715 Object Robot Burka et al. [5] 1.1k \u2715 Sub-scene Human Touch and Go [56] 13.9k \u2715 Sub-scene Human YCB-Slide\u2217[52] - \u2713 Object Human Touching a NeRF [63] 1.2k \u2713 Object Robot TaRF (Ours) 19.3k \u2713 Full scene Human Table 1. Dataset comparison. We present the number of real visual-tactile pairs and whether such pairs are visually aligned, i.e., whether the visual image includes an occlusion-free view of the touched surface. \u2217YCB-Slide has real-world touch probes but synthetic images rendered with CAD models of YCB objects on a white background [9]. lab setting and have collected touch from real scenes [5, 56]. However, existing datasets lack aligned visual and tactile in- formation, since the touch sensor and the person (or robot) that holds it often occlude large portions of the visual scene (Fig. 2). These datasets also contain only a sparse set of touch signals for each scene, and it is not clear how the sam- pled touch signals relate to each other in 3D. In this work, we present a simple and low-cost procedure to capture quasi-dense, scene-level, and spatially-aligned visual and touch data (Fig. 1). We call the resulting scene representation a tactile-augmented radiance field (TaRF). We remove the need for robotic collection by leveraging a 3D scene representation (a NeRF [39]) to synthesize a view of the surface being touched, which results in spatially aligned visual-tactile data (Fig. 2). We collect this data by mounting a touch sensor to a camera with commonly avail- able materials (Fig. 3). To calibrate the pair of sensors, we take advantage of the fact that popular vision-based touch sensors [25, 26, 32, 48] are built on ordinary cameras. The relative pose between the vision and tactile sensors can thus be estimated using traditional methods from multi-view ge- ometry, such as camera resectioning [20]. 
We use this procedure to collect a large real-world dataset of aligned visual-tactile data. With this dataset, we train a diffusion model [45, 51] to estimate touch at loca- tions not directly probed by a sensor. In contrast to the re- cent work of Zhong et al. [63], which also estimates touch from 3D NeRF geometry, we create scene-scale reconstruc- tions, we do not require robotic proprioception, and we use diffusion models [51]. This enables us to obtain tactile data at a much larger scale, and with considerably more diver- sity. Unlike previous visual-tactile diffusion work [57], we condition the model on spatially aligned visual and depth information, enhancing the generated samples\u2019 quality and their usefulness in downstream applications. After training, the diffusion model can be used to predict tactile informa- OF 2.0 [17] VisGel [35] OF Real [18] SSVTP [28] TG [56] TaRF (Ours) Figure 2. Visual-tactile examples. In contrast to the visual-tactile data captured in previous work, our approach allows us to sample unobstructed images that are spatially aligned with the touch sig- nal, from arbitrary 3D viewpoints using a NeRF. tion for novel positions in the scene. Analogous to quasi- dense stereo methods [15, 33], the diffusion model effec- tively propagates sparse touch samples, obtained by prob- ing, to other visually and structurally similar regions of the scene. We evaluate our visual-tactile model\u2019s ability to accu- rately perform cross-modal translation using a variety of quality metrics. We also apply it to several downstream tasks, including localizing a touch within a scene and un- derstanding material properties of the touched area. Our experiments suggest: \u2022 Touch signals can be localized in 3D space by exploiting multi-view geometry constraints between sight and touch. \u2022 Estimated touch measurements from novel views are not only qualitatively accurate, but also beneficial on down- stream tasks. \u2022 Cross-modal prediction models can accurately estimate touch from sight for natural scenes. \u2022 Visually-acquired 3D scene geometry improves cross- modal prediction.",
15
+ "main_content": "Visual-tactile datasets. Previous work has either used simulators [16, 17] or robotic arms [6, 8, 18, 35, 63] for data generation. Our work is closely related to that of Zhong et al. [63], which uses a NeRF and captured touch data to generate a tactile field for several small objects. They use the proprioception of an expensive robot to spatially align vision and touch. In contrast, we leverage the properties of the tactile sensor and novel view synthesis to use commonly available material (a smartphone and a selfie stick) to align vision and touch. This enables the collection of a larger, scene-level, and more diverse dataset, on which we train a higher-capacity diffusion model (rather than a conditional GAN). Like several previous works [5, 56], we also collect scene-level data. In contrast to them, we spatially align the signals by registering them in a unified 3D representation, thereby increasing the prediction power of the visual-tactile generative model. Capturing multimodal 3D scenes. Our work is related to methods that capture 3D visual reconstructions of spaces 2 using RGB-D data [12, 49, 55, 59] and multimodal datasets of paired 3D vision and language [1, 2, 10]. Our work is also related to recent methods that localize objects in NeRFs using joint embeddings between images and language [29] or by semantic segmentation [62]. In contrast to language supervision, touch is tied to a precise position in a scene. 3D touch sensing. A variety of works have studied the close relationship between geometry and touch, motivating our use of geometry in imputing touch. Johnson et al. [25, 26] proposed vision-based touch sensing, and showed that highly accurate depth can be estimated from the touch sensor using photometric stereo. Other work has estimated object-scale 3D from touch [54]. By contrast, we combine sparse estimates of touch with quasi-dense tactile signals estimated using generative models. Cross-modal prediction of touch from sight. Recent work has trained generative models that predict touch from images. Li et al. [35] used a GAN to predict touch for images of a robotic arm, while Gao et al. [18] applied them to objects collected on a turntable. Yang et al. [57] used latent diffusion to predict touch from videos of humans touching objects. Our goal is different from these works: we want to predict touch signals that are spatially aligned with a visual signal, to exploit scene-specific information, and to use geometry. Thus, we use a different architecture and conditioning signal, and fit our model to examples from the same scenes at training and test time. Other work has learned joint embeddings between vision and touch [28, 36, 56, 58, 61]. 3. Method We collect visual and tactile examples from a scene and register them together with a 3D visual reconstruction to build a TaRF. Specifically, we capture a NeRF F\u03b8 : (x, r) 7\u2192(c, \u03c3) that maps a 3D point x = (x, y, z) and viewing direction r to its corresponding RGB color c and density \u03c3 [39]. We associate to the visual representation a touch model F\u03d5 : vt 7\u2192\u03c4 that generates the tactile signal that one would obtain by touching at the center of the image vt. In the following, we explain how to estimate F\u03b8 and F\u03d5 and put them into the same shared 3D space. 3.1. Capturing vision and touch signals Obtaining a visual 3D reconstruction. We build the visual NeRF, F\u03b8, closely following previous work [12, 55]. 
A human data collector moves through a scene and records a video, covering as much of the space as possible. We then estimate camera pose using structure from motion [47] and create a NeRF using off-the-shelf packages [53]. Additional details are provided in the supplement. Capturing and registering touch. We simultaneously collect tactile and visual signals by mounting a touch sensor on a camera (Fig. 3), obtaining synchronized touch signals \{\tau_i\}_{i=1}^{N} and video frames v. [Figure 3. Capturing setup. (a) We record paired vision and touch signals using a camera attached to a touch sensor. (b) We estimate the relative pose between the touch sensor and the camera using correspondences between sight and touch.] We then estimate the pose of the video frames using off-the-shelf structure from motion methods [47], obtaining poses \{p^v_i\}_{i=1}^{N}. Finally, we use the calibration of the mount to obtain the poses \{p^t_i\}_{i=1}^{N} of the tactile measurements with respect to the scene\u2019s global reference frame. As a collection device, we mount an iPhone 14 Pro to one end of a camera rod, and a DIGIT [32] touch sensor to the other end. Note that the devices can be replaced with any RGB-D camera and vision-based tactile sensor. Capturing setup calibration. To find the relative pose between the camera and the touch sensor (Fig. 3), we exploit the fact that arbitrary viewpoints can be synthesized from F\u03b8, and that ubiquitous vision-based touch sensors are based on perspective cameras. In these sensors, an elastomer gel is placed on the lens of a commodity camera, which is illuminated by colored lights. When the gel is pressed into an object, it deforms, and the camera records an image of the deformation; this image is used as the tactile signal. This design allows us to estimate the pose of the tactile sensor through multi-view constraints from visual-tactile correspondences: pixels in visual images and tactile images that are of the same physical point. We start the calibration process by synthesizing novel views from F\u03b8. The views are generated at the camera locations \{p^v_i\}_{i=1}^{N}, but rotated 90\u00b0 about the x-axis. This is because the camera is approximately orthogonal to the touch sensor (see Fig. 3). Then, we manually annotate corresponding pixels between the touch measurements and the generated frames (Fig. 3). To simplify and standardize this process, we place a braille board in each scene and probe it with the touch sensor. This will generate a distinctive touch signal that is easy to localize [23]. We formulate the problem of estimating the six-degrees-of-freedom relative pose (R, t) between the touch sensor and the generated frames as a resectioning problem [20]. We use the estimated 3D structure from the NeRF F\u03b8 to obtain 3D points \{x_i\}_{i=1}^{M} for each of the annotated correspondences. Each point has a pixel position u_i \in R^2 in the touch measurement. We find (R, t) by minimizing the reprojection error: \min_{\mathbf{R},\,\mathbf{t}} \frac{1}{M} \sum_{i=1}^{M} \lVert \pi(\mathbf{K}[\mathbf{R}\,|\,\mathbf{t}], \mathbf{X}_i) - \mathbf{u}_i \rVert_1, (1) where \pi projects a 3D point using a given projection matrix, K is the known intrinsic matrix of the tactile sensor\u2019s camera, and the point X_i is in the coordinate system of the generated vision frames. We perform the optimization on 6-15 annotated correspondences from the braille board.
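The calibration in Eq. (1) can be prototyped as a standard resectioning problem; the sketch below is not the released implementation: it parameterizes the rotation with a quaternion and uses SciPy's nonlinear least squares with a robust loss as a stand-in for the \u21131 reprojection error, assuming a pinhole model and a rough initialization in front of the camera.

```python
# Recover (R, t) by minimizing the reprojection error of NeRF-derived 3D points
# against their annotated pixel positions in the tactile sensor's camera.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, R, t, X):
    """Project Nx3 world points with intrinsics K and pose [R|t]; returns Nx2 pixels."""
    Xc = X @ R.T + t                      # world -> camera coordinates
    uv = Xc @ K.T                         # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, K, X, u):
    q = params[:4] / np.linalg.norm(params[:4])          # unit quaternion (x, y, z, w)
    R = Rotation.from_quat(q).as_matrix()
    return (project(K, R, params[4:], X) - u).ravel()

def resection(K, X, u):
    """Estimate (R, t) from 3D points X (Mx3) and observed pixels u (Mx2)."""
    x0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.5])   # identity rotation, point ~0.5 m ahead
    sol = least_squares(residuals, x0, args=(K, X, u), loss="soft_l1")
    R = Rotation.from_quat(sol.x[:4] / np.linalg.norm(sol.x[:4])).as_matrix()
    return R, sol.x[4:]
```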
For robustness, we compute correspondences from multiple frames. We represent the rotation matrix using quaternions and optimize using nonlinear least-squares. Once we have (R, t) with respect to the generated frames, we can derive the relative pose between the camera and the touch sensor. 3.2. Imputing the missing touch We use a generative model to estimate the touch signal (represented as an image from a vision-based touch sensor) for other locations within the scene. Specifically, we train a diffusion model p\u03d5(\u03c4 | v, d, b), where v and d are images and depth maps extracted from F\u03b8 (see Fig. 4). We also pass as input to the diffusion model a background image captured by the touch sensor when it is not in contact with anything, denoted as b. Although not essential, we have observed that this additional input empirically improves the model\u2019s performance (e.g., in Fig. 1 the background provides the location of defects in the gel, which appear as black dots). We train the model p\u03d5 on our entire vision-touch dataset (Sec. 4). The training of p\u03d5 is divided into two stages. In the first, we pre-train a cross-modal visual-tactile encoder with self-supervised contrastive learning on our dataset. This stage, initially proposed by [23, 57], is equivalent to the self-supervised encoding pre-training that is common for image generation models [45]. We use a ResNet-50 [21] as the backbone for this contrastive model. In the second stage, we use the contrastive model to generate the input for a conditional latent diffusion model, which is built upon Stable Diffusion [45]. A frozen pretrained VQ-GAN [14] is used to obtain the latent representation with a spatial dimension of 64 \u00d7 64. We start training the diffusion model from scratch and pre-train it on the task of unconditional tactile image generation on the YCB-Slide dataset [52]. After this stage, we train the conditional generative model p\u03d5 on our spatially aligned visual-tactile dataset, further fine-tuning the contrastive model end-to-end with the generation task. At inference time, given a novel location in the 3D scene, we first render the visual signals \hat{v} and \hat{d} from the NeRF, and then estimate the touch signal \hat{\tau} at that position using the diffusion model. [Figure 4. Touch estimation. We estimate the tactile signal for a given touch sensor pose (R, t). To do this, we synthesize a viewpoint from the NeRF, along with a depth map. We use conditional latent diffusion to predict the tactile signal from these inputs.] 4. A 3D Visual-Tactile Dataset In the following, we show the details of the data collection process and statistics of our dataset. 4.1. Data Collection Procedure The data collection procedure is divided into two stages. First, we collect multiple views from the scene, capturing enough frames around the areas we plan to touch. During this stage, we collect approximately 500 frames. Next, we collect synchronized visual and touch data, maximizing the geometry and texture being touched. We then estimate the camera location of the vision frames collected in the previous two stages using off-the-shelf mapping tools [47]. After estimating the camera poses for the vision frames, the touch measurements\u2019 poses can be derived by using the mount calibration matrix. More details about the pose estimation procedure can be found in the supplement.
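Deriving the touch measurements' poses from the camera poses via the mount calibration is a fixed rigid-body composition; a small illustrative sketch follows (4x4 camera-to-world matrices; the variable names are assumptions, not from any released code):

```python
# Compose camera-to-world poses with the constant camera-to-touch transform
# obtained from the calibration step above.
import numpy as np

def make_se3(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a rotation matrix and translation vector into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def touch_poses_from_camera_poses(cam_poses_c2w, T_cam_touch):
    """cam_poses_c2w: list of 4x4 camera poses; T_cam_touch: fixed 4x4 mount calibration."""
    return [P @ T_cam_touch for P in cam_poses_c2w]

# Example usage with the (R, t) returned by the calibration sketch:
# T_cam_touch = make_se3(R, t)
# touch_poses = touch_poses_from_camera_poses(camera_poses, T_cam_touch)
```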
Finally, we associate each touch measurement with a color image by translating the sensor poses upwards by 0.4 meters and querying the NeRF with such poses. The field of view we use when querying the NeRF is 50\u00b0. This provides us with approximately 1,500 temporally aligned vision-touch image pairs per scene. Note that this collection procedure is scalable, since it does not require specific expertise or equipment and generates abundant scene-level samples. 4.2. Dataset Statistics We collect our data in 13 ordinary scenes, including two offices, a workroom, a conference room, a corridor, a tabletop, a corridor, a lounge, a room with various clothes, and four outdoor scenes with interesting materials. Typically, we collect 1k to 2k tactile probes in each scene, resulting in a total of 19.3k image pairs in the dataset. Some representative samples from the collected dataset are shown in Fig. 5. Our data includes a large variety of geometry (edges, surfaces, corners, etc.) and texture (plastic, clothes, snow, wood, etc.) of different materials in the scene. During the capturing process, the collector tries to thoroughly probe various objects and cover the interesting areas with more distinguishable geometry and texture using different sensor poses. [Figure 5. Representative examples from the captured dataset. Our dataset is obtained from nine everyday scenes, such as offices, classrooms, and kitchens. We show three such scenes in the figure above, together with samples of spatially aligned visual and tactile data. In each scene, 1k to 2k tactile probes were collected, resulting in a total of 19.3k image pairs. The data encompasses diverse geometries (edges, surfaces, corners, etc.) and textures (plastic, clothes, snow, wood, etc.) of various materials. The collector systematically probed different objects, covering areas with distinct geometry and texture using different sensor poses.] To the best of our knowledge, our dataset is the first dataset that captures full, scene-scale, spatially aligned vision-touch image pairs. We provide more details about the dataset in the supplement. 5. Experiments Leveraging the spatially aligned image and touch pairs from our dataset, we first conduct experiments on dense touch estimation. We then show the effectiveness of both the aligned data pairs and the synthesized touch signals by conducting tactile localization and material classification as two downstream tasks. 5.1. Implementation Details NeRF. We use the Nerfacto method from Nerfstudio [53]. For each scene, we utilize approximately 2,000 images as the training set, which thoroughly cover the scene from various viewpoints. We train the network with a base learning rate of 1 \u00d7 10\u22122 using the Adam [30] optimizer for 200,000 steps on a single NVIDIA RTX 2080 Ti GPU to achieve optimal performance. Visual-tactile contrastive model. Following prior works [27, 57], we leverage contrastive learning to train a ResNet-50 [21] as the visual encoder. The visual and tactile encoders share the same architecture but have different weights. We encode visual and tactile data into latent vectors in the resulting shared representation space. We set the dimension of the latent vectors to 32. Similar to CLIP [43], the model is trained with an InfoNCE loss obtained from the pairwise dot products of the latent vectors. We train the model for 20 epochs with the Adam [30] optimizer, a learning rate of 10\u22124, and a batch size of 256 on 4 NVIDIA RTX 2080 Ti GPUs.
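The CLIP-style InfoNCE objective mentioned above can be sketched as follows; this is a hedged PyTorch illustration, and the temperature value is an assumption rather than a reported hyperparameter:

```python
# Symmetric InfoNCE over paired visual/tactile embeddings.
import torch
import torch.nn.functional as F

def info_nce(z_vision: torch.Tensor, z_touch: torch.Tensor, temperature: float = 0.07):
    """z_vision, z_touch: (B, D) embeddings of corresponding visual/tactile pairs."""
    z_vision = F.normalize(z_vision, dim=-1)
    z_touch = F.normalize(z_touch, dim=-1)
    logits = z_vision @ z_touch.t() / temperature                 # (B, B) pairwise similarities
    targets = torch.arange(z_vision.size(0), device=z_vision.device)
    # Match each image to its own touch signal and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```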
Visual-tactile generative model. Our implementation of the diffusion model closely follows Stable Diffusion [46], with the difference that we use a ResNet-50 to generate the visual encoding from RGB-D images for conditioning. Specifically, we also add the RGB-D images rendered from the tactile sensors\u2019 poses into the conditioning, which we refer to in Sec. 5.2 as multiscale conditioning. The model is optimized for 30 epochs with the Adam [30] optimizer and a base learning rate of 10\u22125. The learning rate is scaled by the number of GPUs \u00d7 batch size. We train the model with a batch size of 48 on 4 NVIDIA A40 GPUs. At inference time, the model runs 200 denoising steps with a guidance scale of 7.5. Following prior cross-modal synthesis work [44], we use re-ranking to improve the prediction quality. We obtain 16 samples from the diffusion model for every instance and re-rank the samples with our pretrained contrastive model. The sample with the highest similarity is the final prediction. 5.2. Dense Touch Estimation Experimental setup. We now evaluate the diffusion model\u2019s ability to generate touch images. To reduce overlap between the training and test set, we first split the frames into sequences temporally (following previous work [56]). We split them into sequences of 50 touch samples, then divide these sequences into train/validation/test with a ratio of 8/1/1. We evaluate the generated samples on Frechet Inception Distance (FID), a standard evaluation metric for cross-modal generation [56]. We also include Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM), though we note that these metrics are highly sensitive to the spatial position of the generated content, and can be optimized by models that minimize simple pixelwise losses [22]. We also include the CVTP metric proposed by prior work [57], which measures the similarity between visual and tactile embeddings of a contrastive model, analogous to the CLIP [43] score. [Figure 6. Qualitative touch estimation results. Each model is conditioned on the RGB image and depth map rendered from the NeRF (left). The white box indicates the tactile sensor\u2019s approximate field of view (which is much smaller than the full conditional image). The G.T. column shows the ground truth touch images measured from a DIGIT sensor. L1 and VisGel often generate blurry textures and inaccurate geometry. By contrast, our model better captures the features of the tactile image, e.g., the rock\u2019s microgeometry and the complex textures and shapes of furniture. The last row shows two failure cases of our model. In both examples, our model generates a touch image that is geometrically misaligned with the ground truth. All of the examples shown here are at least 10 cm away from any training sample.] We compare against two baselines: VisGel, the approach from Li et al. [35], which trains a GAN for touch generation, and L1, a model with the same architecture as VisGel but trained to minimize an L1 loss in pixel space. Results. As shown in Table 2, our approach performs much better on the high-level metrics, with up to 4x lower FID and 80x higher CVTP. This indicates that our proposed diffusion model captures the distribution and characteristics of the real tactile data more effectively. On the low-level metrics (PSNR and SSIM), all methods are comparable.
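The re-ranking step described above amounts to drawing several candidates and keeping the one whose tactile embedding best matches the conditioning view. The sketch below is illustrative only; `diffusion_sample`, `embed_vision`, and `embed_touch` are placeholder callables standing in for the trained models, not actual APIs:

```python
# Contrastive re-ranking of diffusion samples.
import torch
import torch.nn.functional as F

@torch.no_grad()
def rerank(condition_rgbd, diffusion_sample, embed_vision, embed_touch, n_candidates: int = 16):
    """Return the candidate touch image most similar to the conditioning RGB-D view."""
    candidates = [diffusion_sample(condition_rgbd) for _ in range(n_candidates)]
    v = F.normalize(embed_vision(condition_rgbd), dim=-1)                    # (1, D)
    t = F.normalize(torch.cat([embed_touch(c) for c in candidates]), dim=-1) # (n, D)
    best = (t @ v.t()).squeeze(-1).argmax().item()
    return candidates[best]
```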
In particular, the L1 model slightly outperforms the other methods, since the loss it is trained on is highly correlated with low-level, pixel-wise metrics. Fig. 6 qualitatively compares samples from the different models. Indeed, our generated samples exhibit enhanced details in the micro-geometry of fabrics and richer textures, including snow, wood, and carpeting. However, all methods fail on fine details that are barely visible in the image, such as the tree bark. [Table 2. Quantitative results on touch estimation for novel views (PSNR \u2191 / SSIM \u2191 / FID \u2193 / CVTP \u2191): L1 24.34 / 0.82 / 97.05 / 0.01; VisGel [35] 23.66 / 0.81 / 130.22 / 0.03; Ours 22.84 / 0.72 / 28.97 / 0.80. While comparable on low-level metrics with the baselines, our approach captures the characteristics of the real tactile data more effectively, resulting in a lower FID score.] Ablation study. We evaluate the importance of the main components of our proposed touch generation approach (Table 3). Removing the conditioning on the RGB image results in the most prominent performance drop. This is expected, since the RGB image uniquely determines the fine-grained details of a tactile image. Removing the depth image or contrastive pretraining has a small effect on CVTP but results in a drop on FID. Contrastive re-ranking largely improves CVTP, indicating the necessity of obtaining multiple samples from the diffusion model. We also find that multiscale conditioning provides a small benefit on FID and CVTP. [Table 3. Ablation study (PSNR \u2191 / SSIM \u2191 / FID \u2193 / CVTP \u2191): Full 22.84 / 0.72 / 28.97 / 0.80; No RGB conditioning 22.13 / 0.70 / 34.31 / 0.76; No depth conditioning 22.57 / 0.71 / 33.16 / 0.80; No contrastive pretraining 22.82 / 0.71 / 32.98 / 0.79; No re-ranking 22.92 / 0.72 / 29.46 / 0.61; No multiscale 23.19 / 0.72 / 30.89 / 0.77. Since the fine-grained details of touch images can be determined from an RGB image, removing conditioning on the latter results in the largest performance drops. Re-ranking has a notable impact on CVTP, indicating the necessity of obtaining multiple samples from the diffusion model.] 5.3. Downstream Task I: Tactile Localization To help understand the quality of the captured TaRFs, we evaluate the performance of the contrastive model (used for conditioning our diffusion model) on the task of tactile localization. Given a tactile signal, our goal is to find the corresponding regions in a 2D image or in a 3D scene that are associated with it, i.e., we ask the question: what part of this image/scene feels like this? We perform the following evaluations on the test set of our dataset. Note that we run no task-specific training. [Figure 7. Tactile localization heatmaps. Given a tactile query image, the heatmap shows the image patches with a higher affinity to this tactile signal, as measured by a contrastive model trained on our dataset. We use a sliding window and compare each extracted patch with the touch signal. In each case, the center patch is the true position. Our model successfully captures the correlation between the two signals. This enables it to localize a variety of touch signals, including fine-grained geometry, e.g., a cable or a keyboard, various types of corners and edges, and large uniform regions, such as clothing. This ability enables our diffusion model to effectively propagate sparse touch samples to other visually and structurally similar regions of the scene.] 2D Localization.
To determine which parts of an image are associated with a given tactile measurement, we follow the same setup as SSVTP [28]. We first split the image into patches and compute their embeddings. Then, we generate the tactile embedding of the input touch image. Finally, we compute the pairwise similarities between the tactile and visual embeddings, which we plot as a heatmap. As we can see in Fig. 7, our contrastive encoder can successfully capture the correlations between the visual and tactile data. For instance, the tactile embeddings of edges are associated with edges of similar shape in the visual image. Note that the majority of tactile embeddings are highly ambiguous: all edges with a similar geometry feel the same. 3D Localization. In 3D, the association of an image to tactile measurements becomes less ambiguous. Indeed, since tactile-visual samples are rotation-dependent, objects with similar shapes but different orientations will generate different tactile measurements. Lifting the task to 3D still does not remove all ambiguities (for example, each side of a rectangular table cannot be precisely localized). Nonetheless, we believe it to be a good fit for a quantitative evaluation, since it\u2019s rare for two ambiguous parts of the scene to be touched with exactly the same orientation. We use the following experimental setup for 3D localization. Given a tactile image as a query, we compute its distance in embedding space to all visual test images from the same scene. Note that all test images are associated with a 3D location. We define as ground-truth correspondences all test images at a distance of at most r from the 3D location of the test sample. We vary r to account for local ambiguities. As is typical in the retrieval literature, we benchmark the performance with the mean Average Precision (mAP) metric. We consider three baselines: (1) chance, which randomly selects corresponding samples; (2) real, which uses the contrastive model trained on our dataset; and (3) real + estimated, which trains the contrastive model on both dataset samples and a set of synthetic samples generated via the scenes\u2019 NeRF and our touch generation model. Specifically, we render a new image and corresponding touch by interpolating the position of two consecutive frames in the training dataset. This results in a training dataset for the contrastive model that is twice as large. [Table 4. Quantitative results on 3D tactile localization, reported as mAP for r (m) = 0.001 / 0.005 / 0.01 / 0.05 / 0.1: Chance 3.55 / 6.82 / 10.25 / 18.26 / 21.33; Real 12.10 / 22.93 / 32.10 / 50.30 / 57.15; Real + Est. 14.92 / 26.69 / 36.17 / 53.62 / 60.61. Training the contrastive model on our dataset of visually aligned real samples together with estimated samples from new locations in the scene results in the highest performance.] The results, presented in Table 4, demonstrate the performance benefit of employing both real and synthetic tactile pairs. Combining synthetic tactile images with the original pairs achieves the highest performance at all distance thresholds. Overall, this indicates that touch measurements from novel views are not only qualitatively accurate, but also beneficial for this downstream task.
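The mAP evaluation described above can be sketched as ranking test images by embedding similarity and marking as relevant those whose 3D position lies within radius r of the query's location; this is an illustrative sketch under those assumptions, not the benchmark code:

```python
# mAP for embedding-based 3D tactile localization with a distance-threshold relevance rule.
import numpy as np

def average_precision(relevant: np.ndarray) -> float:
    """relevant: boolean array in ranked order; returns AP (0 if nothing is relevant)."""
    if relevant.sum() == 0:
        return 0.0
    hits = np.cumsum(relevant)
    precision_at_k = hits / (np.arange(len(relevant)) + 1)
    return float((precision_at_k * relevant).sum() / relevant.sum())

def localization_map(touch_emb, image_emb, touch_xyz, image_xyz, r: float) -> float:
    """touch_emb: (Q, D), image_emb: (M, D) pre-normalized; *_xyz: 3D positions."""
    aps = []
    for q in range(touch_emb.shape[0]):
        sims = image_emb @ touch_emb[q]                                   # cosine similarities
        order = np.argsort(-sims)                                         # best match first
        relevant = np.linalg.norm(image_xyz[order] - touch_xyz[q], axis=1) <= r
        aps.append(average_precision(relevant))
    return float(np.mean(aps))
```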
5.4. Downstream Task II: Material Classification We investigate the efficacy of our visual-tactile dataset for understanding material properties, focusing on the task of material classification. We follow the formulation by Yang et al. [56], which consists of three subtasks: (i) material classification, requiring the distinction of materials among 20 possible classes; (ii) softness classification, a binary problem dividing materials into either hard or soft; and (iii) smoothness classification, which requires the classification of materials as either rough or smooth. We follow the same experimental procedure as [56]: we pretrain a contrastive model on a dataset and perform linear probing on the sub-tasks\u2019 training set. Our experiments only vary the pretraining dataset, leaving all architectural choices and hyperparameters the same. We compare against four baselines: a random classifier (chance); the ObjectFolder 2.0 dataset [17]; the VisGel dataset [35]; and the Touch and Go dataset [56]. Note that the touch sensor used in the test data (GelSight) differs from the one used in our dataset (DIGIT). Therefore, we use for pretraining a combination of our dataset and Touch and Go. To ensure a fair comparison, we also compare to the combination of each dataset and Touch and Go. The findings from this evaluation, as shown in Table 5, suggest that our data improves the effectiveness of the contrastive pretraining objective, even though our data is from a different distribution. Moreover, we find that adding estimated touch probes for pretraining results in higher performance on all three tasks, especially smoothness classification. This indicates that not only does our dataset cover a wide range of materials, but our diffusion model also captures the distinguishable and useful patterns of different materials. [Table 5. Material classification accuracy (Material / Hard-Soft / Rough-Smooth): Chance 18.6 / 66.1 / 56.3; ObjectFolder 2.0 [17] 36.2 / 72.0 / 69.0; VisGel [35] 39.1 / 69.4 / 70.4; Touch and Go [56] 54.7 / 77.3 / 79.4; + ObjectFolder 2.0 [17] 54.6 / 87.3 / 84.8; + VisGel [35] 53.1 / 86.7 / 83.6; + Ours\u2217 (Real) 57.6 / 88.4 / 81.7; + Ours\u2217 (Real + Estimated) 59.0 / 88.7 / 86.1. We show the downstream material recognition accuracy of models pre-trained on different datasets. The final rows show the performance when combining different datasets with Touch and Go [56]. \u2217The task-specific training and testing datasets for this task are collected with a GelSight sensor. We note that our data comes from a different distribution, since it is collected with a DIGIT sensor [32].] 6. Conclusion In this work, we present the TaRF, a scene representation that brings vision and touch into a shared 3D space. This representation enables the generation of touch probes for novel scene locations. To build this representation, we collect the largest dataset of spatially aligned vision and touch probes. We study the utility of both the representation and the dataset in a series of qualitative and quantitative experiments and on two downstream tasks: 3D touch localization and material recognition. Overall, our work takes the first step towards giving current scene representation techniques an understanding of not only how things look, but also how they feel. This capability could be critical in several applications, ranging from robotics to the creation of virtual worlds that look and feel like the real world. Limitations. Since the touch sensor is based on a highly zoomed-in camera, small (centimeter-scale) errors in SfM or visual-tactile registration can lead to misalignments of several pixels between the views of the NeRF and the touch samples, which can be seen in our TaRFs.
Another limitation of the proposed representation is the assumption that the scene\u2019s coarse-scale structure does not change when it is touched, an assumption that may be violated for some inelastic surfaces. Acknowledgements. We thank Jeongsoo Park, Ayush Shrivastava, Daniel Geng, Ziyang Chen, Zihao Wei, Zixuan Pan, Chao Feng, Chris Rockwell, Gaurav Kaul, and the reviewers for the valuable discussion and feedback. This work was supported by an NSF CAREER Award #2339071, a Sony Research Award, the DARPA Machine Common Sense program, and ONR MURI award N00014-21-1-2801."
16
+ }
intro_8K/test_introduction_short_2405.04674v1.json ADDED
The diff for this file is too large to render. See raw diff
 
intro_8K/test_introduction_short_2405.04682v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04682v1",
3
+ "title": "TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation",
4
+ "abstract": "Recent advances in diffusion-based generative modeling have led to the\ndevelopment of text-to-video (T2V) models that can generate high-quality videos\nconditioned on a text prompt. Most of these T2V models often produce\nsingle-scene video clips that depict an entity performing a particular action\n(e.g., `a red panda climbing a tree'). However, it is pertinent to generate\nmulti-scene videos since they are ubiquitous in the real-world (e.g., `a red\npanda climbing a tree' followed by `the red panda sleeps on the top of the\ntree'). To generate multi-scene videos from the pretrained T2V model, we\nintroduce Time-Aligned Captions (TALC) framework. Specifically, we enhance the\ntext-conditioning mechanism in the T2V architecture to recognize the temporal\nalignment between the video scenes and scene descriptions. For instance, we\ncondition the visual features of the earlier and later scenes of the generated\nvideo with the representations of the first scene description (e.g., `a red\npanda climbing a tree') and second scene description (e.g., `the red panda\nsleeps on the top of the tree'), respectively. As a result, we show that the\nT2V model can generate multi-scene videos that adhere to the multi-scene text\ndescriptions and be visually consistent (e.g., entity and background). Further,\nwe finetune the pretrained T2V model with multi-scene video-text data using the\nTALC framework. We show that the TALC-finetuned model outperforms the baseline\nmethods by 15.5 points in the overall score, which averages visual consistency\nand text adherence using human evaluation. The project website is\nhttps://talc-mst2v.github.io/.",
5
+ "authors": "Hritik Bansal, Yonatan Bitton, Michal Yarom, Idan Szpektor, Aditya Grover, Kai-Wei Chang",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "The ability to generate videos that simulate the physical world has been a long-standing goal of artificial intelligence [1, 2, 3, 4]. In this regard, text-to-video (T2V) models have seen rapid advancements by pretraining on internet-scale datasets of images, videos, and texts [5, 6]. Previous works [7, 8, 9, 10, 11, 12] primarily focus on training conditional denoising diffusion probabilistic models [13] on paired video-text data [14, 15]. After training, these models allow for video generation by sampling from the trained diffusion model, conditioned on a text prompt. However, most of the open-models such as ModelScope[10] VideoCrafter [16, 17], OpenSora [18] are trained with single-scene video-text dataset [14, 19], which is widely available and easy to acquire. However, real-world scenarios often require the generation of multi-scene videos from multi-scene descriptions (e.g., Scene1: \u2018A koala is napping on a tree.\u2019 Scene2: \u2018The koala eats leaves on the tree.\u2019). In such cases, the generated video should accurately depict the events in their temporal order (e.g., Scene2 \u2020 Equal Contribution. \u2217Equal Advising. Contact [email protected],[email protected]. Preprint. arXiv:2405.04682v1 [cs.CV] 7 May 2024 Scene 1: \u201cA red panda climbing a tree\u201d Scene 2 : \u201cThe red panda sleeps on the top of the tree\u201d Text2Video (a) Merging Captions (b) Merging Videos (c) Time-Aligned Captions (TALC) Text2Video Text2Video Text2Video \u201c{Scene 1} then {scene 2}\u201d Figure 1: Multi-scene video generation methods. (a) Generating a video by merging scene 1 and scene 2 descriptions. (b) The resulting video is composed from the video generated by the description of scene 1 and the video generated by the description of scene 2. (c) In our method (TALC) the generated video is conditioned on the description of scene 1 for the first half of the video frames and on the description of scene 2 for the later video frames. follows Scene1) while maintaining visual consistency, meaning that backgrounds and entities should remain consistent across scenes. While high-performance text-to-video models such as Sora [4] might be able to generate multi-scene videos, we point out that they are closed-source models trained with massive compute resources and lack sufficient details on the model design, training protocol, and datasets. In this work, we present a complementary approach and tackle the challenge of effectively leveraging the capabilities of base T2V models for multi-scene video generation. The multi-scene text-to-video generation differs from long video synthesis where the goal is to either interpolate (few frames to many frames) [8] or create continuing patterns of the single event in the generated video [11]. Prior works [20, 9] use a transformers [21, 22] to generate video frames for a given scene autoregressively. However, it is hard for their model to generate multiple scenes reliably as the context length increases with history of text descriptions and visual tokens [23] of the previous generated videos (e.g., generating Scene 4 conditioned on the Scene1, 2, 3 videos and descriptions). Other works [24] utilize a latent diffusion model [25] to generate video frames autoregressively by conditioning on the entire history of generated videos and scene descriptions. 
However, the approach is (a) slow due to repeated sampling, (b) generates only one frame per scene description, and (c) shown to work with only limited cartoon characters [26, 27] instead of wide range of visual concepts in the real-world. In this work, our goal is to generate multi-scene videos in the end-to-end manner, using a diffusion text-to-video generative model that is capable of producing content for a wide range of visual entities and actions. As shown in Figure 1(a), the naive approach to generating a multi-scene video for the scene de- scriptions (T \u2032 1, T \u2032 2) would condition the T2V generative model on the merged descriptions. In this setup, the diffusion model processes the entire scene description together, and lacks any information regarding the expected temporal order of events in the generated videos. As a result, we find that this approach leads to poor text-video alignment. As shown in Figure 1(b), an alternative approach generates videos for the individual text descriptions independently and concatenates them in the raw input space along the temporal dimension. While this approach achieves good alignment between the scene description and the scene-specific video segment, the resulting video lacks visual consistency in terms of entity and background appearances. Prior work [28, 29] generates multi-scene videos by utilizing knowledge of the entity, background, and their movements from large language models [30]. However, these videos are generated independently for each scene before being merged. Moreover, these methods do not offer a way to learn from real- world multi-scene video-text data. To remedy these challenges, we propose TALC (Time-ALigned Captions), a simple and effective framework to generate consistent and faithful multi-scene videos. As shown in Figure 1(c), our approach conditions the T2V generative model with the knowledge of the temporal alignment between the parts of the multi-scene video and multi-scene descriptions. 2 (c) Time-Aligned Captions (TALC) (b) Merging Videos (a) Merging Captions \u201cA grizzly bear catches a \ufb01sh in a rushing river\u201d \u201cThe grizzly bear looks over its territory.\u201d \u201cA grizzly bear catches a \ufb01sh in a rushing river then the grizzly bear looks over its territory.\u201d \u201cA grizzly bear catches a \ufb01sh in a rushing river\u201d \u201cThe grizzly bear looks over its territory.\u201d Figure 2: Examples of multi-scene video generation baselines. (a) Generating video on the merged descriptions, leads to a poor text-video alignment. (b) Generating videos for the individual text descriptions and concatenate them temporally, leads to a lack of background consistency. (c) Our approach (TALC) enhances the scene-level text-video alignment and maintains background consistency. Specifically, TALC conditions the visual representations of earlier video frames on the embeddings of the earlier scene description, and likewise, it conditions the representations of later video frames on the embeddings of the later scene description in the temporal dimension. Additionally, the temporal modules in the T2V diffusion architecture allows information sharing between video frames (the first half and the second half) to maintain visual consistency. Thus, TALC enhances the scene-level text- video alignment while providing all the scene descriptions to the diffusion model at once. 
Further, our TALC framework can enhance the multi-scene text-to-video generation capabilities with real-world multi-scene data (\u00a73.3). In our experiments, we assess the visual consistency (background and entity consistency) and multi- scene script adherence of the generated videos from Modelscope [10] and Lumiere [6]. Through our automatic and human evaluation, we find that merging scene descriptions leads to high visual consistency but poor text adherence. On the other hand, we observe that merging videos independently achieves the highest text adherence while the visual consistency is compromised. Interestingly, switching to TALC strikes an effective balance between visual consistency and text adherence, outperforming the baseline methods by 11.1 points on the overall score. This score represents the average of visual consistency and text adherence scores, as determined by human evaluation. Furthermore, we construct a multi-scene text-video dataset from real-world videos and fine-tune the T2V generative model using TALC. On our human evaluation, the generated videos from the TALC-finetuned model exhibit higher text adherence than the base model in multi-scene scenarios. Specifically, it outperforms the baseline methods by 15.5 points on the overall score. In summary, our contributions are:",
17
+ "main_content": "In this work, we focus on generating multi-scene videos from scene descriptions using a diffusionbased Text-to-Video (T2V) generative model. The initial step is to equip the generative model with the knowledge of a wide range of visual concepts and actions. This is achieved during the pretraining stage (\u00a72.1). Subsequently, we aim to utilize the base model for multi-scene text-to-video generation task, which we formalize in (\u00a72.3). In \u00a73, we propose our TALC framework and discuss collection of real-world multi-scene text-video data for finetuning the base T2V model. 3 2.1 Diffusion Models for Text-to-Video Generation Diffusion models [13, 31] p\u03b8(x) are a class of generative models that learn data distribution pdata(x). Due to their flexible design, we can train their class-conditional versions to learn class-conditional data distributions pdata(x|y) where y is the conditioning variable, that can take various forms such as labels from a dataset or text description accompanying in a video [32]. We assume a dataset S \u2282V \u00d7 T consisting of pairs of (Vj, Tj) where Vj \u2208RL\u00d73\u00d7H\u00d7W is a raw video consisting of 3 RGB channels, L frames, H height, W width, and Tj is a text caption. We use V and T to denote the domain of videos and text, respectively. The aim of T2V generative modeling is to learn the conditional distribution of the videos conditioned on the text pS(Vj|Tj). In this work, we consider diffusion-based generative models that learn the data distribution via iterative denoising of the input video zj \u2208RL\u00d7C\u00d7H\u2032\u00d7W \u2032. Here, zj can either represent the input video in the raw pixel space Vj [6] or it can represent the latent representation of the video zj = E(Vj) for the latent diffusion models [25] where E is an encoder network such as VAE [33]. Given zj, diffused variable z\u03c4,j = \u03b1\u03c4zj + \u03b2\u03c4\u03f5 are constructed where \u03f5 \u223cN(0, I) where \u03b1\u03c4 and \u03b2\u03c4 are sampled from the noise scheduler p\u03c4 [34] which define the noise levels the model is trained on. Finally, we train a denoiser network f\u03b8 [35, 36] that inputs the diffused variable z\u03c4 and embeddings of the text caption to predict the target vector y where y can be the original noise \u03f5, which minimizes the denoising score matching objective [13]: E(Vj,Tj)\u2208S,\u03c4\u223cp\u03c4 ,\u03f5\u223cN(0,I) \u0002 ||\u03f5 \u2212f\u03b8(\u03c4, z\u03c4,j, hj)||2 2 \u0003 (1) where hj = H(Tj) \u2208Rd is the embedding of the text caption Tj where H is the text embedding model [37] and d is the dimension size. 2.2 Text Conditioning Mechanism To ensure the effective textual controllability of video generation, the structure of the denoiser networks is equipped with a cross-attention mechanism [10, 8]. Specifically, it conditions the visual content z\u03c4 \u2208RL\u00d7C\u00d7H\u2032\u00d7W \u2032 on the text. To do so, we first repeat the text embeddings of the text caption rj = R(hj) \u2208RL\u00d7d where R is a function that repeats the input text embedding hj for L times in the temporal dimension. Intuitively, the repeat operation represents that the L frames of the video zj are semantically aligned with the textual description Tj or its text embedding rj. In \u00a73, we will manipulate this operation to make the model architecture aware of the video-text alignment in the multi-scene scenario. 
These repeated text embeddings rj are inputs to the spatial attention block as the key and value in the multi-head attention block. The cross-attention enables the intermediate visual features to capture the semantic information that facilitates an alignment between the language and vision embeddings. Formally, z\u2032 \u03c4,j = CAf\u03b8(Q = z\u03c4,j; K = rj; V = rj) (2) where CAf\u03b8 is the cross attention mechanism with Q, K, V as the query, key, and value, respectively, in the spatial blocks of the denoiser network. Additionally, z\u2032 \u03c4,j is the intermediate representation that is informed with the visual and textual content of the data. In addition to the spatial blocks, the denoiser network also consists temporal blocks that aggregate features across video frames which are useful for maintaining visual consistency in the generated video. 2.3 Multi-Scene Text-to-Video Generation In many real-world scenarios, such as movies, stories, and instructional videos [38], a video may depict multiple transitions with the same or changing entities, as well as multiple actions or events. In addition, the different video segments often share contextual information such as the background or location. These videos are considered multi-scene videos. In this work, we aim to generate multi-scene video X = {x1, x2, . . . , xn} from multi-scene descriptions Y = {y1, y2, . . . , yn} where n are the number of sentences and each sentence yj is a scene description for scene j. Additionally, the index j also defines the temporal order of events in the multi-scene script i.e., we want the events 4 Text2Video Denoising UNet Scene 1: \u201cA red panda climbing a tree.\u201d Scene 2 : \u201cThe red panda sleeps on the top of the tree\u201d Figure 3: The architecture of Time-Aligned Captions (TALC). During the generation process of the video, the initial half of the video frames are conditioned on the embeddings of the description of scene 1 (ry1), while the subsequent video frames are conditioned on the embeddings of the description of scene 2 (ry2). described in the scene j to be depicted earlier than the events described in the scene k where k > j. Further, we want the parts of the entire generated video X, given by xj, to have high video-text semantic alignment with the corresponding scene description yj, also referred to as text adherence. For instance, consider a two-scene description Y = {\u2018A red panda climbs on a bamboo forest.\u2019, \u2018The red panda sleeps peacefully in the treetop.\u2019}. Here, we need the T2V generative model to synthesize the appearance of the red panda (an entity) that remains consistent throughout the generated video, also referred to as entity consistency. In addition, we will expect that the context of the multi-scene video of a forest (a background) to remain consistent, also referred to as background consistency. 3 Method 3.1 TALC: Time-Aligned Captions for Multi-Scene T2V Generation Most of the existing T2V generative models [10, 16, 6] are trained with large-scale short video-text datasets (10 seconds 30 seconds) such as WebVid-10M [14]. Here, each instance of the dataset consists of a video and a human-written video description. These videos either lack the depiction of multiple events, or the video descriptions do not cover the broad set of events in the video, instead focusing on the major event shown. As a result, the pretrained T2V generative models only synthesize single video scenes depicting individual events. 
We introduce TALC, a novel and effective framework to generate multi-scene videos from diffusion T2V generative models based on the scene descriptions. Our approach focuses on the role of text conditioning mechanism that is widely used in the modern T2V generative models (\u00a72.2). Specifically, we take inspiration from the fact that the parts of the generated video xj should depict the events described in the scene description yj. To achieve this, we ensure that the representations for the part of the generated video aggregates language features from the scene description yj. Consider that we want to generate a multi-scene video X \u2208RL\u00d73\u00d7H\u00d7W from the scene descriptions yj \u2208Y , using a T2V generative model f\u03b8. Furthermore, we assume that individual video segments xj are allocated L/n frames within the entire video X. Let zX = [zx1; zx2; . . . ; zxn] \u2208RL\u00d7C\u00d7H\u2032\u00d7W \u2032 represent the representation for the entire video X, and zxj \u2208R(L/n)\u00d7C\u00d7H\u2032\u00d7W \u2032 for the jth part of the video that are concatenated in the temporal dimension. In addition, consider rY = {ry1, . . . , ryn} be the set of text embeddings for the multi-scene description Y and yj be an individual scene description. In the TALC framework, the Eq. 2 is changed to: z\u2032 \u03c4,xj = CAf\u03b8(Q = z\u03c4,xj, K = ryj, V = ryj) (3) z\u2032 \u03c4,X = [z\u2032 x1; z\u2032 x2; . . . ; z\u2032 xn] (4) Here, \u03c4 represents the timestamp in the diffusion modeling setup, which is applied during training as well as inference. We illustrate the framework in Figure 3. While TALC aims to equip the generative model with the ability to depict all the events in the multi-scene descriptions, the visual consistency 5 is ensured by the temporal modules (attentions and convolution blocks) in the denoiser network. By design, our approach can be applied to the pretrained T2V model during inference. 3.2 Baselines Here, we describe the baseline methods that could be used to generate videos for the multi-scene descriptions from a given diffusion text-to-video generative model. 3.2.1 Merging Captions In this setup, we create a single caption by merging all the multi-scene descriptions. Specifically, the multi-scene descriptions Y = {y1, y2, . . . , yn} can be written as a single prompt \u2018P = y1.Then, y2. . . . Then, yn.\u2019 For instance, the two-scene description Y = {\u2018A red panda climbs on a bamboo forest.\u2019, \u2018The red panda sleeps peacefully in the treetop.\u2019} will change to P = \u2018A red panda climbs on a bamboo forest. Then, the red panda sleeps peacefully in the treetop.\u2019 Subsequently, we generate a video from the T2V model f\u03b8 by conditioning it on P. While this approach mentions the temporal sequence of the events in a single prompt, the T2V model does not understand the temporal boundaries between the two events. Specifically, the Eq. 2 suggests that the visual features for all the video frames will aggregate information from the entire multi-scene description, at once, without any knowledge about the alignment between the scene description and its expected appearance in the generated video. 3.2.2 Merging Videos In this setup, we generate videos for each scene description individually and merge them in the raw input space. Formally, the individual scene description yi conditions the T2V model f\u03b8 to generate the parts of the multi-video xi. 
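To make the scene-aligned conditioning of Eq. (3)–(4) concrete, the sketch below splits the diffused latent into n equal temporal chunks and lets each chunk cross-attend only to its own scene embedding; `cross_attn` stands in for the cross-attention CA and the shapes are assumptions rather than the authors' implementation.

```python
import torch

def talc_condition(z_tau, scene_embs, cross_attn):
    """z_tau: (L, C, H', W') diffused latent of the full video.
    scene_embs: list of n per-scene embeddings r_{y_j}, each repeated to (L/n, d).
    Returns the conditioned latent z'_tau of Eq. (4)."""
    n = len(scene_embs)
    chunks = torch.chunk(z_tau, n, dim=0)          # L/n frames per scene
    conditioned = [
        cross_attn(q=z_j, k=r_j, v=r_j)            # Eq. (3): scene-local attention
        for z_j, r_j in zip(chunks, scene_embs)
    ]
    return torch.cat(conditioned, dim=0)           # re-joined along the time axis
```

The merging-videos baseline, by contrast, never shares information across scenes at this stage: each segment x_i is produced from y_i alone and the clips are only joined afterward.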
Finally, we stitch the individual videos together to synthesize the entire video X = x1, x2, . . . , xn. In this process, the parts of the multi-scene video closely adhere to the scene descriptions, leading to high text fidelity. However, since the generated videos do not have access to all the multi-scene descriptions (e.g., the video for Scene 2 is not informed about Scene 1), the visual consistency across the entire video is quite poor. 3.3 Multi-Scene Video-Text Data Generation 0:00 0:08 0:12 0:17 0:22 Seconds Gemini Multi-Image Captions The lady gets the dried/smoked prawns ready for use She then adds the dried crayfish to the pot Next, she includes tomato puree for that rich, tangy flavor Salt is added to taste, and everything is stirred together PyScene Scene Cuts Caption A woman in a colorful scarf is showing how to make a stew Figure 4: Our approach for generating time-aligned video captions. The process begins with PyScene cuts identifying the boundaries of distinct scenes within a video. Keyframes are then selected from the median of each scene. These frames are processed collectively through the Gemini model to produce multi-image captions that maintain narrative continuity by contextualizing each scene within the video\u2019s overall sequence. While our approach generates better multi-scene videos, the text adherence capabilities of the pretrained T2V generative model are limited. This is due to the lack of multi-scene video-text data during its pretraining. Unlike single video-text datasets, the multi-scene video-text datasets are not widely available and are hard to curate for model training. This is attributed to the fact that high-quality caption generation requires a lot of human labor which is time-consuming and expensive. Prior work such as ActivityNet [39] has curated human captions for specific video scenes 6 depicting useful actions in long videos. However, the video scenes are either overlapping or have a large temporal gap between them that will be harmful for natural and smooth variations between the generated multi-scene videos. Hence, the absence of high-quality captions for continuous video scenes in the dataset makes unsuitable for T2V generative training. To this end, we aim to create a real-world multi-scene video-text dataset to allow further training of the pretrained T2V models. Specifically, we leverage the capability of the multimodal foundation model, Gemini-Pro-Vision [40], to generate high-quality synthetic data for enhanced video-text training [41]. Formally, we start with a video-text dataset M = A \u00d7 B consisting of pairs of (Ai, Bi) where Ai is a raw video and Bi is the corresponding video description from the dataset. Subsequently, we utilize PySceneDetect library 1 to generate continuous video scenes from Ai = {Ai,1, Ai,2, . . . , Ai,m} where m is the number of scene cuts in the video. A similar approach was used in a prior work [12] to detect scene changes in the video data. Then, we sample the middle video frame Fi,j as a representative of the semantic content in the video scene Ai,j. Finally, we input all the video frames Fi = {Fi,1, . . . , Fi,m} for a single video Ai and the entire video caption Bi to a large multimodal model [40]. Specifically, the model is prompted to generate high-quality captions for each of the frames Fi,j such they form a coherent narrative guided by the common caption Bi. We provide the prompt provided to the multimodal model in Appendix \u00a7A. 
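The scene-cut and keyframe steps of this pipeline can be sketched as follows, assuming the PySceneDetect 0.6 `detect`/`ContentDetector` API and OpenCV; `caption_scenes` is only a placeholder for the prompt sent to the multimodal captioning model, not a real API.

```python
import cv2
from scenedetect import detect, ContentDetector

def middle_frames(video_path, max_scenes=4):
    """Detect scene cuts and return the middle frame of each scene (up to 4)."""
    scenes = detect(video_path, ContentDetector())[:max_scenes]
    cap = cv2.VideoCapture(video_path)
    frames = []
    for start, end in scenes:
        mid = (start.get_frames() + end.get_frames()) // 2
        cap.set(cv2.CAP_PROP_POS_FRAMES, mid)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)                   # representative keyframe
    cap.release()
    return frames

# frames = middle_frames("cooking.mp4")
# scene_captions = caption_scenes(frames, original_caption)  # one caption per scene
```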
In Figure 4 we provide an instance for the multi-scene video-text data generation. Datasets. To construct a multi-scene video-text dataset, we utilize existing dataset that include natural (real) videos and associated high-quality human-written captions that summarize the entire video. Specifically, we choose MSR-VTT [42] and VaTeX [43]. Most of the videos in MSR-VTT are 10-30 seconds long while VaTeX consists 10 seconds long videos. In addition, each video in MSR-VTT and VaTex consists 20 captions and 10 captions, respectively, out of which one is selected at random for multi-scene data generation. As described above, a single video is cut into multiple video segments using Pyscene library. In our experiments, we retain the first four video segments and discard any additional segments if the library generates more than four. Since the accuracy of the multi-scene captioning and the computational demands during finetuning are influenced by the number of scenes, we opt to limit the scene count to four for our experiments. However, future work could employ similar methodologies to scale the number of scenes, given more computing power and advanced multi-scene captioning models. We provide the data statistics for the final multi-scene data in Appendix \u00a7G. 4 Evaluation In this section, we describe the evaluation scheme for videos generated from multi-scene text descriptions. First, we describe the evaluation metrics that we aim to assess in this work (\u00a74.1). Then, we generate multi-scene descriptions for a diverse set of tasks (\u00a74.2). Finally, we present the details for automatic and human evaluation of the generated videos (\u00a74.3). 4.1 Metrics The ability to assess the quality of the generated multi-scene videos is a challenging task itself. As humans, we can judge the multi-scene videos across diverse perceptual dimensions [44] that the existing automatic methods often fails to capture [45]. Following [28], we focus on the visual consistency of the generated video, text adherence capabilities of the T2V models, and video quality of the video. Here, we present the metrics with the aspects that they intend to assess in the generated video for multi-scene text description. Visual Consistency. This metric aims to assess the (entity or background) consistency between the frames of the multi-scene videos. Here, the entity consistency aims to test whether the entities in the multi-scene video are consistent across the video frames. For instance, the appearance of an animal should not change without a change described in the text description. In addition, the background consistency aims to test whether the background of the multi-scene video remains consistent across the video frames. For instance, the room should not change without a change description in the text. 1https://github.com/Breakthrough/PySceneDetect 7 Text Adherence. This metric aims to test whether the generated video adheres to the multi-scene text description. For instance, the events and actions described in the text script should be presented in the video accurately, and in the correct temporal order. In our experiments, we compute the visual consistency and text adherence with the automatic and human evaluators. Further, we compute the overall score, which is the average of the visual consistency and text adherence scores. 
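A minimal sketch of how these scores can be combined, using the {yes, partial, no} judge responses described in Sec. 4.3 (the function and variable names are illustrative):

```python
SCORE = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def overall_score(entity, background, adherence):
    """Average the two consistency judgements into visual consistency, then
    average visual consistency with text adherence into the overall score."""
    visual_consistency = (SCORE[entity] + SCORE[background]) / 2
    return (visual_consistency + SCORE[adherence]) / 2
```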
In addition, we also assess the visual quality of the generated videos using human evaluation to understand whether the video contains any flimsy frames, shaky images, or undesirable artifacts (Table 1. 4.2 Task Prompts Here, we curate a set of task prompts for diverse scenarios, aiming to holistically assess the quality of the generated videos. Single character in multiple visual contexts (S1). In this scenario, we instruct an LLM, GPT-4, to create a coherent script consisting of four scenes. Each scene features a specific animal character performing diverse activities in every scene. This task assesses the capability of the T2V model to generate consistent appearance of the entity and its background while adhering to the different actions (or events) described in the multi-scene text script. For instance, a generated script could be \u2018Scene 1: A red panda is climbing a tree. Scene 2: The red panda eats the leaves on the tree. Scene 3: The red panda lies down on the branch of the tree. Scene 4: The red panda sleeps on the branch\u2019. In total, we generate 100 prompts in this scenario. Different characters in a specific visual context (S2). In this scenario, we instruct a language model, GPT-4, to create a coherent script consisting of four scenes. Each scene features different animal characters engaging in the same activity in every scene [20]. This task assesses the capability of the T2V model to generate consistent appearance of the background while adhering to the appearance of the different characters in the multi-scene text script. For instance, a generated script could be \u2018Scene 1: A cat leaps onto countertop. Scene 2: A dog leaps onto the same countertop. Scene 3: A rabbit leaps onto the same countertop. Scene 4: A raccoon leaps onto the same countertop\u2019. In total, we generate 100 prompts in this scenario. Multi-scene captions from real videos (S3). Here, we aim to assess the ability of the model to generate multi-scene videos for open-ended prompts that are derived from real-world videos. This task also assesses the ability of the T2V model to generate consistent appearances of the entity and its background while adhering to multi-scene descriptions. Specifically, we use our multi-scene video-text data generation pipeline (\u00a73.3) to create such prompts for the real videos from the test splits of the video-text datasets. For example, a multi-scene text script could be \u2018Scene 1: A beauty vlogger introduces her skincare routine. Scene 2: She applies a serum to her face, smoothing it in\u2019. We present a sample of the various task prompts in the Appendix \u00a7B. In total, we generate 100 prompts in this scenario. 4.3 Evaluator In this work, we devise an automatic evaluation framework and perform human evaluation to assess the quality of the multi-scene generated videos. Automatic Evaluation. Here, we utilize the capability of a large multimodal model, GPT-4-Vision [46], to reason over multiple image sequences. First, we sample four video frames, uniformly, from each scene in the generated video (e.g., 8 videos frames for two-scene video). Then, we prompt the multimodal model with the temporal sequence of video frames from different scenes and the multi-scene text description. Specifically, we instruct the multimodal model to decide the quality of the generated video across various metrics including entity consistency, background consistency, and text adherence. 
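For reference, a small sketch of the uniform per-scene frame sampling performed before prompting the judge (purely illustrative, not the evaluation code itself):

```python
import numpy as np

def sample_eval_frames(frames, n_scenes, per_scene=4):
    """Pick `per_scene` uniformly spaced frames from each scene segment."""
    seg = len(frames) // n_scenes
    picks = []
    for s in range(n_scenes):
        idx = np.linspace(s * seg, (s + 1) * seg - 1, per_scene).astype(int)
        picks.extend(frames[i] for i in idx)
    return picks
```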
For each metric, the multimodal model assigns one of three possible response {yes = 1, partial = 0.5, no = 0}. For instance, yes for the entity consistency metric implies that the video frames sampled from the generated video have consistent appearance of the entity described in the multi-scene script. In this work, we do not utilize any existing video-text alignment models [47, 41] for evaluating text adherence as they are trained on single-scene video-text datasets. We present the automatic evaluation prompt in Appendix \u00a7C. 8 Human Evaluation. We also conduct a human evaluation to assess the multi-scene generated videos along the dimensions of visual consistency, text adherence, and visual quality. Specifically, we ask the annotators from Amazon Mechanical Turk (AMT) to choose one of three options for each metric {yes, partial, no}, similar to the automatic evaluation. In addition, we choose the annotators that pass a preliminary qualification exam. We present the screenshot of the UI in Appendix \u00a7D. 4.4 Evaluation Setup Since merging captions (\u00a73.2) and TALC (\u00a73.1) methods input the entire multi-scene text description at once, the quality of the video generated by these methods is influenced by the number of scenes described in the text script. Hence, we calculate the performance of the baselines and TALC by averaging the scores assigned to videos generated for two, three, and four scenes. Additionally, we report on visual consistency by averaging the performance across the entity and background consistency metrics. Here, the entity consistency scores are calculated for the task prompts S1 and S3 (since S2 aims to change the characters across scenes), and the background consistency and text adherence scores are computed for all the task prompts. We also evaluate the impact of TALC-based finetuning on the single scene generation in Appendix \u00a7I. 5 Experiments 5.1 Text-to-Video Generative Models In this work, we utilize ModelScope [10] and Lumiere [6] T2V models for multi-scene video generation. Here, ModelScope is an open-source T2V model with 1.7 billion parameters including the video encoder, text encoder, and denoising U-net network. Specifically, it is trained to generate 16 video frames on the mix of WebVid [14] video-text dataset and LAION [48] image-text dataset. We perform most of our experiments on ModelScope due to its easy-of-access and adoption in prior works [28]. In addition, we also include Lumiere-T2V, a model that leverages space-time U-Net denoising networks to generate high-quality videos. In this work, we include early experiments with Lumiere to showcase the flexibility of the TALC approach for multi-scene video generation. Base model with TALC. As described in \u00a73.1, our approach modifies the traditional text-conditioning mechanism to be aware of the alignment between text descriptions and individual video scenes. By design, the TALC framework can be applied to the base T2V model during inference, without any multi-scene finetuning. Thus, we compare the performance of the multi-scene videos generated from ModelScope and Lumiere T2V base models under three settings: merging captions, merging videos, and TALC. In this setting, we generate 16 frames per scene from ModelScope and 80 frames per scene from Lumiere. We provide more details on the inference in Appendix \u00a7F. Finetuning with TALC. 
Since the base model is pretrained with single-scene data, we aim to show the usefulness of TALC framework when we have access to the multi-scene video-text data. To this end, we finetune ModelScope on the multi-scene video-text data (\u00a73.3) with TALC framework. As a pertinent baseline, we also finetune the ModelScope without TALC framework by naively merging the scene-specific captions in the raw text space. In this setting, we finetune the T2V model with 8 frames per scene and the maximum number of scenes in an instance is set to 4. We provide further details on the finetuning setup in Appendix \u00a7H. The inference settings are identical to the prior method of generating videos from the base model without finetuning. In this section, we present the results for the baselines and TALC framework averaged over a diverse task prompts and multiple scenes using automatic evaluation (\u00a75.2) and human evaluation (\u00a75.3). Finally, we provide qualitative examples for the multi-scene generated videos to showcase the usefulness of our approach (\u00a75.4). 5.2 Automatic Evaluation We compare the performance of the baselines (e.g., merging captions and merging videos) with the TALC framework for ModelScope and Lumiere using the automatic evaluation in Figure 5. TALC outperforms the baselines without any finetuning. In Figure 5(a), we find that the overall score, average of visual consistency and text adherence, of the multi-scene videos generated using 9 Visual Consistency T ext Adherence Overall Score 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 Average performance (0-100) 91.0 65.0 89.9 77.0 89.0 32.4 70.0 47.2 37.5 62.3 61.7 67.5 68.6 57.3 75.6 Merging Captions (Base) Merging Videos (Base) TALC (Base) Merging Captions (F .T.) TALC (F .T.) (a) Performance on ModelScope T2V model. Visual Consistency T ext Adherence Overall Score 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 Average performance (%) 94.7 68.0 97.8 34.0 65.0 39.0 64.4 66.5 68.4 Merging Captions (Base) Merging Videos (Base) TALC (Base) (b) Performance on Lumiere T2V model. Figure 5: Automatic evaluation results for (a) ModelScope and (b) Lumiere. In (a), we observe that TALC-finetuned ModelScope model achieves the highest overall score, that is the average of the visual consistency and text adherence scores. In (b), we find that TALC framework with the Lumiere base model outperforms merging captions and merging videos on the overall scores. We report the average performance across the diverse multi-scene prompts and the number of generated scenes. the base ModelScope with TALC (68.6 points), outperforms the overall score achieved by the videos generated using merging captions (61.7 points) and merging videos (67.5 points) with the base ModelScope. Specifically, we observe that the visual consistency of the generated video is high for merging captions (91 points) and TALC (89.9 points) while it is low for merging videos (65 points). This indicates that merging videos independently for the individual scene descriptions does not preserve the background and entity appearances across the different frames. In addition, we observe that the text adherence using TALC outperforms merging captions by 14.8 points, while the text adherence is the highest with a score of 70 points using merging videos. This can be attributed to the design of the merging videos baseline where individual video scenes adhere to the scene-specific descriptions well. 
Hence, merging videos independently approach can be viewed as an upper bound on the text adherence metric. In Figure 5(b), we observe similar trends for the Lumiere T2V generative model. Specifically, we find that the overall score for TALC outperforms merging captions and merging videos by 4 points and 2 points, respectively. In addition, we observe that merging captions and TALC achieve a high visual consistency score while merging videos independently has poor visual consistency. Further, we find that TALC outperforms merging captions by 5 points on text adherence, while merging videos achieves the highest text adherence 65 points. This highlights that the model more easily generates 10 multi-scene videos that adhere to individual text scripts, whereas adherence to the text diminishes when the model is given descriptions of multiple scenes all at once. Finetuning with TALC achieves the best performance. Earlier, we evaluated the usefulness of the TALC framework with the base model. However, the base models are trained with the singlescene video-text data that might limit their capability for multi-scene video generation. To alleviate this issue, we finetune ModelScope T2V model on the multi-scene video-text data (\u00a73.3). Specifically, we finetune the model using the merging captions method and TALC framework, independently. In Figure 5(a), we find that finetuning with TALC achieves the highest overall score of 75.6 points in comparison to all the baselines. Specifically, we observe that the visual consistency does not change much with finetuning using the TALC method (89.9 points vs 89 points). Interestingly, we observe that finetuning with merging captions reduces the visual consistency by a large margin of 14 points. This can be attributed to the lack of knowledge about the natural alignment between video scenes and individual scene descriptions, which gets lost during the merging of captions. Additionally, we find that the text adherence of the TALC-finetuned model is 15.1 points more than the text adherence of the TALC-base model. Similarly, we find that the text adherence of the merging captions-finetuned model is 5.1 points more than the text adherence of the merging captions-base model. This highlights that finetuning a T2V model with multi-scene video-text data helps the most with enhancing its text adherence capability. Fine-grained Results. To perform fine-grained analysis of the performance, we assess the visual consistency and text adherence scores for the baselines and TALC framework across diverse task prompts and number of scenes on ModelScope. We present their results in Appendix \u00a7E. In our analysis, we find that finetuning with TALC achieves the highest overall score over the baselines across all the scenarios. In addition, we notice that the highest performance is achieved in the scenario that consist of the different entities in a specific visual context. Further, we observe that the performance of the all the methods reduces when the task prompts get more complex i.e., multiscene captions from real videos. In addition, we observe that finetuning with TALC achieves the highest overall score over the baselines across all the number of scenes. Specifically, we observe that the performance of the merging captions and TALC framework reduces as the number of scenes being generated increases. Overall, we show that the TALC strikes a good balance between visual consistency and text adherence to generate high-quality multi-scene videos. 
5.3 Human Evaluation Table 1: Human evaluation results on the visual quality of the generated videos from ModelScope. We observe that the visual quality of the generated videos are close to each other for the base model. However, finetuning the model with merging captions reduces the video quality by a large margin while TALC-finetuned model retains the video quality. Method Quality Merging Captions (Base) 80.5 Merging Videos (Base) 86.5 TALC (Base) 84.5 Merging Captions (F.T.) 63.4 TALC (F.T.) 83.3 TALC achieves the best performance in human evaluation. We compare the performance of the baselines and TALC framework for ModelScope using human evaluation in Figure 6. We find that TALC-finetuned model outperforms the merging captions and merging video methods with the base model by 12 points and 15.5 points, respectively, on the overall score. In addition, we find that using TALC framework in the base model outperforms the merging captions and merging video methods with the base model by 7.6 points and 11.1 points, respectively, on the overall score. Further, we observe that the merging captions with the base model achieves the highest visual consistency score of 96.5 points while it is the lowest for merging videos generated from the base model. In addition, we find that the text adherence of the TALCfinetuned and TALC-base model is better than merging captions-finetuned and merging captions-base model, respectively. Our results highlight at the benefit of including the inductive bias of temporal alignment between the video scenes and their scene descriptions for multi-scene video generation. Visual quality of the generated videos. We compare the visual quality of the generated videos using human evaluation in Table 1. We find that the visual quality of videos generated from the base 11 Visual Consistency T ext Adherence Overall Score 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 Average performance (%) 96.5 55.0 92.3 80.0 86.4 33.0 67.5 52.5 42.3 67.2 64.8 61.3 72.4 61.1 76.8 Merging Captions (Base) Merging Videos (Base) TALC (Base) Merging Captions (F .T.) TALC (F .T.) Figure 6: Human evaluation results for ModelScope model. We observe that the base model using the TALC framework outperforms the merging captions and merging videos baselines on the overall score. In addition, TALC-finetuned model enhances the text adherence and achieves the highest overall score. We report the average performance across the diverse multi-scene prompts and the number of generated scenes. model ranges from 80.5 \u221286.5 using the baselines and TALC framework. However, we observe that the visual quality of generated videos is quite poor for the model finetuned with merging captions with a score of 63.4 points. This highlights that finetuning a T2V model with multi-scene video-text data by naively merging the scene-specific descriptions in the raw text space leads to undesirable artifacts in the generated video. Finally, we find that the TALC-finetuned model (83.3) achieves a video quality score similar to that of the TALC-base model (84.5), indicating that our finetuning data preserves the visual quality observed during the model\u2019s pretraining. While our work is centered around multi-scene evaluation, we also perform single-scene evaluation in Appendix \u00a7I. 5.4 Qualitative Analysis We provide qualitative examples of generating multi-scene videos using ModelScope (fine-tuned with TALC) and Lumiere (base model with TALC) for diverse scenarios in Figure 12. 
Our analysis reveals that both ModelScope and Lumiere are capable of producing multi-scene videos that exhibit high text adherence and visual consistency. Considering the case of the same animal engaging in multiple actions (referred to as \"one character multiple contexts\"). The videos generated by ModelScope successfully maintained the same animal while varying the background and action between the scenes. Conversely, the videos generated by Lumiere displayed the same animal performing different actions with minimal background alterations. We believe that this distinction is attributed to ModelScope\u2019s fine-tuning with TALC. Considering different animals within a particular visual setting (referred to as \"multiple-characters same context\"), both ModelScope and Lumiere demonstrated impressive abilities in preserving the consistency of the background across the videos and adhering closely to the provided text. During our analysis, we noticed that the multi-scene captions derived from real videos (referred to as \"open-ended captions\") exhibited a substantial number of changes between the various scenes. In this scenario, Lumiere, when employed without fine-tuning, displayed challenges in adhering to the text, while ModelScope achieved a higher degree of text adherence but was also prone to visual artifacts. 6 Related Work Text-to-Video Generative Modeling. The field of text-to-video (T2V) synthesis has significantly evolved from its inception with models like VGAN [2] and MoCoGAN [49], leveraging the foun12 dational technologies of GANs [50] and VAEs [51] to produce concise, single-scene videos. The narrative depth was further expanded through transformer-based architectures such as CogVideo [52] and VideoGPT [53], enhancing the complexity of video content yet remaining within the confines of single scenes. The advent of diffusion models, exemplified by Imagen Video [54], marked a notable advancement in T2V synthesis. Despite these strides, the challenge of creating multi-scene videos that reflect the complexity of the physical world [1, 2, 3] remains. Our work, TALC, extends the capabilities of T2V models to multi-scene storytelling, filling a crucial gap in the synthesis landscape. Image-to-Video Animation. The exploration of multi-scene video generation, innovative methods such as Lumiere [6] and Make-a-Video [55] have employed a two-step process, transforming text to images and then animating these images into videos. While these approaches have advanced visual quality, they often fall short in weaving seamless multi-scene narratives. This limitation is echoed in the work of Emu Video [8], which underscores the difficulty of achieving narrative coherence across multiple scenes. TALC focuses on direct generation of multi-scene narratives from textual prompts aiming for a narrative flow and visual consistency across scenes. Multi-Scene Video Generation. The pursuit of multi-scene T2V synthesis has been furthered by recent innovations like Phenaki [20] and Stable Video Diffusion [12], which have explored new frontiers in video generation from textual prompts and the scaling of latent diffusion models, respectively. Additionally, Dreamix [56] and Pix2Video [57] have broadened the scope of diffusion models, applying them to video editing and animation. Despite these advancements, the task of generating videos that convey coherent narratives across multiple scenes remains formidable, highlighted by recent works such as VideoPoet [9], ModelScope [10] and Make-A-Scene [58]. 
TALC tackles this task and offers a framework produces videos spanning multiple scenes. We also introduce nuanced evaluation approach. This approach integrates both automated assessments and human evaluations to rigorously gauge the quality and narrative coherence of the generated content, evaluating text adherence, object consistency and background consistency, contributing to the ongoing refinement of T2V synthesis. 7 Conclusion We introduced TALC, a simple and effective method for improving the text-to-video (T2V) models for multi-scene generation. Specifically, it incorporates the knowledge of the natural alignment between the video segments and the scene-specific descriptions. Further, we show that TALCfinetuned T2V model achieve high visual consistency and text adherence while the baselines suffer from one or both of the metrics. Given its design, our framework can be easily adapted into any diffusion-based T2V model. An important future direction will be to scale the amount of multi-scene video-text data and deploy TALC framework during pretraining of the T2V models. 8 Acknowledgement We would like to thank Ashima Suvarna for providing feedback on the draft. Hritik Bansal is supported in part by AFOSR MURI grant FA9550-22-1-0380."
18
+ }
intro_8K/test_introduction_short_2405.04700v1.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04700v1",
3
+ "title": "Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures",
4
+ "abstract": "Large Language Models (LLMs) deployed on edge devices learn through\nfine-tuning and updating a certain portion of their parameters. Although such\nlearning methods can be optimized to reduce resource utilization, the overall\nrequired resources remain a heavy burden on edge devices. Instead,\nRetrieval-Augmented Generation (RAG), a resource-efficient LLM learning method,\ncan improve the quality of the LLM-generated content without updating model\nparameters. However, the RAG-based LLM may involve repetitive searches on the\nprofile data in every user-LLM interaction. This search can lead to significant\nlatency along with the accumulation of user data. Conventional efforts to\ndecrease latency result in restricting the size of saved user data, thus\nreducing the scalability of RAG as user data continuously grows. It remains an\nopen question: how to free RAG from the constraints of latency and scalability\non edge devices? In this paper, we propose a novel framework to accelerate RAG\nvia Computing-in-Memory (CiM) architectures. It accelerates matrix\nmultiplications by performing in-situ computation inside the memory while\navoiding the expensive data transfer between the computing unit and memory. Our\nframework, Robust CiM-backed RAG (RoCR), utilizing a novel contrastive\nlearning-based training method and noise-aware training, can enable RAG to\nefficiently search profile data with CiM. To the best of our knowledge, this is\nthe first work utilizing CiM to accelerate RAG.",
5
+ "authors": "Ruiyang Qin, Zheyu Yan, Dewen Zeng, Zhenge Jia, Dancheng Liu, Jianbo Liu, Zhi Zheng, Ningyuan Cao, Kai Ni, Jinjun Xiong, Yiyu Shi",
6
+ "published": "2024-05-07",
7
+ "updated": "2024-05-07",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI",
12
+ "cs.DC",
13
+ "cs.IR"
14
+ ],
15
+ "label": "Original Paper",
16
+ "paper_cat": "Retrieval AND Augmented AND Generation AND RAG",
17
+ "gt": "The emerging Large Language Models (LLMs) are deployed pri- marily on centralized cloud platforms [1, 2] (Cloud LLMs), raising concerns about user privacy and trustworthy issues [3]. These is- sues become even more prominent in areas such as healthcare [4], companionship [5], and personal assistance [6], where the user privacy and trustworthiness of LLMs are crucial. To address these issues, the cloud LLMs will eventually transform into personalized LLMs, capable of generating personalized responses, deployed on edge devices (Edge LLMs), where users can keep all their private data and the model learns from those data locally. To better suit the needs of individual users, Edge LLMs must learn from user interactions. However, their capability of learning is constrained by their limited RAM and computational power. Similar to Cloud LLMs, the Edge LLMs primarily learn by fine- tuning their model parameters. Yet, given that these models often contain over 3 billion parameters, updates can be challenging, even with numerous efforts to accelerate them [7\u20139]. For example, using the experimental high-performance embedded system like NVIDIA- AGX, the pockengine method [9] can still take 90 hours to learn from a middle-sized dataset Alpaca with only 52k documents, making this option impractical for normal users. E x : \u201cI am sick?\u201d Sentence Embedding Model E(x) User Query Profile Data Embedding Space S \u2026 P(x, d83) P(x, d29) P(x, d37) Top k (k = 1) DAC ADC \u2026 CiM E(d83) Data: d83, d29, d37 User Query LLM Output NVM Digital Logic \ud835\udc6c(\ud835\udc99) \u2219\ud835\udc6c(\ud835\udc85\ud835\udc8a) = \ud835\udc0f(\ud835\udc31, \ud835\udc85\ud835\udc8a) E(d1) E(d2) E(d3) E(d4) E(dn) Document 2 Document 1 Document n \u2026 Figure 1: The workflow of RAG on edge-based CiM. CiM performs max inner product search (MIPS) to retrieve the top-ranked documents, concatenating them with user query to allow the LLM to generate personalized responses. Retrieval-augmented generation (RAG), on the other hand, is a more resource-efficient choice [10], and hence becoming the de facto learning method for Edge LLMs. In a typical RAG system, it consists of a retriever and a generator. The retriever is commonly backed by max inner product search (MIPS). When the retriever receives a user query, it will retrieve the most relevant document from profile data, as shown in Figure 1. The profile data has many documents, and each document \ud835\udc51\ud835\udc56contains specific information that may be relevant to user queries. The generator can be seen as a LLM, which takes the user query \ud835\udc65and retriever-obtained documents as a prompt and generates a corresponding response. For every document\ud835\udc51\ud835\udc56and the user query \ud835\udc65, RAG utilizes a sentence embedding model shown in Figure 1 to convert them into vectors (i.e., \ud835\udc38(\ud835\udc51\ud835\udc56) and \ud835\udc38(\ud835\udc65), respectively). The vectors for documents can be named as document embeddings and stored as a matrix as shown in Figure 1. The vector for user query, named query embedding \ud835\udc38(\ud835\udc65), will be used in MIPS to perform inner product with every document embedding. The larger the product \ud835\udc43(\ud835\udc65,\ud835\udc51\ud835\udc56), the more semantic similar it will be between the user query and the document. 
Using RAG, Edge LLMs can provide user-preferred responses by retrieving relevant documents from profile data, and the profile data can be incrementally updated with new documents. This is an efficient learning process without costly updating the model parameters via fine-tuning [11]. Other than the inevitable LLM inference cost, the primary computational cost of RAG is about retrieval, which is more than ten times less than the cost of updating model parameters. While the computational cost of RAG is more edge-friendly, there still exist two issues impeding RAG from being deployed for real-time user interaction on Edge LLMs. Firstly, the growing profile data as stored cannot be unlimited without affecting the access time. If the size of the profile data exceeds the RAM capacity, arXiv:2405.04700v1 [cs.LG] 7 May 2024 Ruiyang Qin1, Zheyu Yan1, Dewen Zeng1, Zhenge Jia1, Dancheng Liu2, Jianbo Liu1, Ahmed Abbasi1,Zhi Zheng1, Ningyuan Cao1, Kai Ni1, Jinjun Xiong2, Yiyu Shi1 it will need to be offloaded into the storage, such as a hard disk drive (HDD) or solid-state drive (SSD). Accessing data from HDD or SSD will significantly increase the data transfer latency [12], rendering real-time user interaction impractical. Secondly, the core retrieval method of RAG, MIPS, may experience decreased efficiency as profile data grows, and it can become potentially prohibitive when dealing with overwhelmingly large datasets. For example, on Raspberry Pi 4B, MIPS can take 5 minutes to find one appropriate profile data among 21M documents [10], which is even longer than the 2-minute inference time of an Edge LLM. Unfortunately, few efforts have been made to optimize RAG towards Edge LLMs. Thus, we propose to utilize the Computing-in-Memory (CiM) architecture to address this issue. As shown in Figure 1, CiM ar- chitectures using memory arrays have shown substantial promise in accelerating matrix-vector multiplication [13], which is the key operation of MIPS. The CiM architectures often utilize massive parallel processing to perform computations directly within the memory array where the data is stored, such that they can minimize the data movement through in-situ data access and significantly increase the throughput [14]. Given the same amount of documents, CiM can finish computation within 50ms [15], which is negligible compared to the computation latency on normal edge devices. Fur- thermore, by incorporating non-volatile memory (NVM) devices, such as phase-change memories (PCMs), resistive random-access memories (RRAMs), and ferroelectric field-effect transistors (Fe- FETs), CiM can outperform conventional MOSFET-based designs in terms of energy efficiency [16]. 0.00 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 Level of noise ( ) 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy Citation Movie Rating News DBLP Figure 2: The impact on MIPS accuracy when the RAG\u2019s doc- ument embedding is perturbed by various levels of Gaussian noise caused by the device variations. An accurate retrieval means the document retrieved under the impact of the noise is the same as that retrieved without noise. Unfortunately, simply changing the underlying hardware is not enough, as the non-idealities of the NVM devices in CiM array could greatly deteriorate the RAG performance. First, the operations per- formed in CiM architectures are susceptible to various sources of noise, including electronic noise (thermal, shot, and flicker), device- to-device variability, and line noise from the supporting circuitry [17]. 
These noise sources can corrupt the computations, especially when the signal levels are close to the noise floor, which is a common scenario in high-precision tasks. Such noise issues are critical in RAG applications where the accuracy and quality of the generated content heavily rely on the precision of the underlying computations. Additionally, the CiM architecture is primarily designed and optimized for low-resolution computation [18]. Moreover, CiM arrays are typically sized at a fixed dimension, such as 64x64 [19], which is different from the documents\u2019 embedding dimension (e.g., 128). Therefore, both RAG\u2019s data precision (typically FP32) and its embedding dimension need to be reduced to fit the size of CiM\u2019s crossbar arrays. To illustrate the impact of these constraints on RAG, we present a preliminary study on MIPS performance in Figure 2, where we use a simple yet representative Gaussian noise to simulate the noise from the device variations in CiM. As shown in Figure 2, as the noise level increases, MIPS accuracy (specified in Section 4.1.3) drops dramatically, approaching random guessing. To address these issues, we further propose a novel optimization framework for CiM-backed RAG, called Robust CiM-backed RAG (RoCR). The framework consists of three parts. The first part is a contrastive learning method, which we use to optimize the document embedding model. The second part is a novel data construction method that generates both positively and negatively labeled data pairs for contrastive learning. The profile data can either be labeled to indicate the explicit user-preferred response to certain input, or consist of statements without explicit labels that only implicitly indicate user preferences; our data construction method handles both types of profile data. The third part is a noise-aware training method. It works in tandem with contrastive learning to obtain a sentence embedding model that generates document and user query embeddings with high noise resilience, while such embeddings can fit into CiM architectures under different designs and configurations. Our major contributions can be summarized as: \u2022 We propose the first work to harvest CiM advantages for RAG acceleration on the edge. We provide a pathway to utilize emerging CiM devices to expand the Edge LLMs\u2019 capability in terms of storing a high volume of profile data with fast MIPS computing. \u2022 We introduce noise-aware training to enhance the noise-resilient capabilities of RAG\u2019s document embedding. The resulting noise-resilient embeddings can be reused robustly, saving the resources needed to calibrate and regenerate embeddings. \u2022 Our experiments on various datasets show that our proposed framework can improve the RAG performance on multiple CiM devices by up to 35%, approaching the theoretical RAG performance. Across a wide device variation (noise) range on a single CiM device, our proposed framework can still improve the RAG performance.",
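The preliminary study behind Figure 2 can be reproduced in a few lines: perturb the stored document embeddings with zero-mean Gaussian noise of level sigma and check how often the noisy top-1 retrieval matches the noise-free one. This is only a sketch; dataset loading and the sigma values actually measured on RRAM/FeFET devices are omitted.

```python
import numpy as np

def mips_accuracy_under_noise(doc_embs, query_embs, sigma, seed=0):
    """Fraction of queries whose top-1 document is unchanged after the
    document embeddings are perturbed by N(0, sigma) device noise."""
    rng = np.random.default_rng(seed)
    noisy = doc_embs + rng.normal(0.0, sigma, size=doc_embs.shape)
    clean_top = np.argmax(query_embs @ doc_embs.T, axis=1)
    noisy_top = np.argmax(query_embs @ noisy.T, axis=1)
    return float(np.mean(clean_top == noisy_top))
```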
18
+ "main_content": "As shown in the middle part of Figure 1, memory arrays are the key component for vector-matrix multiplication. In this array, matrix values are stored at NVM cells, such as emerging NVM technologies like PCMs, RRAMs, and FeFETs, at the cross-points of vertical and horizontal lines. Simultaneously, vector values flow along the horizontal lines of the array. Operations within the memory array take place in the analog domain by exploiting law of physics directly. However, for other essential functions like shift-and-add for multiple bits and sorting to find the top-k ranked values would be done in the digital domain. Thus, digital-to-analog and analog-to-digital Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures profile data Data Construction Module positive examples anchor examples \u2026 Reshape Module embeddings Contrastive Learning close far Device Variation NVMs Sentence Embedding Model negative examples Flexible Noise-aware Training Module optimize constraints Figure 3: Overview of the proposed Robust CiM-backed RAG framework (RoCR). It optimizes the sentence embedding model to adapt different types of NVMs utilized by CiM. converters (DACs and ADCs) are used to connect these different components. CiM arrays suffer from various sources of variations and noises. Two major ones include spatial variations and temporal variations. Spatial variations result from fabrication defects and have both local and global correlations. FeFET devices also suffer from temporal variations due to the stochasticity in memory switching and also aging, which causes fluctuations in conductance when programmed at different times. Temporal variations are typically independent from device to device and are irrelevant to the value to be programmed [20]. In this work, as a proof of concept, we focus on the impact of temporal variations in the programming process on DNN performance. Temporal variation makes the programmed resistance of a device deviate from what is expected. The proposed framework can also be extended to other sources of variations with modification. Measurement results [21, 22] show that the noise on DNN weights caused by device variations can be safely modeled as a Gaussian noise with zero mean, each with a standard deviation associated with the weight value. A detailed representation is given by: v = v0 + \u0394v, \u0394v \u223cN (0, \ud835\udf0e\ud835\udc63) (1) where v is the actual embedding deployed on the accelerators, v0 is the target embedding value, and \ud835\udf0e\ud835\udc63is a value measured by the experiments. We collect the measurement results from RRAM and FeFET devices and the specific value will be discussed in Section 4.1. 2.2 Past Noise Mitigation Methods Several strategies have been introduced to tackle the challenge of device variations in CiM accelerators. These methods can be separated into software and hardware-based techniques. The software-based techniques are generally developed to obtain more robust DNN models [19, 22\u201324] or recommendation systems [25], and are thus not suitable for generating more robust MIPS solutions. For the hardware techniques, the write-verify procedure [26, 27] is one of the most commonly used approach during programming. Initially, a NVM device is programmed to a set state via a designated pulse pattern. 
Subsequent to this, the device\u2019s value is verified to ascertain if its conductance aligns with a stipulated range of the desired value, essentially assessing its accuracy. If discrepancies arise, a supplemental update pulse is initiated to reset the device conductance nearer to the target. This loop persists until the disparity between the programmed device value and the target value diminishes to a satisfactory margin, typically taking a handful of cycles. Cutting-edge research suggests that by selectively applying write-verify to a subset of pivotal devices, one can uphold the average accuracy of a DNN [21]. Additionally, a variety of circuit design initiatives [18, 28] have been put forth to counteract device variations. 3 PROPOSED WORK 3.1 Framework Overview As shown in Figure 3, our proposed framework, Robust CiM-backed RAG (RoCR), consists of three stages. First, we apply contrastive learning to utilize the training data to optimize the training module. To do that, in the second stage, we take the profile data and construct via a data construction module to obtain contrastive training data pairs, which are then used in the flexible noise-aware training module. In the third stage, we obtain the constraints of NVMs in CiM via profiling. These constraints will be encoded into the flexible noise-aware training module and used to train the sentence embedding model so that it can generate embedding that are robust against device variation of the target NVMs. After training, the training module can be turned into a new sentence embedding model and generate CiM-friendly embeddings. 3.2 Contrastive Learning: Triplet Loss Function When we apply RAG using CiM, we first need to store embeddings into NVMs as shown in Figure 1. Such embeddings are generated by the sentence embedding model, and they are the numerical representations of profile data. Each single document in the profile data can have its unique embedding, which is a vector. The embeddings stored on NVMs can consist of a matrix as the orange blocks shown in Figure 1. Given a user query, which will also be converted into an embedding, CiM can operate MIPS between this user query embedding and all profile embeddings simultaneously via vector-matrix multiplication. The top-ranked values in the product will be used as the index to retrieve the corresponding document data, as the pink block shown in Figure 1. This retrieved user-relevant document is the output of MIPS. However, as we have explained in Section 2.1, writing the document embeddings into NVMs can cause them to suffer from temporal variations (device variations). Then, the NVM-stored embeddings will be different from the original sentence embedding model generated embeddings. As shown in Figure 4, the vanilla embedding model generates desired embedding, which will deviate to the noise embedding under device variation, such that the irrelevant embedding is ranked higher than desired embedding due to its larger inner product. Contrastive learning can learn the representations via push away dissimilar examples and pull close similar examples [29]. In particular, the contrastive loss function can be used to increase the distance between dissimilar examples. In our work, we propose to improve the noise-resilient capability by contrastive learning. 
By increasing the distance between Ruiyang Qin1, Zheyu Yan1, Dewen Zeng1, Zhenge Jia1, Dancheng Liu2, Jianbo Liu1, Ahmed Abbasi1,Zhi Zheng1, Ningyuan Cao1, Kai Ni1, Jinjun Xiong2, Yiyu Shi1 noise embedding irrelevant embedding query retrieve the wrong data irrelevant embedding query NVMs Device Variation lead to Vanilla CiM-backed RAG Robust CiM-backed RAG Our embedding model noise-resilient embeddings user profile data NVMs Device Variation desired embedding vanilla embedding model user profile data retrieve the desired data embeddings Figure 4: Improvement by our Robust CiM-backed RAG. Our framework generates noise-resilient embeddings, as shown the orange and blue point in right subfigure dissimilar examples, as shown the right subfigure in Figure 4, deviated desired embedding will still have a larger inner product with the query compared to the irrelevant embedding. Our contrastive learning loss function is based on Weinberger et al. [30]. For each example \ud835\udc65\ud835\udc56in a mini-batch of N anchor examples, our data construction method will construct \ud835\udc3epositive and \ud835\udc3enegative examples corresponding to \ud835\udc65\ud835\udc56. We can have {{(\ud835\udc65\ud835\udc56,\ud835\udc65\u2212 \ud835\udc56,\ud835\udc65+ \ud835\udc56)\ud835\udc58}\ud835\udc56=1,...,\ud835\udc41}\ud835\udc58=1,...,\ud835\udc3e, in which \ud835\udc65\u2212and \ud835\udc65+ are negative and positive examples corresponding to \ud835\udc65\ud835\udc56, where \ud835\udc65\ud835\udc56is closer to \ud835\udc65+ \ud835\udc56compared to \ud835\udc65\u2212 \ud835\udc56. Also, \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56) represents the learned embedding of \ud835\udc65\ud835\udc56. Then the loss function L can be defined as: L = \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 1 \ud835\udc3e \ud835\udc3e \u2211\ufe01 \ud835\udc58=1 max \u0010 0, d(\ud835\udc65\ud835\udc56,\ud835\udc65\u2212 \ud835\udc56(\ud835\udc58)) \u2212d(\ud835\udc65\ud835\udc56,\ud835\udc65+ \ud835\udc56(\ud835\udc58)) + \ud835\udc5a \u0011 , d(\ud835\udc65\ud835\udc4e,\ud835\udc65\ud835\udc4f) = sim(emb(\ud835\udc65\ud835\udc4e), emb(\ud835\udc65\ud835\udc4f)) (2) The distance \ud835\udc51(\ud835\udc65\ud835\udc4e,\ud835\udc65\ud835\udc4f) is calculated by the Euclidean distance between embeddings of two data \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc4e) and \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc4f). The function \ud835\udc60\ud835\udc56\ud835\udc5a() calculate the semantic similarity. 3.3 Data Construction To train the sentence embedding model via contrastive learning, it is critical to construct pairs of examples where the positive examples and negative examples need to be distinct from each other [31]. In our work, since we use triplet contrastive loss, instead of pairs of examples, we will construct trios of examples where each triplet contains an anchor, positive, and negative example. We use profile data to construct triplets of examples. For the profile data, it is generated by the user during the user-LLM interaction and contains the user preference information. There exists two situations for such data. First, the profile data can contain explicit labels indicating the user preferred response to the corresponding content. 
Second, the profile data also can be statements containing the user-related information but without explicit user preferences As shown in Figure 5, to deal with the two situations, we come up with two data construction methods: Construction Data with Explicit labels (CDE) and Construction Data with Implicit labels (CDI). \u201cJake Blues, just released from prison, puts his old band back together to save the Catholic home where he and his brother Elwood were raised.\u201d is \u201cdystopia\u201d negative r = 0.1 r = 0 r = 0 \u201cFresh out of prison, Jake Blues rallies his old band to save their childhood Catholic home\u201d is \u201cclassic\u201d positive example (embedding) \u201cJake Blues, just released\u2026\u201d is \u201cclassic\u201d anchor example (embedding) \u201cJake Blues, just released \u2026\u201d is \u201cdystopia\u201d negative example (embedding) r = 0.1 \u201cVictims of traumatized \u2026\u201d r = 0 r = 0.9 CDE CDI E anchor/positive example negative example \u201cTwo victims of traumatized childhoods become lovers and serial murderers irresponsibly glorified by the mass media.\u201d anchor/positive/negative example \u201cTwo people with traumatic pasts turn into a couple on a crime spree, mistakenly idolized by the media.\u201d \u201cIndividuals, mired in traumas, unite *() crime-ridden bond, enthrall\u2606\u2609\u00a7ing the media's distorted spotlight.\" \u201cJake Blues, just released from prison, puts his old band back together to save the Catholic home where he and his brother Elwood were raised.\u201d is \u201cclassic\u201d explicit label Statement/implicit label Figure 5: Examples of the two data construction methods. For data with explicit labels, CDE is used to construct the training data. For data without explicit labels (implicit labeled data), CDI is used to construct the training data. 3.3.1 Construction Trios via Data with Explicit Labels (CDE). For the data with explicit labels, each of the data consists of a textual content c and its corresponding label l which indicates the user preferred response regarding to the content c. As shown in the CDE part in Figure 5, there exists explicit label circled by dashed line. Using the profile data, we will construct triplet examples in the format of (\ud835\udc65\ud835\udc56,\ud835\udc65\u2212 \ud835\udc56,\ud835\udc65+ \ud835\udc56). Given a dataset D with size of \ud835\udc5bprofile documents, each piece of data consists of a content \ud835\udc50\ud835\udc56and the corresponding label \ud835\udc59\ud835\udc56where \ud835\udc56\u2208{1, 2, ...,\ud835\udc5b}. The anchor example \ud835\udc65\ud835\udc56can be constructed as: \ud835\udc65\ud835\udc56= \ud835\udc50\ud835\udc56\u2295\ud835\udc59\ud835\udc56, for \ud835\udc56= 1, 2, . . . ,\ud835\udc5b (3) where \u2295denotes a concatenation operation, specifically used here to combine label and content. Negative examples \ud835\udc65\u2212 \ud835\udc56can be constructed by concatenating \ud835\udc50\ud835\udc56with a random label \ud835\udc59\ud835\udc57that is different from \ud835\udc59\ud835\udc56as follows: \ud835\udc65\u2212 \ud835\udc56= \ud835\udc50\ud835\udc56\u2295\ud835\udc59\ud835\udc57, where \ud835\udc59\ud835\udc56\u2260\ud835\udc59\ud835\udc57. (4) Randomly assigning a different label ensures diversity in the negative examples while maintaining the same content from the anchor. 
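Putting Eqs. (3)-(4) together with the triplet objective of Section 3.2, a simplified sketch follows. The concatenation format mirrors the Figure 5 example, the `all_labels` pool and margin `m` are illustrative, and the loss is written in the standard triplet-margin form (positive pulled closer than the negative), which is the behaviour the surrounding text describes.

```python
import random
import numpy as np

def build_cde_triplet(content: str, label: str, all_labels: list):
    """CDE (Eqs. 3-4): anchor = content (+) true label, negative = content (+) a
    randomly chosen different label; the textual positive is a copy of the anchor
    (its embedding is later produced with dropout r > 0)."""
    wrong = random.choice([l for l in all_labels if l != label])
    anchor = f'"{content}" is "{label}"'
    negative = f'"{content}" is "{wrong}"'
    positive = anchor
    return anchor, positive, negative

def triplet_margin_loss(e_anchor: np.ndarray, e_pos: np.ndarray, e_neg: np.ndarray, m: float = 1.0) -> float:
    """Standard triplet-margin objective with Euclidean distances between embeddings."""
    d_pos = float(np.linalg.norm(e_anchor - e_pos))
    d_neg = float(np.linalg.norm(e_anchor - e_neg))
    return max(0.0, d_pos - d_neg + m)
```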
Different from constructing anchor and its negative examples, it is challenging to construct positive examples corresponding to the anchor examples since it is more difficult to formalize semantically similar data than to formalize semantically dissimilar data. To construct positive examples, we follow the SimCSE method [32] to add a dropout rate \ud835\udc5finto the sentence embedding model M. The process for constructing positive examples involves two main steps. First, the textual positive example is formalized as: \ud835\udc65+ \ud835\udc56= \ud835\udc65\ud835\udc56, for \ud835\udc56= 1, 2, ...,\ud835\udc5b (5) where we align each anchor with the corresponding positive example. This step effectively duplicates the anchor data as a starting point for generating the embeddings. Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures Second, the embedding generation process varies based on the dropout rate applied within the model M. When model M is utilized to generate embeddings for anchor and negative examples, the dropout rate is set to 0. In contrast, for generating embeddings for positive examples, a non-zero dropout rate \ud835\udc5fis used. The anchor, negative, positive examples, as shown in Figure 5, can be constructed as: \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56) = M(\ud835\udc65\ud835\udc56,\ud835\udc51\ud835\udc5f\ud835\udc5c\ud835\udc5d\ud835\udc5c\ud835\udc62\ud835\udc61= 0) \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\u2212 \ud835\udc56) = M(\ud835\udc65\u2212 \ud835\udc56,\ud835\udc51\ud835\udc5f\ud835\udc5c\ud835\udc5d\ud835\udc5c\ud835\udc62\ud835\udc61= 0) \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65+ \ud835\udc56) = M(\ud835\udc65+ \ud835\udc56,\ud835\udc51\ud835\udc5f\ud835\udc5c\ud835\udc5d\ud835\udc5c\ud835\udc62\ud835\udc61= \ud835\udc5f) (6) The condition of \ud835\udc5f\u22600 can induce variation in the embeddings, enhancing the model\u2019s ability to recognize semantically similar yet variably expressed content. Given the construction factor \ud835\udc3e, we can construct the triplet data examples as: D\ud835\udc61\ud835\udc5f\ud835\udc56\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc61= \ud835\udc41 \u00d8 \ud835\udc56=1 n (\ud835\udc65\ud835\udc56(\ud835\udc58),\ud835\udc65\u2212 \ud835\udc56(\ud835\udc58),\ud835\udc65+ \ud835\udc56(\ud835\udc58)) : \ud835\udc58= 1, 2, . . . , \ud835\udc3e o (7) For the triplet data examples D\ud835\udc61\ud835\udc5f\ud835\udc56\ud835\udc5d\ud835\udc59\ud835\udc52\ud835\udc61, their embeddings for each augmentation \ud835\udc58are given by: E = \ud835\udc41 \u00d8 \ud835\udc56=1 n (\ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56(\ud835\udc58)),\ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\u2212 \ud835\udc56(\ud835\udc58)),\ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65+ \ud835\udc56(\ud835\udc58)) : \ud835\udc58= 1, 2, . . . , \ud835\udc3e o (8) As shown in Figure 5, for data with explicit labels, a content\ud835\udc50can concatenate with its corresponding label \ud835\udc59to formalize the positive and anchor example. That content \ud835\udc50can also concatenate with other labels \ud835\udc59\u2032 to formalize the negative example. The positive example can be finally obtained from the sentence embedding model with dropout rate \ud835\udc5f. The anchor and negative example can be finally obtained from the sentnece embedding model with \ud835\udc5f= 0. 3.3.2 Construction Trios via Data with Implicit Labels (CDI). 
For data with implicit labels, each of the data consists of solely textual content c. As shown of the CDI part in Figure 5, there is no explicit label to indicate user preferences. Instead, the data can be seen as a statement containing some user-related information. To construct the anchor examples and positive examples, we can use the exact same method in EDC. Given a dataset D with size of n profile data, each piece of data consits of a content \ud835\udc50\ud835\udc56. The anchor data \ud835\udc65\ud835\udc56can be constructed as: \ud835\udc65\ud835\udc56= \ud835\udc50\ud835\udc56, for \ud835\udc56= 1, 2, . . . ,\ud835\udc5b (9) For each anchor data \ud835\udc65\ud835\udc56, constructing its corresponding negative example is not as simple as merely concatenating the content\ud835\udc50\ud835\udc56with a non-corresponding label \ud835\udc59\ud835\udc58. To construct negative examples, we employ a reciprocal approach with the positive examples, applying a similar method to both. We first initialize the negative example and positive example following the equation 5: \ud835\udc65\u2212 \ud835\udc56= \ud835\udc65+ \ud835\udc56= \ud835\udc65\ud835\udc56, for \ud835\udc56= 1, 2, . . . ,\ud835\udc5b (10) For the positive example \ud835\udc65+ \ud835\udc56, it can be finalized by incorporating a dropout rate \ud835\udc5finto the sentence embedding model M, where a rate of 0 < \ud835\udc5f\u22640.2 can generate a sentence embedding with a semantic representation similar to \ud835\udc65\ud835\udc56and ensure good model training performance [32]. Increasing the dropout rate to a higher value, such as 0.5, can distort the semantic representation of \ud835\udc65+ \ud835\udc56, making it dissimilar to that of \ud835\udc65\ud835\udc56. Training the model with such positive examples can result in poorer performance. For positive examples in training the sentence embedding model, the higher dropout rate performs more like a noise rather than a data augmentation method. In our work, we train the sentence embedding model to generate embeddings that maintain their integrity under noisy conditions, such as during writing into Compute-in-Memory (CiM). The noise can alter or fragment the original semantic representations. For instance, as illustrated in Figure 5, using a high dropout rate \ud835\udc5f= 0.9 can lead to a negative example with a corrupted representation. Although it may lack certain informative content, this negative example becomes semantically distinct from both the anchor and positive examples, effectively simulating the effect of CiM corruption. This approach not only differentiates the negative examples semantically but also aligns them with the corrupted data scenarios for noise-aware training. Given the triple examples (\ud835\udc65\ud835\udc56,\ud835\udc65\u2212 \ud835\udc56,\ud835\udc65+ \ud835\udc56), for \ud835\udc56= 1, 2, ...,\ud835\udc5bas shown in equation 10, we have the dropout rate \ud835\udc5ffor formalizing the positive examples where 0 < \ud835\udc5f\u22640.2. Correspondingly, the dropout rate for formailzing the negative examples can be 1 \u2212\ud835\udc5f. 
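A sketch of this dropout trick (formalized in the equation that follows): `encode(text, dropout)` is a hypothetical stand-in for running the sentence embedding model M at a chosen dropout rate, and `toy_encode` exists only to keep the snippet self-contained.

```python
import numpy as np

def cdi_embeddings(encode, text: str, r: float = 0.1):
    """CDI: anchor, positive and negative all start from the same text; only the
    dropout rate used when encoding differs (0, r, and 1 - r respectively)."""
    emb_anchor   = encode(text, dropout=0.0)
    emb_positive = encode(text, dropout=r)        # mild dropout: similar semantics
    emb_negative = encode(text, dropout=1.0 - r)  # heavy dropout: corrupted semantics
    return emb_anchor, emb_positive, emb_negative

# Toy stand-in for the embedding model, only so the example runs end to end.
def toy_encode(text: str, dropout: float) -> np.ndarray:
    base = np.random.default_rng(abs(hash(text)) % (2**32)).standard_normal(64)
    mask = np.random.default_rng().random(64) >= dropout
    return base * mask

a, p, n = cdi_embeddings(toy_encode, "some profile statement", r=0.1)
```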
Given the sentence embedding model M, the anchor example, positive example, and negative example can be constructed as: emb(\ud835\udc65\ud835\udc56) = M(\ud835\udc65\ud835\udc56, dropout = 0) emb(\ud835\udc65\u2212 \ud835\udc56) = M(\ud835\udc65\u2212 \ud835\udc56, dropout = 1 \u2212\ud835\udc5f) emb(\ud835\udc65+ \ud835\udc56) = M(\ud835\udc65+ \ud835\udc56, dropout = \ud835\udc5f) (11) 3.4 Flexible Noise-aware Training In the previous two stages, we construct the data to train the sentence embedding model based on contrastive learning. Meanwhile, the training can be more effective when injecting the simulated device variation [33] so that the model can be optimized with consideration of the device variation. Additionally, the sentence embedding model needs to produce embeddings that can fit with the different CiMs, which might have various NVM designs. To do that, we need the sentence embedding model reshapes its output embeddings into certain dimensions and precision. Hence, we propose a flexible noise-aware training method, which can generate the noise-resilient embedding, fitting to various CiMs. As shown in Figure 3, in the flexible noise-aware training module, the embedding generated by sentence embedding model will be shaped based on the CiM\u2019s NVMs constraints where required dimension is \ud835\udc51and required precision is \ud835\udc5d, and being injected device variation to formalize the embeddings. The reshape module, shown in Figure 3, seen as an autoencoder to reconstruct its input embedding [34], can be expressed as \ud835\udc60\u210e\ud835\udc5d(), initialized by \ud835\udc51 and \ud835\udc5d, takes the anchor embedding \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56) as input. We can have \ud835\udc60\u210e\ud835\udc5d(\ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)) = \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)\ud835\udc51\u2217\ud835\udc5d. Based on the device variation shown as Table 2, we can have: \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)\ud835\udc51\u2217\ud835\udc5d \ud835\udf0e = (\ud835\udc52\u2032 \u2217\ud835\udc3f0 + \ud835\udc52\u2032 \u2217\ud835\udc3f1 + \ud835\udc52\u2032 \u2217\ud835\udc3f2 + \ud835\udc52\u2032 \u2217\ud835\udc3f3) \u2217\ud835\udf0e, (12) Ruiyang Qin1, Zheyu Yan1, Dewen Zeng1, Zhenge Jia1, Dancheng Liu2, Jianbo Liu1, Ahmed Abbasi1,Zhi Zheng1, Ningyuan Cao1, Kai Ni1, Jinjun Xiong2, Yiyu Shi1 Table 1: Performance comparison between our framework and four baselines on five CiM devices with device variation specified in Table 2 across five datasets. Evaluate the performance of our framework using EDC (RoCR-EDC) and using IDC (RoCR-IDC) to optimize the performance of RAG, which utilizes Gemma-2 as its LLM. 
Dataset Citation Movie Rating News DBLP CiM Method Acc \u2191 F1 \u2191 Acc \u2191 F1 \u2191 MAE \u2193 RMSE \u2193 ROUGE-1 \u2191 ROUGE-L \u2191 ROUGE-1 \u2191 ROUGE-L \u2191 Device-1 SWV 0.4208 0.3339 0.1305 0.1974 0.3850 0.8093 0.0754 0.0731 0.1709 0.1590 CxDNN 0.4223 0.3576 0.1516 0.1762 0.4404 0.9135 0.0640 0.0632 0.1646 0.1449 CorrectNet 0.4155 0.3791 0.0996 0.1305 0.3609 0.7071 0.0512 0.0764 0.1603 0.1538 Vanilla RAG 0.4401 0.3476 0.1017 0.0838 0.3903 0.8944 0.0754 0.0731 0.1731 0.1473 RoCR-CDE 0.5536 0.3956 0.2242 0.2303 0.3108 0.6856 0.1041 0.0987 0.2066 0.1924 RoCR-CDI 0.5409 0.5117 0.2273 0.2487 0.2767 0.6083 0.0831 0.0808 0.2317 0.2176 Device-2 SWV 0.1831 0.1552 0.1992 0.1957 0.4205 0.8775 0.0296 0.0289 0.1968 0.1874 CxDNN 0.4013 0.3557 0.2167 0.2019 0.4423 0.8367 0.0604 0.0791 0.1517 0.1401 CorrectNet 0.3827 0.3209 0.1625 0.1909 0.3762 0.8062 0.0513 0.0505 0.2042 0.1945 Vanilla RAG 0.4801 0.3462 0.1576 0.2079 0.4153 0.9354 0.0296 0.0289 0.1618 0.1353 RoCR-CDE 0.5407 0.4396 0.2924 0.2509 0.2553 0.5385 0.1209 0.0946 0.2025 0.1906 RoCR-CDI 0.5299 0.4591 0.2971 0.2386 0.2124 0.5763 0.0884 0.0853 0.2240 0.2098 Device-3 SWV 0.2450 0.2564 0.1695 0.1641 0.3460 0.7416 0.0725 0.069 0.1018 0.0954 CxDNN 0.4811 0.4006 0.2367 0.2113 0.2851 0.6928 0.0761 0.0707 0.1425 0.1111 CorrectNet 0.4510 0.3918 0.0792 0.1029 0.3704 0.7937 0.0585 0.0555 0.1715 0.1346 Vanilla RAG 0.4852 0.3618 0.1614 0.1636 0.3255 0.7649 0.0725 0.0690 0.1647 0.1437 RoCR-CDE 0.5139 0.4116 0.2242 0.2215 0.3208 0.6481 0.0825 0.0805 0.1893 0.1754 RoCR-CDI 0.5515 0.4984 0.2152 0.2131 0.2916 0.6245 0.1099 0.1049 0.2294 0.2140 Device-4 SWV 0.5135 0.4260 0.1271 0.1178 0.3610 0.8196 0.0259 0.0256 0.1871 0.1786 CxDNN 0.4733 0.3964 0.1267 0.2158 0.3468 0.7616 0.0646 0.0634 0.1603 0.1538 CorrectNet 0.4628 0.4019 0.1592 0.1847 0.4013 0.9274 0.0705 0.0750 0.1628 0.1292 Vanilla RAG 0.2101 0.2401 0.1219 0.2019 0.4015 0.8544 0.0505 0.0489 0.1929 0.1814 RoCR-CDE 0.5836 0.5555 0.1706 0.2817 0.3139 0.6856 0.0873 0.0851 0.1984 0.1882 RoCR-CDI 0.5352 0.4289 0.1642 0.2445 0.2706 0.5916 0.1154 0.1128 0.2148 0.1978 Device-5 SWV 0.4320 0.3541 0.1250 0.1076 0.3652 0.7616 0.0434 0.0427 0.0985 0.0923 CxDNN 0.4301 0.0538 0.0751 0.0458 0.3503 0.8185 0.0707 0.0682 0.2042 0.1945 CorrectNet 0.4145 0.3926 0.1083 0.1395 0.5526 0.8185 0.0735 0.0776 0.2096 0.1879 Vanilla RAG 0.4256 0.3522 0.0847 0.0863 0.3951 0.8515 0.0676 0.0653 0.2018 0.1846 RoCR-CDE 0.5698 0.5223 0.2152 0.1669 0.2959 0.6245 0.0936 0.0891 0.1946 0.1844 RoCR-CDI 0.5254 0.4504 0.2394 0.2458 0.2624 0.6325 0.0799 0.0764 0.2238 0.2095 where \ud835\udc52\u2032 = \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)\ud835\udc51\u2217\ud835\udc5d. The device variation, as noise, is injected into embeddings to formalize \ud835\udc52\ud835\udc5a\ud835\udc4f(\ud835\udc65\ud835\udc56)\ud835\udc51\u2217\ud835\udc5d \ud835\udf0e , which will be used in contrastive learning to train the sentence embedding model, as shown in Figure 3. 4 EXPERIMENTAL EVALUATION 4.1 Experimental Setup 4.1.1 Datasets. To demonstrate our robust CiM-backed RAG, we employ five datasets with different tasks and domains, including Citation Identification [35] (Citation), Movie Tagging [36] (Movie), Product Rating [37] (Rationg), News Headline Generation [38] (News), and DBLP-Citation-network V14 [39] (DBLP) to evaluate the proposed framework. The data in each dataset consists of query data and profile data. 
In our evaluation, the profile data will be used to formalize user history, and the profile corresponding query data will be used as the user input. The first three datasets contain binary, five-class, and fifteen-class classification tasks respectively. The last two datasets contain text generation tasks. In the Citation Identification dataset, every piece of query data consists of a paper title and two references, and the correct reference is provided. RAG uses the profile data corresponding to the paper titles with their detailed contents to choose the appropriate reference. In the Movie Tagging dataset, each query data contains a description of a movie, and RAG uses a similar description and its corresponding tag in the profile data to tag the query data. The Product Rating dataset has a similar structure as the Movie Tagging dataset. In News Headline Generation and DBLP datasets, each query data contains an abstract, which can be summarized into a title. RAG uses a similar abstract and its corresponding title in profile data to generate the title for query data. All five datasets have labels in their query data. 4.1.2 Default Experimental Setting. Our framework chooses all-MiniLM-L6-v2 [40] as the sentence embedding model. For each dataset, we randomly select 2000 documents from profile data as the anchor examples. To examine the data construction method of CDE, we set the augmentation factor \ud835\udc58= 5 to obtain 10000 negative and positive examples. We set dropout rate as 0.1 to obtain the positive examples while maintain it as 0 when process anchor and negative examples. To examine the data construction method CDI, we set dropout rate for positive examples as 0.1 and dropout rate for negative examples as 0.9. To align with experiments for CDE, we also set \ud835\udc58= 5 in the experiments for CDI. For the experimental results, we run five times and get the average. In experiments, we set the device variation \ud835\udf0e= 0.1 and shape embeddings into dimension of 64 with precision of \ud835\udc56\ud835\udc5b\ud835\udc618. The learning rate is 2\ud835\udc52\u22125. In all experiments, we adhere to the device variation model previously described. 
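As a rough illustration of these defaults, the sketch below shapes an embedding to 64 values, quantizes it to int8 for NVM storage, and injects zero-mean noise with sigma = 0.1; the truncation-based reshape and the way sigma is applied are simplifying assumptions, not the paper's reshape module or Eq. (12).

```python
import numpy as np

def shape_and_perturb(emb: np.ndarray, dim: int = 64, sigma: float = 0.1) -> np.ndarray:
    """Shape an embedding to `dim` values, quantize to int8 for NVM storage,
    then inject zero-mean Gaussian variation (illustrative simplification)."""
    e = emb[:dim] if emb.shape[0] >= dim else np.pad(emb, (0, dim - emb.shape[0]))
    scale = np.abs(e).max() / 127.0 + 1e-12                 # symmetric int8 quantization step
    q = np.clip(np.round(e / scale), -128, 127).astype(np.int8)
    deq = q.astype(np.float32) * scale                      # values the CiM array would compute with
    return deq + np.random.normal(0.0, sigma, deq.shape)    # sigma = 0.1 applied directly (assumption)
```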
The specific parameters are abstracted and then simplified from three representative NVM devices, two of them Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures D-1 D-2 D-3 D-4 D-5 0.2 0.4 0.6 0.8 Accuracy SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI (a) Citation on Gemma-2B D-1 D-2 D-3 D-4 D-5 0.2 0.4 0.6 0.8 Accuracy SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI (b) Citation on Phi-2 D-1 D-2 D-3 D-4 D-5 0.2 0.4 0.6 0.8 Accuracy SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI (c) Citation on Mistral-7B D-1 D-2 D-3 D-4 D-5 0.2 0.4 0.6 0.8 Accuracy SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI (d) Citation on Llama-2-3B D-1 D-2 D-3 D-4 D-5 0.0 0.1 0.2 0.3 0.4 0.5 Accuracy SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI (e) Movie on Gemma-2B D-1 D-2 D-3 D-4 D-5 0.0 0.1 0.2 0.3 0.4 0.5 Accuracy SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI (f) Movie on Phi-2 D-1 D-2 D-3 D-4 D-5 0.0 0.1 0.2 0.3 0.4 0.5 Accuracy SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI (g) Movie on Mistral-7B D-1 D-2 D-3 D-4 D-5 0.0 0.1 0.2 0.3 0.4 0.5 Accuracy SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI (h) Movie on Llama-2-3B Figure 6: Performance comparison between our framework and four baselines on RAG utilizing the LLMs including Gemma-2B, Phi-2, Mistral-7B, and Llama-2-3B with device variation specified in Table 2, given dataset \ud835\udc36\ud835\udc56\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5band \ud835\udc40\ud835\udc5c\ud835\udc63\ud835\udc56\ud835\udc52. Table 2: Device non-ideality modeling for different real and synthesized devices. For devices with more than two levels, the device variation for each level is depicted as \ud835\udc3f\ud835\udc65. Name # of Levels Device Variations \ud835\udf0e\ud835\udc63 \ud835\udc3f0 \ud835\udc3f1 \ud835\udc3f2 \ud835\udc3f3 \ud835\udc45\ud835\udc45\ud835\udc34\ud835\udc401 (Device-1) 1 0.0100 0.0100 0.0100 0.0100 \ud835\udc39\ud835\udc52\ud835\udc39\ud835\udc38\ud835\udc472 (Device-2) 4 0.0067 0.0135 0.0135 0.0067 \ud835\udc39\ud835\udc52\ud835\udc39\ud835\udc38\ud835\udc473 (Device-3) 4 0.0049 0.0146 0.0146 0.0049 \ud835\udc45\ud835\udc45\ud835\udc34\ud835\udc404 (Device-4) 4 0.0038 0.0151 0.0151 0.0038 \ud835\udc39\ud835\udc52\ud835\udc39\ud835\udc38\ud835\udc476 (Device-5) 4 0.0026 0.0155 0.0155 0.0026 are resistive random-access memory (RRAM) devices extracted from [27, 41] and the other is a ferroelectric field effect transistor (FeFET) device extracted from [42]. We name them \ud835\udc45\ud835\udc45\ud835\udc34\ud835\udc401, \ud835\udc45\ud835\udc45\ud835\udc34\ud835\udc404 and \ud835\udc39\ud835\udc52\ud835\udc39\ud835\udc38\ud835\udc472, respectively. We also extrapolate the modeling data to obtain two synthesized \ud835\udc39\ud835\udc52\ud835\udc39\ud835\udc38\ud835\udc473 and \ud835\udc39\ud835\udc52\ud835\udc39\ud835\udc38\ud835\udc476 devices. Detailed device modeling results are demonstrated in Table 2. A \ud835\udc65-level device means this device can represent \ud835\udc65distinct values and \ud835\udf0e\ud835\udc3f2 = 0.01 means the variation of this device is 0.01 when it is representing the level value 2. Using the device variations obtained from real CiM devices, we perform our experiments on a single Nvidia A10 GPU. Document embeddings are shaped based on different CiM devices and stored as parallel arrays, similar to how they would be mapped to multiple NVM devices in practical scenarios. 
For example, if an embedding is shaped to contain all uint8 values, when it is mapped to 4-level (2-bit) devices such as \ud835\udc39\ud835\udc52\ud835\udc39\ud835\udc38\ud835\udc472, each element of the vector is represented by four devices. 4.1.3 Evaluation Methods. Our first three datasets examine the model classification capability, and the rest of two datasets examine the text generation capability. In particular, dataset \ud835\udc36\ud835\udc56\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5band \ud835\udc40\ud835\udc5c\ud835\udc63\ud835\udc56\ud835\udc52has two and fifteen labels respectively. We can examine the binary and multiclass classification capabilities of the LLMs enhanced by our framework. In this way, we use accuracy to examine the ability of the models to correctly classify instances across different classes, and we use F1 score to examine the balance between precision and recall in classification tasks. For dataset \ud835\udc45\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc5b\ud835\udc54, while it has five labels and also examine the multiclass classification, we use mean absolute error (MAE) and root mean square error (RMSE) to evaluate from from a regression perspective [43]. For MAE, it measures the average magnitude of errors in the predictions, providing a straightforward assessment of the model\u2019s overall accuracy in predicting the rating values. For RMSE, it captures the square root of the average squared differences between predicted and actual ratings, offering a metric sensitive to larger errors, which can highlight significant discrepancies between the model\u2019s predictions and true values. For dataset \ud835\udc41\ud835\udc52\ud835\udc64\ud835\udc60and \ud835\udc37\ud835\udc35\ud835\udc3f\ud835\udc43, their labels are sentences. Such datasets examine the text generation capabilities. We use ROUGE1 and ROUGE-L to evaluate the overlap between generated texts and reference texts [44], capturing both the precision and recall of individual words (ROUGE-1) and the longest matching sequence (ROUGE-L), ensuring a comprehensive evaluation of the text generation quality. For accuracy, F1, ROUGE-1 and ROUGE-L, their higher values reflect the better performance. For MAE and RMSE, their lower value represent the better performance. Additionally, we use accuracy to measure the MIPS performance (MIPS accuracy), representing the ratio of MIPS results under device variation and MIPS results without device variation (references). 4.1.4 Baselines. As this is the first work to improve the RAG robustness on Edge-based CiM, we do not have state-of-the-art for comparison. As such, we construct baselines from the past noise mitigation methods originally designed to boost DNN robustness. The first baseline is selective write verify [21] (SWV). While it originally utilizes the second derivation to evaluate the device variation impact on neural network weights, we use the second derivation to measure the embedding deviation between the ground truth embedding and the embedding under device variation. The second baseline is (CxDNN) [45]. While they use compensation factor to improve the robustness of vector-matrix multiplication, we use the compensation factor the calibrate the embedding impacted by device variation. The third baseline is CorrectNet [46], where it utilizes the cross entropy loss and regularization to improve the robustness of neural networks in CiM. 
To use it as a baseline, we also use the cross entropy loss the regularization as the loss function to Ruiyang Qin1, Zheyu Yan1, Dewen Zeng1, Zhenge Jia1, Dancheng Liu2, Jianbo Liu1, Ahmed Abbasi1,Zhi Zheng1, Ningyuan Cao1, Kai Ni1, Jinjun Xiong2, Yiyu Shi1 calibrate the device output embedding. Additionally, we examine the Vanilla RAG, which contains no noise mitigation methods, as our fourth baseline. The baselines use the same experimental setting as our framework does. 4.2 Results For RAG, it can be simplified as the combination of MIPS and LLM, where the MIPS as a retriever searches the appropriate information and the LLM as a generator processes the searched results. Hence, in our experiments, we first evaluate the performance of MIPS under the device variation of device-1. We take the MIPS results obtained without device variation as the references (i.e., ground truth). Using the metric of MIPS accuracy, we examine how many MIPS results under device variation will match the references. Since the quality of retrieved content largely depends on the base sentence embedding model, and we focus on mitigating the device variation impact on the embedding model, we do not assess the quality of references. As shown in Table 3, our framework using the two data construction methods outperforms the four baselines across five datasets. It shows that our framework can mitigate the embedding perturbation due to device variation. These results can also correspond to the preliminary study shown in Figure 2, where the increment of \ud835\udf0e in naive Gaussian noise will jeopardize the MIPS performance. Table 3: Performance (MIPS accuracy) comparison between our framework and baselines. Accuracy is computed based on MIPS-retrieved documents under device variation of device-1 and the these retrieved without device variation. Dataset Citation Movie Rating News DBLP SWV 0.4200 0.1728 0.1050 0.0855 0.2295 CxDNN 0.4401 0.2017 0.0503 0.0754 0.1681 CorrectNet 0.4013 0.0699 0.0509 0.0533 0.1609 Vanilla RAG 0.4547 0.1694 0.0933 0.0649 0.1747 RoCR-CDE 0.9231 0.4639 0.1583 0.1921 0.2750 RoCR-CDI 0.9344 0.4355 0.1266 0.1708 0.2905 After we compare the MIPS performance of our framework and baselines, we further present a comprehensive evaluation to show the RAG performance of them. We use Gemma-2B as the LLM in RAG. Additionally, with Gemma-2B, we run RAG without device variation to obverse its ideal performance, where we get 0.5200 of accuracy for Citation, 0.3728 of accuracy for Movie, 0.3150 of MAE for Rating, 0.0855 of ROUGE-1 for News, and 0.2295 of ROUGE-1 for DBLP. On five CiM devices, whose device variations have been shown in Table 2, we examine RAG with five datasets. As shown in Table 1, given the same datasets, it is clear that each device variation significantly compromises the RAG robustness, whereas our framework can mitigate the different device variation. For example, the RAG performance for Citation dataset on Device-2 can range from 0.18 to 0.48, while our framework can boost the accuracy performance of Citation dataset above 0.5 for all five devices. Compared to the four baselines whose performances are relatively worse than the ideal performance, our framework significantly approaches and sometimes outperforms the ideal performance via generating better sentence embeddings. This is because RoCR also serves as a regularization to improve the model\u2019s generalization. In addition, we evaluate the impact of different LLMs on the performance of our framework. 
As Figure 1 shown, the LLM takes the concatenation of MIPS searched data and user query as the input and generates the response regarding the user query. Since different LLMs may have different response given the same query, we select four emerging edge-friendly medium-size LLMs in our experiments to examine the performance of our framework. Gemma-2B [47] is a new SOTA open model introduced by Google, with 4.95G model weights. According to Google, Gemma can outperform the same sized Llama-2 in reasoning capabilities. Hence, we also use Llama2-3B [48], one of the earliest open LLMs introduced by Meta, with 6.85G model weights. Similarly, Phi-2 [49] released by Microsoft, is a powerful small LLM with 5G model weights. Additionally, Mistral7B-GPTQ [50] made by Mistral AI, is a well-performed LLM after Llama model. We select dataset \ud835\udc36\ud835\udc56\ud835\udc61\ud835\udc4e\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5band dataset \ud835\udc40\ud835\udc5c\ud835\udc56\ud835\udc63\ud835\udc52. We use the default experimental setting with \ud835\udf0e= 0.1 and use CiM Device-1 as the experimental environment. The results are shown on Figure 6. It is evident that our framework outperforms each baseline across five CiM devices. Besides, the performance of each baseline on the same dataset can be largely different given different device, while our framework can produce a more robust performance. 0.000 0.025 0.050 0.075 0.100 0.125 0.150 Device Variation ( ) 0.10 0.15 0.20 0.25 0.30 ROUGE-1 SWV CxDNN CorrectNet Vanilla RAG RoCR-CDE RoCR-CDI Figure 7: Performance comparison between our framework and four baselines on CiM device-1 with different device variation \ud835\udf0e, given dataset DBLP. By default, we use \ud835\udf0e= 0.1 to calculate the device variation of the five CiM devices. We also conduct an additional study to evaluate our framework given different \ud835\udf0evalues. Since we have already use dataset Citation and dataset Movie to study the performance of our frameworks seen in Figure 6, we choose a different dataset DBLP, using ROUGE-1 as the metric. For the LLM in RAG, we choose Mistral-7B. We examine the \ud835\udf0evalues higher and lower than 0.1, including 0, 0.025, 0.05, 0.075, 0.125, and 0.15. The case of \ud835\udf0e = 0 reflects the ideal performance. For the CiM device, we use CiM device-1. As shown in Figure 7, our framework outperforms baselines across different device variation values. Finally, RoCR is a training method that generates more robust weights for the sentence embedding model. It does not change the model structure. Thus, there is no hardware (e.g., energy and latency) overhead during inference. 5 CONCLUSION In this paper, we present a novel framework for retrieval-augmented generation (RAG) acceleration via computing-in-memory (CiM) architectures. Our approach provide a solution to free RAG from Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures the constraints of latency and scalability on edge devices. By optimizing the sentence embedding model, our framework enable the utilization of CiM devices in storing and processing the document embeddings, minimizing the impact of CiM device variations. Experimental results show that our framework achieves superior RAG performance and largely mitigates the impact of device variations. This paper marks the first RAG acceleration via CiM framework."
19
+ }
intro_8K/test_introduction_short_2405.04781v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04781v1",
3
+ "title": "CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization",
4
+ "abstract": "Large language models (LLMs) have demonstrated astonishing capabilities in\nnatural language processing (NLP) tasks, sparking interest in their application\nto professional domains with higher specialized requirements. However,\nrestricted access to closed-source LLMs via APIs and the difficulty in\ncollecting massive high-quality datasets pose obstacles to the development of\nlarge language models in education fields of various courses. Given these\nchallenges, we propose CourseGPT-zh, a course-oriented education LLM that\nsupports customization and low-cost deployment. To address the\ncomprehensiveness and diversity requirements of course-specific corpora, we\ndesign a high-quality question-answering corpus distillation framework\nincorporating prompt optimization, which effectively mines textbook knowledge\nand enhances its diversity. Moreover, considering the alignment of LLM\nresponses with user needs, a novel method for discrete prompt optimization\nbased on LLM-as-Judge is introduced. During optimization, this framework\nleverages the LLM's ability to reflect on and exploit error feedback and\npatterns, allowing for prompts that meet user needs and preferences while\nsaving response length. Lastly, we obtain CourseGPT-zh based on the open-source\nLLM using parameter-efficient fine-tuning. Experimental results show that our\ndiscrete prompt optimization framework effectively improves the response\nquality of ChatGPT, and CourseGPT-zh exhibits strong professional capabilities\nin specialized knowledge question-answering, significantly outperforming\ncomparable open-source models.",
5
+ "authors": "Zheyan Qu, Lu Yin, Zitong Yu, Wenbo Wang, Xing zhang",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.CL",
9
+ "cats": [
10
+ "cs.CL"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
14
+ "gt": "Large language models, such as ChatGPT [1], GPT4 [2], LLaMA [3], and ChatGLM [4], have demonstrated remarkable performance and generalization capabilities across various NLP tasks, significantly expanding the boundaries of language applications. With the increase in model parameters and pretraining corpus size, capabilities such as logical reasoning, instruction following, and In-Context Learning [5],[6],[7] have emerged. Based on these breakthroughs, the latest LLMs have shown profound understanding and professionalism in various fields, such as virtual assistants, text generation, and code annotation. Utilizing LLMs to disrupt industries has become an inevitable trend, including the field of education[8],[9]. Recently, there has been a desire to leverage the extensive knowledge of large language models to construct domain- specific LLMs in various vertical fields, which require greater expertise and accuracy. To address the issue that general-purpose LLMs cannot meet specific domain requirements, a variety of methods have been proposed. For instance, steering foundation models through role-playing or prompt engineering have been used to tap into the knowledge learned during the pre-training phase, which can unleash their deep-seated expert capabilities [10],[11]. Other approaches involve pretraining or continual pre-training with domain-specific corpus to incorporate domain- specific knowledge into large language models [8],[12],[13],[14]. In addition, to reduce the hallucination during the response generation, retrieval augmentation has also been applied to provide reliable references [8],[15]. Based on these \u2217Xing zhang is the corresponding author. arXiv:2405.04781v1 [cs.CL] 8 May 2024 approaches, successful implementations such as MedAgents [10], ChatLaw [15], EduChat [8], and FinGPT [16] have demonstrated the potential of LLMs to provide professional responses and insights in various vertical fields, including healthcare, law, finance, and education. However, constructing domain-specific large language models is still labor-consuming and expensive. To begin with, for closed-source large language models like ChatGPT, the high costs of text generation and fine-tuning services are often prohibitive. As for open-source LLMs, there is a significant gap in parameter size and pre-training corpus compared to closed-source LLMs, resulting in significantly weaker general capabilities such as reasoning, and domain-specific knowledge extraction [9],[17],[18],[19]. Faced with complex professional terminology, open-source large language models often fail to meet user requirements for domain knowledge. In this context, it often requires a large amount of in-domain pre-training corpus or expertise datasets to enhance professionalism in vertical fields. Although various existing works have developed specialized datasets and evaluation criteria for various fields such as philosophy, medicine, and law, as well as for scenarios including network operation and geospatial semantics [17],[18],[19],[20],[21], there is still a considerable demand for manual effort in constructing datasets for courses or privatized scenarios that are not covered by these datasets. This challenge is particularly pronounced when accessible corpora in the field are scarce, making it extremely difficult to construct tens of thousands of specialized instruction data. 
Furthermore, the majority of models are primarily pre-trained on English corpora, which may lead to a degradation in their performance in other languages [22],[23]. In addition to the challenges of constructing specialized corpora, the high cost of inference incurred by open-source large language models cannot be overlooked. Compared to the concise responses provided by humans, the responses generated by large language models, while more comprehensive, also include a significant amount of redundant information, resulting in unnecessary inference overhead. Typically, to further align the responses of large language models with specific preferences, methods such as RLHF (Reinforcement Learning from Human Feedback)[24] are introduced for fine-tuning models. However, this approach still requires a substantial amount of human-labeled preference data. Consequently, promoting alignment between the responses and human preferences, as well as reducing inference costs, is also a key factor in fostering the widespread adoption of open-source large models in specialized vertical domains. Targeted at these issues, we propose CourseGPT-zh, an open-source education large language model, and design a pipeline for constructing high-quality question-answer pairs through mining textbook knowledge. By utilizing the constructed diverse question-answer pairs, we perform parameter-efficient fine-tuning on the open-source model to mitigate the resource constraints required for deployment. In addition, in the data construction process, we incorporate LLM-as-Judge and utilize discrete prompt optimization to generate optimal prompts, steering ChatGPT to produce high-quality training data aligned with human preferences. Through this method, we ensure high-quality responses while reducing the deployment costs associated with response length. Our main contributions can be summarized as: \u2022 In this paper, we propose CourseGPT-zh, an open-source education large language model, with a pipeline for constructing high-quality and diverse question-answer pairs. Based on textbooks, we guide the model to conduct thorough exploration and questioning of textbooks, extracting knowledge from both closed-source large language models and specialized texts. Additionally, we employ a method inspired by self-instruct to guide the large language models in generating related questions, further enhancing the diversity. \u2022 Considering that although large language models can generate comprehensive answers, some content may be redundant or incorrect. Therefore, we employ prompt engineering to guide ChatGPT in generating responses that align with human preferences. To obtain the optimal prompts, we have designed an iterative discrete prompt optimization framework, which incorporates LLM-as-Judge to facilitate automatic evaluation of the quality of responses guided by prompts. Furthermore, the optimized prompt allows the large language model to achieve a balance between the quality of responses and their length, achieving information compression in responses. \u2022 A parameter-efficient fine-tuning method of the ChatGLM3 model is conducted based on constructed high- quality question-answering data, resulting in the CourseGPT-zh. Experimental evidence has shown that CourseGPT-zh exhibits improved alignment with human responses, and delivers more concise answers while maintaining a high level of response quality. On various NLP task evaluation metrics, CourseGPT-zh significantly outperforms other open-source large models. 2",
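To make the LLM-as-Judge discrete prompt optimization loop described above concrete, here is a heavily simplified sketch; `generate_response`, `judge_scores`, and `revise_prompt` are hypothetical callables standing in for calls to the teacher LLM and the judge, and the keep-the-best stopping rule is an assumption rather than the paper's exact procedure.

```python
def optimize_prompt(seed_prompt: str, questions: list, references: list,
                    generate_response, judge_scores, revise_prompt,
                    rounds: int = 5) -> str:
    """Iterative discrete prompt optimization with LLM-as-Judge feedback.

    Each round: answer the sampled questions with the current prompt, let the
    judge score the answers against references / preferences, then ask the
    optimizer LLM to reflect on the low-scoring cases and propose a new prompt.
    """
    best_prompt, best_score = seed_prompt, float("-inf")
    prompt = seed_prompt
    for _ in range(rounds):
        answers = [generate_response(prompt, q) for q in questions]
        scores = judge_scores(questions, answers, references)   # one score per answer
        mean_score = sum(scores) / len(scores)
        if mean_score > best_score:
            best_prompt, best_score = prompt, mean_score
        failures = [(q, a) for q, a, s in zip(questions, answers, scores) if s < mean_score]
        prompt = revise_prompt(prompt, failures)                 # reflection on error feedback
    return best_prompt
```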
15
+ "main_content": "With fierce competition and rapid development, large language models ranging from billions to trillions of parameters have achieved remarkable performance across various NLP tasks after being pre-trained on massive amounts of text. Represented by LLMs such as ChatGPT, GPT4, and GPT4-Turbo, the OpenAI model family has successively reset the benchmarks for NLP tasks, being regarded as one of the greatest inventions in history. Concurrently, a multitude of open-source large language models, including llama-2-13b, ChatGLM3-6b, and Mistral-8x7B-MoE[25], have also shown astonishing improvements, even surpassing the level of ChatGPT on some dimensions. More importantly, they can be deployed on a single to several GPUs and can be flexibly customized through fine-tuning. 2.1 Domain-specific LLMs Although general-purpose large language models have achieved exceptional performance on generic NLP tasks, they often fall short in vertical domains that necessitate extensive specialized knowledge and high accuracy requirements. The performance of zero-shot large language models in these domains is typically inadequate, thereby granting domainspecific LLMs significant attention. Closed-source large language models, while exhibiting superior performance across various capabilities, present challenges for continual pre-training and fine-tuning with private corpora. Therefore, the construction of domain-specific models based on closed-source LLMs frequently leverages role-playing or collaboration abilities to extract knowledge in the specialized field during the pre-training phase. In contrast, open-source LLMs can be further pre-trained or fine-tuned with extensive high-quality domain-specific data, and they have achieved multiple successful applications in fields such as medicine, law, education, finance, etc. HuatuoGPT [26] employs a mixed dataset comprising distilled data from ChatGPT and real-world data provided by physicians\u2019 medical advice to fine-tune an open-source model. Furthermore, it aligns the model\u2019s response with human preferences through RLAIF (Reinforcement Learning from Artificial Intelligence Feedback). By learning from the response styles of real-world doctor-patient interactions, the fine-tuned model can engage with users in a human-like manner and significantly surpasses other models at a similar level across various metrics. MedChatZH [12] has developed a dialogue model specifically designed for Traditional Chinese Medicine, incorporating extensive Chinese medical literature for continual pre-training. After fine-tuning millions of question-answer data from the Internet and various Chinese hospitals, the model achieves state-of-the-art performance in the field of Chinese medicine. ChatLaw [15], targeting the legal domain, not only provides professional responses concerning legal knowledge but also acquires problem-solving abilities through training on multiple-choice question data. Furthermore, it employs a method combining vector database retrieval with keyword search, effectively reducing the hallucination in responses. EduChat [8] offers a range of functionalities, including open-ended question answering, paper assessment, and Socratic teaching, enhancing various skills through fine-tuning and the integration of tools. The model gains interdisciplinary knowledge through continual pre-training and strengthens its question-answering and instruction-following capabilities with large-scale instruction and open-domain dialogue datasets. 
FinGPT [16] adopts a data-centric approach, focusing on automated data management pipelines and lightweight adaptive technologies, establishing a comprehensive framework from data processing to feature engineering and application, while also enhancing the transparency of the overall framework. One of its strengths lies in its ability to integrate seamlessly with both open-source and closed-source large language models without the need for further training. 2.2 Discrete prompt engineering Prompt engineering aims to guide large language models to fully leverage their potential through the meticulous design of prompts. Extensive research has demonstrated that well-crafted prompts can significantly enhance the ability of large language models to improve their performance across various NLP tasks [27],[28]. Prompt engineering encompasses continuous prompt learning and discrete prompt optimization. Continuous prompt learning aims to adapt large language models to various tasks by incorporating learnable parameters within the prompts [29], [30]. However, continuous prompt learning typically requires access to the gradient vectors of the LLMs, which restricts its application in closed-source models that are accessed only through APIs. For discrete prompts, traditional methods often rely on meticulous manual design, which not only demands considerable human effort but also may not necessarily maximize the model\u2019s performance. Consequently, numerous methods for automatically generating optimal discrete prompts have been explored, leveraging the large model itself as an optimizer to autonomously enhance its performance in NLP tasks. Recently, several leading automated discrete prompt optimization frameworks have been proposed. EVOPROMPT[31] draws on the principles of evolutionary algorithms (EAs) to iteratively guide LLMs to generate new prompts through evolutionary operators. It does not require any gradient information from LLMs and can achieve a balance between exploration and exploitation. Experiments on nine datasets have shown that optimized prompts can significantly improve task performance. APE[32], inspired by program synthesis, represents discrete prompting optimization as 3 Open-source Pre-trained Model Course-oriented Chat Model Factual Accuracy User Satisfaction Clarity Condensability Paragraphs Reflection Resample"
16
+ }
intro_8K/test_introduction_short_2405.04795v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04795v1",
3
+ "title": "Variational Schr\u00f6dinger Diffusion Models",
4
+ "abstract": "Schr\\\"odinger bridge (SB) has emerged as the go-to method for optimizing\ntransportation plans in diffusion models. However, SB requires estimating the\nintractable forward score functions, inevitably resulting in the costly\nimplicit training loss based on simulated trajectories. To improve the\nscalability while preserving efficient transportation plans, we leverage\nvariational inference to linearize the forward score functions (variational\nscores) of SB and restore simulation-free properties in training backward\nscores. We propose the variational Schr\\\"odinger diffusion model (VSDM), where\nthe forward process is a multivariate diffusion and the variational scores are\nadaptively optimized for efficient transport. Theoretically, we use stochastic\napproximation to prove the convergence of the variational scores and show the\nconvergence of the adaptively generated samples based on the optimal\nvariational scores. Empirically, we test the algorithm in simulated examples\nand observe that VSDM is efficient in generations of anisotropic shapes and\nyields straighter sample trajectories compared to the single-variate diffusion.\nWe also verify the scalability of the algorithm in real-world data and achieve\ncompetitive unconditional generation performance in CIFAR10 and conditional\ngeneration in time series modeling. Notably, VSDM no longer depends on warm-up\ninitializations and has become tuning-friendly in training large-scale\nexperiments.",
5
+ "authors": "Wei Deng, Weijian Luo, Yixin Tan, Marin Bilo\u0161, Yu Chen, Yuriy Nevmyvaka, Ricky T. Q. Chen",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Diffusion models have showcased remarkable proficiency across diverse domains, spanning large-scale generations *Equal contribution (Alphabetical) 1Machine Learning Re- search, Morgan Stanley 2Peking University 3Duke Univer- sity 4Meta AI (FAIR). Correspondence to: Wei Deng <wei- [email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). of image, video, and audio, conditional text-to-image tasks, and adversarial defenses (Dhariwal & Nichol, 2022; Ho et al., 2022; Kong et al., 2021; Ramesh et al., 2022; Zhang et al., 2024). The key to their scalability lies in the closed- form updates of the forward process, highlighting both sta- tistical efficiency (Koehler et al., 2023) and diminished de- pendence on dimensionality (Vono et al., 2022). Neverthe- less, diffusion models lack a distinct guarantee of optimal transport (OT) properties (Lavenant & Santambrogio, 2022) and often necessitate costly evaluations to generate higher- fidelity content (Ho et al., 2020; Salimans & Ho, 2022; Lu et al., 2022; Xue et al., 2023; Luo, 2023). Alternatively, the Schr\u00a8 odinger bridge (SB) problem (L\u00b4 eonard, 2014; Chen & Georgiou, 2016; Pavon et al., 2021; Caluya & Halder, 2022; De Bortoli et al., 2021), initially rooted in quantum mechanics (L\u00b4 eonard, 2014), proposes optimizing a stochastic control objective through the use of forward-backward stochastic differential equations (FB- SDEs) (Chen et al., 2022b). The alternating solver gives rise to the iterative proportional fitting (IPF) algorithm (Kull- back, 1968; Ruschendorf, 1995) in dynamic optimal trans- port (Villani, 2003; Peyr\u00b4 e & Cuturi, 2019). Notably, the intractable forward score function plays a crucial role in providing theoretical guarantees in optimal transport (Chen et al., 2023c; Deng et al., 2024). However, it simultane- ously sacrifices the simulation-free property and largely relies on warm-up checkpoints for conducting large-scale experiments (De Bortoli et al., 2021; Chen et al., 2022b). A natural follow-up question arises: Can we train diffusion models with efficient transport? To this end, we introduce the variational Schr\u00a8 odinger diffu- sion model (VSDM). Employing variational inference (Blei et al., 2017), we perform a locally linear approximation of the forward score function, and denote it by the variational score. The resulting linear forward stochastic differential equations (SDEs) naturally provide a closed-form update, significantly enhancing scalability. Compared to the single- variate score-based generative model (SGM), VSDM is a multivariate diffusion (Singhal et al., 2023). Moreover, hy- perparameters are adaptively optimized for more efficient transportation plans within the Schr\u00a8 odinger bridge frame- work (Chen et al., 2022b). 1 arXiv:2405.04795v1 [cs.LG] 8 May 2024 Variational Schr\u00a8 odinger Diffusion Models Theoretically, we leverage stochastic approximation (Rob- bins & Monro, 1951) to demonstrate the convergence of the variational score to the optimal local estimators. Al- though the global transport optimality is compromised, the notable simulation-free speed-ups in training the backward score render the algorithm particularly attractive for train- ing various generation tasks from scratch. Additionally, the efficiency of simulation-based training for the linearized variational score significantly improves owing to computa- tional advancements in convex optimization. 
We validate the strength of VSDM through simulations, achieving com- pelling performance on standard image generation tasks. Our contributions unfold in four key aspects: \u2022 We introduce the variational Schr\u00a8 odinger diffusion model (VSDM), a multivariate diffusion with optimal variational scores guided by optimal transport. Addi- tionally, the training of backward scores is simulation- free and becomes much more scalable. \u2022 We study the convergence of the variational score using stochastic approximation (SA) theory, which can be further generalized to a class of state space diffusion models for future developments. \u2022 VSDM is effective in generating data of anisotropic shapes and motivates straighter transportation paths via the optimized transport. \u2022 VSDM achieves competitive unconditional generation on CIFAR10 and conditional generation in time series modeling without reliance on warm-up initializations.",
15
+ "main_content": "Flow Matching and Beyond Lipman et al. (2023) utilized the McCann displacement interpolation (McCann, 1997) to train simulation-free CNFs to encourage straight trajectories. Consequently, Pooladian et al. (2023); Tong et al. (2023) proposed straightening by using minibatch optimal transport solutions. Similar ideas were achieved by Liu (2022); Liu et al. (2023) to iteratively rectify the interpolation path. Albergo & Vanden-Eijnden (2023); Albergo et al. (2023) developed the stochastic interpolant approach to unify both flow and diffusion models. However, \u201cstraighter\u201d transport maps may not imply optimal transportation plans in general and the couplings are still not effectively optimized. Dynamic Optimal Transport Finlay et al. (2020); Onken et al. (2021) introduced additional regularization through optimal transport to enforce straighter trajectories in CNFs and reduce the computational cost. De Bortoli et al. (2021); Chen et al. (2022b); Vargas et al. (2021) studied the dynamic Schr\u00a8 odinger bridge with guarantees in entropic optimal transport (EOT) (Chen et al., 2023c); Shi et al. (2023); Peluchetti (2023); Chen et al. (2023b) generalized bridge matching and flow matching based EOT and obtained smoother trajectories, however, scalability remains a significant concern for Schr\u00a8 odinger-based diffusions. 3. Preliminaries 3.1. Diffusion Models The score-based generative models (SGMs) (Ho et al., 2020; Song et al., 2021b) first employ a forward process (1a) to map data to an approximate Gaussian and subsequently reverse the process in Eq.(1b) to recover the data distribution. d\u2212 \u2192 x t = f t(\u2212 \u2192 x t)dt + \ufffd d\u2190 \u2212 x t = \ufffd f t(\u2190 \u2212 x t) \u2212\u03b2t\u2207l \u03b2td\u2212 \u2192 wt (1a) \ufffd \ufffd\ufffd \ufffd d\u2212 \u2192 x t = f t(\u2212 \u2192 x t)dt + \ufffd \u03b2td\u2212 \u2192 wt d\u2190 \u2212 x t = \ufffd f t(\u2190 \u2212 x t) \u2212\u03b2t\u2207log \u03c1t \ufffd\u2190 \u2212 x t \ufffd\ufffd dt + \ufffd here \u2190 \u2212 x t, \u2212 \u2192 x t \u2208Rd; \u2212 \u2192 x 0 \u223c\u03c1data and \u2190 \u2212 x T \u223c \u03b2td\u2190 \u2212 wt, (1b) \ufffd \u2212\u2207 \ufffd \ufffd\ufffd \ufffd where \u2190 \u2212 x t, \u2212 \u2192 x t \u2208Rd; \u2212 \u2192 x 0 \u223c\u03c1data and \u2190 \u2212 x T \u223c\u03c1prior; f t denotes the vector field and is often set to 0 (a.k.a. VE-SDE) or linear in x (a.k.a. VP-SDE); \u03b2t > 0 is the time-varying scalar; \u2212 \u2192 wt is a forward Brownian motion from t \u2208[0, T] with \u03c1T \u2248\u03c1prior; \u2190 \u2212 wt is a backward Brownian motion from time T to 0. The marginal density \u03c1t of the forward process (1a) is essential for generating the data but remains inaccessible in practice due to intractable normalizing constants. Explicit Score Matching (ESM) Instead, the conditional score function \u2207log \u03c1t|0 (\u00b7) \u2261\u2207log \u03c1t \ufffd \u00b7|\u2212 \u2192 x 0 \ufffd is estimated by minimizing a user-friendly ESM loss (weighted by \u03bb) between the score estimator st \u2261s\u03b8(\u00b7, t) and exact score (Song et al., 2021b) such that \ufffd \ufffd \ufffd \ufffd Song et al., 2021b) such that Et \ufffd \u03bbtE\u2212 \u2192 x 0E\u2212 \u2192 x t|\u2212 \u2192 x 0[\u2225st(\u2212 \u2192 x t) \u2212\u2207log \u03c1t|0 \ufffd\u2212 \u2192 x t \ufffd \u22252 2] \ufffd . 
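The explicit score matching (ESM) objective in Eq. (2) above is what keeps training scalable: for a Gaussian forward kernel the conditional score is available in closed form. A minimal PyTorch sketch, assuming a VP-type kernel x_t = alpha_t x_0 + sigma_t eps; the ScoreNet placeholder and the illustrative noise schedule are ours, not from the paper:

```python
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    # placeholder score network s_theta(x_t, t); real models use U-Nets
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d + 1, 128), nn.SiLU(), nn.Linear(128, d))
    def forward(self, x, t):
        return self.net(torch.cat([x, t[:, None]], dim=-1))

def esm_loss(score_net, x0, alpha_t, sigma_t, t):
    # closed-form forward kernel: x_t = alpha_t * x0 + sigma_t * eps
    eps = torch.randn_like(x0)
    xt = alpha_t[:, None] * x0 + sigma_t[:, None] * eps
    # conditional score of the Gaussian kernel: grad log rho_{t|0}(x_t | x_0) = -eps / sigma_t
    target = -eps / sigma_t[:, None]
    return ((score_net(xt, t) - target) ** 2).mean()

d, B = 4, 16
x0 = torch.randn(B, d)
t = 0.1 + 0.9 * torch.rand(B)                               # avoid t ~ 0 for numerical stability
alpha_t, sigma_t = torch.exp(-0.5 * t), torch.sqrt(1 - torch.exp(-t))  # illustrative VP-like schedule
print(esm_loss(ScoreNet(d), x0, alpha_t, sigma_t, t).item())
```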
(2) otably, both VPand VE-SDEs yield closed-form expresons for any \u2212 \u2192 given \u2212 \u2192 in the forward process (Song \ufffd | \u2225 \u2212\u2207| \ufffd \ufffd \u2225 \ufffd Notably, both VPand VE-SDEs yield closed-form expressions for any \u2212 \u2192 x t given \u2212 \u2192 x 0 in the forward process (Song et al., 2021b), which is instrumental for the scalability of diffusion models in real-world large-scale generation tasks. Implicit Score Matching (ISM) By integration by parts, ESM is equivalent to the ISM loss (Hyv\u00a8 arinen, 2005; Huang et al., 2021; Luo et al., 2024b) and the evidence lower bound (ELBO) follows \ufffd \ufffd BO) follows log \u03c10 (x0) \u2265E\u03c1T |0(\u00b7) \ufffd log \u03c1T |0 (xT ) \ufffd \u22121 \ufffdT E\u03c1(\u00b7) \ufffd \u03b2t \u2225st\u22252 + 2\u2207\u00b7 (\u03b2ts \u22121 2 2 0 \ufffdT 0 natur \u2265 |\u00b7 \ufffd | \ufffd T 0 E\u03c1t|0(\u00b7) \ufffd \u03b2t \u2225st\u22252 2 + 2\u2207\u00b7 (\u03b2tst \u2212f t) \ufffd dt. aturally connected to Song et al. (2020), wh \ufffd \ufffd \ufffd ISM is naturally connected to Song et al. (2020), which supports flexible marginals and nonlinear forward processes but becomes significantly less scalable compared to ESM. 3.2. Schr\u00a8 odinger Bridge The dynamic Schr\u00a8 odinger bridge aims to solve a full bridge inf P\u2208D(\u03c1data,\u03c1prior) KL(P|Q), (3) 2 Variational Schr\u00a8 odinger Diffusion Models where D(\u03c1data, \u03c1prior) is the family of path measures with marginals \u03c1data and \u03c1prior at t = 0 and t = T, respectively; Q is the prior process driven by dxt = f t(xt)dt+\u221a2\u03b2t\u03b5d\u2212 \u2192 wt. It also yields a stochastic control formulation (Chen et al., 2021; Pavon et al., 2021; Caluya & Halder, 2022). inf u\u2208U E \u001a Z T 0 1 2\u2225ut(\u2212 \u2192 x t)\u22252 2dt \u001b s.t. d\u2212 \u2192 x t = h f t(\u2212 \u2192 x ) + p \u03b2tut(\u2212 \u2192 x ) i dt + p 2\u03b2t\u03b5d\u2212 \u2192 wt (4) \u2212 \u2192 x 0 \u223c\u03c1data, \u2212 \u2192 x T \u223c\u03c1prior, where U is the family of controls. The expectation is taken w.r.t \u2212 \u2192 \u03c1 t(\u00b7), which denotes the PDF of the controlled diffusion (4); \u03b5 is the temperature of the diffusion and the regularizer in EOT (Chen et al., 2023c). Solving the underlying Hamilton\u2013Jacobi\u2013Bellman (HJB) equation and invoking the time reversal (Anderson, 1982) with \u03b5 = 1 2, Schr\u00a8 odinger system yields the desired forward-backward stochastic differential equations (FBSDEs) (Chen et al., 2022b): d\u2212 \u2192 x t = h f t(\u2212 \u2192 x t) + \u03b2t\u2207log \u2212 \u2192 \u03c8 t(\u2212 \u2192 x t) i dt + p \u03b2td\u2212 \u2192 wt, (5a) d\u2190 \u2212 x t = \u0002 f t(\u2190 \u2212 x t) \u2212\u03b2t\u2207log \u2190 \u2212 \u03c6 t(\u2190 \u2212 x t) \u0003 dt + p \u03b2td\u2190 \u2212 wt, (5b) where \u2212 \u2192 \u03c8 t(\u00b7)\u2190 \u2212 \u03c6 t(\u00b7) = \u2212 \u2192 \u03c1 t(\u00b7), \u03c10(\u00b7) \u223c\u03c1data, \u03c1T (\u00b7) \u223c\u03c1prior. To solve the optimal controls (scores) (\u2207log \u2212 \u2192 \u03c8 , \u2207log \u2190 \u2212 \u03c6 ), a standard tool is to leverage the nonlinear Feynman-Kac formula (Ma & Yong, 2007; Karatzas & Shreve, 1998; Chen et al., 2022b) to learn a stochastic representation. Proposition 1 (Nonlinear Feynman-Kac representation). Assume Lipschitz smoothness and linear growth condition on the drift f and diffusion g in the FB-SDE (5). 
Define \u2212 \u2192 y t = log \u2212 \u2192 \u03c8 t(xt) and \u2190 \u2212 y t = log \u2190 \u2212 \u03c6 t(xt). Then the stochastic representation follows \u2190 \u2212 y s = E \u0014 \u2190 \u2212 y T \u2212 Z T s \u0393\u03b6(\u2190 \u2212 z t; \u2212 \u2192 z t)dt \f \f \f \f\u2212 \u2192 x s = xs \u0015 , \u0393\u03b6(\u2190 \u2212 z t; \u2212 \u2192 z t)\u22611 2\u2225\u2190 \u2212 z t\u22252 2 + \u2207\u00b7 \u0000p \u03b2t\u2190 \u2212 z t \u2212f t \u0001 + \u03b6\u27e8\u2190 \u2212 z t, \u2212 \u2192 z t\u27e9, (6) where \u2212 \u2192 z t = \u221a\u03b2t\u2207\u2212 \u2192 y t, \u2190 \u2212 z t = \u221a\u03b2t\u2207\u2190 \u2212 y t, and \u03b6 = 1. 4. Variational Schr\u00a8 odinger Diffusion Models SB outperforms SGMs in the theoretical potential of optimal transport and an intractable score function \u2207log \u2212 \u2192 \u03c8 t(xt) is exploited in the forward SDE for more efficient transportation plans. However, there is no free lunch in achieving such efficiency, and it comes with three notable downsides: \u2022 Solving \u2207log \u2212 \u2192 \u03c8 t in Eq.(5a) for optimal transport is prohibitively costly and may not be necessary (Marzouk et al., 2016; Liu et al., 2023). \u2022 The nonlinear diffusion no longer yields closed-form expression of \u2212 \u2192 x t given \u2212 \u2192 x 0 (Chen et al., 2022b). \u2022 The ISM loss is inevitable and the estimator suffers from a large variance issue (Hutchinson, 1989). 4.1. Variational Inference via Linear Approximation FB-SDEs naturally connect to the alternating-projection solver based on the IPF (a.k.a. Sinkhorn) algorithm, boiling down the full bridge (3) to a half-bridge solver (Pavon et al., 2021; De Bortoli et al., 2021; Vargas et al., 2021). With P1 given and k = 1, 2, ..., we have: P2k := arg min P\u2208D(\u03c1data, \u00b7) KL(P\u2225P2k\u22121), (7a) P2k+1 := arg min P\u2208D(\u00b7, \u03c1prior) KL(P\u2225P2k). (7b) More specifically, Chen et al. (2022b) proposed a neural network parameterization to model (\u2190 \u2212 z t, \u2212 \u2192 z t) using (\u2190 \u2212 z \u03b8 t , \u2212 \u2192 z \u03c9 t ), where \u03b8 and \u03c9 refer to the model parameters, respectively. Each stage of the half-bridge solver proposes to solve the models alternatingly as follows \u2190 \u2212 L (\u03b8) = \u2212 Z T 0 E\u2212 \u2192 x t\u223d(5a) \u0014 \u03931(\u2190 \u2212 z \u03b8 t ; \u2212 \u2192 z \u03c9 t )dt \f \f \f \f\u2212 \u2192 x 0 = x0 \u0015 (8a) \u2212 \u2192 L (\u03c9) = \u2212 Z T 0 E\u2190 \u2212 x t\u223d(5b) \u0014 \u03931(\u2212 \u2192 z \u03c9 t ; \u2190 \u2212 z \u03b8 t )dt \f \f \f \f\u2190 \u2212 x T = xT \u0015 , (8b) where \u03931 is defined in Eq.(6) and \u223ddenotes the approximate simulation parametrized by neural networks * However, solving the backward score in Eq.(8a) through simulations, akin to the ISM loss, is computationally demanding and affects the scalability in generative models. 
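The half-bridge projections (7a)/(7b) and the parametrized losses (8a)/(8b) above alternate between the two directions: freeze one policy, simulate its trajectories, and fit the other on them. The toy loop below is purely schematic and only mirrors that alternating structure; the stand-in simulate and half_bridge_loss functions are ours and do not implement the Gamma-based losses of Eq. (8):

```python
import torch
import torch.nn as nn

fwd = nn.Linear(2, 2)   # stands in for the forward policy z^omega
bwd = nn.Linear(2, 2)   # stands in for the backward policy z^theta
opt_f = torch.optim.Adam(fwd.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(bwd.parameters(), lr=1e-3)

def simulate(policy, x):
    # placeholder for simulating trajectories under the frozen direction
    return x + 0.1 * policy(x)

def half_bridge_loss(train_policy, frozen_policy, x):
    with torch.no_grad():
        traj = simulate(frozen_policy, x)            # trajectories from the frozen direction
    # stand-in regression objective; the real objective is Gamma_1 in Eq. (6)/(8)
    return ((train_policy(traj) + frozen_policy(traj).detach()) ** 2).mean()

x0, xT = torch.randn(64, 2), torch.randn(64, 2)
for stage in range(4):
    opt_b.zero_grad(); half_bridge_loss(bwd, fwd, x0).backward(); opt_b.step()  # (7a): fit backward
    opt_f.zero_grad(); half_bridge_loss(fwd, bwd, xT).backward(); opt_f.step()  # (7b): fit forward
```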
To motivate simulation-free property, we leverage variational inference (Blei et al., 2017) and study a linear approximation of the forward score \u2207log \u2212 \u2192 \u03c8 (x, t) \u2248Atx with f t(\u2212 \u2192 x t) \u2261\u22121 2\u03b2t\u2212 \u2192 x t, which ends up with the variational FB-SDE (VFB-SDE): d\u2212 \u2192 x t = \u0014 \u22121 2\u03b2t\u2212 \u2192 x t + \u03b2tAt\u2212 \u2192 x t \u0015 dt + p \u03b2td\u2212 \u2192 wt, (9a) d\u2190 \u2212 x t = \u0014 \u22121 2\u03b2t\u2190 \u2212 x t \u2212\u03b2t\u2207log \u2212 \u2192 \u03c1 t(\u2190 \u2212 x t) \u0015 dt + p \u03b2td\u2190 \u2212 wt, (9b) where t \u2208[0, T] and \u2207log \u2212 \u2192 \u03c1 t is the score function of (9a) and the conditional version is to be derived in Eq.(15). The half-bridge solver is restricted to a class of OU processes OU(\u03c1data, \u00b7) with the initial marginal \u03c1data. arg min P\u2208D(\u03c1data,\u00b7) KL(P\u2225P2k\u22121) \u21d2 arg min b P\u2208OU(\u03c1data,\u00b7) KL(b P\u2225P2k\u22121). *\u223c(resp. \u223d) denotes the exact (resp. parametrized) simulation. 3 Variational Schr\u00a8 odinger Diffusion Models By the mode-seeking property of the exclusive (reverse) KL divergence (Chan et al., 2022), we can expect the optimizer b P to be a local estimator of the nonlinear solution in (7a). Additionally, the loss function (8b) to learn the variational score At, where t \u2208[0, T], can be simplified to \u2212 \u2192 L (A) = \u2212 Z T 0 Ext\u223d(9b) \u0014 \u0393\u03b6(Atxt; \u2190 \u2212 z \u03b8 t )dt \f \f \f \f\u2190 \u2212 x T = xT \u0015 , (10) where \u0393\u03b6 is defined in Eq.(6). Since the structure property \u2212 \u2192 \u03c8 t\u2190 \u2212 \u03c6 t = \u2212 \u2192 \u03c1 t in Eq.(5) is compromised by the variational inference, we propose to tune \u03b6 in our experiments. 4.2. Closed-form Expression of Backward Score Assume a prior knowledge of At is given, we can rewrite the forward process (9a) in the VFB-SDE and derive a multivariate forward diffusion (Singhal et al., 2023): d\u2212 \u2192 x t = \u0014 \u22121 2\u03b2tI + \u03b2tAt \u0015 \u2212 \u2192 x tdt + p \u03b2td\u2212 \u2192 wt = \u22121 2Dt\u03b2t\u2212 \u2192 x tdt + p \u03b2td\u2212 \u2192 wt, (11) where Dt = I \u22122At \u2208Rd\u00d7d is a positive-definite matrix \u2020. Consider the multivariate OU process (11). The mean and covariance follow d\u00b5t|0 dt = \u22121 2\u03b2tDt\u00b5t|0 (12a) d\u03a3t|0 dt = \u22121 2\u03b2t \u0000Dt\u03a3t|0 + \u03a3t|0D\u22ba t \u0001 + \u03b2tI. (12b) Solving the differential equations with the help of integration factors, the mean process follows \u00b5t|0 = e\u22121 2 [\u03b2D]tx0, (13) where [\u03b2D]t = R t 0 \u03b2sDsds. By matrix decomposition \u03a3t|0 = CtH\u22121 t (S\u00a8 arkk\u00a8 a & Solin, 2019), the covariance process follows that: \u0012Ct Ht \u0013 = exp \" \u0012\u22121 2[\u03b2D]t [\u03b2I]t 0 1 2[\u03b2D\u22ba]t \u0013 # \u0012\u03a30 I \u0013 , (14) where the above matrix exponential can be easily computed through modern computing libraries. Further, to avoid computing the expensive matrix exponential for highdimensional problems, we can adopt a diagonal and timeinvariant Dt. Suppose \u03a3t|0 has the Cholesky decomposition \u03a3t|0 = LtL\u22ba t for some lower-triangular matrix Lt. We can have a closed-form update that resembles the SGM. \u2212 \u2192 x t = \u00b5t|0 + Lt\u03f5, \u2020Dt = \u22122At \u2208Rd\u00d7d when the forward SDE is VE-SDE. 
where \u00b5t|0 is defined in Eq.(13) and \u03f5 is the standard ddimensional Gaussian vector. The score function follows \u2207log \u2212 \u2192 \u03c1 t|0(\u2212 \u2192 x t) = \u22121 2\u2207[(\u2212 \u2192 x t \u2212\u00b5t)\u22ba\u03a3\u22121 t|0(\u2212 \u2192 x t \u2212\u00b5t)] = \u2212\u03a3\u22121 t|0(\u2212 \u2192 x t \u2212\u00b5t) (15) = \u2212L\u2212\u22ba t L\u22121 t Lt\u03f5 := \u2212L\u2212\u22ba t \u03f5. Invoking the ESM loss function in Eq.(2), we can learn the score function \u2207log \u2212 \u2192 \u03c1 t|0(\u2212 \u2192 x t|\u2212 \u2192 x 0) using a neural network parametrization st(\u00b7) and optimize the loss function: \u2207A\u2225L\u2212\u22ba t \u03f5 \u2212st(xt)\u22252 2. (16) One may further consider preconditioning techniques (Karras et al., 2022) or variance reduction (Singhal et al., 2023) to stabilize training and accelerate training speed. Speed-ups via time-invariant and diagonal Dt If we parametrize Dt as a time-invariant and diagonal positivedefinite matrix, the formula (14) has simpler explicit expressions that do not require calling matrix exponential operators. We present such a result in Corollary 1. For the image generation experiment in Section 7.3, we use such a diagonal parametrization when implementing the VSDM. Corollary 1. If Dt = \u039b := diag(\u03bb), where \u03bbi \u22650, \u22001 \u2264 i \u2264d. If we denote the \u03c32 t := R t 0 \u03b2sds, then matrices Ct and Ht has simpler expressions with Ct = \u039b\u22121\b exp(1 2\u03c32 t \u039b) \u2212exp(\u22121 2\u03c32 t \u039b) \t Ht = exp(1 2\u03c32 t \u039b), which leads to CtH\u22121 t = \u039b\u22121\b I \u2212exp(\u2212\u03c32 t \u039b) \t . As a result, the corresponding forward transition writes \u00b5t|0 = exp(\u22121 2\u03c32 t \u039b)x0, Lt = \u039b\u22121 2 q I \u2212exp(\u2212\u03c32 t \u039b). In Corrolary 1 detailed in Appendix A, since the matrix \u039b = diag(\u03bb) is diagonal and time-invariant, the matrix exponential and square root can be directly calculated elementwise on each diagonal elements \u03bbi independently. 4.2.1. BACKWARD SDE Taking the time reversal (Anderson, 1982) of the forward multivariate OU process (11), the backward SDE satisfies d\u2190 \u2212 x t = (\u22121 2Dt\u03b2t\u2190 \u2212 x t \u2212\u03b2tst(\u2190 \u2212 x t))dt + p \u03b2td\u2190 \u2212 wt. (17) Notably, with a general PD matrix Dt, the prior distribution follows that xT \u223cN(0, \u03a3T |0)\u2021. We also note that the prior is now limited to Gaussian distributions, which is not a general bridge anymore. \u2021See the Remark on the selection of \u03c1prior in section B.1. 4 Variational Schr\u00a8 odinger Diffusion Models 4.2.2. PROBABILITY FLOW ODE We can follow Song et al. (2021b) and obtain the deterministic process directly: d\u2190 \u2212 x t = \u0012 \u22121 2Dt\u03b2t\u2190 \u2212 x t \u22121 2\u03b2tst(\u2190 \u2212 x t) \u0013 dt, (18) where xT \u223cN(0, \u03a3T |0) and the sample trajectories follow the same marginal densities \u2212 \u2192 \u03c1 t(xt) as in the SDE. 4.3. Adaptive Diffusion via Stochastic Approximation Our major goal is to generate high-fidelity data with efficient transportation plans based on the optimal A\u22c6 t in the forward process (11). However, the optimal A\u22c6 t is not known a priori. To tackle this issue, we leverage stochastic approximation (SA) (Robbins & Monro, 1951; Benveniste et al., 1990) to adaptively optimize the variational score A(k) t through optimal transport and simulate the backward trajectories. 
(1) Simulate backward trajectoriest {\u2190 \u2212 x (k+1) nh }N\u22121 n=0 via the Euler\u2013Maruyama (EM) scheme of the backward process (17) with a learning rate h. (2) Optimize variational scores \b A(k) nh }N\u22121 n=0 : A(k+1) nh = A(k) nh \u2212\u03b7k+1\u2207\u2212 \u2192 L nh(A(k) nh ; \u2190 \u2212 x (k+1) nh ), where \u2207\u2212 \u2192 L nh(A(k) nh ; \u2190 \u2212 x (k+1) nh ) is the loss function (10) at time nh and is known as the random field. We expect that the simulation of backward trajectories {\u2190 \u2212 x (k+1) nh }N\u22121 n=0 given s(k+1) nh helps the optimization of A(k+1) nh and the optimized A(k+1) nh in turn contributes to a more efficient transportation plan for estimating s(k+2) nh and simulating the backward trajectories {\u2190 \u2212 x (k+2) nh }N\u22121 n=0 . Trajectory Averaging The stochastic approximation algorithm is a standard framework to study adaptive sampling algorithms (Liang et al., 2007). Moreover, the formulation suggests to stabilize the trajectories (Polyak & Juditsky, 1992) with averaged parameters A (k) nh as follows A (k) nh = k X i=1 A(i) nh = \u0012 1 \u22121 k \u0013 A (k\u22121) nh + 1 k A(k) nh , where A (k) nh is known to be an asymptotically efficient (optimal) estimator (Polyak & Juditsky, 1992) in the local state space A by assumption A1. Exponential Moving Average (EMA) Despite guarantees in convex scenarios, the parameter space differs tremendously in different surfaces in non-convex state space A. Empirically, if we want to exploit information from multiple modes, a standard extension is to employ the EMA technique (Trivedi & Kondor, 2017): A (k) nh = (1 \u2212\u03b7)A (k\u22121) nh + \u03b7A(k) nh , where \u03b7 \u2208(0, 1). The EMA techniques are widely used empirically in diffusion models and Schr\u00a8 odinger bridge (Song & Ermon, 2020; De Bortoli et al., 2021; Chen et al., 2022b) to avoid oscillating trajectories. Now we are ready to present our methodology in Algorithm 1. Computational Cost Regarding the wall-clock computational time: i) training (linear) variational scores, albeit in a simulation-based manner, becomes significantly faster than estimating nonlinear forward scores in Schr\u00a8 odinger bridge; ii) the variational parametrization greatly reduced the number of model parameters, which yields a muchreduced variance in the Hutchinson\u2019s estimator (Hutchinson, 1989); iii) since we don\u2019t need to update At as often as the backward score model, we can further amortize the training of At. In the simulation example in Figure.9(b), VSDM is only 10% slower than the SGM with the same training complexity of backward scores while still maintaining efficient convergence of variational scores. 5. Convergence of Stochastic Approximation In this section, we study the convergence of A(k) t to the optimal A\u22c6 t , where t \u2208[0, T] \u00a7. The primary objective is to show the iterates (19) follow the trajectories of the dynamical system asymptotically: dAt = \u2207\u2212 \u2192 L t(At)ds, (20) where dAt ds = lim\u03b7\u21920 A(k+1) t \u2212A(k) t \u03b7 and \u2207\u2212 \u2192 L t(\u00b7) is the mean field at time t: \u2207\u2212 \u2192 L t(At) = Z X \u2207\u2212 \u2192 L t(At; \u2190 \u2212 x (\u00b7) t )\u2190 \u2212 \u03c1 t(d\u2190 \u2212 x (\u00b7) t ), (21) where X denotes the state space of data x and \u2207\u2212 \u2192 L t denotes the gradient w.r.t. At; \u2190 \u2212 \u03c1 t is the distribution of the continuous-time interpolation of the discretized backward SDE (22) from t = T to 0. 
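The stochastic-approximation step (19) together with the trajectory-averaging and EMA updates above are simple enough to write down directly. A small NumPy sketch under those formulas; the random placeholder gradient stands in for the gradient of loss (10):

```python
import numpy as np

def sa_update(A, grad, eta):
    # Eq. (19): one stochastic-approximation step on the variational score A_t
    return A - eta * grad

def trajectory_average(A_bar, A_new, k):
    # Polyak-style averaging: A_bar_k = (1 - 1/k) A_bar_{k-1} + (1/k) A_k
    return (1.0 - 1.0 / k) * A_bar + (1.0 / k) * A_new

def ema(A_bar, A_new, eta=0.01):
    # exponential moving average used empirically to damp oscillations
    return (1.0 - eta) * A_bar + eta * A_new

d = 3
A = np.zeros((d, d))
A_bar = A.copy()
for k in range(1, 6):
    grad = 0.1 * np.random.randn(d, d)     # placeholder gradient of the loss in Eq. (10)
    A = sa_update(A, grad, eta=1.0 / k)    # decreasing step size (illustrative schedule)
    A_bar = trajectory_average(A_bar, A, k)
```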
We denote by A\u22c6 t one of the solutions of \u2207\u2212 \u2192 L t(A\u22c6 t ) = 0. The aim is to find the optimal solution A\u22c6 t to the mean field \u2207\u2212 \u2192 L t(A\u22c6 t ) = 0. However, we acknowledge that the equilibrium is not unique in general nonlinear dynamical systems. To tackle this issue, we focus our analysis around a neighborhood \u0398 of the equilibrium by assumption A1. After running sufficient many iterations with a small enough \u00a7We slightly abuse the notation and generalize A(k) nh to A(k) t . 5 Variational Schr\u00a8 odinger Diffusion Models Algorithm 1 Variational Schr\u00a8 odinger Diffusion Models (VSDM). \u03c1prior is fixed to a Gaussian distribution. \u03b7k is the step size for SA and h is the learning rate for the backward sampling of Eq.(17). \u03ben denotes the standard Gaussian vector at the sampling iteration n. The exponential moving averaging (EMA) technique can be used to further stabilize the algorithm. repeat Simulation-free Optimization of Backward Score Draw x0 \u223c\u03c1data, n \u223c{0, 1, \u00b7 \u00b7 \u00b7 , N \u22121}, \u03f5 \u223cN(0, I). Sample xnh|x0 \u223cN(\u00b5nh|0, \u03a3nh|0) by Eq.(13) and (14) given A(k) nh . Cache {\u00b5nh|0}N\u22121 n=0 and {L\u2212\u22ba nh }N\u22121 n=0 via Cholesky decomposition of {\u03a3nh}N\u22121 n=0 to avoid repeated computations. Optimize the score functions s(k+1) nh sufficiently through the loss function \u2207\u03b8\u2225L\u2212\u22ba nh \u03f5 \u2212s(k+1) nh (xnh)\u22252 2. Optimization of Variational Score via Stochastic Approximation (SA) Simulate the backward trajectory \u2190 \u2212 x (k+1) nh given A(k) nh via Eq.(22), where \u2190 \u2212 x (k+1) (N\u22121) \u223cN(0, \u03a3(k) (N\u22121)h|0). Optimize variational score A(k+1) nh using the loss function (10), where n \u2208{0, 1, \u00b7 \u00b7 \u00b7 , N \u22121}: A(k+1) nh = A(k) nh \u2212\u03b7k+1\u2207\u2212 \u2192 L nh(A(k) nh ; \u2190 \u2212 x (k+1) nh ). (19) until Stage k = kmax Sample \u2190 \u2212 x 0 with stochastic (resp. deterministic) trajectories via the discretized Eq.(17) (resp. Eq.(18)). step size \u03b7k, suppose A(k) t \u2208\u0398 is somewhere near one equilibrium A\u22c6 t (out of all equilibrium), then by the induction method, the iteration tends to get trapped in the same region as shown in Eq.(32) and yields the convergence to one equilibrium A\u22c6 t . We also present the variational gap of the (sub)-optimal transport and show our transport is more efficient than diffusion models with Gaussian marginals. Next, we outline informal assumptions and sketch our main results, reserving formal ones for readers interested in the details in the appendix. We also formulate the optimization of the variational score At using stochastic approximation in Algorithm 2 in the supplementary material. Assumption A1 (Regularity). (Positive definiteness) For any t \u22650 and At \u2208A, Dt = I \u22122At is positive definite. (Locally strong convexity) For any stable local minimum A\u22c6 t with \u2207\u2212 \u2192 L t(A\u22c6 t ) = 0, there is always a neighborhood \u0398 s.t. A\u22c6 t \u2208\u0398 \u2282A and \u2212 \u2192 L t is strongly convex in \u0398. By the mode-seeking property of the exclusive (reverse) KL divergence (Chan et al., 2022), we only make a mild assumption on a small neighborhood of the solution and expect the convergence given proper regularities. Assumption A2 (Lipschitz Score). For any t \u2208[0, T], the score \u2207log \u2212 \u2192 \u03c1 t is L-Lipschitz. 
Assumption A3 (Second Moment Bound). The data distribution has a bounded second moment. Assumption A4 (Score Estimation Error). We have bounded score estimation errors in L2 quantified by \u03f5score. We first use the multivariate diffusion to train our score estimators {s(k) t }N\u22121 n=0 via the loss function (16) based on the pre-specified A(k) t at step k. Similar in spirit to Chen et al. (2023a; 2022a), we can show the generated samples based on {s(k) t }N\u22121 n=0 are close in distribution to the ideal samples in Theorem 1. The novelty lies in the extension of single-variate diffusions to multi-variate diffusions. Theorem 1 (Generation quality, informal). Assume assumptions A1-A4 hold with a fixed A(k) t , the generated data distribution is close to the data distributions \u03c1data such that TV(\u2190 \u2212 \u03c1 (k) 0 , \u03c1data) \u2272exp(\u2212T) + ( \u221a dh + \u03f5score) \u221a T. To show the convergence of A(k) t to A\u22c6 t , the proof hinges on a stability condition such that the solution asymptotically tracks the equilibrium A\u22c6 t of the mean field (20). Lemma 2 (Local stability, informal). Assume the assumptions A1 and A2 hold. For \u2200t \u2208[0, T] and \u2200A \u2208\u0398, the solution satisfies a local stability condition such that \u27e8A \u2212A\u22c6 t , \u2207\u2212 \u2192 L t(A)\u27e9\u2273\u2225A \u2212A\u22c6 t \u22252 2. The preceding result illustrates the convergence of the solution toward the equilibrium on average. The next assumption assumes a standard slow update of the SA process, which is standard for theoretical analysis but may not be always needed in empirical evaluations. Assumption A5 (Step size). The step size {\u03b7k}k\u2208N is a positive and decreasing sequence \u03b7k \u21920, \u221e X k=1 \u03b7k = +\u221e, \u221e X k=1 \u03b72 k < +\u221e. 6 Variational Schr\u00a8 odinger Diffusion Models Next, we use the stochastic approximation theory to prove the convergence of A(k) t to an equilibrium A\u22c6 t . Theorem 2 (Convergence in L2). Assume assumptions A1-A5 hold. The variational score A(k) t converges to an equilibrium A\u22c6 t in L2 such that E[\u2225A(k) t \u2212A\u22c6 t \u22252 2] \u22642\u03b7k, where the expectation is taken w.r.t samples from \u2190 \u2212 \u03c1 (k) t . In the end, we adapt Theorem 1 again to show the adaptively generated samples are asymptotically close to the samples based on the optimal A\u22c6 t in Theorem 3, which quantifies the quality of data based on more efficient transportation plans. Theorem 3 (Generation quality of adaptive samples). Given assumptions A1-A5, the generated sample distribution at stage k is close to the exact sample distribution based on the equilibrium A\u22c6 t such that TV(\u2190 \u2212 \u03c1 \u22c6 0, \u03c1data) \u2272exp(\u2212T) + ( \u221a dh + \u03f5score + \u221a\u03b7k) \u221a T. 6. Variational Gap Recall that the optimal and variational forward SDEs follow d\u2212 \u2192 x t = h f t(\u2212 \u2192 x t) + \u03b2t\u2207log \u2212 \u2192 \u03c8 t(\u2212 \u2192 x t) i dt + p \u03b2td\u2212 \u2192 wt, d\u2212 \u2192 x t = h f t(\u2212 \u2192 x t) + \u03b2tA(k) t \u2212 \u2192 x t i dt + p \u03b2td\u2212 \u2192 wt, d\u2212 \u2192 x t = \u0002 f t(\u2212 \u2192 x t) + \u03b2tA\u22c6 t \u2212 \u2192 x t \u0003 dt + p \u03b2td\u2212 \u2192 wt, where we abuse the notion of \u2212 \u2192 x t for the sake of clarity and they represent three different processes. 
Despite the improved efficiency based on the ideal A\u22c6 t compared to the vanilla At \u22610, the variational score inevitably yields a sub-optimal transport in general nonlinear transport. We denote the law of the above processes by L, L(k), and L\u22c6. To assess the disparity, we leverage the Girsanov theorem to study the variational gap. Theorem 3 (Variational gap). Assume the assumption A2 and Novikov\u2019s condition hold. Assume f t and \u2207log \u2212 \u2192 \u03c8 t are Lipschitz smooth and satisfy the linear growth. The variational gap follows that KL(L\u2225L\u22c6) = 1 2 Z T 0 E \u0014 \u03b2t\u2225A\u22c6 t \u2212 \u2192 x t \u2212\u2207log \u2212 \u2192 \u03c8 t(\u2212 \u2192 x t)\u22252 2 \u0015 dt KL(L\u2225L(k)) \u2272\u03b7k + KL(L\u2225L\u22c6). Connections to Gaussian Schr\u00a8 odinger bridge (GSB) When data follows a Gaussian distribution, VSDM approximates the closed-form OT solution of Schr\u00a8 odinger bridge (Janati et al., 2020; Bunne et al., 2023). We refer readers to Theorem 3 (Bunne et al., 2023) for the detailed transportation plans. Compared to the vanilla At \u22610, we can significantly reduce the variational gap with KL(L\u2225L\u22c6) using proper parametrization and sufficient training. 7. Empirical Studies 7.1. Comparison to Gaussian Schrodinger Bridge VSDM is approximating GSB (Bunne et al., 2023) when both marginals are Gaussian distributions. To evaluate the solutions, we run our VSDM with a fixed \u03b2t \u22614 in Eq.(25) in Song et al. (2021b) and use the same marginals to replicate the VPSDE of the Gaussian SB with \u03b1t \u22610 and ct \u2261\u22122 in Eq.(7) in Bunne et al. (2023). We train VSDM with 20 stages and randomly pick 256 samples for presentation. We compare the flow trajectories from both models and observe in Figure 1 that the ground truth solution forms an almost linear path, while our VSDM sample trajectories exhibit a consistent alignment with trajectories from Gaussian SB. We attribute the bias predominantly to score estimations and numerical discretization. (a) GSB (b) VSDM Figure 1. Gaussian v.s. VSDM on the flow trajectories. 7.2. Synthetic Data We test our variational Schr\u00a8 odinger diffusion models (VSDMs) on two synthetic datasets: spiral and checkerboard (detailed in section D.2.1). We include SGMs as the baseline models and aim to show the strength of VSDMs on general shapes with straighter trajectories. As such, we stretch the Y-axis of the spiral data by 8 times and the X-axis of the checkerboard data by 6 times and denote them by spiral-8Y and checkerboard-6X, respectively. We adopt a monotone increasing {\u03b2nh}N\u22121 n=0 similar to Song et al. (2021b) and denote by \u03b2min and \u03b2max the minimum and maximum of {\u03b2nh}N\u22121 n=0 . We fix \u03b6 = 0.75 and \u03b2min = 0.1 and we focus on the study with different \u03b2max. We find that SGMs work pretty well with \u03b2max = 10 (SGM-10) on standard isotropic shapes. However, when it comes to spiral-8Y, the SGM-10 struggles to recover the boundary regions on the spiral-8Y data as shown in Figure 2 (top). Generations of Anisotropic Shapes To illustrate the effectiveness of our approach, Figure 2 (bottom) shows that VSDM-10 accurately reconstructs the edges of the spiral 7 Variational Schr\u00a8 odinger Diffusion Models and generates high-quality samples. 
2.5 0.0 2.5 20 0 20 t=0.00 2.5 0.0 2.5 20 0 20 t=0.33 0 5 10 0 10 20 t=0.67 2.5 0.0 2.5 4 2 0 2 4 t=1.00 2.5 0.0 2.5 20 0 20 t=0.00 2.5 0.0 2.5 20 0 20 t=0.33 2.5 0.0 2.5 10 0 10 t=0.67 2.5 0.0 2.5 4 2 0 2 4 t=1.00 Figure 2. Variational Schr\u00a8 odinger diffusion models (VSDMs, bottom) v.s. SGMs (top) with the same hyperparameters (\u03b2max = 10). Straighter Trajectories The SGM-10 fails to fully generate the anisotropic spiral-8Y and increasing \u03b2max to 20 or 30 (SGM-20 and SGM-30) significantly alleviates this issue. However, we observe that excessive \u03b2max values in SGMs compromises the straightness and leads to inefficient transport, especially in the X-axis of spiral-8Y. 3 1 1 3 25 10 5 20 (a) SGM-10 3 1 1 3 25 10 5 20 (b) SGM-20 3 1 1 3 25 10 5 20 (c) SGM-30 3 1 1 3 25 10 5 20 (d) VSDM-10 Figure 3. Probability flow ODE via VSDMs and SGMs. SGM with \u03b2max = 10 is denoted by SGM-10 for convenience. Instead of setting excessive \u03b2max on both axes, our VSDM10, by contrast, proposes conservative diffusion scales on the X-axis of spiral-8Y and explores more on the Y-axis of spiral-8Y. As such, we obtain around 40% improvement on the straightness in Figure 3 and Table 4. Additional insights into a similar analysis of the checkboard dataset, convergence analysis, computational time, assessments of straightness, and evaluations via a smaller number of function evaluations (NFEs) can be found in Appendix D.2. 7.3. Image Data Modeling Experiment Setup In this experiment, we evaluate the performance of VSDM on image modeling tasks. We choose the CIFAR10 datasetas representative image data to demonstrate the scalability of the proposed VSDM on generative modeling of high-dimensional distributions. We refer to the code base of FB-SDE (Chen et al., 2022b) and use the same forward diffusion process of the EDM model (Karras et al., 2022). Since the training of VSDM is an alternative manner between forward and backward training, we build our implementations based on the open-source Figure 4. Unconditional generated samples from VSDM on CIFAR10 (32\u00d732 resolution) trained from scratch. diffusion distillation code base (Luo et al., 2024a) \u00b6, which provides a high-quality empirical implementation of alternative training with EDM model on CIFAR10 data. To make the VSDM algorithm stable, we simplify the matrix Dt to be diagonal with learnable diagonal elements, which is the case as we introduced in Corollary 1. We train the VSDM model from scratch on two NVIDIA A100-80G GPUs for two days and generate images from the trained VSDM with the Euler\u2013Maruyama numerical solver with 200 discretized steps for generation. Performances. We measure the generative performances in terms of the Fretchat Inception Score (FID (Heusel et al., 2017), the lower the better), which is a widely used metric for evaluating generative modeling performances. Tables 2 summarize the FID values of VSDM along with other optimal-transport-based and score-based generative models on the CIFAR10 datasets (unconditional without labels). The VSDM outperforms other optimal transportbased models with an FID of 2.28. This demonstrates that the VSDM has applicable scalability to model highdimensional distributions. Figure 7.3 shows some noncherry-picked unconditional generated samples from VSDM trained on the CIFAR10 dataset. Convergence Speed. 
To demonstrate the convergence speed of VSDM along training processes, we record the FID values in Table 1 for a training trail with no warmup on CIFAR10 datasets (unconditional). We use a batch size of 256 and a learning rate of 1e \u22124. We use the 2nd-order Heun numerical solver to sample. The result shows that VSDM has a smooth convergence performance. \u00b6See code in https://github.com/pkulwj1994/diff_instruct 8 Variational Schr\u00a8 odinger Diffusion Models Table 1. CONVERGENCE SPEED OF FID VALUES FOR VSDM. K IMAGES 0 10K 20K 30K 40K 50K 100K 150K 200K CONVERGE FID\u2193(NFE=35) 406.13 13.13 8.65 6.83 5.66 5.21 3.62 3.29 3.01 2.28 Table 2. CIFAR10 EVALUATION USING SAMPLE QUALITY (FID SCORE). OUR VSDM OUTPERFORMS OTHER OPTIMAL TRANSPORT BASELINES BY A LARGE MARGIN. CLASS METHOD FID \u2193 OT VSDM (OURS) 2.28 SB-FBSDE (CHEN ET AL., 2022B) 3.01 DOT (TANAKA, 2019) 15.78 DGFLOW (ANSARI ET AL., 2020) 9.63 SGMS SDE (SONG ET AL. (2021B)) 2.92 SCOREFLOW (SONG ET AL., 2021A) 5.7 VDM (KINGMA ET AL., 2021) 4.00 LSGM(VAHDAT ET AL., 2021) 2.10 EDM(KARRAS ET AL., 2022) 1.97 7.4. Time Series Forecasting We use multivariate probabilistic forecasting as a real-world conditional modeling task. Let {(t1, x1), . . . , (tn, xn)}, x \u2208Rd, denote a single multivariate time series. Given a dataset of such time series we want to predict the next P values xn+1, . . . , xn+P . In probabilistic modeling, we want to generate forecasts from learned p(xn+1:n+P |x1:n). The usual approach is to have an encoder that represents a sequence x1:i with a fixed-sized vector hi \u2208Rh, \u2200i, and then parameterize the output distribution p(xi+1|hi). At inference time we encode the history into hn and sample the next value from p(xn+1|hn), then use xn+1 to get the updated hn+1 and repeat until we obtain xn+P . In the previous works, the output distribution has been specified with a Copulas (Salinas et al., 2019) and denoising diffusion (Rasul et al., 2021). We augment our approach to allow conditional generation which requires only changing the model to include the conditioning vector hi. For that we adopt the U-Net architecture. We use the LSTM neural network as a sequence encoder. We use three real-world datasets, as described in Appendix D.3. We compare to the SGM and the denoising diffusion approach from Rasul et al. (2021) which we refer to as DDPM. Table 3 shows that our method matches or outperforms the competitors. Figure 5 is a demo for conditional time series generation and more details are presented in Figure 12 to demonstrate the quality of the forecasts. 8. Conclusions and Future Works The Schr\u00a8 odinger bridge diffusion model offers a principled approach to solving optimal transport, but estimatTable 3. FORECASTING RESULTS (LOWER IS BETTER). CRPS-SUM ELECTRICITY EXCHANGE RATE SOLAR DDPM 0.026\u00b10.007 0.012\u00b10.001 0.506\u00b10.058 SGM 0.045\u00b10.005 0.012\u00b10.002 0.413\u00b10.045 VSDM (OUR) 0.038\u00b10.006 0.008\u00b10.002 0.395\u00b10.011 Figure 5. Example for Electricity for 2 (out of 370) dimensions. ing the intractable forward score relies on implicit training through costly simulated trajectories. To address this scalability issue, we present the variational Schr\u00a8 odinger diffusion model (VSDM), utilizing linear variational forward scores for simulation-free training of backward score functions. 
Theoretical foundations leverage stochastic approximation theory, demonstrating the convergence of variational scores to local equilibrium and highlighting the variational gap in optimal transport. Empirically, VSDM showcases the strength of generating data with anisotropic shapes and yielding the desired straighter transport paths for reducing the number of functional evaluations. VSDM also shows scalability in dealing large-scale image datasets without reliance on warm-up initializations. In future research, we aim to explore the critically damped (momentum) acceleration (Dockhorn et al., 2022) and Hessian approximations to develop the \u201cADAM\u201d alternative of diffusion models. 9. Impact Statements This paper proposed a principled approach to accelerate the training and sampling of generative models using optimal transport. This work will contribute to developing textto-image generation, artwork creation, and product design. However, it may also raise challenges in the fake-content generation and pose a threat to online privacy and security. Acknowledgements We would like to thank Valentin De Bortoli, Tianyang Hu, and the reviewers for their insightful suggestions. 9 Variational Schr\u00a8 odinger Diffusion Models"
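For readers who want the simulation-free forward pass of Algorithm 1 made concrete, the sketch below follows Corollary 1 for a time-invariant diagonal D_t = diag(lambda): the transition mean and Cholesky factor are elementwise, and the conditional score target is -L_t^{-T} eps as in Eq. (15)/(16). This is an illustrative reimplementation under those stated assumptions, not the released code; strictly positive lambda is assumed to avoid division by zero:

```python
import torch

def forward_sample_diag(x0, sigma2_t, lam):
    """Closed-form forward transition of the multivariate OU process (Eq. 11)
    for time-invariant diagonal D = diag(lam), per Corollary 1:
      mu_{t|0} = exp(-0.5 * sigma2_t * lam) * x0
      L_t      = lam^{-1/2} * sqrt(1 - exp(-sigma2_t * lam))   (diagonal)
    Returns x_t, the noise eps, and the score target -L_t^{-T} eps (Eq. 15/16)."""
    mu = torch.exp(-0.5 * sigma2_t * lam) * x0
    L_diag = torch.sqrt((1.0 - torch.exp(-sigma2_t * lam)) / lam)
    eps = torch.randn_like(x0)
    xt = mu + L_diag * eps
    score_target = -eps / L_diag          # -L^{-T} eps, elementwise since L is diagonal
    return xt, eps, score_target

d = 4
x0 = torch.randn(8, d)
lam = torch.tensor([0.5, 1.0, 2.0, 4.0])  # hypothetical learned diagonal of Lambda
sigma2_t = torch.tensor(1.3)              # sigma_t^2 = integral of beta_s from 0 to t
xt, eps, target = forward_sample_diag(x0, sigma2_t, lam)
print(xt.shape, target.shape)
```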
16
+ }
intro_8K/test_introduction_short_2405.04834v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04834v1",
3
+ "title": "FlexEControl: Flexible and Efficient Multimodal Control for Text-to-Image Generation",
4
+ "abstract": "Controllable text-to-image (T2I) diffusion models generate images conditioned\non both text prompts and semantic inputs of other modalities like edge maps.\nNevertheless, current controllable T2I methods commonly face challenges related\nto efficiency and faithfulness, especially when conditioning on multiple inputs\nfrom either the same or diverse modalities. In this paper, we propose a novel\nFlexible and Efficient method, FlexEControl, for controllable T2I generation.\nAt the core of FlexEControl is a unique weight decomposition strategy, which\nallows for streamlined integration of various input types. This approach not\nonly enhances the faithfulness of the generated image to the control, but also\nsignificantly reduces the computational overhead typically associated with\nmultimodal conditioning. Our approach achieves a reduction of 41% in trainable\nparameters and 30% in memory usage compared with Uni-ControlNet. Moreover, it\ndoubles data efficiency and can flexibly generate images under the guidance of\nmultiple input conditions of various modalities.",
5
+ "authors": "Xuehai He, Jian Zheng, Jacob Zhiyuan Fang, Robinson Piramuthu, Mohit Bansal, Vicente Ordonez, Gunnar A Sigurdsson, Nanyun Peng, Xin Eric Wang",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "In the realm of text-to-image (T2I) generation, diffu- sion models exhibit exceptional performance in transform- ing textual descriptions into visually accurate images. Such models exhibit extraordinary potential across a plethora of applications, spanning from content creation [1, 9, 43, 47, 51,55,65], image editing [4,5,12,23,31,41,43,59,70], and also fashion design [7]. We propose a new unified method that can tackle two problems in text-to-image generation: improve the training efficiency of T2I models concerning memory usage, computational requirements, and a thirst for extensive datasets [48,51,54]; and improve their controlla- bility especially when dealing with multimodal condition- ing, e.g. multiple edge maps and at the same time follow the guidance of text prompts, as shown in Figure 1 (c). Controllable text-to-image generation models [42] often come at a significant training computational cost, with lin- ear growth in cost and size when training with different con- ditions. Our approach can improve the training efficiency of existing text-to-image diffusion models and unify and flexibly handle different structural input conditions all to- gether. We take cues from the efficient parameterization strategies prevalent in the NLP domain [26, 27, 44, 66] and computer vision literature [20]. The key idea is to learn shared decomposed weights for varied input conditions, en- suring their intrinsic characteristics are conserved. Our method has several benefits: It not only achieves greater compactness [51], but also retains the full representation ca- pacity to handle various input conditions of various modali- ties; Sharing weights across different conditions contributes to the data efficiency; The streamlined parameter space aids in mitigating overfitting to singular conditions, thereby re- inforcing the flexible control aspect of our model. Meanwhile, generating images from multiple homo- geneous conditional inputs, especially when they present conflicting conditions or need to align with specific text prompts, is challenging. To further augment our model\u2019s capability to handle multiple inputs from either the same or diverse modalities as shown in Figure 1, during training, we introduce a new training strategy with two new loss func- tions introduced to strengthen the guidance of correspond- ing conditions. This approach, combined with our com- pact parameter optimization space, empowers the model to learn and manage multiple controls efficiently, even within the same category (e.g., handling two distinct segmentation maps and two separate edge maps). Our primary contribu- tions are summarized below: \u2022 We propose FlexEControl, a novel text-to-image gen- eration model for efficient controllable image genera- tion that substantially reduces training memory over- head and model parameters through decomposition of weights shared across different conditions. \u2022 We introduce a new training strategy to improve the 1 arXiv:2405.04834v1 [cs.CV] 8 May 2024 (c) Controllable T2I w. Same Input Conditions (b) Controllable T2I w. Different Input Conditions Text Prompt: Stormtrooper's lecture at the football field (a) Efficiency Comparisons\u00a0 Figure 1. (a) FlexEControl excels in training efficiency, achiev- ing superior performance with just half the training data compared to its counterparts on (b) Controllable Text-to-Image Generation w. Different Input Conditions (one edge map and one segmenta- tion map). (c) FlexEControl effectively conditions on two canny edge maps. 
The text prompt is Stormtrooper\u2019s lecture at the football field in both Figure (b) and Figure (c). flexible controllability of FlexEControl. Compared with previous works, FlexEControl can generate new images conditioning on multiple inputs from diverse compositions of multiple modalities. \u2022 FlexEControl shows on-par performance with Uni- ControlNet [71] on controllable text-to-image gener- ation with 41% less trainable parameters and 30% less training memory. Furthermore, FlexEControl exhibits enhanced data efficiency, effectively doubling the per- formance achieved with only half amount of training data.",
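The efficiency idea stated above, learning decomposed weights shared across input conditions, is formalized later as Eq. (2): Delta W = sum_i H_i kron (u_i v_i^T). A minimal sketch of that update follows; the factor shapes (an n-by-n slow factor per term with rank-r fast factors) are what the Kronecker product requires and are otherwise our assumption, not taken from the released implementation:

```python
import torch

def delta_w(H_list, U_list, V_list):
    """Sketch of Eq. (2): Delta W = sum_i  H_i  kron  (u_i v_i^T),
    where the small "slow" factors are shared across conditions and
    (u_i, v_i) are low-rank "fast" factors. Shapes here are illustrative."""
    return sum(torch.kron(H, U @ V) for H, U, V in zip(H_list, U_list, V_list))

n, r = 2, 1
k, d = 6, 6                                            # size of the frozen cross-attention weight
H_list = [torch.randn(n, n) for _ in range(n)]         # shared slow factors
U_list = [torch.randn(k // n, r) for _ in range(n)]    # low-rank fast factors
V_list = [torch.randn(r, d // n) for _ in range(n)]
W = torch.randn(k, d)                                  # frozen pretrained weight
W_adapted = W + delta_w(H_list, U_list, V_list)
print(W_adapted.shape)  # torch.Size([6, 6])
```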
15
+ "main_content": "The overview of our method is shown in Figure 2. In general, we use the copied Stable Diffusion encoder which accepts structural conditional input and then perform efficient training via parameter reduction using Kronecker Decomposition first [67] and then low-rank decomposition over the updated weights of the copied Stable Diffusion encoder. To enhance the control from language and different input conditions, we propose a new training strategy with two newly designed loss functions. The details are shown in the sequel. SD Encoder SD Decoder Text Prompt Mask diffusion loss Zero Conv Cross-Attention Supervision Loss Multimodal Conditioning Shared across conditions Updated Weights Copied Encoder Original parameter size: 36 New parameter size: 4+6n Figure 2. Overview of FlexEControl: a decomposed green matrix is shared across different input conditions, significantly enhancing the model\u2019s efficiency. During training, we integrate two specialized loss functions to enable flexible control and to adeptly manage conflicting conditions. In the example depicted here, the new parameter size is efficiently condensed to 4 + 6n, where n denotes the number of decomposed matrix pairs. 2.1. Preliminary We use Stable Diffusion 1.5 [51] in our experiments. This model falls under the category of Latent Diffusion Models (LDM) that encode input images x into a latent representation z via an encoder E, such that z = E(x), and subsequently carry out the denoising process within the latent space Z. An LDM is trained with a denoising objective as follows: Lldm = Ez,c,e,t \ufffd \u2225\u02c6 \u03f5\u03b8(zt | c, t) \u2212\u03f5\u22252\ufffd (1) c) constitute data-conditioning pairs (comprising \ufffd \ufffd where (z, c) constitute data-conditioning pairs (comprising image latents and text embeddings), \u03f5 \u223cN(0, I) , t \u223c Uniform(1, T), and \u03b8 denotes the model parameters. 2.2. Efficient Training for Controllable Text-toImage (T2I) Generation Our approach is motivated by empirical evidence that Kronecker Decomposition [67] effectively preserves critical weight information. We employ this technique to encapsulate the shared relational structures among different input conditions. Our hypothesis posits that by amalgamating diverse conditions with a common set of weights, data utilization can be optimized and training efficiency can be improved. We focus on decomposing and fine-tuning only the cross-attention weight matrices within the U-Net [52] of the diffusion model, where recent works [33] show their dominance when customizing the diffusion model. As depicted in Figure 2, the copied encoder from the Stable Diffusion will accept conditional input from different modalities. During training, we posit that these modalities, being transformations of the same underlying image, share common information. Consequently, we hypothesize that the updated weights of this copied encoder, \u2206W , can be effi2 Figure 3. The visualization of decomposed shared \u201cslow\u201d weights (right image) for single condition case where the input condition (left image) is the depth map and the input text prompt is Car. We took the average over the decomposed shared weights of the last cross-attention block across all attention heads in Stable Diffusion. ciently adapted within a shared decomposed low-rank subspace. 
This leads to: \u2206W = n X i=1 Hi \u2297 \u0000uiv\u22a4 i \u0001 (2) with n is the number of decomposed matrices, ui \u2208R k n \u00d7r and vi \u2208Rr\u00d7 d n , where r is the rank of the matrix which is a small number, Hi are the decomposed learnable matrices shared across different conditions, and \u2297is the Kronecker product operation. The low-rank decomposition ensures a consistent low-rank representation strategy. This approach substantially saves trainable parameters, allowing efficient fine-tuning over the downstream text-to-image generation tasks. The intuition for why Kronecker decomposition works for finetuning partially is partly rooted in the findings of [20, 40, 67]. These studies highlight how the model weights can be broken down into a series of matrix products and thereby save parameter space. As shown in Figure 2, the original weights is 6x6, then decomposed into a series of matrix products. When adapting the training approach based on the decomposition to controllable T2I, the key lies in the shared weights, which, while being common across various conditions, retain most semantic information. For instance, the shared \u201cslow\u201d weights [61] of an image, combined with another set of \u201cfast\u201d low-rank weights, can preserve the original image\u2019s distribution without a loss in semantic integrity, as illustrated in Figure 3. This observation implies that updating the slow weights is crucial for adapting to diverse conditions. Following this insight, it becomes logical to learn a set of condition-shared decomposed weights in each layer, ensuring that these weights remain consistent across different scenarios. The data utilization and parameter efficiency is also improved. 2.3. Enhanced Training for Conditional Inputs We then discuss how to improve the control under multiple input conditions of varying modalities with the efficient training approach. Dataset Augmentation with Text Parsing and Segmentation To optimize the model for scenarios involving multiple homogeneous (same-type) conditional inputs, we initially augment our dataset. We utilize a large language model (gpt-3.5-turbo) to parse texts in prompts containing multiple object entities. The parsing query is structured as: Given a sentence, analyze the objects in this sentence, give me the objects if there are multiple. Following this, we apply CLIPSeg [39] (clipseg-rd64-refined version) to segment corresponding regions in the images, allowing us to divide structural conditions into separate sub-feature maps tailored to the parsed objects. Cross-Attention Supervision For each identified segment, we calculate a unified attention map, Ai, averaging attention across layers and relevant N text tokens: Ai = 1 L L X l=1 N X i=1 JTi \u2208TjKCAl i, (3) where J\u00b7K is the Iverson bracket, CAl i is the cross-attention map for token i in layer l, and Tj denotes the set of tokens associated with the j-th segment. The model is trained to predict noise for image-text pairs concatenated based on the parsed and segmented results. An additional loss term, designed to ensure focused reconstruction in areas relevant to each text-derived concept, is introduced. 
Inspired by [2], this loss is calculated as the Mean Squared Error (MSE) deviation from predefined masks corresponding to the segmented regions: Lca = Ez,t h \u2225Ai(vi, zt) \u2212Mi\u22252 2 i , (4) where Ai(vi, zt) is the cross-attention map between token vi and noisy latent zt, and Mi represents the mask for the ith segment, which is derived from the segmented regions in our augmented dataset and appropriately resized to match the dimensions of the cross-attention maps. Masked Noise Prediction To ensure fidelity to the specified conditions, we apply a condition-selective diffusion loss that concentrates the denoising effort on conceptually significant regions. This focused loss function is applied solely to pixels within the regions delineated by the concept masks, which are derived from the non-zero features of the input structural conditions. Specifically, we set the masks to be binary where non-zero feature areas are assigned value of ones [21], and areas lacking features are set to zero. Because of the sparsity of pose features for this condition, we use the all-ones mask. These masks serve to underscore the regions referenced in the corresponding text prompts: 3 Lmask = Ez,\u03f5,t h \u2225(\u03f5 \u2212\u03f5\u03b8(zt, t)) \u2299M\u22252 2 i , (5) where M represents the union of binary mask obtained from input conditions, zt denotes the noisy latent at timestep t, \u03f5 the injected noise, and \u03f5\u03b8 the estimated noise from the denoising network (U-Net). The total loss function employed is: Ltotal = Lldm + \u03bbcaLca + \u03bbmaskLmask, (6) with \u03bbrec and \u03bbattn set to 0.01. The integration of Lca and Lmask ensure the model will focus at reconstructing the conditional region and attend to guided regions during generation. 3. Experiments 3.1. Datasets In pursuit of our objective of achieving controlled Textto-Image (T2I) generation, we employed the LAION improved aesthetics 6plus [57] dataset for our model training. Specifically, we meticulously curated a subset comprising 5,082,236 instances, undertaking the elimination of duplicates and applying filters based on criteria such as resolution and NSFW score. Given the targeted nature of our controlled generation tasks, the assembly of training data involved considerations of additional input conditions, specifically edge maps, sketch maps, depth maps, segmentation maps, and pose maps. The extraction of features from these maps adhered to the methodology expounded in [68]. 3.2. Evaluation Metrics We employ a comprehensive benchmark suite of metrics including mIoU [50], SSIM [60], mAP, MSE, FID [25], and CLIP Score [24, 46] 1. The details are given in the Appendix. 3.3. Experimental Setup In accordance with the configuration employed in UniControlNet, we utilized Stable Diffusion 1.5 2 as the foundational model. Our model underwent training for a singular epoch, employing the AdamW optimizer [32] with a learning rate set at 10\u22125. Throughout all experimental iterations, we standardized the dimensions of input and conditional images to 512 \u00d7 512. The fine-tuning process was executed on P3 AWS EC2 instances equipped with 64 NVIDIA V100 GPUs. For quantitative assessment, a subset comprising 10,000 high-quality images from the LAION improved aesthetics 6.5plus dataset was utilized. The resizing of input conditions to 512 \u00d7 512 was conducted during the inference process. 1https://github.com/jmhessel/clipscore 2https://huggingface.co/runwayml/stable-diffusion-v1-5 Table 1. 
Text-to-image generation efficiency comparison: FlexEControl shows substantial reductions in memory cost, trainable parameters, and training time, highlighting its improved training efficiency with the same model architecture. Training times are averaged over three runs up to 400 iterations for consistency. Models Memory Cost \u2193 # Params. \u2193 Training Time \u2193 Uni-ControlNet [71] 20.47GB 1271M 5.69 \u00b1 1.33s/it LoRA [27] 17.84GB 1074M 3.97 \u00b1 1.27 s/it PHM [67] 15.08GB 819M 3.90 \u00b1 2.01 s/it FlexEControl (ours) 14.33GB 750M 2.15 \u00b1 1.42 s/it 3.3.1 Structural Input Condition Extraction We start from the processing of various local conditions used in our experiments. To facilitate a comprehensive evaluation, we have incorporated a diverse range of structural conditions, each processed using specialized techniques: \u2022 Edge Maps: For generating edge maps, we utilized two distinct techniques: \u2013 Canny Edge Detector [6] A widely used method for edge detection in images. \u2013 HED Boundary Extractor [63] HolisticallyNested Edge Detection, an advanced technique for identifying object boundaries. \u2013 MLSD [17] A method particularly designed for detecting multi-scale line segments in images. \u2022 Sketch Maps: We adopted a sketch extraction technique detailed in [58] to convert images into their sketch representations. \u2022 Pose Information: OpenPose [8] was employed to extract human pose information from images, which provides detailed body joint and keypoint information. \u2022 Depth Maps: For depth estimation, we integrated Midas [49], a robust method for predicting depth information from single images. \u2022 Segmentation Maps: Segmentation of images was performed using the method outlined in [62], which focuses on accurately segmenting various objects within an image. 3.4. Baselines In our comparative evaluation, we assess T2IAdapter [42], PHM [67], Uni-ControlNet [71], and LoRA [27]. 3.5. Quantitative Results Table 1 highlights FlexEControl\u2019s superior efficiency compared to Uni-ControlNet. It achieves a 30% reduction 4 Input Condition\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Input Condition2\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0Uni-ControlNet\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0LoRA\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0Uni-Control\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0Ours Text Prompt: A\u00a0coffee and the candle Text Prompt: A car is parking Figure 4. Qualitative comparison of FlexEControl and existing controllable diffusion models with multiple heterogeneous conditions. First row: FlexEControl effectively integrates both the segmentation and edge maps to generate a coherent image while Uni-ControlNet and LoRA miss the segmentation map and Uni-Control generates a messy image. Second row: The input condition types are one depth map and one sketch map. FlexEControl can do more faithful generation while all three others generate the candle in the coffee. Figure 5. Qualitative comparison of FlexEControl and existing controllable diffusion models with single condition. Text prompt: A bed. The image quality of FlexEControl is comparable to existing methods and Uni-ControlNet + LoRA, while FlexEControl has much more efficiency. 
in memory cost, lowers trainable parameters by 41% (from 1271M to 750M), and significantly reduces training time per iteration from 5.69s to 2.15s. Table 2 provides a comprehensive comparison of FlexEControl\u2019s performance against Uni-ControlNet and T2IAdapter across diverse input conditions. After training on a dataset of 5M text-image pairs, FlexEControl demonstrates better, if not superior, performance metrics compared to Uni-ControlNet and T2IAdapter. Note that UniControlNet is trained on a much larger dataset (10M textimage pairs from the LAION dataset). Although there is a marginal decrease in SSIM scores for sketch maps and mAP scores for poses, FlexEControl excels in other metrics, notably surpassing Uni-ControlNet and T2IAdapter. This underscores our method\u2019s proficiency in enhancing efficiency and elevating overall quality and accuracy in controllable text-to-image generation tasks. To substantiate the efficacy of FlexEControl in en5 Table 2. Quantitative evaluation of controllability and image quality for single structural conditional inputs. FlexEControl performs overall better while maintaining much improved efficiency. Models Canny MLSD HED Sketch Depth Segmentation Poses FID\u2193 CLIP Score\u2191 (SSIM)\u2191 (SSIM)\u2191 (SSIM)\u2191 (SSIM)\u2191 (MSE)\u2193 (mIoU)\u2191 (mAP)\u2191 T2IAdapter [42] 0.4480 0.5241 90.01 0.6983 0.3156 27.80 0.4957 Uni-Control [45] 0.4977 0.6374 0.4885 0.5509 90.04 0.7143 0.2083 27.80 0.4899 Uni-ControlNet [71] 0.4910 0.6083 0.4715 0.5901 90.17 0.7084 0.2125 27.74 0.4890 PHM [67] 0.4365 0.5712 0.4633 0.4878 91.38 0.5534 0.1664 27.91 0.4961 LoRA [27] 0.4497 0.6381 0.5043 0.5097 89.09 0.5480 0.1538 27.99 0.4832 FlexEControl (ours) 0.4990 0.6385 0.5041 0.5518 90.93 0.7496 0.2093 27.55 0.4963 Table 3. Quantitative evaluation of controllability and image quality on FlexEControl along with its variants and Uni-ControlNet. For UniControlNet, we implement multiple conditioning by adding two homogeneous conditional images after passing through feature extractors. Models Canny MLSD HED Sketch Depth Segmentation Poses FID\u2193 CLIP Score\u2191 (SSIM)\u2191 (SSIM)\u2191 (SSIM)\u2191 (SSIM)\u2191 (MSE)\u2193 (mIoU)\u2191 (mAP)\u2191 Single Conditioning Uni-ControlNet 0.3268 0.4097 0.3177 0.4096 98.80 0.4075 0.1433 29.43 0.4844 FlexEControl (w/o Lca) 0.3698 0.4905 0.3870 0.4855 94.90 0.4449 0.1432 28.03 0.4874 FlexEControl (w/o Lmask) 0.3701 0.4894 0.3805 0.4879 94.30 0.4418 0.1432 28.19 0.4570 FlexEControl 0.3711 0.4920 0.3871 0.4869 94.83 0.4479 0.1432 28.03 0.4877 Multiple Conditioning Uni-ControlNet 0.3078 0.3962 0.3054 0.3871 98.84 0.3981 0.1393 28.75 0.4828 FlexEControl (w/o Lca) 0.3642 0.4901 0.3704 0.4815 94.95 0.4368 0.1405 28.50 0.4870 FlexEControl (w/o Lmask) 0.3666 0.4834 0.3712 0.4831 94.89 0.4400 0.1406 28.68 0.4542 FlexEControl 0.3690 0.4915 0.3784 0.4849 92.90 0.4429 0.1411 28.24 0.4873 hancing training efficiency while upholding commendable model performance, and to ensure a fair comparison, an ablation study was conducted by training models on an identical dataset. We traine FlexEControl along its variants and Uni-ControlNet on a subset of 100,000 training samples from LAION improved aesthetics 6plus. When trained with the identical data, FlexEControl performs better than Uni-ControlNet. The outcomes are presented in Table 3. Evidently, FlexEControl exhibits substantial improvements over Uni-ControlNet when trained on the same dataset. 
This underscores the effectiveness of our approach in optimizing data utilization, concurrently diminishing computational costs, and enhancing efficiency in the text-to-image generation process. To validate FlexEControl\u2019s effectiveness in handling multiple structural conditions, we compared it with UniControlNet through human evaluations. Two scenarios were considered: multiple homogeneous input conditions (300 images, each generated with 2 canny edge maps) and multiple heterogeneous input conditions (500 images, each generated with 2 randomly selected conditions). Results, summarized in Table 4, reveal that FlexEControl was preferred by 64.00% of annotators, significantly outperforming Uni-ControlNet (23.67%). This underscores FlexEControl\u2019s proficiency with complex, homogeneous inputs. Additionally, FlexEControl demonstrated superior alignment with input conditions (67.33%) compared to UniControlNet (23.00%). In scenarios with random heterogeneous conditions, FlexEControl was preferred for overall Table 4. Human evaluation of FlexEControl and Uni-ControlNet under homogenous and heterogeneous structural conditions, assessing both human preference and condition alignment. \u201dWin\u201d indicates FlexEControl\u2019s preference, \u201dTie\u201d denotes equivalence, and \u201dLose\u201d indicates Uni-ControlNet\u2019s preference. Results indicate that under homogeneous conditions, FlexEControl outperforms Uni-ControlNet in both human preference and condition alignment. Condition Type Metric Win Tie Lose Homogeneous Human Preference (%) 64.00 12.33 23.67 Condition Alignment (%) 67.33 9.67 23.00 Heterogeneous Human Preference (%) 9.80 87.40 2.80 Condition Alignment (%) 6.60 89.49 4.00 quality and alignment over Uni-ControlNet. In addition to our primary comparisons, we conducted an additional quantitative evaluation of FlexEControl and Uni-ControlNet. This evaluation focused on assessing image quality under scenarios involving multiple conditions from both the homogeneous and heterogeneous modalities. The findings of this evaluation are summarized in Table 5. FlexEControl consistently outperforms Uni-ControlNet in both categories, demonstrating lower FID scores for better image quality and higher CLIP scores for improved alignment with text prompts. 6 Table 5. Quantitative evaluation of controllability and image quality in scenarios with multiple conditions from heterogeneous and homogeneous modalities for FlexEControl and Uni-ControlNet. The \u2019heterogeneous\u2019 category averages the performance across one Canny condition combined with six other different modalities. The \u2019homogeneous\u2019 category represents the average performance across seven identical modalities (three inputs). Condition Type Baseline FID\u2193 CLIP Score\u2191 Heterogeneous Uni-ControlNet 27.81 0.4869 FlexEControl 27.47 0.4981 Homogeneous Uni-ControlNet 28.98 0.4858 FlexEControl 27.65 0.4932 3.6. Qualitative Results We present qualitative results of our FlexEControl under three different settings: single input condition, multiple heterogeneous conditions, and multiple homogeneous conditions, illustrated in Figure 5, Figure 4, and Figure 6, respectively. The results indicate that FlexEControl is comparable to baseline models when a single condition is input. However, with multiple conditions, FlexEControl consistently and noticeably outperforms other models. 
Particularly, under multiple homogeneous conditions, FlexEControl excels in generating overall higher-quality images that align more closely with the input conditions, surpassing other models. 4. Related Work FlexEControl is an instance of efficient training and controllable text-to-image generation. Here, we overview modeling efforts on efficient training aimed at reducing parameters and memory cost, and on controllable T2I generation. Efficient Training Prior work has proposed efficient training methodologies both for pretraining and fine-tuning. These methods have established their efficacy across an array of language and vision tasks. One of these explored strategies is Prompt Tuning [35], where trainable prompt tokens are appended to pretrained models [22, 29, 30, 56]. These tokens can be added exclusively to input embeddings or to all intermediate layers [37], allowing for nuanced model control and performance optimization. Low-Rank Adaptation (LoRA) [27] is another innovative approach that introduces trainable rank decomposition matrices for the parameters of each layer. LoRA has exhibited promising fine-tuning ability on large generative models including diffusion models [19], indicating its potential for broader application. Furthermore, Adapters insert lightweight adaptation modules into each layer of a pretrained transformer [26, 53]. This method has been successfully extended across various setups [16, 42, 69], demonstrating its adaptability and practicality. Other approaches including post-training model compression [14] facilitate the transition from a fully optimized model to a compressed version \u2013 either sparse [15], quantized [18,36], or both. This methodology was particularly helpful for parameter quantization [13]. Different from these methodologies, our work puts forth a new unified strategy that aims to enhance the efficient training of text-to-image diffusion models by leveraging low-rank structure. Our proposed method integrates principles from these established techniques to offer a fresh perspective on training efficiency, adding to the rich tapestry of existing solutions in this rapidly evolving field. Controllable Text-to-Image Generation Recent developments in the text-to-image generation domain strive for more control over image generation, enabling more targeted, stable, and accurate visual outputs. Several models, such as T2I-Adapter [42] and Composer [28], have emerged to enhance image generation following the semantic guidance of text prompts and multiple structural conditional controls. However, existing methods struggle to handle multiple conditions from the same modality, especially when they conflict (e.g., multiple segmentation maps), while also following the guidance of text prompts. Recent studies also highlight challenges in controllable text-to-image generation (T2I), such as omission of objects in text prompts and mismatched attributes [3,34], showing that current models struggle to handle controls from different conditions. To address these issues, the Attend-and-Excite method [10] refines attention regions to ensure distinct attention across separate image regions. ReCo [64], GLIGEN [38], and Layout-Guidance [11] allow for image generation informed by bounding boxes and regional descriptions. Our work improves the model\u2019s controllability by proposing a new training strategy. 5. 
Conclusion This work introduces a unified approach that improves both the flexibility and efficiency of diffusion-based text-to-image generation. Our experimental results demonstrate a substantial reduction in memory cost and trainable parameters without compromising inference time or performance. Future work may explore more sophisticated decomposition techniques, furthering the pursuit of an optimal balance between model efficiency, complexity, and expressive power. Figure 6. Qualitative performance of FlexEControl when conditioning on diverse compositions of multiple modalities. Each row in the figure corresponds to a unique type of condition, with the text prompts and conditions as follows: (first row) two canny edge maps with the prompt A motorcycle in the forest, (second row) two depth maps for A car, (third row) two sketch maps depicting A vase with a green apple, (fourth row) dual canny edge maps for Stormtrooper\u2019s lecture at the football field, (fifth row) two segmentation maps visualizing A deer in the forests, (sixth row) two MLSD edge maps for A sofa in a desert, and (seventh row) one segmentation map and one edge map for A bird. These examples illustrate the robust capability of FlexEControl to effectively utilize multiple multimodal conditions, generating images that are not only visually compelling but also faithfully aligned with the given textual descriptions and input conditions."
16
+ }
intro_8K/test_introduction_short_2405.04925v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04925v1",
3
+ "title": "The many colors of the TNG100 simulation",
4
+ "abstract": "We apply the 3D dust radiative transfer code SKIRT to the low-redshift\n($z\\leq0.1$) galaxy population in the TNG100 cosmological simulation, the\nfiducial run of the IllustrisTNG project. We compute global fluxes and spectral\nenergy distributions (SEDs) from the far-ultraviolet to the sub-millimeter for\n$\\approx\\,$60 000 galaxies. Our post-processing methodology follows the study\nof Tr\\v{c}ka et al. (2022) of the higher-resolution TNG50 simulation. We verify\nthat TNG100 reproduces observational luminosity functions at low redshifts to\nexcellent precision, unlike TNG50. Additionally, we test the realism of our\nTNG100 plus SKIRT fluxes by comparing various flux and color relations to data\nfrom the GAMA survey. TNG100 broadly reproduces the observed distributions, but\nwe predict ultraviolet colors that are too blue by $\\approx\\,$0.4 mag, possibly\nrelated to the extinction in the star-forming regions subgrid model not being\nselective enough. Furthermore, we find that the simulated galaxies exhibit\nmid-infrared fluxes elevated by up to $\\approx\\,$0.5 mag that we attribute to\noverly effective stochastic heating of the diffuse dust. All synthetic\nbroadband fluxes and SEDs are made publicly available in three orientations and\nfour apertures, and can readily be used to study TNG100 galaxies in a mock\nobservational fashion.",
5
+ "authors": "Andrea Gebek, Ana Tr\u010dka, Maarten Baes, Marco Martorano, Annalisa Pillepich, Anand Utsav Kapoor, Angelos Nersesian, Arjen van der Wel",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "astro-ph.GA",
9
+ "cats": [
10
+ "astro-ph.GA"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Cosmological hydrodynamical simulations that emulate the assem- bly and evolution of thousands of galaxies have proven an indispens- able tool to understand many facets of the observed galaxy population (Somerville & Dav\u00e9 2015; Vogelsberger et al. 2020a). Assessing the realism and reliability of cosmological simulations by comparing their outcome to observations is critical: a solid baseline agreement is necessary in order to draw meaningful conclusions from the sim- ulations. Furthermore, discrepancies can be used to unveil gaps in our understanding of galaxy formation and evolution. However, com- paring the simulated and observed galaxy populations comes with a major caveat: observations of galaxies only trace the light emitted by stellar populations (and partially reprocessed by dust and gas), while in simulations only the \u2018physical\u2019 parameters of the stellar popu- lations and interstellar medium (such as masses and metallicities) are known. Tracking the radiation field in cosmological simulations is computationally prohibitive, unless the simulation run only cov- ers the high-redshift regime (\ud835\udc67\u22735) and only few wavelength bins are considered as is done in the SPHINX (Rosdahl et al. 2018) or THESAN (Kannan et al. 2022) simulations. Comparing the simulated and observed galaxy populations in the \u2018physical\u2019 realm (e.g. the stellar mass function or the main sequence of star-forming galaxies) bears the main caveat that all physical prop- erties need to be inferred from observations. Such retrievals of phys- \u2605E-mail: [email protected] ical parameters sensitively depend on the adopted model used in e.g. the SED fitting process, relying on simplified star-formation histo- ries and dust-to-star geometries (Pacifici et al. 2023). As an example cautionary note, the long-standing disagreement in the star-forming main sequence for 0.5 < \ud835\udc67< 3 with the simulated galaxy population offset to lower star-formation rates (Mitchell et al. 2014; Leja et al. 2015; Furlong et al. 2015; Tomczak et al. 2016; Donnari et al. 2019; Katsianis et al. 2020) could only recently be remedied with more sophisticated SED fitting methods (Nelson et al. 2021; Leja et al. 2022). As a complementary approach, it is therefore critical to move the simulated galaxies into the observational realm by postprocessing them with radiative transfer. This method circumvents any uncer- tainties in the parameter inference from observations (e.g. choice of free parameters and prior ranges), but requires a postprocessing scheme based on the stars and gas of the simulated galaxies that comes with its own caveats (e.g. choice of dust allocation recipe if dust is not modelled in the cosmological simulation). As a substan- tial fraction of the light emitted by stellar populations is reprocessed by dust and gas in the interstellar medium (Popescu & Tuffs 2002; Viaene et al. 2016; Bianchi et al. 2018), methods to solve for the transport of radiation are required. Since dust efficiently scatters and absorbs starlight at ultraviolet (UV) and optical wavelengths, Monte Carlo radiative transfer (MCRT) methods are generally used to ac- curately simulate the radiation field in galaxies taking the 3D dust and stellar distributions into account. Using such MCRT methods, synthetic broadband fluxes and images for a large variety of cos- \u00a9 2024 The Authors arXiv:2405.04925v1 [astro-ph.GA] 8 May 2024 2 A. Gebek et al. mological simulations such as EAGLE (Camps et al. 2016; Trayford et al. 
2017), SIMBA (Narayanan et al. 2021); AURIGA (Kapoor et al. 2021; Kapoor et al. in prep.); ARTEMIS (Camps et al. 2022), Illus- trisTNG (Rodriguez-Gomez et al. 2019; Schulz et al. 2020; Vogels- berger et al. 2020b; Tr\u010dka et al. 2022; Popping et al. 2022; Costantin et al. 2023; Guzm\u00e1n-Ortega et al. 2023; Baes et al. 2024a; Bottrell et al. 2024), and NewHorizon (Jang et al. 2023) have been calculated and compared to observational data. In Tr\u010dka et al. (2022), broadband fluxes from the UV to the far- infrared (FIR) have been computed for a sample of \u223c14 000 galaxies at low redshift (\ud835\udc67\u22640.1) from the TNG50 simulation. Comparing the simulated fluxes to observational low-redshift luminosity func- tions (LFs), Tr\u010dka et al. (2022) found that the TNG50 LFs at all wavelengths exceed the observational estimates. Tr\u010dka et al. (2022) attribute this tension mostly to the subgrid parameter calibration in the IllustrisTNG project. As some of the physical processes cannot be resolved by cosmological simulations, these simulations typically rely on a number of subgrid parameters (e.g. the strength of feed- back from active galactic nuclei) to reproduce some important galaxy statistics (e.g. the stellar mass-halo mass relation, Schaye et al. 2015; Kugel et al. 2023). In the case of the IllustrisTNG project, these sub- grid parameters were chosen at the resolution of the fiducial TNG100 run and then left constant for other simulation runs. This leads to small systematic resolution-dependent differences in the outcomes of the various IllustrisTNG runs (see Appendices of Pillepich et al. 2018a,b, 2019 for more details). In this study, we want to test if the IllustrisTNG subgrid choices truly caused the discrepancies between TNG50 and observations found by Tr\u010dka et al. (2022). To this end, we apply the postprocessing method of Tr\u010dka et al. (2022) to the TNG100 simulation, the fidu- cial run of the IllustrisTNG simulation suite. Following Tr\u010dka et al. (2022), we apply the MCRT code SKIRT (Baes et al. 2011; Camps & Baes 2015, 2020) to a stellar mass-limited sample of \u224860 000 TNG100 galaxies at \ud835\udc67= 0 and \ud835\udc67= 0.1 . We generate broadband fluxes in 53 broadband filters ranging from the GALEX far-UV (FUV) to the ALMA band 6 and low-resolution SEDs ranging from 0.1\u22122000 \ud835\udf07m with \ud835\udc45= 39 for this sample of TNG100 galaxies. To reveal potential biases in our postprocessing method and to further assess the realism of the cosmological simulation in the ob- servational realm, we also explore different galaxy flux-flux and color-color relations over a large wavelength range. Since these rela- tions trace the underlying distributions and scaling relations of phys- ical properties (e.g. specific star-formation rate, dust mass, age), they provide an important testbed for the cosmological simulation plus radiative transfer postprocessing approach. This provides a com- plementary approach of assessing the simulation\u2019s realism, as the simulations are typically evaluated for their ability to reproduce the physical properties of the galaxy population inferred from observa- tions (e.g. Dav\u00e9 et al. 2017; De Rossi et al. 2017; Torrey et al. 2019; Rosito et al. 2019; Nelson et al. 2021). The outline of this paper is as follows: We describe the cosmo- logical simulation as well as the SKIRT postprocessing method in Section 2, and compare TNG100 LFs to observations in Section 3. 
We proceed by comparing the simulated fluxes to observational data from the GAMA survey in terms of flux-flux and color-color rela- tions (Section 4), and summarize our results in Section 5. We adopt a flat \u039bCDM cosmology, with parameters measured by the Planck satellite (Planck Collaboration et al. 2016), consistent with the Illus- trisTNG cosmology. We use the AB magnitude system (Oke 1971) throughout this study.",
15
+ "main_content": "2.1 IllustrisTNG The IllustrisTNG suite (Pillepich et al. 2018b; Springel et al. 2018; Nelson et al. 2018; Naiman et al. 2018; Marinacci et al. 2018) is a set of cosmological, magnetohydrodynamical simulations run using the moving-mesh code AREPO (Springel 2010). The simulation suite consists of three different volumes with box sizes of approximately 50, 100, and 300 comoving Mpc, each realized with three to four different resolutions. All of these simulations were run with the same physical model, with the subgrid parameters chosen for the fiducial TNG100-1 run (hereafter \u2018TNG100\u2019), which is the highest-resolution run for the 100-cMpc box. Unlike in the EAGLE suite (Schaye et al. 2015), the subgrid parameters were not recalibrated for other IllustrisTNG simulations (at different resolutions and box sizes). For the cosmological parameters, the simulations use the 2015 results measured by the Planck satellite (Planck Collaboration et al. 2016), i.e. \u03a9\ud835\udc5a= 0.3089, \u03a9\ud835\udc4f= 0.0486, \u03a9\u039b = 0.6911, \ud835\udc3b0 = 100 \u210ekm s\u22121Mpc\u22121 with \u210e= 0.6774). In the following, we briefly describe the aspects of IllustrisTNG and its galaxy formation model (Weinberger et al. 2017; Pillepich et al. 2018a) that are most relevant to this study. TNG100 simulates a cube with box size of 110.7 comoving Mpc from \ud835\udc67= 127 to \ud835\udc67= 0. This volume is resolved with 18203 baryonic and dark matter particles, corresponding to a mean particle mass of 1.4 \u00d7 106 \ud835\udc40\u2299and 7.5 \u00d7 106 \ud835\udc40\u2299, respectively. Galaxies are identified as gravitationally bound substructures using the SUBFIND algorithm (Springel et al. 2001). Since molecular clouds cannot be resolved in the simulation, star formation is modelled stochastically for gas with \ud835\udc5bH > 0.106 cm\u22123 according to the two-phase model of Springel & Hernquist (2003). Stellar populations are modelled with a Chabrier initial mass function (Chabrier 2003). These star particles subsequently affect the surrounding interstellar medium (ISM) via metal enrichment as well as feedback from supernovae explosions. The IllustrisTNG model furthermore incorporates gas radiative processes (including metal-line cooling and heating in an evolving UV background), formation and merging of supermassive black holes, as well as feedback from active galactic nuclei in a thermal and a kinetic mode. In Tr\u010dka et al. (2022), we calculated broadband fluxes (in 53 filters) as well as low-resolution SEDs (at 387 wavelengths between 0.1 and 2000 \ud835\udf07m) for the TNG50-1 and the lower-resolution TNG50-2 simulations (Nelson et al. 2019b; Pillepich et al. 2019, see Table 1 for an overview of the different simulation resolutions) and publicly released them on the IllustrisTNG website1. This data is available at two snapshots (099 and 091), corresponding to \ud835\udc67= 0 and \ud835\udc67= 0.1, for all galaxies above a stellar mass threshold. The stellar mass threshold of 108 M\u2299ensures that the galaxies are resolved by enough (\u2273102) star particles for the radiative transfer postprocessing. We remark that we always use the stellar mass within two stellar halfmass radii for the simulation stellar masses (as opposed to the total graviationally bound stellar mass for instance), which is available from the IllustrisTNG galaxy catalogue. With the present study, we add the same data products (i.e. 
broadband fluxes and low-resolution SEDs) for TNG100 at redshifts \ud835\udc67= 0 and \ud835\udc67= 0.1 for 61 076 galaxies with \ud835\udc40\u2605> 108.5 M\u2299(we choose a higher stellar mass threshold for TNG100 due to the lower particle mass resolution compared to the TNG50 runs) to the database. For the galaxy samples of all three simulations, subhalos that are flagged 1 https://www.tng-project.org/ MNRAS 000, 1\u201315 (2024) The colors of TNG100 3 Simulation \ud835\udc49[cMpc3] \ud835\udc5ab [M\u2299] \ud835\udc40min \u2605 [M\u2299] \ud835\udc41\ud835\udc67=0 gal \ud835\udc41\ud835\udc67=0.1 gal TNG100-1 106.53 1.4 \u00d7 106 108.5 30 712 30 364 TNG50-1 51.73 8.5 \u00d7 104 108 7 375 7 302 TNG50-2 51.73 6.8 \u00d7 105 108 5 669 5 665 Table 1. Runs of the IllustrisTNG suite that we consider in this study. For each simulation, we list the volume, the target baryon mass (the resolution) and the stellar mass (more specifically, the stellar mass in two stellar half-mass radii) threshold which defines the galaxy samples. \ud835\udc41gal indicates the number of galaxies (in the snapshots at \ud835\udc67= 0 and \ud835\udc67= 0.1) that conform to our sample selection criteria. as being not of cosmological origin are excluded2. An overview of the different sample definitions and galaxy sample sizes is shown in Table 1. We caution that the chosen stellar mass thresholds are relatively low, meaning that the postprocessing results for the lowestmass galaxies could be unreliable for TNG100-1 and TNG50-2. 2.2 Radiative transfer postprocessing The methodology for the radiative transfer postprocessing adopted here for TNG100 galaxies is exactly the same as in Tr\u010dka et al. (2022), which, in turn, is based on Camps et al. (2016, 2018) and Kapoor et al. (2021). We briefly summarize the main steps here and refer the reader to Tr\u010dka et al. (2022) for more details. We use the 3D dust MCRT code SKIRT (Baes et al. 2011; Camps & Baes 2015, 2020) to generate broadband fluxes over a large (UV-FIR) wavelength range. We simulate the emission of photon packets from evolved stellar populations as well as star-forming regions. The photon packets are then propagated through the dusty ISM, where they get absorbed and scattered. Furthermore, the dust grains are stochastically heated and subsequently emit IR radiation (Camps et al. 2015). Finally, the photon packets are recorded in synthetic instruments that emulate different apertures, orientations, and broadband filters. We briefly describe the different components of the SKIRT simulations and how they are imported from IllustrisTNG in the following. \u2022 Evolved stellar populations: All star particles with ages above 10 Myr are treated as evolved stellar populations. We model their SED using the Bruzual & Charlot (2003) template library with a Chabrier IMF. All parameters to model the emission of evolved stellar populations (positions, current masses, metallicites, ages, and smoothing lengths) are directly available from the IllustrisTNG snapshot data. \u2022 Star-forming regions: Star particles with ages below 10 Myr are modelled as star-forming regions, i.e. young stars that are still partially enshrouded within their dusty birth clouds. We use the template library MAPPINGS-III (Groves et al. 2008) to model their SED, which contains the light contribution from the young stellar population as well as nebular and dust emission. 
In addition to the positions, metallicities, and smoothing lengths, this template library has a number of parameters that are not directly available from the snapshot data. These are the star-formation rates (calculated as initial mass of the star particle divided by its age), ISM pressure (set to a constant value of \ud835\udc43/\ud835\udc58\ud835\udc35= 105 K cm\u22123), and compactness parameter (randomly sampled from a Gaussian distribution). Lastly, the photodissociation region (PDR) covering factor is calculated as 2 The IllustrisTNG subhalo finder sometimes falsely identifies baryonic fragments or clumps as galaxies. The IllustrisTNG galaxy catalogue (Nelson et al. 2019a) contains a flag that indicates if a subhalo is probably not of cosmological origin, in which case the \u2018SubhaloFlag\u2019 field is set to zero. We omit these objects from the postprocessing analysis. \ud835\udc53PDR = \ud835\udc52\u2212\ud835\udc61/\ud835\udf0f, with \ud835\udc61being the age of the star particle and \ud835\udf0fa free parameter in the radiative transfer postprocessing scheme. \u2022 Diffuse dust: As IllustrisTNG does not track the dust content in the ISM, we assign dust to gas cells based on their metallicity. Specifically, we use the criterion of Torrey et al. (2012, 2019) to select dust-containing gas cells based on their temperature and mass density. This criterion separates the hot circumgalactic medium (CGM) from the ISM. While we do not assign dust to the CGM gas cells, the dust mass in all other cells is scaled to their metal masses, with the dust-to-metal ratio \ud835\udc53dust being a free parameter of the postprocessing scheme. All other parameters that control the diffuse dust (positions, mass densities, temperatures, and metallicities) are directly available from the snapshot data. For the optical properties of the diffuse dust, we use the THEMIS dust model from Jones et al. (2017). The dusty medium is discretised on an octtree grid (Saftly et al. 2013, 2014) with a maximum subdivision level of twelve. The SKIRT postprocessing simulations are performed for a defined spatial domain. In our case, we use a cube with side length ten times stellar half-mass radii, centered on the subhalo positions. Additionally, we consider only star particles within a sphere of radius five stellar half-mass radii3 for the postprocessing. Lastly, we use 5 \u00d7 107 photon packets to perform the radiative transfer simulations. In Tr\u010dka et al. (2022), the free parameters \ud835\udf0fand \ud835\udc53dust were calibrated using a test sample of TNG50 galaxies which are compared to low-redshift multiwavelength observational data from the DustPedia archive4 (Davies et al. 2017; Clark et al. 2018). Using various luminosity and color scaling relations, the default parameters were determined to \ud835\udf0f= 3 Myr and \ud835\udc53dust = 0.2. We kept these parameters unchanged for the postprocessing of TNG100 galaxies. We have verified that the TNG100 galaxies exhibit a similar behaviour compared to TNG50 on the scaling relations that were used to calibrate the free parameters. 2.3 Simulation products The main output of the radiative transfer postprocessing are broadband fluxes in 53 filters, from the UV (GALEX FUV) to the ALMA band 6. These fluxes are available for all galaxies in TNG100-1 (as well as TNG50-1 and TNG50-2 already presented by Tr\u010dka et al. 2022) above the stellar mass threshold (see Table 1), at redshifts 0 and 0.1. 
The broadband flux is given both in the galaxy rest-frame (in absolute AB magnitudes) and in the observational frame5 (in Jy). Additionally, we provide low-resolution SEDs (\ud835\udc45= 39) in the observational frame (in Jy) for all TNG100-1 galaxies in the base sample. All data are available in three different galaxy orientations (random, edge-on, and face-on) as well as four different circular apertures (with aperture radii of 10 kpc, 30 kpc, two stellar half-mass radii, and five stellar half-mass radii). 3 This ensures that we capture most of the starlight emitted by the galaxy. To test this more quantitatively, we compared half-light sizes derived by Baes et al. (2024b) for massive (\ud835\udc40\u2605\u2265109.8 M\u2299) TNG50-1 galaxies to their halfmass sizes. The bluest available band (LSST u) shows the highest half-light to half-mass size ratios, with 28.3 % (3.47 %) of all galaxies having a half-light size larger than two (five) half-mass radii. Hence, there is a sizeable fraction of galaxies for which we miss some starlight in the bluest optical and the UV filters, but we remark that our maximum aperture of five stellar half-mass radii is comparable or larger than the observational apertures used in this paper (see Figure 2). 4 http://dustpedia.astro.noa.gr/ 5 For the data in the observational frame, the SKIRT instrument is placed at 20 Mpc for redshift zero or at the corresponding redshift for \ud835\udc67= 0.1. MNRAS 000, 1\u201315 (2024) 4 A. Gebek et al. 3 GALAXY LUMINOSITY FUNCTIONS We begin by investigating low-redshift luminosity functions in various broadband filters. As in Tr\u010dka et al. (2022), we use the rest-frame magnitudes (which we convert into solar luminosities) for our main galaxy sample which combines the \ud835\udc67= 0 and \ud835\udc67= 0.1 snapshots. We use a default orientation (random) throughout this work and adopt an aperture of five stellar half-mass radii in Section 3, the default choice for the simulated LFs in Tr\u010dka et al. (2022). Since the observational LFs are thought to be representative of the local galaxy population, we do not mimic any observational selection effect (as instead done and described in Section 4.2). In Tr\u010dka et al. (2022) (their figure 9), luminosity functions of the TNG50-1 simulation were found to overestimate the observational estimates in all filters and at all luminosities from the UV to the farIR. At the bright end, this discrepancy can be mitigated by choosing a significantly smaller aperture (10 kpc instead of the default five stellar half-mass radii), but this value is less representative of observational apertures and does not resolve the tension for galaxies fainter than the knee of the luminosity functions. Tr\u010dka et al. (2022) found that the discrepancy is largely mitigated when using the lower-resolution TNG50-2 simulation. Indeed, within the IllustrisTNG model, the resolution improvement from the fiducial TNG100 resolution to TNG50 results in somewhat larger galaxy masses and SFRs (Pillepich et al. 2018a,b, 2019; Donnari et al. 2019). We test this statement here explicitly by investigating the LFs of the TNG100 simulation, which is the fiducial resolution at which the subgrid parameters were chosen. We show the low-redshift luminosity functions for TNG100 in Figure 1. The observational estimates from various low-redshift surveys6 are equivalent to the ones from Tr\u010dka et al. 
(2022) (see their section 3.2.1 for more details), which are corrected to \u210e= 0.6774 to be consistent with the cosmological parameters of IllustrisTNG. We also include the LFs from the TNG50-1 (hereafter \u2018TNG50\u2019) simulation to highlight the convergence behaviour of the cosmological simulations. To not overcrowd the figure TNG50-2 is not shown, but we note that the TNG50-2 LFs closely align with the TNG100-1 results, meaning that the LFs are converged with simulation box size. In Figure 1, the Poisson error for the simulated LFs is shown as shaded area, and luminosity bins with fewer than ten galaxies are marked. We cut the calculation of the simulated LFs at a minimum luminosity to ensure that the shown LFs are complete. This minimum luminosity is calculated as the 90 % luminosity percentile in the lowest 5 % stellar mass bin. The number of galaxies above this luminosity threshold are noted in each panel for TNG100 and TNG50 separately. Figure 1 shows how the agreement between TNG100 and observational LFs improves compared to TNG50. In fact, the TNG100 LFs provide an excellent match to the observational data in the nearUV (NUV) and FIR bands. In the FUV, optical and near-infrared (NIR) bands (GALEX FUV, SDSS and UKIDSS filters), the faint ends and knees of the observed LFs are also precisely reproduced in TNG100. At the bright ends TNG100 overestimates the observational estimates, but we note that in this regime there are also large differences across the observational datasets. As an example, the LFs in the SDSS filters from Loveday et al. (2012) are given in Petrosian apertures, while Driver et al. (2012) use Kron apertures. Even though 6 Specifically, we use LF data at \ud835\udc67\u22720.1 from the GALEX MIS (Budav\u00e1ri et al. 2005), GALEX AIS (Wyder et al. 2005), GAMA (Driver et al. 2012), SDSS (Loveday et al. 2012), SDSS + UKIDSS LAS + MGC redshift survey (Hill et al. 2010), H-ATLAS (Dunne et al. 2011), Planck ERCSC (Negrello et al. 2013), and Spitzer Data Fusion database + HerMES (Marchetti et al. 2016) surveys. both studies use data from the GAMA survey, the differences in the LFs reach almost an order of magnitude for the brightest luminosity bins (see also Hill et al. 2011 and Bernardi et al. 2013 for a discussion on this issue). For a detailed discussion on the impact of the aperture for the simulated LFs, we refer the reader to section 4 in Tr\u010dka et al. (2022). We conclude that, as suggested by Tr\u010dka et al. (2022), the way how the subgrid parameters are chosen in the IllustrisTNG model (at the fiducial TNG100 resolution) indeed caused the discrepancy in the LFs for TNG50. Acknowledging observational uncertainties at the bright end related to aperture choices, the agreement between TNG100 and low-redshift observational LFs is excellent. 4 UV-SUBMM BROADBAND FLUXES: COMPARISON WITH GAMA To assess galaxy scaling relations and distributions in the observational realm, we continue by analyzing different flux-flux and colorcolor relations over a large wavelength range. As opposed to analyzing scaling relations in the physical realm, this analysis provides a complementary approach of assessing the simulation\u2019s realism. We also use these relations to evaluate the accuracy and reveal potential systematics in our radiative transfer postprocessing scheme. We only analyze TNG100 in this section, and refer the reader to Appendix A for a comparison to TNG50. 
We first detail the observational dataset in Section 4.1 and describe how we homogenize the observational and simulated galaxy samples in Section 4.2, before discussing the results for the flux-flux and color-color relations in Sections 4.3 and 4.4, respectively. 4.1 Observational data from GAMA The Galaxy and Mass Assembly (GAMA) survey (Driver et al. 2009, 2011; Liske et al. 2015; Baldry et al. 2018; Driver et al. 2022) is a spectroscopic survey of galaxies with the AAOmega spectrograph in the optical wavelength range, mounted on the Anglo Australian Telescope (AAT). The survey consists of five different fields with varying input catalogues (used for target selection), observing a total area of 286 deg2. The most recent data release (DR4) of GAMA (Driver et al. 2022) contains spectra, spectroscopic redshifts, X-ray to FIR photometry from various other surveys7, as well as derived data such as stellar masses and rest-frame fluxes for some \u223c300 000 galaxies. All accessed data used in this study is part of the GAMA data release 4, described in Driver et al. (2022). Due to its large sample size of low-redshift galaxies (\ud835\udc67\u22720.6) and large wavelength coverage of photometric data, the GAMA database provides an excellent observational sample to compare to the simulated photometric data from TNG100. The GAMA project consists of three phases, which are different in their target selection as the input catalogues were updated with more recent photometric data from other surveys over time. As the three equatorial fields (labelled G09, G12, and G15) observed as part of GAMA II have the highest availability of derived data products (importantly, those are the only galaxies within GAMA with matched-aperture photometry), we only use this dataset throughout 7 Specifically, the GAMA database includes photometry from the XMMXXL, GALEX, SDSS, KiDS, VIKING, WISE, and Herschel-ATLAS surveys. MNRAS 000, 1\u201315 (2024) The colors of TNG100 5 7 7 8 8 9 9 10 10 11 11 12 12 10 5 10 4 10 3 10 2 10 1 [cMpc 3dex 1] NTNG50 = 7653 NTNG100 = 21627 GALEX FUV TNG100 TNG50 7 7 8 8 9 9 10 10 11 11 12 12 NTNG50 = 7810 NTNG100 = 22654 GALEX NUV Budavari+ 2005 Wyder+ 2005 Driver+ 2012 7 7 8 8 9 9 10 10 11 11 12 12 NTNG50 = 9712 NTNG100 = 34880 SDSS u 10 5 10 4 10 3 10 2 10 1 [cMpc 3dex 1] NTNG50 = 11400 NTNG100 = 42348 SDSS g NTNG50 = 12022 NTNG100 = 45269 SDSS r NTNG50 = 11973 NTNG100 = 44794 SDSS i 10 5 10 4 10 3 10 2 10 1 [cMpc 3dex 1] NTNG50 = 11999 NTNG100 = 51233 SDSS z NTNG50 = 11823 NTNG100 = 50492 UKIDSS Y Hill+ 2010 Loveday+ 2012 NTNG50 = 12648 NTNG100 = 47711 UKIDSS J 10 5 10 4 10 3 10 2 10 1 [cMpc 3dex 1] NTNG50 = 12109 NTNG100 = 51240 UKIDSS H NTNG50 = 12626 NTNG100 = 47597 UKIDSS K NTNG50 = 9453 NTNG100 = 34239 SPIRE 250 7 8 9 10 11 12 log10 (L/L ) 10 5 10 4 10 3 10 2 10 1 [cMpc 3dex 1] NTNG50 = 9583 NTNG100 = 35598 SPIRE 350 7 8 9 10 11 12 log10 (L/L ) NTNG50 = 9102 NTNG100 = 34001 SPIRE 500 7 8 9 10 11 12 log10 (L/L ) NTNG50 = 8790 NTNG100 = 29155 TIR Dunne+ 2011 Negrello+ 2014 Marchetti+ 2016 Figure 1. Luminosity functions in 14 bands and the total infrared (TIR). Continuous lines mark the simulation results for TNG50 (blue) and TNG100 (red), for \ud835\udc67\u22640.1. The shaded area corresponds to the Poisson error, crosses mark luminosity bins with fewer than ten galaxies. The simulated luminosity functions are computed only above a completeness limit, see text for details. The number of simulated galaxies above this completeness limit is shown in each panel. 
Observational data are shown as various markers. The TNG100 LFs are in excellent agreement with the observations. MNRAS 000, 1\u201315 (2024) 6 A. Gebek et al. this study. The target selection is defined as having an apparent Petrosian \ud835\udc5f-band magnitude below 19.8 mag in SDSS DR7. This limit is the same for all three fields. For the analysis in this paper, we use various catalogues from the GAMA database, which we describe in this section. To select only galaxies that are part of the main GAMA II survey, we use the TilingCat v46 catalogue from the EqInputCat data management unit (DMU). Objects that are part of the main survey have a survey class of four or higher. We enforce this criterion for our GAMA sample. We use broadband fluxes from the LambdarCat v01 catalogue in the LambdarPhotometry DMU. In this catalogue, the fluxes are extracted using matched aperture photometry with the Lambdar code (Wright et al. 2016). Lambdar measures the photometry given an input aperture (in this case, the apertures come from a combination of SExtractor (Bertin & Arnouts 1996) runs on the SDSS \ud835\udc5fand VIKING \ud835\udc4d-bands of imaging as well as visual inspection) and performs aperture convolution, deblending, correction, and sky substraction. The fluxes are available for the GALEX, SDSS, VISTA, WISE, PACS, and SPIRE bands. The fluxes are corrected for Milky Way extinction but not K-corrected, hence these are fluxes in the observational frame (as opposed to rest-frame fluxes). As we want to limit the observational galaxy sample in redshift, we also download redshift estimates from the DistanceFrames v14 catalogue in the LocalFlowCorrection DMU (Baldry et al. 2012). We use the redshifts from the Tonry flow model (Tonry et al. 2000), which equals the cosmic microwave background redshift for \ud835\udc67\u22650.03 and takes into account local flows at lower redshifts. Following the documentation of this DMU, we impose \ud835\udc67\u22650.002 as lower-redshift objects are potentially not galaxies. We also impose \ud835\udc67\u22640.1 to not extrapolate our simulation results into higher redshift ranges. Only galaxies with a high-quality redshift (redshift flag must be three or larger) are kept in our sample. We also impose a stellar mass limit (\ud835\udc40\u2605\u2265108.5 M\u2299) to the GAMA galaxies, the same stellar mass limit of our TNG100 galaxy sample. Stellar masses8 are obtained from the StellarMassesLambdar v20 catalogue in the StellarMasses DMU. These are inferred from SED fits to the Lambdar aperture photometry (see Taylor et al. 2011 for details). The cuts in survey class (\u22654), redshift flag (\u22653), redshift (0.002 \u2264\ud835\udc67\u22640.1), and stellar mass (\ud835\udc40\u2605\u2265108.5 M\u2299) lead to a base sample of 17 932 galaxies contained in the GAMA dataset. We note that not all galaxies in this GAMA catalogue have detected broadband fluxes in all filters. The GAMA base sample is then cut further depending on the broadbands that are involved in a specific flux-flux or color-color plot. We first impose SNR cuts on all involved filters to ensure that the GAMA galaxies have reliable fluxes. Specifically, we discard all galaxies with SNR < 3 in any of the involved filters. In a second step, we want to define a flux threshold that broadly corresponds to a volume-limited sample. The same threshold can then be applied to the simulated galaxies to ensure a fair comparison. 
We noted that the GAMA galaxies exhibit noise distributions with outliers multiple orders of magnitude below the median, even after this SNR cut. This leads to some galaxies having very low flux values, which are not representative of the typical sensitivity of the respective surveys. Hence, we compute the 10 %-percentiles of the GAMA galaxies with 8 Different stellar mass estimates exist in this GAMA table. While the TNG100 stellar masses would correspond to the sum of the GAMA stellar and remnant masses, we just consider the more commonly used stellar masses. Adding the remnant masses would shift the stellar masses by less than 0.1 dex. Furthermore, we correct the stellar masses to \u210e= 0.6774, but do not perform an aperture correction. SNR > 3 in each band, and use these fluxes as thresholds for the GAMA and TNG100 datasets. This means that in every flux-flux or color-color plot, if a GAMA or TNG100 galaxy has a flux below the threshold in any band it is omitted from the plot. The flux thresholds are given in Table 2 for all filters considered in Figures 3 and 4. We caution that the choice of SNR (3) and flux (10 %-percentile) thresholds are arbitrary. We have tested different strategies (changing the SNR and flux percentile values, or either just using an SNR or a flux percentile criterion), and find that the peaks and correlations of the distributions are hardly affected. On the other hand, the widths of the distributions are altered (e.g. lowering or dropping the SNR criterion primarily makes the GAMA distribution wider). We adopted the specific thresholds as a compromise between galaxy sample size and mitigating noise and incompleteness effects in GAMA. For our chosen thresholds, we find the GAMA noise levels moderate in the sense that the widths of the flux and color distributions of TNG100 and GAMA are similar, i.e. the intrinsic scatter in the galaxy population dominates over instrumental effects. Due to this ambiguity in SNR and flux thresholds, we focus the discussion in Sections 4.3 and 4.4 on the peaks and correlations of the distributions. Making firm statements about the scatter of the shown flux-flux and colorcolor relations would require adding realistic GAMA-like noise to the TNG100 galaxies, which is beyond the scope of this study. Lastly, we remark that aperture mismatches in the observed and simulated datasets can substantially bias the comparison. The distribution of GAMA apertures (given as the circularized radii9 of the elliptical aperture used by Lambdar) as a function of stellar mass for all 17 932 galaxies in the base sample is shown in Figure 2. These apertures are compared to the four different available apertures for the TNG100 data (10 kpc, 30 kpc, 2 or 5 stellar half-mass radii). We find that two stellar half-mass radii provide the closest match to the GAMA apertures, even though the TNG100 apertures are significantly smaller for all stellar masses below 1011 M\u2299in that case. Hence, we adopt two stellar half-mass radii as our default aperture in Section 4. 4.2 Observational sensitivity limits for simulated galaxies A major caveat when comparing observational and simulated datasets is that the galaxy samples can be very different. This caveat is usually mitigated by matching the samples in some physical properties like stellar masses or star-formation rates (e.g. Diemer et al. 2019; Donnari et al. 2021; Tr\u010dka et al. 2022; Goddy et al. 2023). 
However, this approach bears the problem that the observational and simulated definitions of those properties can be different10, and physical parameters inferred from observations come with their own caveats. Hence, we implement a different method to homogenize the galaxy samples. We base our method on the observational sensitivity limits in various filters, which determine the flux limits of the galaxies. We use these limits to filter out \u2018fainter\u2019 TNG100 galaxies which would lie below the observational detection threshold. This approach is similar to postprocessing studies of semi-analytical models (SAMs) over large redshift ranges which have been used to study galaxy clustering 9 We use \ud835\udc45aperture = \u221a \ud835\udc4e\ud835\udc4fwith \ud835\udc4eand \ud835\udc4fthe semi-major and semi-minor axes of the aperture, respectively. 10 As an example, the galaxy star-formation rate in the simulation is typically defined as the instantaneous SFR of the star-forming gas. On the other hand, in observations the SFR is determined for some tracer of young stellar populations, yielding the average SFR over a certain timescale. MNRAS 000, 1\u201315 (2024) The colors of TNG100 7 8.5 9.0 9.5 10.0 10.5 11.0 11.5 log10 (M /M ) 0 5 10 15 20 25 30 35 40 Raperture [kpc] GAMA TNG (2R) TNG (5R) Figure 2. Apertures of TNG100 galaxies (red) and objects from the GAMA survey, for 0.002 \u2264\ud835\udc67\u22640.1 and \ud835\udc40\u2605\u2265108.5 M\u2299. The GAMA apertures correspond to the cricularized radii of the elliptical Lambdar apertures. For TNG100 we show the constant apertures (10 or 30 kpc) as dotted lines. The other available TNG100 apertures (2 or 5 stellar half-mass radii) and the GAMA apertures are displayed as running medians as a function of stellar mass. Shaded areas indicate the interquartile range (not shown for 5 stellar half-mass radii). Since an aperture of two stellar half-mass radii provides the closest match to the GAMA apertures, we adopt this as our default aperture for the TNG100 fluxes in Section 4. (Blaizot et al. 2005; Kitzbichler & White 2007). In this approach, the (periodic) simulation box at different snapshots is stacked many times to construct a sufficiently large volume and to calculate a mock lightcone. Unfortunately, such a mock lightcone construction requires the postprocessing of many different snapshots, which is feasible for the SAM postprocessing but prohibitive for our 3D dust radiative transfer modelling. Hence, we do not stack the simulation box at different snapshots, but rather place the friend-of-friend halos (FoF groups) of the \ud835\udc67= 0 and \ud835\udc67= 0.1 snapshots at arbitrary distances (within the redshift bounds from the observational sample, i.e. 0.002 < \ud835\udc67< 0.1) from the mock observer. We assume that the halos are uniformly distributed in space, such that the comoving number density of halos \ud835\udc5bis constant: \ud835\udc5b(\ud835\udc37\ud835\udc50) = \ud835\udc41(\ud835\udc37\ud835\udc50, \ud835\udc37\ud835\udc50+ d\ud835\udc37\ud835\udc50) \ud835\udc49(\ud835\udc37\ud835\udc50, \ud835\udc37\ud835\udc50+ d\ud835\udc37\ud835\udc50) = const = \ud835\udc41tot \ud835\udc49tot . 
(1) Here, \ud835\udc41tot denotes the total number of halos from TNG100 that are now distributed, \ud835\udc41(\ud835\udc37\ud835\udc50, \ud835\udc37\ud835\udc50+ d\ud835\udc37\ud835\udc50) indicates the number of halos within a small comoving distance interval d\ud835\udc37\ud835\udc50, and \ud835\udc49(\ud835\udc37\ud835\udc50, \ud835\udc37\ud835\udc50+ d\ud835\udc37\ud835\udc50) corresponds to the volume of this comoving distance slice. The total comoving volume of the (mock) survey, \ud835\udc49tot, is given by the redshift limits \ud835\udc67min and \ud835\udc67max: \ud835\udc49tot = 4\ud835\udf0b 3 \u0000\ud835\udc37\ud835\udc50(\ud835\udc67max)3 \u2212\ud835\udc37\ud835\udc50(\ud835\udc67min)3\u0001. (2) The normalized probability distribution function for \u2018placing\u2019 a halo at a specific distance, \ud835\udc5d(\ud835\udc37\ud835\udc50), can then be written as follows: \ud835\udc5d(\ud835\udc37\ud835\udc50)d\ud835\udc37\ud835\udc50= \ud835\udc41(\ud835\udc37\ud835\udc50, \ud835\udc37\ud835\udc50+ d\ud835\udc37\ud835\udc50) \ud835\udc41tot = 4\ud835\udf0b\ud835\udc372 \ud835\udc50d\ud835\udc37\ud835\udc50 \ud835\udc49tot . (3) With this procedure, we draw random redshifts within 0.002 \u2264 \ud835\udc67\u22640.1 for each TNG100 halo and then assign these random halo redshifts to all subhalos (i.e. galaxies) that belong to a particular halo. This is done independently for the \ud835\udc67= 0 and \ud835\udc67= 0.1 snapshots. We then compute the broadband flux \ud835\udc39j \ud835\udf08(\ud835\udc67) in any filter j of the galaxy at that arbitrary redshift. Since we need this flux in the observational frame, we cannot simply use the fluxes that we stored for the TNG100 galaxies (they are stored in the restand in the observational frame, but only at the fixed snapshot redshifts of 0 and 0.1). Hence we convolve the low-resolution SED \ud835\udc39\ud835\udf08(\ud835\udc67snap, \ud835\udf06) (which is stored for each galaxy in the observational frame at its snapshot redshift \ud835\udc67snap) with filter transmission curves11 counter instruments, the transmission curves are multiplied by the wavelengths. \ud835\udc47j(\ud835\udf06), accounting for the redshifting of the photons: \ud835\udc39j \ud835\udf08(\ud835\udc67) = \u222b \ud835\udc47j(\ud835\udf06\u00b7 \ud835\udc58) \u00b7 \ud835\udc39\ud835\udf08(\ud835\udc67snap, \ud835\udf06) \u00b7 \ud835\udc58d\ud835\udf06 \u222b \ud835\udc47j(\ud835\udf06\u00b7 \ud835\udc58) d\ud835\udf06 \u00d7 \ud835\udc37\ud835\udc59(\ud835\udc67snap)2 \ud835\udc37\ud835\udc59(\ud835\udc67)2 , (4) with \ud835\udc58= (1+ \ud835\udc67)/(1+ \ud835\udc67snap) and \ud835\udc37\ud835\udc59indicates the luminosity distance (for the \ud835\udc67= 0 snapshot we use \ud835\udc37\ud835\udc59= 20 Mpc as the SKIRT instrument is placed at this distance). Placing TNG100 galaxies at arbitrary redshifts introduces inconsistencies due to galaxy evolution between the snapshot redshift from which they were extracted and the new redshift at which they are placed. The unknown result without this systematic effect, which would be obtained if we had access to each galaxy at the random continuous redshift between 0.002 and 0.1, is bound by the results using only one of the \ud835\udc67= 0 and \ud835\udc67= 0.1 snapshots. To estimate if this inconsistency affects our results, we repeat our analysis using only the snapshot \ud835\udc67= 0 and \ud835\udc67= 0.1, respectively. We find that none of our results are affected significantly12. 
The end product of this procedure are observer-frame fluxes in Jansky (Eq. 4) for the entire \ud835\udc67= 0 and \ud835\udc67= 0.1 TNG100 galaxy sample, in all available 53 filters. These fluxes can be computed for continuous redshifts within arbitrary redshift intervals, and readily used to mimic observational sensitivity limits in various filters. We emulate the observational galaxy selection by distributing the TNG100 galaxies over the same redshift range (0.002 < \ud835\udc67< 0.1) as the GAMA dataset. Consistent with the GAMA data, only TNG100 galaxies with fluxes above the thresholds from Table 2 are shown in Figures 3 and 4. Under the assumption that the GAMA data is complete (i.e. volume-limited) above these flux limits, this procedure mitigates any sample selection effects to ensure a fair comparison of the TNG100 and GAMA galaxy samples. 4.3 Galaxy flux-flux relations We compare the simulated and observed fluxes in six different fluxflux relations in Figure 3. We always consider the VISTA \ud835\udc3eband in combination with various other bands. The \ud835\udc3eband is a good tracer for stellar mass (Kauffmann & Charlot 1998; Bell & de Jong 2001), 11 We obtained the filter transmission curves from the Spanish Virtual Observatory (SVO) filter profile service (http://svo2.cab.inta-csic.es/ theory/fps/). For photon 12 The similarity of the TNG100 and GAMA distributions quantified by the 2D Kolmogorov-Smirnov test statistic \ud835\udc37KS never changes by more than 0.04 in Figures 3 and 4. MNRAS 000, 1\u201315 (2024) 8 A. Gebek et al. Filter Pivot wavelength [\ud835\udf07m] Flux limit [Jy] GALEX FUV 0.154 4.48 \u00d7 10\u22126 GALEX NUV 0.230 6.36 \u00d7 10\u22126 SDSS \ud835\udc62 0.356 1.41 \u00d7 10\u22125 SDSS \ud835\udc5f 0.618 5.76 \u00d7 10\u22125 VISTA \ud835\udc3d 1.25 9.42 \u00d7 10\u22125 VISTA \ud835\udc3e 2.21 1.00 \u00d7 10\u22124 WISE W1 3.39 1.26 \u00d7 10\u22124 WISE W3 12.6 4.02 \u00d7 10\u22124 WISE W4 22.3 4.25 \u00d7 10\u22123 PACS 100 101 5.39 \u00d7 10\u22122 SPIRE 250 253 2.10 \u00d7 10\u22122 SPIRE 500 515 2.15 \u00d7 10\u22122 Table 2. Flux limits for the various broadband filters used to construct fluxflux and color-color relations (Figures 3 and 4). These flux limits correspond to the 10 %-percentile of the GAMA fluxes with SNR > 3 in each filter. Only galaxies (for both the GAMA and TNG100 samples) which have fluxes above these thresholds in all involved filters for a specific flux-flux/color-color relation are plotted in this relation in Figures 3 and 4. hence this analysis is analogous to various galaxy scaling relations as a function of stellar mass in the \u2018observational realm\u2019. In Figures 3 and 4, we show the GAMA and TNG100 2D distributions as kernel density estimates (KDE), with the contours indicating various percentiles of enclosed fraction of the galaxy population density. 1D histograms are shown on the sides, and observational errors are indicated by the grey ellipses in the upper left corner where the darker (lighter) ellipse indicates the median (upper quartile) 1-\ud835\udf0eobservational error bar. The error bars are computed by propagating the flux uncertainties in quadrature, assuming that the flux uncertainties are uncorrelated with each other. The number of galaxies above the flux limits are given both for TNG100 and GAMA in the top right of each panel. 
To quantify the degree of agreement between the TNG100 and GAMA distributions, we also compute the twodimensional Kolmogorov-Smirnov test statistic \ud835\udc37KS (Kolmogorov 1933; Smirnov 1948). This number is given in the top right of each panel, with lower numbers indicating a better agreement between the two distributions. We also discuss two alternative realizations of Figure 3 in the appendix. Figure A1 displays the exact same flux-flux relations, but using TNG50 instead of TNG100 to explore the impact of the simulation resolution. In Figure B1, we test the same flux-flux relations using a conditional KDE, i.e. exactly matching the TNG100 and GAMA VISTA \ud835\udc3edistributions. All results in Figures 3 and 4 use a TNG100 aperture of two stellar half-mass radii, which is systematically smaller than the GAMA apertures (see Figure 2). To verify if this aperture choice significantly affects our results, we have reproduced (but do not show) all fluxflux and color-color relations using a TNG100 aperture of five stellar half-mass radii. We find that the differences are minor (\ud835\udc37KS never changes by more than 0.05) and do not affect any of our conclusions. 4.3.1 VISTA K vs. GALEX FUV The relation between galaxy stellar mass and star-formation rate is a fundamental galaxy evolution diagnostic (e.g. Popesso et al. 2023). We begin the TNG100-GAMA comparison by showing an analogue of this fundamental relation in the observational realm: VISTA \ud835\udc3e versus GALEX FUV luminosity (top left panel of Figure 3). The FUV-luminosity is dominated by young stellar populations and hence traces SFR (modulo dust attenuation effects, e.g. Salim et al. 2007). The TNG100 and GAMA distributions in this flux-flux relation match to excellent precision (\ud835\udc37KS = 0.08), with both datasets showing the expected relation between stellar mass and SFR (the main sequence of star-forming galaxies, Noeske et al. 2007). We highlight that while the IllustrisTNG model has been calibrated to reproduce several galaxy scaling relations (e.g. the stellar mass-halo mass relation, Pillepich et al. 2018a), the stellar mass-SFR relation was not invoked. On the other hand, the two free parameters of the radiative transfer postprocessing (the dust-to-metal ratio \ud835\udc53dust and the clearing timescale for the birth clouds of star-forming regions \ud835\udf0f) have been calibrated to reproduce various flux and color relations from the DustPedia sample in Tr\u010dka et al. (2022), including a WISE W1-FUV relation which is very similar to the one presented here (see Section 4.3.3 for our reasoning why to replace WISE W1 with VISTA \ud835\udc3eas stellar mass tracer). 4.3.2 VISTA K vs. SDSS r The SDSS \ud835\udc5f-band luminosity also traces stellar mass (e.g. Mahajan et al. 2018), but due to increased dust attenuation and variability with stellar age it is less often used as a direct stellar mass proxy compared to the \ud835\udc3eband (Bell et al. 2003). On the other hand, the stellar evolution templates in the NIR carry systematic uncertainties related to TP-AGB stars (Maraston et al. 2006; Taylor et al. 2011). We find that the \ud835\udc5fand \ud835\udc3e-band fluxes correlate very tightly, in a similar fashion for both the GAMA and TNG100 data. The TNG100 galaxies are redder by \u22480.25 mag which could be due to an overly effective dust attenuation in the \ud835\udc5fband. Comparatively older or more metal-rich stellar populations in TNG100 could also contribute to this discrepancy, but Nelson et al. 
(2018) find that the TNG100 galaxy ages and stellar metallicities broadly agree with observational SDSS data within the systematic uncertainties (see their figure 2). Lastly, we find that systematic uncertainties of the SED templates for the evolved stellar populations are of the order of \u22480.2 mag when testing different template libraries. 4.3.3 VISTA K vs. WISE W1 Since the WISE W1 flux traces the Rayleigh-Jeans tail of evolved stars, this band can also be used as a stellar mass estimate (e.g. Jarrett et al. 2013; Meidt et al. 2014; Jarrett et al. 2023; Sureshkumar et al. 2023). The comparison with GAMA fluxes reveals that there is a sizeable population of TNG100 galaxies above the GAMA distribution. While the 1D histograms indicate that this offset seems to be mostly due to the \ud835\udc3e-band flux being too low in TNG100, we caution that a strong selection effect is at play: only galaxies that have both \ud835\udc3eand WISE W1 fluxes above the thresholds shown in Table 2 are included in the plot. If the selection is dominated by the WISE W1 band, and if the TNG100 galaxies are systematically brighter in this band than GAMA galaxies (at a fixed \ud835\udc3e-band luminosity), then this \u2018WISE W1 excess\u2019 could manifest itself as a \u2018VISTA \ud835\udc3e deficiency\u2019 even if the TNG100 and GAMA \ud835\udc3e-band distributions would match exactly. This is because TNG100 galaxies which are comparatively faint in the \ud835\udc3e-band can reach the required WISE W1 flux threshold, while GAMA galaxies at similar \ud835\udc3e-band luminosities would be discarded leading to the shown offset in the 1D \ud835\udc3e-band distributions. To visualize the flux-flux relations under this assumption of perfectly matching \ud835\udc3e-band luminosity distributions between TNG100 and GAMA, we show the results of a conditional KDE in Figure B1. We find that the offset from the GAMA distribution strongly correlates with the number of star-forming regions (stellar populations MNRAS 000, 1\u201315 (2024) The colors of TNG100 9 with ages below 10 Myr which we model with the MAPPINGS-III templates) relative to the number of evolved stellar populations. At first sight, this suggests that the MAPPINGS-III templates are the cause of excess WISE W1 emission for the TNG100 galaxies. However, we found that the contribution of star-forming regions to the WISE W1 flux is small, typically below 5 %. Instead, we suggest that emission from the diffuse dust causes the elevated WISE W1 fluxes, as the diffuse dust contribution also strongly correlates with the offset from the GAMA distribution and reaches values up to 70 %. Upon inspection of the simulated TNG100 spectra, we find emission features at the WISE W1 band which corresponds to the 3.3-micron polycyclic aromic hydrocarbon (PAH) feature (Tokunaga et al. 1991; Kim et al. 2012). It seems plausible that it is this PAH emission which causes the excess WISE W1 fluxes for the TNG100 galaxies, but whether this originates in overly emissive PAH dust in the THEMIS dust mix or if the MAPPINGS-III templates are overly effective in stochastically heating the surrounding diffuse dust remains unclear. 4.3.4 VISTA K vs. WISE W3 Since the WISE W3 band predominantly traces the PAH emission from PDRs (Kapoor et al. 2023), this flux is used as an alternative tracer for star-formation rate which is unaffected by dust attenuation (e.g. Cluver et al. 2017; Elson et al. 2019; Naluminsa et al. 2021; Sureshkumar et al. 2023). 
Similarly as in Section 4.3.1, we see the star-forming main sequence with similar slopes in the TNG100 and GAMA data, but with a clearer separation between the star-forming and quiescent galaxy populations compared to the \ud835\udc3e-FUV relation. The TNG100 galaxies populating the sequence in the bottom right corner, with WISE W3 luminosities 1.5 dex below the main sequence, are all devoid of star-forming gas (i.e. have zero star-formation rate). This population of quiescent galaxies is also seen in the GAMA data. On the other hand, the star-forming TNG100 galaxies are slightly offset towards the top left corner. The 1D WISE W3 distributions match to great precision, but the TNG100 VISTA \ud835\udc3eluminosities seem to be offset to lower values compared to the GAMA data. This is exactly the same effect as discussed in Section 4.3.3 (a TNG100 excess in WISE W3 flux disguised as a deficiency in \ud835\udc3e-band flux due to selection effects), and we speculate that it also has the same origin (an excess of PAH emission from the diffuse dust component) as the diffuse dust emission contributes at least \u224860 % of the WISE W3 flux for all star-forming galaxies. 4.3.5 VISTA K vs. PACS 100 Since the FIR dust emission peak is usually encompassed by the 100 and 160 \ud835\udf07m bands (Cortese et al. 2014), the PACS 100 flux traces relatively warm dust. The correlation of this flux with the \ud835\udc3e-band exhibits a similar slope and scatter in the TNG100 and GAMA distributions. The TNG100 PACS 100 fluxes are systematically smaller than the GAMA fluxes, but the offset is very small (\u22480.1 dex). We note that for this and the next panel involving FIR fluxes, the galaxy samples shrink substantially (c.f. the GAMA and TNG100 base samples of 17 932 and 61 076 galaxies, respectively). 4.3.6 VISTA K vs. SPIRE 500 The SPIRE 500 band traces relatively cold dust, and can be used as a dust mass proxy since the dust budget in the ISM is dominated by cold dust (\ud835\udc47\u227225 K, Dunne & Eales 2001) and the SPIRE 500 flux is less affected by dust temperature variations than for instance the SPIRE 250 flux (Galametz et al. 2012). Hence, the correlation between \ud835\udc3eand SPIRE 500 flux is a purely observational counterpart of the physical non-linear relation between stellar and cold dust mass (e.g. Cortese et al. 2012). While we find that the TNG100 and GAMA flux distributions broadly agree in this flux-flux relation, there is a sizable population of GAMA galaxies at low \ud835\udc3e-band luminosities (\ud835\udc3fK \u223c109 L\u2299) with substantially elevated SPIRE 500 fluxes (by approximately one order of magnitude). When replacing the SPIRE 500 with the SPIRE 250 band we find a better agreement (\ud835\udc37KS = 0.13), moreover the 2D KDE contours do not show a population of elevated SPIRE 250 fluxes for the GAMA galaxies. The SPIRE 500 band is (unlike the SPIRE 250 band) susceptible to submillimeter (submm) excess. This excess flux could be due to very cold dust shielded from starlight or changes in the emission properties of the dust grains at submm wavelengths, but the exact origin remains unknown (Kirkpatrick et al. 2013; Hermelo et al. 2016). As the cold (\ud835\udc47\u22728000 K) ISM is not modelled explicitly in the IllustrisTNG model but treated according to the two-phase model of Springel & Hernquist (2003), the lack of a cold ISM component could explain the absence of this galaxy population with elevated SPIRE 500 fluxes in TNG100. 
However, the SPIRE 500 fluxes are also known to suffer more from source confusion (Rigby et al. 2011) and are less reliable than the SPIRE 250 fluxes. We tested a more stringent SNR criterion of five (instead of three), which mostly affects the SPIRE 500 band. We find that the population of GAMA galaxies with elevated SPIRE 500 fluxes vanishes almost completely in this case13, with \ud835\udc37KS never changing by more than 0.07 for any of the relations in Figures 3 and 4. Hence, this particular tension is not robust and could be due to observational uncertainties. 4.4 Galaxy color-color relations We show four different color-color relations in Figure 4. The galaxy samples are determined in the same way as in Figure 3, i.e. they are derived from the GAMA and TNG100 base samples by imposing SNR and flux thresholds on each band involved in a specific colorcolor relation. An alternative realization of Figure 4 using TNG50 instead of TNG100 is shown in Figure A2. 4.4.1 (SDSS r VISTA J) vs. (SDSS u SDSS r) This color-color relation emulates the commonly used UVJ diagram (using the \ud835\udc49-\ud835\udc3dand \ud835\udc48-\ud835\udc49Johnson filters), which is relevant due to its capability of separating the star-forming and quiescent galaxy populations observationally (Williams et al. 2009; Whitaker et al. 2010; Patel et al. 2012; see Leja et al. 2019 for some limitations of the UVJ diagram). While dust attenuation shifts galaxies in the top right direction of the UVJ diagram, quiescent galaxies appear as a distinct population that is offset towards the top left direction. The UVJ diagram has also been studied in postprocessed cosmological simulations for TNG100 (Donnari et al. 2019; Nagaraj et al. 2022), TNG50 (Baes et al. 2024a), and SIMBA (Akins et al. 2022). Using a raytracing postprocessing method developed by Nelson et al. (2018), Donnari et al. (2019) derive the rest-frame UVJ diagram for TNG100 at \ud835\udc67= 0 and find that it is broadly consistent with observational data, 13 The impact on any of the other results is minor when using a more stringent SNR criterion of SNR > 5 MNRAS 000, 1\u201315 (2024) 10 A. Gebek et al. 9 10 11 log10 (LK/L ) 8.0 8.5 9.0 9.5 10.0 log10 (LFUV/L ) NTNG = 21954 NGAMA = 8112 DKS = 0.08 0.0 0.5 PDF [dex 1] TNG GAMA 0 1 PDF [dex 1] 8 9 10 11 log10 (LK/L ) 8.5 9.0 9.5 10.0 10.5 11.0 log10 (Lr/L ) NTNG = 32483 NGAMA = 14415 DKS = 0.12 0.0 0.5 PDF [dex 1] 0 1 PDF [dex 1] 8 9 10 11 log10 (LK/L ) 8.0 8.5 9.0 9.5 10.0 10.5 log10 (LW1/L ) NTNG = 28489 NGAMA = 10383 DKS = 0.16 0.0 0.5 PDF [dex 1] 0.0 0.5 PDF [dex 1] 9 10 11 log10 (LK/L ) 7.5 8.0 8.5 9.0 9.5 10.0 log10 (LW3/L ) NTNG = 19805 NGAMA = 6210 DKS = 0.29 0.0 0.5 PDF [dex 1] 0 1 PDF [dex 1] 9 10 11 log10 (LK/L ) 8.5 9.0 9.5 10.0 10.5 11.0 log10 (LP100/L ) NTNG = 5424 NGAMA = 1938 DKS = 0.24 0.0 0.5 1.0 PDF [dex 1] 0 1 PDF [dex 1] 9 10 11 log10 (LK/L ) 7.0 7.5 8.0 8.5 9.0 9.5 log10 (LS500/L ) NTNG = 2135 NGAMA = 937 DKS = 0.2 0.0 0.5 1.0 PDF [dex 1] 0 1 PDF [dex 1] Figure 3. Six different flux-flux relations, for TNG100 (red) and observational data from the GAMA survey (black), for 0.002 \u2264\ud835\udc67\u22640.1 and \ud835\udc40\u2605\u2265108.5 M\u2299. MNRAS 000, 1\u201315 (2024) The colors of TNG100 11 Figure 3 \u2013 continued The panels always have the VISTA \ud835\udc3e-band flux on the \ud835\udc65-axis, and feature various bands (increasing with wavelength from the top left to the bottom right panel) on the \ud835\udc66-axis. 
For both datasets, we filter out galaxies which lie below specific flux thresholds in any of the bands (see text for details). The number of remaining galaxies is given in the top right corner of each panel. The 2D distribution is estimated using a kernel density estimate (KDE). The different levels correspond to 5, 25, 60, and 90 % of the total KDE density. 1D color histograms for both datasets are also shown. Note that we use observer-frame fluxes here. An estimate of the average noise in the observations is indicated by the grey ellipses, with the darker (lighter) ellipse indicating the median (upper quartile) 1-\ud835\udf0eobservational error bar. \ud835\udc37KS indicates the distance between the two distributions according to a two-dimensional Kolmogorov-Smirnov test. The flux-flux relations seen in the GAMA data are well reproduced by the TNG100 galaxies. but do not compare the simulated and observed color distributions in detail. In our case, we find two galaxy populations that are clearly separated as seen in observations. As we know the star-formation rates for the TNG100 galaxies, we can verify if these two populations indeed correspond to star-forming and quiescent galaxies. When splitting the galaxy population by specific star-formation rate (sSFR), we find that star-forming galaxies (with sSFR > 10\u221210.5 yr\u22121) indeed occupy the peak at blue colors and broadly extend to the top right corner, while quiescent galaxies (with sSFR < 10\u221211.5 yr\u22121 are located along a very narrow sequence offset from the star-forming sequence to redder \ud835\udc62-\ud835\udc5fcolors. However, the star-forming sequence appears to be slightly too red in TNG100 by \u22480.25 mag along both axes. Multiple effects could contribute to render the star-forming galaxies too red for TNG100 (as discussed in section 4.3.2): at these wavelengths, the amount of dust as well as the dust model affects the colors. Furthermore, the SED template libraries for the evolved stars can affect the UVJ colors by up to 0.2 mag (G. Worthey, private communication). And lastly, the stellar populations of the star-forming TNG100 galaxies could also be intrinsically too old or metal-rich. However, we remark that the \ud835\udc62-\ud835\udc5fcolor of \ud835\udc67= 0 TNG100 galaxies postprocessed with the simpler method of Nelson et al. (2018) reproduce observational data from SDSS, i.e. the star-forming galaxies are bluer by \u22480.25 mag compared to our TNG100 colors calculated with SKIRT. At the same time, the quiescent galaxies which are less affected by dust attenuation are slightly redder (by \u22480.1 mag) in Nelson et al. (2018) compared to our results. This points towards too much dust reddening in our SKIRT pipeline, which is puzzling given the excellent agreement of our SKIRT fluxes with other flux-flux/color-color relations and luminosity functions. We defer a more detailed assessment of the intrinsic and dust-reddened optical colors of TNG100 galaxies to future work. 4.4.2 (GALEX FUV VISTA K) vs. (VISTA K SPIRE 250) As discussed in Sections 4.3.1 and 4.3.6, the GALEX FUV and SPIRE 250 fluxes trace galaxy SFR and dust mass, respectively. Hence, this color-color relation is an analogue of the physical galaxy scaling relation between specific star-formation rate and specific dust mass (e.g. Cortese et al. 2012; R\u00e9my-Ruyer et al. 2015; Nanni et al. 2020; Shivaei et al. 2022). 
Both GAMA and TNG100 feature a mild correlation between these colors, with a tail extending towards the bottom left corner which contains quiescent galaxies with very small specific dust masses. The peaks, widths, and correlation of the GAMA and TNG100 color distributions match to great precision for this relation. We briefly compare this result to a similar color-color relation which was used to calibrate the SKIRT postprocessing parameters in Tr\u010dka et al. (2022) (their figure 6, panel g). Their color-color relation slightly differs from the one shown here as Tr\u010dka et al. (2022) adopted the WISE W1 band to trace stellar mass, while we use the VISTA \ud835\udc3eband. As discussed in Section 4.3.3, the WISE W1 flux can contain significant PAH contribution from the diffuse dust component. We also examined the exact same color-color relation replacing the VISTA \ud835\udc3ewith the WISE W1 band, to reproduce the color-color relation that was used by Tr\u010dka et al. (2022) in the calibration process. We find for our datasets that the excellent match between the GAMA and TNG100 distributions vanishes (\ud835\udc37KS = 0.6), with the WISE W1-SPIRE 250 (GALEX FUV-WISE W1) color of GAMA being significantly redder bluer by \u22480.5 mag compared to TNG100. This means that the radiative transfer calibration of Tr\u010dka et al. (2022) (using DustPedia data and WISE W1 fluxes) produces sensible results when using GAMA data and VISTA \ud835\udc3efluxes as shown here, but is in tension when using the same GAMA data with WISE W1 fluxes. We found two different effects with coincidentally similar magnitudes, which can explain these discrepant results: first, the TNG100 WISE W1 fluxes contain PAH emission that seems to be too strong in the SKIRT setup used here (see Section 4.3.3). Second, the DustPedia WISE W1-SPIRE 250 colors are bluer by \u22480.5 mag compared to the GAMA colors, probably related to selection effects (the DustPedia archive is a much smaller sample of 814 local galaxies with Herschel and WISE W1 detections). These two effects conspire to give consistent results for this color-color relation using the WISE W1 band and DustPedia or using the VISTA \ud835\udc3eband and GAMA data. 4.4.3 (GALEX FUV VISTA K) vs. (GALEX FUV GALEX NUV) This color-color relation has the UV slope GALEX FUV-NUV on the \ud835\udc66-axis. Since the UV is dominated by star-forming regions and dust attenuation is very strong at these wavelengths, the UV slope of the TNG100 galaxies sensitively depends on the treatment of starforming regions and the subsequent attenuation in the diffuse ISM. Hence, the UV slope \ud835\udefdis correlated with the infrared excess IRX (ratio of IR and UV luminosity) and commonly used as a measure for attenuation in the ISM using the IRX-\ud835\udefdrelation (Calzetti 1997; Meurer et al. 1999). We examine the FUV-NUV color as a function of FUV-VISTA \ud835\udc3e, which we use as a proxy for sSFR\u22121 as in Section 4.4.2. We find that the sSFR and UV slopes are anticorrelated in both datasets, but the anticorrelation is substantially stronger in the GAMA data. Furthermore, the TNG100 UV slopes are also offset to lower values, with the peaks of the distributions differing by \u22480.4 mag. We also note that the FUV-NUV distribution of the GAMA galaxies is wider, which is (at least partially) caused by the relatively high noise levels of this particular color. 
When calculating FUV-NUV colors without diffuse dust component for the TNG100 galaxies we find that the FUV-NUV colors hardly change, meaning that the diffuse dust has a negligible impact on the UV slope. Instead, the FUV-NUV color is driven by the SED templates of both the evolved stellar populations and the star-forming regions, which contribute roughly similar fractions to the total UV fluxes. A redder FUV-NUV color (i.e. a steeper UV slope) could for instance be obtained with a more selective extinction in the FUV band MNRAS 000, 1\u201315 (2024) 12 A. Gebek et al. 0.25 0.50 0.75 1.00 1.25 SDSS r VISTA J [mag] 1.0 1.5 2.0 2.5 3.0 SDSS u SDSS r [mag] NTNG = 27328 NGAMA = 13233 DKS = 0.22 0 1 PDF [mag 1] 0 1 PDF [mag 1] 2 4 6 GALEX FUV VISTA K [mag] 2 3 4 5 6 VISTA K SPIRE 250 [mag] NTNG = 9185 NGAMA = 2881 DKS = 0.14 0.0 0.2 0.4 PDF [mag 1] TNG GAMA 0 1 PDF [mag 1] 2 4 6 8 GALEX FUV VISTA K [mag] 0.2 0.0 0.2 0.4 0.6 0.8 1.0 1.2 GALEX FUV GALEX NUV [mag] NTNG = 20809 NGAMA = 7309 DKS = 0.75 0.0 0.2 0.4 PDF [mag 1] 0.0 2.5 PDF [mag 1] 1 2 3 4 WISE W4 SPIRE 250 [mag] 1.5 2.0 2.5 3.0 3.5 4.0 4.5 WISE W3 SPIRE 250 [mag] NTNG = 3096 NGAMA = 1148 DKS = 0.64 0.0 0.5 1.0 PDF [mag 1] 0 1 PDF [mag 1] Figure 4. Four different color-color relations, for TNG100 (red) and observational data from the GAMA survey (black), for 0.002 \u2264\ud835\udc67\u22640.1 and \ud835\udc40\u2605\u2265108.5 M\u2299. For both datasets, we filter out galaxies which lie below specific flux thresholds in any of the bands involved in the color-color relation (see text for details). The number of remaining galaxies is given in the top right corner of each panel. The 2D distribution is estimated using a kernel density estimate (KDE). The different levels correspond to 5, 25, 60, and 90 % of the total kernel density. 1D color histograms for both datasets are also shown. Note that we use observer-frame fluxes here. An estimate of the average noise in the observations is indicated by the grey ellipses, with the darker (lighter) ellipse indicating the median (upper quartile) 1-\ud835\udf0eobservational error bar. \ud835\udc37KS indicates the distance between the two distributions according to a two-dimensional Kolmogorov-Smirnov test. TNG100 reproduces the observed color distributions in the two upper panels, but the TNG100 galaxies have flatter UV slopes and bluer WISE W3 SPIRE 250 colors compared to the GAMA data. from the dusty birth clouds in the MAPPINGS-III templates. Kapoor et al. (in prep.) find that for the 30 MW-like galaxies from the AURIGA simulation (Grand et al. 2017), the recent TODDLERS library for star-forming regions (Kapoor et al. 2023) yields redder FUVNUV colors of \u22480.15 mag compared to MAPPINGS-III. Whether this change fully resolves the tension in this color-color relation or additional adjustments need to be made (e.g. in the templates for the evolved stellar populations, which can also contribute substantially to the UV fluxes) would require postprocessing the TNG100 galaxMNRAS 000, 1\u201315 (2024) The colors of TNG100 13 ies again varying the SED templates of the star-forming regions and evolved stellar populations, which is beyond the scope of this study. 4.4.4 (WISE W4 SPIRE 250) vs. (WISE W3 SPIRE 250) Lastly, we show a color-color relation involving the SPIRE 250, WISE W3 and WISE W4 fluxes. As discussed in Section 4.3.4, the WISE W3 band traces PAH emission from the diffuse dust component. The WISE W4 flux originates from hot dust around star-forming regions (Kapoor et al. 
2023), and we find that it comes roughly in equal parts from the MAPPINGS-III star-forming regions and the diffuse dust. Hence, this color-color relation measures the amount of hot dust and PAH emission relative to cold dust traced by SPIRE 250. This relation is observationally particularly challenging to measure, resulting in the large observational errors and wider GAMA distributions. While the WISE W4-SPIRE 250 color distributions broadly match, the TNG100 WISE W3-SPIRE 250 colors are bluer by \u22480.5 mag which we attribute to elevated WISE W3 fluxes due to PAH emission from the diffuse dust (as discussed in Section 4.3.4). The 2D distributions show that the slope of the relation is steeper in TNG100. This is expected since galaxies with high WISE W4-SPIRE 250 colors have a comparatively large fraction of their dust heated to high temperatures due to emission from star-forming regions. This in turn leads to a stronger WISE W3 excess for those galaxies and thus a steepening of this color-color relation for TNG100 galaxies. 5 SUMMARY We applied the radiative transfer postprocessing method developed by Tr\u010dka et al. (2022), where the TNG50 simulation was analyzed, to the fiducial TNG100 run of the IllustrisTNG suite. The postprocessing method uses the dust MCRT code SKIRT to propagate the emission from evolved stars and star-forming regions through the dusty ISM. We generated broadband fluxes and low-resolution SEDs from the UV to the far-IR for all TNG100 galaxies in the \ud835\udc67= 0 and \ud835\udc67= 0.1 snapshots resolved by more than \u2248200 star particles (\ud835\udc40\u2605> 108.5 M\u2299), leading to a sample of \u224860 000 postprocessed galaxies. This dataset (as well as the TNG50 and TNG50-2 fluxes and SEDs generated by Tr\u010dka et al. 2022) is publicly available on the IllustrisTNG website14. To test the fidelity of the cosmological simulation and our postprocessing method, we compared the simulated fluxes to low-redshift observational data. The following points summarize our main findings: \u2022 TNG100 luminosity functions from the UV to the far-IR fall within the range of low-redshift observational results (Figure 1). Residual discrepancies at the bright end in the UV/optical/NIR are on the level of systematic effects in the observations related to aperture choices. As noted by Tr\u010dka et al. (2022), the improvement over the TNG50 simulation stems from the fact that the IllustrisTNG model was designed at the resolution of TNG100, i.e. the subgrid parameters were chosen such that TNG100 reproduces some key statistics of the low-redshift galaxy population (e.g. the stellar mass-halo mass relation). \u2022 We compare six different flux-flux relations between TNG100 and observational data from GAMA in Figure 3. To mimic the strong observational selection effects, we redistribute the TNG100 galaxies to arbitrary redshifts to compute a realistic apparent brightness 14 www.tng-project.org/gebek24 distribution (Section 4.2). Exploring the fluxes in various bands as a function of \ud835\udc3e-band luminosity (which traces stellar mass), we find a broad baseline agreement between TNG100 and GAMA. Tension in the WISE bands is correlated with the abundance of star-forming regions in TNG100 galaxies and with emission from the diffuse dust component. Hence, we attribute this tension to excess PAH emission, potentially related to overly effective stochastic dust heating from the star-forming regions. 
\u2022 Lastly, we use the same method applied for the flux-flux relations to compare four different color-color relations between TNG100 and GAMA. Tension exists mostly in the UV slope (TNG100 galaxies exhibit flatter UV slopes, i.e. lower FUV-NUV colors, than GAMA data) and in IR colors involving WISE bands. The former could be related to the extinction in the dusty birth clouds of the MAPPINGSIII templates not being selective enough, while the latter is again caused by excess PAH emission from the diffuse dust. However, we remark that uncertainties in the dust model, dust distribution, and templates for evolved stellar populations could also play a role. We conclude that this low-redshift dataset provides a useful resource to test the fidelity of TNG100, explore observational systematics (e.g. aperture, inclination, or sample selection effects), and interpret the complexity faced in the observed galaxy population. Fundamentally, this is made possible by shifting the simulated data into the \u2018observational realm\u2019. This approach is complementary to studies in the \u2018physical realm\u2019, and we highlight the importance of considering both approaches as they carry different systematics and biases. The dataset presented in this study represents an important step towards analyzing the vast IllustrisTNG simulation landscape in the \u2018observational realm\u2019. ACKNOWLEDGEMENTS We thank Eric Rohr and Peter Camps for enlightening discussions. We also wish to express our gratitude towards the anonymous referee, whose feedback substantially improved the quality of this paper. AG gratefully acknowledges financial support from the Fund for Scientific Research Flanders (FWO-Vlaanderen, project FWO.3F0.2021.0030.01). This study made extensive use of the Python programming language, especially the numpy (van der Walt et al. 2011), matplotlib (Hunter 2007), and scipy (Virtanen et al. 2020) packages. We also acknowledge the use of the Topcat visualization tool (Taylor 2005) and the ndtest Python package (https://github.com/syrte/ ndtest). The IllustrisTNG simulations were undertaken with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Projects GCS-ILLU and GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS), as well as on the machines of the Max Planck Computing and Data Facility (MPCDF) in Garching, Germany. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/. We also use MNRAS 000, 1\u201315 (2024) 14 A. Gebek et al. VISTA VIKING data from the GAMA database, based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 179.A-2004. This research has made use of the Spanish Virtual Observatory (https://svo.cab.inta-csic.es, Rodrigo et al. 2012; Rodrigo & Solano 2020) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020112949GB-I00. 
DATA AVAILABILITY The IllustrisTNG data used in this work as well as the generated broadband fluxes are publicly available at https://www.tng-project.org/ as described by Nelson et al. (2019a). The GAMA data is publicly available as part of data release 4 (DR4, Driver et al. 2022) of the GAMA survey. DR4 can be accessed at http://www.gama-survey.org/dr4/. All other data (observational luminosity functions, derived data for GAMA) and the analysis scripts are publicly available at https://github.com/andreagebek/TNG100colors."
16
+ }
intro_8K/test_introduction_short_2405.04940v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.04940v1",
3
+ "title": "Harnessing the Power of MLLMs for Transferable Text-to-Image Person ReID",
4
+ "abstract": "Text-to-image person re-identification (ReID) retrieves pedestrian images\naccording to textual descriptions. Manually annotating textual descriptions is\ntime-consuming, restricting the scale of existing datasets and therefore the\ngeneralization ability of ReID models. As a result, we study the transferable\ntext-to-image ReID problem, where we train a model on our proposed large-scale\ndatabase and directly deploy it to various datasets for evaluation. We obtain\nsubstantial training data via Multi-modal Large Language Models (MLLMs).\nMoreover, we identify and address two key challenges in utilizing the obtained\ntextual descriptions. First, an MLLM tends to generate descriptions with\nsimilar structures, causing the model to overfit specific sentence patterns.\nThus, we propose a novel method that uses MLLMs to caption images according to\nvarious templates. These templates are obtained using a multi-turn dialogue\nwith a Large Language Model (LLM). Therefore, we can build a large-scale\ndataset with diverse textual descriptions. Second, an MLLM may produce\nincorrect descriptions. Hence, we introduce a novel method that automatically\nidentifies words in a description that do not correspond with the image. This\nmethod is based on the similarity between one text and all patch token\nembeddings in the image. Then, we mask these words with a larger probability in\nthe subsequent training epoch, alleviating the impact of noisy textual\ndescriptions. The experimental results demonstrate that our methods\nsignificantly boost the direct transfer text-to-image ReID performance.\nBenefiting from the pre-trained model weights, we also achieve state-of-the-art\nperformance in the traditional evaluation settings.",
5
+ "authors": "Wentao Tan, Changxing Ding, Jiayu Jiang, Fei Wang, Yibing Zhan, Dapeng Tao",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Multi AND Modal AND LLM",
14
+ "gt": "Text-to-image person re-identification (ReID) [13\u201316, 24, 41, 42, 44, 50, 52, 61] is a task that retrieves pedestrian *Corresponding author a woman with blonde hair, wearing a colorful top and black leggings. a woman with brown hair, wearing a red jacket, grey pants and white shoes. She has a handbag with her. The person in the image is a woman with long dark hair, wearing a green and white striped jacket, light blue jeans, and white sneakers. MLLM Similar Structure The woman is wearing sneakers, a blue and white striped shirt, jeans, and a black purse. She has long dark hair. In a green and white striped shirt and gray jeans, the woman also has long brown hair. With long brown hair, the woman is wearing white shoes, a green and white striped shirt, gray jeans, and a black purse. She is carrying a black purse and is walking down the street. Template1: With [hair], the [gender] is wearing [footwear], [clothing], [accessory], and carrying [belongings]. \u2714 MLLM Diverse Structure Template3: In [clothing] and [footwear], the [gender] also has [hair]. Template2: The [gender] is wearing [footwear], [accessory], [clothing], and [belongings]. The [gender] has [hair]. The person in the image is a woman with blonde hair, wearing a colorful top and black leggings. The person in the image is a woman with brown hair, wearing a red jacket, grey pants and white shoes. She has a handbag with her. Figure 1. Illustration of textual descriptions generated by an MLLM (i.e., Qwen [3]). (Top) The description patterns are sim- ilar for different images. (Bottom) Our proposed Template-based Diversity Enhancement (TDE) method significantly enhances the description pattern diversity. It is worth noting that some errors are present in the generated descriptions shown in this figure. images according to textual descriptions. It is a powerful tool when probe images of the target person are unavail- able and only textual descriptions exist. It has various po- tential applications, including video surveillance [6], social media analysis [31], and crowd management [18]. How- ever, it remains challenging mainly because annotating tex- tual descriptions for pedestrian images is time-consuming [64]. Consequently, existing datasets [15, 31, 70] for text- to-image person ReID are usually small, resulting in insuf- ficient deep model training. Previous studies on text-to-image ReID usually assumed that training and testing data are drawn from the same do- main. They proposed novel model architectures [4, 15, 38, 41, 57, 58], loss functions [60, 66, 69], and pre-training strategies [42, 64] to improve model performance for each database. However, researchers have recently discovered arXiv:2405.04940v1 [cs.CV] 8 May 2024 that the cross-dataset generalization ability of their ap- proaches is significantly low [41], limiting real-world ap- plications. Since annotating textual descriptions is time- consuming, collecting training data for each target domain is infeasible. Therefore, training a model that can be di- rectly deployed to various target domains is necessary. Accordingly, we study the transferable text-to-image ReID problem. The term \u201ctransferable\u201d is derived from the seminal work CLIP [38], which refers to a large-scale pre-trained model\u2019s capacity that directly applies its knowl- edge to other domains or tasks without fine-tuning on la- beled data. 
Due to the rapid advancements in multi-modal large language models (MLLMs) [3, 8, 29, 65], we utilize them to generate textual descriptions automatically and em- ploy them to replace traditional manual annotations. Specif- ically, we utilize the large-scale LUPerson dataset [17] as the image source and generate textual descriptions using MLLMs. The obtained image-text pairs are utilized to train a model directly evaluated in existing text-to-image ReID databases. However, to improve the model\u2019s transfer abil- ity, two essential challenges must be addressed: (1) guiding MLLMs to generate diverse textual descriptions for a sin- gle image and (2) reducing the impact of the noise in the synthesized textual descriptions. First, MLLMs tend to generate descriptions with simi- lar sentence structures, as shown in Fig. 1. This causes the text-to-image ReID model to overfit specific sentence patterns, reducing the model\u2019s ability to generalize to vari- ous human description styles encountered in real-world ap- plications. To address this issue, we propose a Template- based Diversity Enhancement (TDE) method that instructs MLLMs to conduct image captioning according to given description templates. Obtaining these templates with min- imal effort involves performing multi-turn dialogues with ChatGPT [37] and prompting it to generate diverse tem- plates. Then, we randomly integrate one of these templates into the MLLM\u2019s captioning instruction, resulting in vivid descriptions with varied sentence structures. This approach significantly enhances textual description diversity. Second, although MLLMs are highly effective, the gen- erated descriptions still contain errors. This implies that certain words in a textual description may not match the paired image. Thus, we propose a novel Noise-aware Mask- ing (NAM) method to address this problem. Specifically, we compute the similarities between each text token and all image tokens in the paired image for a specific textual description. The similarity scores between the unmatched word and image tokens are usually low. Hence, we iden- tify potentially incorrect words and mask them with a large probability in the next training epoch before they are fed into a text encoder. Furthermore, NAM and Masked Lan- guage Modeling (MLM) are similar but have two key dif- ferences: (1) MLM masks all tokens with equal probabil- ity, while NAM masks them based on their noise level. (2) MLM applies cross-entropy loss to predict the masked to- kens, whereas NAM focuses on masking words without pre- dicting potentially noisy words. In the experimentation sec- tion, we demonstrate NAM\u2019s ability to effectively alleviate the impact of noisy textual descriptions. To the best of our knowledge, this is the first study fo- cusing on the transferable text-to-image ReID problem by harnessing the power of MLLMs. We innovatively gener- ate diverse textual descriptions and minimize the impact of the noise contained in these descriptions. The experimental results show that our method performs excellently on three popular benchmarks in both direct transfer and traditional evaluation settings.",
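The masking idea summarised in this introduction (compare one text token against all patch token embeddings, then mask low-similarity tokens more heavily in the next epoch) can be written down in a few lines of PyTorch. This is our own illustrative reading, not the authors' released code: the function name, the explicit feature normalisation, and the clamping of probabilities are assumptions.

```python
import torch
import torch.nn.functional as F


def nam_mask_probabilities(text_emb, image_emb, p=0.15):
    """Noise-aware masking probabilities: text tokens whose best image-patch
    similarity is low are treated as likely noise and masked more often.
    text_emb: (N, d) token embeddings, image_emb: (M, d) patch embeddings."""
    sim = F.normalize(text_emb, dim=-1) @ F.normalize(image_emb, dim=-1).t()  # (N, M) similarities
    noise = 1.0 - sim.max(dim=1).values       # low best-match similarity -> high noise level
    probs = noise - noise.mean() + p          # shift so the average masking ratio stays at p
    return probs.clamp(0.0, 1.0)


# In the subsequent training epoch, token i of this caption would be replaced
# by [MASK] with probability probs[i], e.g.:
# mask = torch.rand_like(probs) < probs
```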
15
+ "main_content": "Text-to-Image Re-Identification. Existing approaches for this task improve model performance from three perspectives: model backbone [4, 24], feature alignment strategies [24, 41, 66], and pre-training [42, 64]. The first method category improves the model backbone. Early approaches adopted the VGG model [9, 31] and LSTM [35, 62, 66] as image and text encoders, respectively. These encoders gradually evolve into ResNet50 [15, 16, 22, 52] and BERT [12, 32, 40, 43, 70] models. Moreover, the CLIP [21, 38] and ALBEF-based encoders [4, 28, 64] have recently become popular. Notably, the CLIP model contains jointly pre-trained image and text encoders. Thus, its cross-modal alignment capabilities are advantageous and have proven more effective than the individually pre-trained encoders [42]. Moreover, the ALBEF model [28] performs interaction between visual and textual features, which improves the feature representation capacity but brings in significant computational cost. The second category of methods enhances feature alignment strategies. Previous methods aligned an image\u2019s holistic features with its textual description [1, 43, 49, 51, 56, 59, 66]. Subsequent approaches [10, 19, 25, 26, 33, 36, 45, 53, 54] focused on aligning the image-text pair\u2019s local features to suit the fine-grained retrieval nature of text-toimage ReID. These approaches can be divided into explicit and implicit alignment methods. Explicit methods [15, 52] extract the visualand textual-part features and then compute the alignment loss between them. Implicit methods can also align local features [16, 41, 64]. For example, Jiang et al. [24] applied MLM to text tokens and then predicted the masked tokens using image token features. This indirectly realizes local feature alignment between the image patch and noun phrase representations. Since existing databases are small, two recent studies explored pre-training for text-to-image ReID. Shao et al. [42] utilized the CLIP model to predict the attributes of a pedestrian image. Then, they inserted these attributes into manually defined description templates. As a result, they obtained a large number of pre-training data. Similarly, Yang et al. [64] utilized the text descriptions from the CUHKPEDES [31] and ICFG-PEDES [15] datasets to synthesize images using a diffusion model [39]. Then, they used the BLIP model [29] to caption these images and obtain a largescale pre-training dataset. However, these two studies targeted at pre-training and did not investigate the direct transfer setting where no target domain data is available for finetuning. Moreover, they overlooked the noise or diversity issues generated in the obtained textual descriptions. The above methods achieve excellent in-domain performance; however, their cross-dataset performance is usually significantly low [41]. This paper explores the transferable text-to-image ReID task with minimal manual operations. Also, we address the challenges in textual descriptions generated by MLLMs. Multi-modal Large Language Models. Multi-modal Large Language Models (MLLMs) [34, 46, 47, 65, 71] are built on Large Language Models (LLMs) [5, 11, 63, 67, 68] and incorporate textual and non-textual information as input [3, 8, 20]. This paper only considers MLLMs that use both texts and images as input signals. The input text (i.e., the \u201cinstruction\u201d or \u201cprompt\u201d) describes the tasks assigned to MLLMs to understand the image\u2019s content. 
Regarding MLLM architecture, most studies [30, 34, 48] first map the image patch and text token embeddings into a shared feature space and then perform decoding using a LLM. Some methods [2] improve the interaction and alignment strategies between the image and text tokens during decoding, facilitating more stable training [27]. In this paper, we utilize MLLMs to eliminate the need to manually annotate textual descriptions. We also explore strategies to address the diversity and noise issues in the obtained textual descriptions, facilitating the development of a transferable text-to-image ReID model. 3. Methods The overview of our solution to the transferable text-toimage ReID problem is illustrated in Fig. 2. Section 3.1 addresses diversity issues associated with textual descriptions generated by MLLMs. Section 3.2 discusses the reduction of noise impact in the descriptions. And section 3.3 outlines the loss function utilized for model optimization. 3.1. Generating Diverse Descriptions Manually annotating textual descriptions for pedestrian images is time-consuming and hardly scalable. Fortunately, MLLMs have advanced rapidly and provide effective image captioning. Therefore, we decide to utilize MLLMs to create large-scale text annotations for training a model with excellent transfer capacity. Instruction Design. We adopt the LUPerson database [17] as the image source because it holds a significant amount of images that were captured in diverse environments. A technical aspect of using MLLMs lies in designing an effective instruction, which usually depends on user experience. We solve this problem using a multi-turn dialogue with ChatGPT [37], and this process is detailed in the supplementary material. The resulting instruction is as follows: \u201cWrite a description about the overall appearance of the person in the image, including the attributes: clothing, shoes, hairstyle, gender and belongings. If any attribute is not visible, you can ignore it. Do not imagine any contents that are not in the image.\u201d This is considered a static instruction as it is fixed for all images. In this paper, the textual descriptions generated using the static instruction are denoted as static texts or T s. Diversity Enhancement. An MLLM generates textual descriptions with similar sentence patterns for different images using the static instruction, as illustrated in Fig. 1. This causes the text-to-image ReID model to overfit these sentence patterns, limiting its generalization to real-world descriptions. We attempt to improve the static instruction, but the obtained sentence patterns remained limited. Although using more MLLMs can bring in multiple sentence patterns, these patterns are still far from diverse. Again, we resort to ChatGPT to solve this problem. Specifically, we propose a Template-based Diversity Enhancement (TDE) method. First, we generate two descriptions for each of a set of images using two MLLMs [3, 8] according to the static instruction. Then, we feed these descriptions to ChatGPT to capture their sentence patterns (i.e., description templates). With the guidance of these templates, we instruct ChatGPT to create more templates. Finally, it produces 46 templates after multi-turn dialogues, which are detailed in the supplementary material. 
We randomly select one of the templates and insert it into the static instruction, obtaining a dynamic instruction as follows: \u201cGenerate a description about the overall appearance of the person, including clothing, shoes, hairstyle, gender, and belongings, in a style similar to the template: \u2018{template}\u2019. If some requirements in the template are not visible, you can ignore them. Do not imagine any contents that are not in the image.\u201d The \u2018{template}\u2019 is replaceable. Furthermore, the textual descriptions generated according to the dynamic instruction are referred to as dynamic texts (T d). As illustrated in Fig. 1, MLLMs can follow the sentence patterns specified in the templates, significantly enhancing the diversity of the obtained textual descriptions. Dataset Description. We utilize the publicly available Qwen [3] and Shikra [8] models in this paper. By harnessing the power of the two MLLMs, we obtain the large-scale LUPerson-MLLM dataset. This dataset comprises 1.0 million images, and each image has four captions, T s qwen, T s shikra, T d qwen, and T d shikra. The first and the last two captions are generated according to the static and dynamic instructions, respectively. Figure 2. Overview of our framework. We adopt the CLIP-ViT/B-16 model as the backbone. Our framework uses one pedestrian image, the original textual description T full, and a masked textual description T nam as input during training. T nam is obtained by applying NAM to T full. To perform NAM, we first compute the similarity matrix S between the text tokens Ft of T full and the image tokens Fv according to their embeddings at the l-th layer of the encoders. Then, we estimate the probability of each text token\u2019s noisiness according to the similarity between its embedding and the image token embeddings. The similarity distribution matching (SDM) loss is computed between the global visual feature vcls of the pedestrian image and the global textual feature t\u2032 eos of T nam. The model\u2019s optimization quality is enhanced by masking noisy words in T full. (Best viewed in color.)
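As a concrete illustration of how the static and dynamic instructions above might be assembled per image, consider the sketch below. The instruction strings are quoted from the paper; the template list is a placeholder for the 46 ChatGPT-generated templates, which are only given in the supplementary material, and the helper name is ours.

```python
import random

STATIC_INSTRUCTION = (
    "Write a description about the overall appearance of the person in the image, "
    "including the attributes: clothing, shoes, hairstyle, gender and belongings. "
    "If any attribute is not visible, you can ignore it. "
    "Do not imagine any contents that are not in the image."
)

DYNAMIC_INSTRUCTION = (
    "Generate a description about the overall appearance of the person, including "
    "clothing, shoes, hairstyle, gender, and belongings, in a style similar to the "
    "template: '{template}'. If some requirements in the template are not visible, "
    "you can ignore them. Do not imagine any contents that are not in the image."
)

TEMPLATES = [
    "With [hair], the [gender] is wearing [footwear], [clothing], [accessory], and carrying [belongings].",
    # ... the remaining templates from the supplementary material
]


def build_instruction(dynamic=True, rng=random):
    """Return the captioning instruction fed to the MLLM for one image."""
    if not dynamic:
        return STATIC_INSTRUCTION
    return DYNAMIC_INSTRUCTION.format(template=rng.choice(TEMPLATES))
```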
We reserve the T s for each image as we observe that its description is usually complementary to that of T d. In the following section, we will train the model with LUPerson-MLLM. For simplicity, we refer all the above MLLM-generated descriptions as T full. 3.2. Noise-Aware Masking Although MLLMs are powerful, they cannot describe images very precisely. As depicted in Fig. 1 and Fig. 2, a few words do not match the described image in the obtained textual descriptions. Existing methods [23, 29] usually discard the noisy descriptions, losing the other valuable information contained in the matched words. Accordingly, we propose a novel noise-aware masking (NAM) method that identifies noisy text tokens and fully uses the matched text tokens for model training. Image Encoder. An image is divided into M nonoverlapped patches. These image tokens are concatenated with the [CLS] token and are fed into the image encoder. Then, the [CLS] token embedding at the last image encoder layer is used as the global image feature, denoted as vcls \u2208Rd. The feature dimension is represented by d. Text Encoder. We tokenize each textual description T full into a sequence of N tokens. The N of each sentence varies according to its length. The token sequence is bracketed with [SOS] and [EOS] to represent the start and the end of the sequence. Meanwhile, we examine each text token\u2019s noise level in T full, which is computed and stored in the previous training epoch. These values are used to perform NAM on T full to obtain T nam. After that, T full and T nam are fed into the text encoder independently. At the final text encoder layer, the global feature t\u2032 eos of T nam is utilized to calculate loss. T full is only used for NAM, which means it is not used for loss computation. Noise-Aware Masking. We utilize the image and text encoders\u2019 token embeddings in the l-th layers for the noiselevel estimation of T full. These embeddings are denoted as Fv = [vl 1, ..., vl M] and Ft = [tl 1, ..., tl N], respectively, where vl j \u2208Rd and tl j \u2208Rd. Furthermore, we calculate the token-wise similarity between a single text-image pair as follows: S = Ft T Fv, (1) where S \u2208RN\u00d7M is a similarity matrix and sij represents the cosine similarity between the i-th text token embedding and the j-th image token embedding. If one text token does not match the image, the similarity scores between this token\u2019s embedding and those of all the image tokens will be consistently be low. Therefore, the noise level of the i-th text token in T full can be estimated via: ri = 1 \u2212( max 1\u2264j\u2264M sij). (2) By applying Eq.(2) to each row of S, we obtain a vector r = [r1, ..., rN] that records the noise-level of all text tokens. Moreover, NAM applies the masking operation to all the text tokens in T full with different probabilities, which can be determined based on the noise-level values recorded in r. However, in the initial training stage, the values of elements in r may be high. This results in excessive masking of important tokens and hinders learning. To resolve this issue, we modify the expectation value of all r elements into a constant number as described below: Er = 1 N N X i=1 ri, (3) r\u2032 = [r1 \u2212Er + p, ..., rN \u2212Er + p], (4) where p is the average masking ratio. We utilize the r\u2032 values as the final probability that a text token might be masked. We include the pseudo code and visualization of NAM in the supplementary materials. Discussion. 
Computing r\u2032 and then applying NAM to obtain T nam in each iteration requires two forward passes. This additional time cost cannot be overlooked in largescale training. In contrast, our strategy computes r\u2032 for the next training epoch, which requires only one forward pass for each iteration. Furthermore, we initialize the r\u2032 values with the constant p in the first training epoch. 3.3. Optimization Following [24], we adopt the similarity distribution matching (SDM) loss to optimize our model. Given a minibatch of B matched image-text pairs {(vi cls, t \u2032i eos)}B i , we first establish the matching relationship between each image and text (i.e., {(vi cls, t \u2032j eos), yi,j}(1 \u2264i, j \u2264B)), where yi,j = 1 and yi,j = 0 denote a positive and a negative image-text pair, respectively. Then, we calculate the ground truth matching distribution qi for the i-th image, where its j-th element is qi,j = yi,j/ PB b=1 yi,b. Finally, we align the predicted probability distribution pi with qi as follows: Li2t = 1 B B X i=1 KL(pi\u2225qi) = 1 B B X i=1 B X j=1 pi,j log( pi,j qi,j + \u03f5), (5) where \u03f5 is a small number to avoid numerical problems and pi,j = exp(sim(vi cls, t \u2032j eos)/\u03c4) PB b=1 exp(sim(vi cls, t \u2032b eos)/\u03c4) . (6) sim(u, v) = u\u22a4v/\u2225u\u2225\u2225v\u2225denotes the cosine similarity between u and v, \u03c4 is a temperature coefficient. The SDM loss from text to image Lt2i can be computed by exchanging the position of vcls and t \u2032 eos in Eq. (5) and Eq. (6). Finally, the complete SDM loss is computed as follows: Lsdm = Li2t + Lt2i. (7) It is worth noting that since we randomly sample images from the large-scale LUPerson database, we assume that each image in a sampled batch has a unique identity. 4. Experiments 4.1. Datasets and Settings CUHK-PEDES. CUHK-PEDES [31] is a pioneer dataset in the text-to-image ReID field. Each image in this dataset has two textual descriptions. The training set comprises data on 11,003 identities, including 34,054 images and 68,108 textual descriptions. In contrast, the testing set contains 3,074 images and 6,156 textual descriptions from 1,000 identities. ICFG-PEDES. ICFG-PEDES [15] contains of 54,522 images from 4,102 identities. Each image has one textual description. The training set consists of 34,674 image-text pairs corresponding to 3,102 identities, while the testing set comprises 19,848 image-text pairs from the remaining 1,000 identities. RSTPReid. RSTPReid [70] includes 20,505 images captured by 15 cameras from 4,101 identities. Each identity has five images captured with different cameras and each image has two textual descriptions. According to the official data division, the training set incorporates data from 3,701 identities, while both the validation and testing sets include data from 200 identities, respectively. LUPerson. LUPerson [17] contains 4,180,243 pedestrian images sampled from 46,260 online videos, covering a variety of scenes and view points. The images are from over 200K pedestrians. Evaluation Metrics. Like existing works [4, 24, 42, 64], we adopt the popular Rank-k accuracy (k=1,5,10) and mean Average Precision (mAP) as the evaluation metrics for the three databases. Moreover, we consider the following two evaluation settings. Direct Transfer Setting. For this setting, the model is only trained on the LUPerson-MLLM dataset, and the above three benchmarks are tested immediately. 
This setting directly evaluates the quality of our dataset and the effectiveness of the proposed methods (i.e., TDE and NAM). Table 1. Ablation study on each key component in the direct transfer setting. \u2018CLIP\u2019 refers to directly using the original CLIP encoders provided in [38]. Method T s qwen T s shikra T d qwen T d shikra NAM CUHK-PEDES ICFG-PEDES RSTPReID R1 R5 mAP R1 R5 mAP R1 R5 mAP CLIP 12.65 27.16 11.15 6.67 17.91 2.51 13.45 33.85 10.31 Static Text \u2713 37.65 57.86 33.40 23.78 42.77 11.18 36.30 60.60 26.25 \u2713 39.70 62.60 36.09 19.02 35.63 9.67 36.90 62.65 28.33 \u2713 \u2713 46.00 66.82 41.27 26.74 44.22 13.23 41.10 66.95 30.21 Dynamic Text \u2713 40.72 62.36 37.21 24.16 41.24 11.32 38.65 64.70 28.81 \u2713 43.63 65.46 39.08 22.07 39.57 11.35 38.80 63.45 28.60 \u2713 \u2713 48.86 69.41 44.09 28.43 46.37 14.23 44.25 66.15 32.99 TDE \u2713 \u2713 \u2713 \u2713 50.32 71.36 45.74 29.12 47.96 15.13 45.70 70.75 33.23 NAM \u2713 \u2713 \u2713 \u2713 \u2713 52.64 71.62 46.48 32.61 50.79 16.48 47.75 70.75 34.73 Fine-tuning Setting. In this setting, we first pre-train our model on the LUPerson-MLLM dataset and then finetune it on each of the three benchmarks respectively. 4.2. Implementation Details Similar to previous studies [7, 24], we adopt CLIP-VITB/16 [38] as the image encoder and a 12-layer transformer as our text encoder. The input image resolution is resized to 384 \u00d7 128 pixels. Additionally, we apply random horizontal flipping, random cropping, and random erasing as data augmentation for the input images. Each textual description is first tokenized, with a maximum length of 77 tokens (including the [SOS] and [EOS] tokens). The hyper-parameter p is set to 0.15 and the temperature coefficient \u03c4 in Eq. (6) is set to 0.02. The model is trained using the Adam optimizer with a learning rate of 1e-5 and cosine learning rate decay strategy. We train each model on 8 TITAN-V GPUs, with 64 images per GPU. The training process lasts for 30 epochs. The versions of the mentioned LLM/MLLMs are ChatGPT-3.5-Turbo, Qwen-VL-Chat-7B, and Shikra-7B. 4.3. Ablation Study We randomly sample 0.1 million images from our LUPerson-MLLM database to accelerate the ablation study on the direct transfer evaluation setting. Then, we increase the amount of training images to 1.0 million to enhance the transfer ability of our text-to-image ReID models. Effectiveness of TDE. The experiments in Table 1 show that dynamic instruction is better than static instruction. For example, the model using only T d qwen outperforms that the one using T s qwen by about 3% in Rank-1 performance on the CUHK-PEDES database. On the same database and evaluation metric, the model that uses only T d shikra outperforms the one using T s shikra by about 4%. These experimental results indicate that enhancing sentence pattern diversity improves the transfer ability of ReID models. Therefore, we use the four descriptions for each image in the subsequent experiments. It is worth noting that none of the above experiments employ NAM. Instead, they mask every text token with an equal probability of p. Effectiveness of NAM. MLLM-generated textual descriptions often contain noise, which is harmful for model training. Replacing the equal masking strategy with our NAM method improves our model\u2019s Rank-1 performance by 2.32%, 3.49%, and 2.05% on the three databases, respectively. These improvements are even higher than the benefits of combining dynamic and static texts (i.e., 1.46%, 0.69%, and 1.45%). 
These experimental results demonstrate that NAM identifies the noisy words in the text and effectively reduces their impact. NAM allows the model to accurately align visual and textual features, thereby enhancing the direct transfer text-image ReID performance. The Layer where NAM Computes S. S contains pairwise similarity scores between features in Fv and Ft. This experiment investigates the optimal layer for obtaining Fv and Ft. The results are plotted in Fig. 3. We observe that the model\u2019s performance consistently improves regardless of the layer used to provide Fv and Ft. We also notice that the adopted encoders\u2019 10-th layer yields the best overall performance. Compared to the last encoder layer, the 10-th layer may offer more fine-grained information, facilitating more accurate similarity computation between token pairs. The Overall Masking Ratio for NAM. Our NAM method masks different text tokens with unequal probabilities, but it maintains an overall probability of p. In this experiment, we explore the optimal p value. To demonstrate NAM\u2019s advantages, we also include the results of the masking tokens with equal probabilities (referred to as \u201cEM\u201d). As shown in Table 4, NAM consistently outperforms EM with various p values. The optimal value of p is about 0.15. Combination of NAM and MLM. MLM requires the model to predict the masked text tokens. It has proven effective and is widely applied in NLP models. Recent textto-image ReID studies [24] confirm that MLM loss is beneficial when the textual descriptions are manually annotated. However, our NAM doesn\u2019t predict the masked tokens as the textual descriptions generated by MLLMs may be noisy. Table 2 shows that applying MLM loss to NAM is harmful, indicating the MLLM description noise is a crucial issue. The Data Size Impact. The dataset size is essential to 51.81 52.24 52.12 52.64 51.98 51.80 50.32 49.5 50 50.5 51 51.5 52 52.5 53 7 8 9 10 11 12 CUHK-PEDES w/ NAM w/o NAM Layer 32.17 32.31 32.33 32.61 32.08 31.62 29.12 28.2 29 29.8 30.6 31.4 32.2 33 7 8 9 10 11 12 ICFG-PEDES w/ NAM w/o NAM Rank-1(%) 47.75 48.05 48.30 47.75 47.55 47.40 45.70 45 45.6 46.2 46.8 47.4 48 48.6 7 8 9 10 11 12 RSTPReid w/ NAM w/o NAM Layer Layer Rank-1(%) Rank-1(%) Figure 3. Results of different layers for NAM to compute S. The encoders contain 12 layers in total. Best viewed with zoom-in. 51.62 52.64 52.25 53.11 52.24 49.95 50.23 49.62 49.87 49.87 48 49 50 51 52 53 54 0.1 0.15 0.2 0.25 0.3 CUHK-PEDES NAM EM 32.32 32.61 31.57 32.42 32.41 29.77 29.12 29.00 30.11 30.04 28 29 30 31 32 33 34 0.1 0.15 0.2 0.25 0.3 ICFG-PEDES NAM EM 47.40 47.75 48.40 47.00 48.30 45.75 45.79 46.75 45.55 45.75 44 45 46 47 48 49 0.1 0.15 0.2 0.25 0.3 NAM EM Rank-1(%) Rank-1(%) Rank-1(%) p p p RSTPReid Figure 4. Results of different overall masking ratios p for NAM. \u2018EM\u2019 represents masking all text tokens with the same probability p. Best viewed with zoom-in. Table 2. Results of the combination of NAM and the MLM loss. Method CUHK-PEDES ICFG-PEDES RSTPReid R1 mAP R1 mAP R1 mAP EM 50.32 45.74 29.12 15.13 45.70 33.23 NAM 52.64 46.48 32.61 16.48 47.75 34.73 NAM w/ MLM loss 48.79 43.86 27.36 14.16 44.45 33.07 Table 3. Comparisons with existing pre-training datasets in the direct transfer setting. 
Pretrain Dataset CUHK-PEDES ICFG-PEDES RSTPReid R1 mAP R1 mAP R1 mAP None 12.65 11.15 6.67 2.51 13.45 10.31 MALS [64] (1.5 M) 19.36 18.62 7.93 3.52 22.85 17.11 LUPerson-T [42] (0.95 M) 21.88 19.96 11.46 4.56 22.40 17.08 Ours (0.1 M) 52.64 46.48 32.61 16.53 47.75 34.73 Ours (1.0 M) 57.61 51.44 38.36 20.43 51.50 37.34 training. More pre-trained data improves the performance. We investigate the effect of training data size on the direct transfer ReID performance and summarize the results in Fig. 5. It is evident that the model\u2019s direct transfer performance steadily improves as the data amount increases. Finally, compared with the model using only 0.1 million training images, the Rank-1 performance of the model using 1.0 million training images is significantly promoted by 5.75% on the challenging ICFG-PEDES database, indicating that our approach can scale to large-scale database. 4.4. Comparisons with State-of-the-Art Methods Comparisons with Other Pre-training Datasets. MALS [64] and LUPerson-T [42] are two pre-training datasets in the field of text-to-image ReID. MALS [64] contains 1.5 M images, with textual descriptions obtained using the BLIP model [29]. However, it does not address the diversity and 12.65 52.64 54.67 57.00 57.61 6.67 32.61 36.66 38.29 38.36 13.45 47.75 50.20 50.60 51.50 0 10 20 30 40 50 60 0M 0.1M 0.3M 0.6M 1.0M CUHK-PEDES ICFG-PEDES RSTPReid Data Size Rank-1(%) Figure 5. Training data size\u2019s impact on our methods\u2019 direct transfer ReID performance. \u20180 M\u2019 refers to directly using the original CLIP encoders. Table 4. Comparisons with existing pre-training datasets in the fine-tuning setting. Init Parameters Source Target CUHK-PEDES ICFG-PEDES RSTPReid R1 mAP R1 mAP R1 mAP CLIP [38] CUHK-PEDES 73.48 66.21 43.04 22.45 52.55 39.97 ICFG-PEDES 33.90 31.65 63.83 38.37 47.45 36.83 PSTPReid 35.25 32.35 33.58 19.58 60.40 47.70 MALS [64] (1.5 M) CUHK-PEDES 74.05 66.57 44.53 22.66 53.55 39.17 ICFG-PEDES 40.38 36.83 64.37 38.85 49.00 38.20 PSTPReid 38.40 34.47 34.11 20.82 61.90 48.08 LuPerson-T [42] (0.95 M) CUHK-PEDES 74.37 66.60 44.30 22.67 53.75 38.98 ICFG-PEDES 35.07 32.47 64.50 38.22 48.05 38.21 PSTPReid 38.29 34.43 35.81 21.62 62.20 48.33 Ours (0.1 M) CUHK-PEDES 74.64 67.44 46.19 24.08 56.15 40.84 ICFG-PEDES 56.70 51.23 65.30 39.90 52.60 39.76 PSTPReid 56.69 51.40 42.70 25.69 64.05 49.27 Ours (1.0 M) CUHK-PEDES 76.82 69.55 49.38 26.92 59.60 44.70 ICFG-PEDES 61.20 55.60 67.05 41.51 54.80 42.56 PSTPReid 62.99 57.20 48.44 30.03 68.50 53.02 noise issues in the obtained descriptions. LUPerson-T [42] contains 0.95 M images that were also sampled from the LUPerson database [42]. It utilizes the CLIP model to predict pedestrian attributes and inserts them into manually defined templates as textual descriptions. We utilize the three databases to train the CLIP-ViT/B-16 model, incorporating the SDM loss. Finally, we evaluate the model\u2019s performance in both direct transfer and fine-tuning settings. Comparisons on the direct transfer setting are summarized in Table 3. It is shown that the model trained on the LUPerson-MLLM dataset achieves significantly better performance, even when we only sample 0.1 M images. This is because TDE enables diverse description generation. Moreover, NAM efficiently alleviates the impact of noise in textual descriptions. Combining both techniques results in a model that exhibits exceptional transfer abilities. In comparison, neither [64] nor [42] consider the noise problem in their obtained textual descriptions. 
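For completeness, the similarity distribution matching objective in Eqs. (5)-(7), used to train the models compared above, can be sketched in PyTorch as follows; the tensor names and identity-label handling are assumptions for illustration rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def sdm_loss(v_cls, t_eos, labels, tau=0.02, eps=1e-8):
    """Similarity Distribution Matching (sketch of Eqs. (5)-(7)).

    v_cls:  (B, d) global image features.
    t_eos:  (B, d) global text features of T^nam.
    labels: (B,) identity labels; y_ij = 1 iff labels[i] == labels[j].
    """
    v = F.normalize(v_cls, dim=-1)
    t = F.normalize(t_eos, dim=-1)

    sim = v @ t.t() / tau                              # cosine similarity / temperature
    y = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    q = y / y.sum(dim=1, keepdim=True)                 # ground-truth matching distribution
                                                       # (each image matches at least its own text)
    p_i2t = F.softmax(sim, dim=1)                      # Eq. (6)
    p_t2i = F.softmax(sim.t(), dim=1)

    # KL(p || q + eps) in both directions (Eqs. (5) and (7)).
    l_i2t = (p_i2t * torch.log(p_i2t / (q + eps))).sum(dim=1).mean()
    l_t2i = (p_t2i * torch.log(p_t2i / (q + eps))).sum(dim=1).mean()
    return l_i2t + l_t2i
```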
Table 4 displays the model comparisons in the finetuning setting. In this experiment, we adopt the IRRA method [24] in the fine-tuning stage and initialize its parameters with each of the above three pre-trained models, respectively. The fine-tuned models are evaluated on both Table 5. Comparisons with state-of-the-art methods in the traditional evaluation settings. Method Image Enc. Text Enc. CUHK-PEDES ICFG-PEDES RSTPReid R1 R5 R10 mAP R1 R5 R10 mAP R1 R5 R10 mAP CMPM/C [66] RN50 LSTM 49.37 79.27 43.51 65.44 74.26 ViTAA [52] RN50 LSTM 55.97 75.84 83.52 50.98 68.79 75.78 DSSL [69] RN50 BERT 59.98 80.41 87.56 32.43 55.08 63.19 SSAN [15] RN50 LSTM 61.37 80.15 86.73 54.23 72.63 79.53 43.50 67.80 77.15 LapsCore [55] RN50 BERT 63.40 87.80 LBUL [54] RN50 BERT 64.04 82.66 87.22 45.55 68.2 77.85 SAF [32] ViT-Base BERT 64.13 82.62 88.4 TIPCB [10] RN50 BERT 64.26 83.19 89.1 54.96 74.72 81.89 CAIBC [53] RN50 BERT 64.43 82.87 88.37 47.35 69.55 79.00 AXM-Net [16] RN50 BERT 64.44 80.52 86.77 58.70 LGUR [41] DeiT-Small BERT 65.25 83.12 89.00 59.02 75.32 81.56 47.95 71.85 80.25 IVT [43] ViT-Base BERT 65.69 85.93 91.15 56.04 73.60 80.22 46.70 70.00 78.80 LCR\u00b2S [61] RN50 TextCNN+BERT 67.36 84.19 89.62 59.20 57.93 76.08 82.40 38.21 54.95 76.65 84.70 40.92 UniPT [42] ViT-Base BERT 68.50 84.67 90.38 60.09 76.19 82.46 51.85 74.85 82.85 with CLIP [38] backbone: Han et al. [21] CLIP-RN101 CLIP-Xformer 64.08 81.73 88.19 60.08 IRRA [24] CLIP-ViT CLIP-Xformer 73.38 89.93 93.71 66.10 63.46 80.25 85.82 38.06 60.20 81.30 88.20 47.17 MALS [64] + IRRA CLIP-ViT CLIP-Xformer 74.05 89.48 93.64 66.57 64.37 80.75 86.12 38.85 61.90 80.60 89.30 48.08 LUPerson-T [42] + IRRA CLIP-ViT CLIP-Xformer 74.37 89.51 93.97 66.60 64.50 80.24 85.74 38.22 62.20 83.30 89.75 48.33 Ours (1.0 M) + IRRA CLIP-ViT CLIP-Xformer 76.82 91.16 94.46 69.55 67.05 82.16 87.33 41.51 68.50 87.15 92.10 53.02 with ALBEF [28] backbone: RaSa [4] CLIP-ViT BERT-base 76.51 90.29 94.25 69.38 65.28 80.40 85.12 41.29 66.90 86.50 91.35 52.31 APTM [64] Swin-B BERT-base 76.53 90.04 94.15 66.91 68.51 82.99 87.56 41.22 67.50 85.70 91.45 52.56 Ours (1.0 M) + APTM Swin-B BERT-base 78.13 91.19 94.50 68.75 69.37 83.55 88.18 42.42 69.95 87.35 92.30 54.17 in-domain and cross-domain text-to-image ReID scenarios. According to the results in Table 4, two conclusions can be derived. First, compared with the CLIP model [38], pretraining using the three pre-training datasets exhibits performance promotion for in-domain and cross-domain tasks. Second, pre-training using LUPerson-MLLM exhibits the most remarkable performance promotion. For example, in the ICFG-PEDES \u2192CUHK-PEDES setting, LUPersonMLLM outperforms the other two models by 20.82% and 26.13% in Rank-1 accuracy, respectively. These experimental results further validate the effectiveness of our methods. Comparisons in the Traditional Evaluation Settings. Comparisons with state-of-the-art approaches are summarized in Table 5. We observe that our method achieves the best performance. With our pre-trained model parameters, the Rank-1 accuracy and mAP of IRRA are improved by 8.30% and 5.85% on the RSTPReid database, respectively. Besides, pre-training with our LUPerson-MLLM dataset is more effective than with the MALS and LUPerson-T datasets. This is because we effectively resolve the diversity and noise issues in the MLLM descriptions, facilitating more robust and discriminative feature learning. 5. 
Conclusion and Limitations This paper explores the challenging transferable text-toimage ReID problem by harnessing the image captioning capability of MLLMs. We acknowledge diversity and noise as critical issues in utilizing the obtained textual descriptions. To address these two problems, we introduce the Template-based Diversity Enhancement (TDE) method to encourage diverse description generation and construct a large-scale dataset named LUPerson-MLLM. In addition, we proposed the NAM method to mitigate the impact of noisy textual descriptions. Extensive experiments demonstrate that TDE and NAM significantly improve the model\u2019s transfer power. However, these methods have limitations: the effectiveness of TDE is limited by the number of sentence templates; NAM may occasionally fail to mask noisy tokens. In the future, we aim to explore more powerful methods to address diversity and noise issues in MLLMgenerated descriptions. Broader Impacts. TDE addresses fixed sentence patterns generated by MLLMs, inspiring effective instruction design to harness MLLMs\u2019 capabilities. Meanwhile, NAM tackles text noise generated by MLLMs, facilitating wider MLLM adoption for practical real-world problems. Acknowledgement. This work was partially supported by the Major Science and Technology Innovation 2030 \u201cNew Generation Artificial Intelligence\u201d key project (No. 2021ZD0111700), the National Natural Science Foundation of China under Grants 62076101 and 62172354, the Guangdong Basic and Applied Basic Research Foundation under Grant 2023A1515010007, the Guangdong Provincial Key Laboratory of Human Digital Twin under Grant 2022B1212010004, and the Yunnan Provincial Major Science and Technology Special Plan Projects under Grant 202202AD080003. We also gratefully acknowledge the support and resources provided by the Yunnan Key Laboratory of Media Convergence, the CAAI Huawei MindSpore Open Fund and the TCL Young Scholars Program."
16
+ }
intro_8K/test_introduction_short_2405.05007v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05007v1",
3
+ "title": "HC-Mamba: Vision MAMBA with Hybrid Convolutional Techniques for Medical Image Segmentation",
4
+ "abstract": "Automatic medical image segmentation technology has the potential to expedite\npathological diagnoses, thereby enhancing the efficiency of patient care.\nHowever, medical images often have complex textures and structures, and the\nmodels often face the problem of reduced image resolution and information loss\ndue to downsampling. To address this issue, we propose HC-Mamba, a new medical\nimage segmentation model based on the modern state space model Mamba.\nSpecifically, we introduce the technique of dilated convolution in the HC-Mamba\nmodel to capture a more extensive range of contextual information without\nincreasing the computational cost by extending the perceptual field of the\nconvolution kernel. In addition, the HC-Mamba model employs depthwise separable\nconvolutions, significantly reducing the number of parameters and the\ncomputational power of the model. By combining dilated convolution and\ndepthwise separable convolutions, HC-Mamba is able to process large-scale\nmedical image data at a much lower computational cost while maintaining a high\nlevel of performance. We conduct comprehensive experiments on segmentation\ntasks including skin lesion, and conduct extensive experiments on ISIC17 and\nISIC18 to demonstrate the potential of the HC-Mamba model in medical image\nsegmentation. The experimental results show that HC-Mamba exhibits competitive\nperformance on all these datasets, thereby proving its effectiveness and\nusefulness in medical image segmentation.",
5
+ "authors": "Jiashu Xu",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "eess.IV",
9
+ "cats": [
10
+ "eess.IV",
11
+ "cs.CV"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Mamba",
15
+ "gt": "Modern medical research is inextricably linked to the utilization of various medical images[1]. Medical images are designed to provide an accurate visual representation of the structure and function of various tissues and organs within the human body. They assist medical professionals and scientific researchers in exploring the normal and abnormal conditions of patients in great detail, thereby serving clinical and research purposes. In both laboratory-based cutting-edge medical research and in the clinical setting, medical image analysis plays a pivotal role in facilitating scientific inference and diagnosis. [2]Automatic medical image segmentation technology has the potential to expedite pathological diagnoses, thereby enhancing the efficiency of patient care. In recent years, a considerable amount of research on the computer-aided system for healthcare applications has been conducted[3, 4, 5].CNN-based and Transformer-based models have demonstrated excellent performance in a variety of vision tasks, especially in medical image segmentation. UNet[6], as a representative of CNN-based models, is known for its simple structure and scalability, and many subsequent improvements are based on this U-shaped architecture. TransUnet[7] is a pioneer in the field of Transformer-based models, it initially employs the Vision Transformer (ViT)[8] for feature extraction during the encoding phase and a Convolutional Neural Network (CNN) during the decoding phase. It demonstrates a robust capacity to capture global information. TransFuse[9] integrates the parallel architectures of ViT and CNN to simultaneously capture both local and global features. Furthermore, Swin-UNet[10] integrates Swin Transformer[11] with a U-shaped architecture, representing the inaugural instance of a U-shaped model that is exclusively based on Transformer. arXiv:2405.05007v1 [eess.IV] 8 May 2024 A PREPRINT - MAY 9, 2024 However, although existing models have achieved some success in feature extraction, they still face the problem of reduced image resolution and information loss due to downsampling when dealing with medical images with complex textures and structures. To address this issue, Yu F. and Koltun V.[12] proposed the technique of dilated convolution. Dilated convolution allows the model to capture a wider range of contextual information without increasing the computational cost by extending the receptive field of the convolution kernel. Because it has the ability to enhance the perception of different scale structures of images without losing image details, it is especially suitable for medical images. However, since the dilated convolution increases the perceptual field by inserting \"0\" between the elements of the convolution kernel, the captured features may not be coherent or accurate in some cases. In recent times, studies based on state space models (SSMs) have attracted considerable interest from researchers [13, 15, 14].Building on the findings of classical SSM research[?], modern SSMs (e.g., Mamba[15]) not only establish long-range dependencies but also exhibit linear complexity with respect to input size. In particular, U-Mamba[16] demonstrates its potential by combining SSM with CNN for the first time in the context of medical image segmentation tasks. Inspired by this, we propose HC Mamba, a model based on SSM, which integrates a variety of convolution methods optimized for medical images, in order to further demonstrate its potential in the task of medical image segmentation. 
We introduce the technique of dilated convolution in the HC-Mamba model. By feeding the features generated by the dilated convolution into the SSM, the state-transition capability of the SSM can be utilized to enhance the spatial correlation between the features, thus compensating for the discontinuities introduced by the dilation holes. In addition, the HC-Mamba model employs depthwise separable convolutions[17], a convolution method that decomposes the traditional convolution operation into two parts, depthwise convolution and pointwise convolution, which significantly reduces the number of parameters and the computational cost of the model. By combining dilated convolutions and depthwise separable convolutions, HC-Mamba is able to process large-scale medical image data at a much lower computational cost while maintaining a high level of performance, which is particularly important for real-time medical image processing and large-scale medical data analysis. We conduct comprehensive experiments on segmentation tasks including organ, skin lesion, and brain tumor segmentation, with extensive experiments on ISIC17 and ISIC18[18], to demonstrate the potential of the HC-Mamba model in medical image segmentation. The experimental results show that HC-Mamba exhibits competitive performance on all these datasets, thereby proving its effectiveness and usefulness in medical image segmentation. In conclusion, our contributions to the field can be summarized as follows: \u2022 We propose a hybrid convolution Mamba model (HC-Mamba) for medical image segmentation, which combines a variety of convolution methods optimized for medical images to enlarge the receptive field of the model and reduce its parameter count. \u2022 We propose the HC-SSM module to enhance the model\u2019s ability to extract features. \u2022 We conduct extensive performance evaluations of the proposed model. The results show that our model achieves high accuracy (94.84%), mIoU (80.60%), and DSC (89.25%).",
16
+ "main_content": "2.1 Preliminaries Modern models based on State Space Models (SSM), particularly the Structured State Space Sequence Model (S4) and Mamba model, are classical continuous systems. The system maps a one-dimensional input function or sequence x(t) \u2208R to an output y(t) \u2208R via an implicit latent state h(t) \u2208RN, as shown in Equation 1. \ufffdh\u2032(t) = Ah(t) + Bx(t) y(t) = Ch(t) (1) where, A \u2208RN\u00d7N is the state matrix, while B \u2208RN\u00d71 and C \u2208RN\u00d71 represent the projection parameters.The process is shown in the Figure 1.In the figure, the symbol D represents a skip connection, which can be understood as a transformed residual connection. Consequently, the portion of the graph that excludes D is typically designated as SSM. 2 A PREPRINT MAY 9, 2024 Figure 1: SSM(state space model)process diagram To adapt these continuous systems for deep learning applications, S4 and Mamba discretize the system. Specifically, a time scale parameter, or step size \u2206, is introduced, and fixed discretization rules such as Zero Order Hold (ZOH) are used to transform A and B into discrete parameters \u02c6 A and \u00af B: ( \u02c6 A = exp(\u2206A) \u00af B = \u2206A\u22121(exp(\u2206A) \u2212I)\u2206B (2) After discretization, the state space model computation can be implemented either through linear recursion: \u001ah\u2032(t) = Ah(t) + Bx(t) y(t) = Ch(t) (3) or global convolution: \u001aK = (CB, CAB, . . . , CAL\u22121B) y = x \u2217K (4) where, K \u2208RL represents a structured convolution kernel, and L denotes the length of the input sequence x. 2.2 Model structure The structure of HC-Mamba can be described as patch embedding layer, HC-SSM Block and patch merging layer. the model architecture is shown in Figure 2(a). 3 A PREPRINT MAY 9, 2024 Figure 2: (a) Overall structure of HC-Mamba. (b) Overall structure of HC-SSM Bloc In the HC-Mamba , the Patch Embedding layer first partitions the input image x \u2208RH\u00d7W \u00d73 into non-overlapping blocks of size 4x4. This operation maps the dimensions of the image to C dimensions (typically C = 96), resulting in an embedded image x\u2032 \u2208R H 4 \u00d7 W 4 \u00d7C. Subsequently, x\u2032 undergoes a layer normalization to standardize the embedded image before entering the main backbone of the HC-Mamba. The backbone consists of four stages. In particular, after the output of the first three stages, a merging layer is used to reduce the height and width of the input features while increasing the number of channels. We employed [2, 4, 2, 2] HC-SSM blocks in the four stages, with each stage having [C, 2C, 4C, 8C] channels respectively. 2.2.1 SS2D module SS2D module is the core of the HC-SSM block, which includes three key components: scan expansion, S6 block, and scan merging. Scan expansion decomposes the input image into independent sequences along four directions (up, down, left, and right), a step that ensures a wide spatial coverage of information and achieves multidirectional feature capture. Next, the S6 block uses a selectivity mechanism to impose choices on the parameters of the state-space model in order to accurately identify and extract the useful information while filtering out the irrelevant parts. Specifically, the block takes as input the feature format of [B, L, D], where B is the batch size, L is the sequence length, and D is the feature dimension. 
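Before continuing with the SS2D details, a small PyTorch sketch of the zero-order-hold discretization and the resulting discrete recursion recalled in the preliminaries (Eqs. (2)-(3)); shapes and parameter values here are assumptions for a scalar input sequence, not the HC-Mamba implementation.

```python
import torch

def discretize_zoh(A, B, delta):
    # Zero-order hold (Eq. (2)); assumes delta * A is invertible.
    dA = delta * A                                     # (N, N)
    A_bar = torch.matrix_exp(dA)
    B_bar = torch.linalg.solve(dA, (A_bar - torch.eye(A.shape[0])) @ (delta * B))
    return A_bar, B_bar

def ssm_scan(A_bar, B_bar, C, x):
    # Discrete recursion: h_k = A_bar h_{k-1} + B_bar x_k, y_k = C h_k.
    h = torch.zeros(A_bar.shape[0], 1)
    ys = []
    for xk in x:                                       # x: (L,) scalar inputs
        h = A_bar @ h + B_bar * xk
        ys.append((C @ h).squeeze())
    return torch.stack(ys)

# Tiny example with a 4-dimensional latent state.
N = 4
A = -torch.eye(N) + 0.1 * torch.randn(N, N)
B, C = torch.randn(N, 1), torch.randn(1, N)
A_bar, B_bar = discretize_zoh(A, B, delta=0.1)
y = ssm_scan(A_bar, B_bar, C, torch.randn(32))         # (32,) output sequence
```

The equivalent global-convolution form (Eq. (4)) produces the same outputs with the structured kernel K = (CB\u0304, CA\u0304B\u0304, ..., CA\u0304^{L-1}B\u0304).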
The features are first transformed through a linear layer, after which the update and output equations in the state space model are applied to produce the final output features. Finally, a scan-and-merge operation reconfigures these transformed sequences to produce an output image that matches the dimensions of the original input image. Through this subtle series of operations, the SS2D module provides powerful feature extraction and processing capabilities for the HC-SSM block. 2.2.2 HC-SSM Block HC-SSM block is the core module of HC-Mamba, as shown in Figure 2(b). We propose a two-branch feature extraction module based on SS2D. First, the module input is split into two sub-inputs of equal size using the channel split operation. Then, the two sub-inputs are fed into two branch modules, SSM branch and HC-Conv branch, respectively. In the SSM branch, the input undergoes a layer normalization and then enters the SS2D module, where the input features are first passed through a linear mapping for dimensionality enhancement, followed closely by a convolutional layer with depth-separable convolutions, which preserves the dimensionality and at the same time improves the localization processing of the features by grouping them. Then, the SiLU activation function is applied, a nonlinear transformation 4 A PREPRINT MAY 9, 2024 is introduced to enrich the model\u2019s expressiveness, and finally, the processed features are remapped to the original feature space to obtain the output of the SSM branch. In the HC-Conv branch, we introduce dilated convolution to expand the receptive field of the convolution kernel to capture a wider range of contextual information. This technique is particularly suitable for medical images, as it improves the model\u2019s ability to perceive structures at different scales of the image without losing image details. Meanwhile, we use an expansion strategy with an expansion rate of 1,2,3,1 to avoid the gridding effect that occurs with discontinuous data. Meanwhile, compared with the expansion rate of 2,2,2, the expansion rate of 1,2,3 strategy can ensure the continuity of the sensory field, an example is shown in Figure 3. Figure 3: Comparison diagram between expansion rate of 1,2,3 (left) and expansion rate of 2,2,2 (right) In comparison to the use of three layers of normal convolution, a larger sensory field can be obtained, examples of which can be seen in Figure 4. Figure 4: Receptive field diagram using three layers of ordinary convolution Meanwhile, the use of a sawtooth-like expansion rate strategy(i.e., an expansion rate of 1,2,3,1) allows the refocusing of local features after multi-scale feature extraction and helps to maintain spatial continuity of features, while the use of a smaller expansion rate at the end of the sequence allows the model to refocus on smaller regions that may contain important information. 5 A PREPRINT MAY 9, 2024 Finally, we merge the outputs of the two branches along the channel dimension of the feature map and use a parameterfree lightweight operation, the channel shuffle operation, to facilitate information interaction between the channels of the two sub-inputs. 3 Experiments 3.1 Datasets We conduct comprehensive experiments on HC-Mamba for medical image segmentation tasks. Specifically, we evaluate the performance of HC-Mamba on medical image segmentation tasks on the ISIC17, ISIC18 datasets. 
\u2022 ISIC2017:The ISIC2017 dataset contains three categories of diseases, melanoma, seborrheic keratosis, and benign nevus, 2,750 images, ground truth, and category labels. There are 2,000 images in the training set, 150 images in the validation set, and 600 images in the test set, and the color depth of the skin disease images is 24 bits, and the image sizes range from 767\u00d7576 to 6,621\u00d74,441. The validation and test sets also include unlabeled hyperpixel images. The category labels are stored in tables and the datasets need to be preprocessed before training the model. \u2022 ISIC2018:The ISIC2018 dataset contains different numbers of disease images for classification and segmentation, for the segmentation task, a total of 2,594 images were used as the training set, and 100 and 1,000 images were used as the validation and test sets, respectively. For the classification task, a total of 12,500 images were included, of which the training set contained a total of 10,015 images of 7 categories of diseases, namely actinic keratoses (327), basal cell carcinoma (514), benign keratoses (1,099), dermatofibromas (115), melanomas (1,113), melanocytic naevi (6,705), and vascular skin lesions (142). The seven classes of images in the classification task dataset are mixed in the same folder, and the labels are stored in tables that require preprocessing. 3.2 Results We compare HC-Mamba with some state-of-the-art models and some recent mamba-based model, presenting the experimental results in Table 1. In order to fully demonstrate that HC-Mamba\u2019s potential in medical image segmentation tasks directly benefits from SSM, we did not use any pre-training strategies. For the ISIC2017 and ISIC2018 datasets, HC-Mamba performs well on mIoU and Dice compared to other models. Specifically, HC-Mamba has a 1.46% and 1% advantage over MedMamba on mIoU and Dice, respectively, while it has a 2.74% and 1.7% advantage over Unet on mIoU and Dice, respectively. Table 1: Comparative experimental results on the ISIC17 and ISIC18 dataset. (Bold indicates the best.) Dataset Model mIoU(%)\u2191 DSC(%)\u2191 Acc(%)\u2191 Spe(%)\u2191 Sen(%)\u2191 ISIC17 UNet[6] 76.98 85.99 94.65 97.43 86.82 UTNetV2[22] 76.35 86.23 94.84 98.05 84.85 TransFuse[9] 77.21 86.40 95.17 97.98 86.14 MALUNet[23] 76.78 87.13 95.18 98.47 84.78 VM-UNet[20] 77.59 87.03 95.40 97.47 86.13 MedMamba[21] 78.82 88.15 95.01 97.50 86.62 HC-Mamba 79.27 88.18 95.17 97.47 86.99 ISIC18 UNet[6] 77.86 87.55 94.05 96.69 85.86 UNet++ [24] 76.31 85.83 94.02 95.75 88.65 Att-UNet [25] 76.43 86.91 94.13 96.23 87.60 SANet [26] 77.52 86.59 94.39 95.97 89.46 VM-UNet[20] 77.95 87.61 94.13 96.99 85.23 MedMamba[21] 79.13 88.35 94.23 95.68 89.74 HC-Mamba 80.60 89.25 94.84 97.08 87.90 3.3 Ablation experiments We compare HC-Mamba with and without Dilated convolution and depthwise separable convolution(DW convolution), presenting the experimental results in Table 2.Compared with model without Dilated convolution and depthwise 6 A PREPRINT MAY 9, 2024 separable convolution, HC-Mamba has only 12M parameters, a reduction of nearly 60%, while maintaining the same high level of performance. Table 2: Ablation studies on dilated convolution and depthwise separable convolutions. Convolution Method Evaluation parameter count mIoU(%)\u2191DSC(%)\u2191 Count(M)\u2193 77.85 87.51 27.43 Dilated convolution 78.85 88.17 24.68 DW convolution 77.95 87.61 13.06 Both 80.60 89.25 13.88 4 Discussion We propose HC-Mamba, a SSM model based on optimized convolution of multiple medical images. 
Its performance on medical image segmentation tasks is superior to that of several current state-of-the-art models and recent Mamba-based models. We introduce the technique of dilated convolution in the HC-Mamba model. Dilated convolution enables the model to capture a more extensive range of contextual information without increasing the computational cost, by extending the receptive field of the convolution kernel. This technique is particularly well suited to medical images because it enhances the model\u2019s ability to perceive structures at different scales of the image without losing image detail. Concurrently, by feeding the features generated by the dilated convolution into the SSM, the state-transition capability of the SSM can be utilized to enhance the spatial correlation between features, thus compensating for the discontinuities introduced by the dilation holes; this is one of the reasons for the excellent performance of HC-Mamba on medical images. In addition, the HC-Mamba model employs depthwise separable convolutions, a convolution method that decomposes the traditional convolution operation into two parts, depthwise convolution and pointwise convolution, significantly reducing the number of parameters and the computational cost of the model. By combining dilated convolution and depthwise separable convolutions, HC-Mamba is able to process large-scale medical image data at a much lower computational cost while maintaining a high level of performance. Compared with existing Mamba-based segmentation models such as VM-UNet, which has nearly 30M parameters, and MedMamba, which has nearly 25M parameters, HC-Mamba has only 13M parameters, a reduction of nearly 50%, while maintaining the same high level of performance, which provides a better basis for deploying it on lower-end devices. 5 Conclusion We propose a Mamba-based model for medical image segmentation (HC-Mamba) that incorporates multiple convolution methods optimized for medical images, together with the HC-SSM module for enhancing the model\u2019s ability to extract features. The proposed method achieves excellent performance compared to several state-of-the-art models and recent Mamba-based models, and the results show that HC-Mamba performs strongly on medical image segmentation tasks while remaining competitive with architectures widely used in this field. We summarize our future work as follows: 1) We will further explore and test the potential of HC-Mamba on medical datasets obtained from other imaging technologies. 2) We will further use explainable artificial intelligence to analyze HC-Mamba\u2019s decision-making mechanism."
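To ground the hybrid-convolution design described in this entry, here is a minimal PyTorch sketch of a dilated, depthwise separable convolution stack with the sawtooth dilation rates 1, 2, 3, 1; the channel sizes, module names, and placement are illustrative assumptions rather than the released HC-Mamba code.

```python
import torch
import torch.nn as nn

class DilatedDWSeparableConv(nn.Module):
    """One depthwise separable conv with a configurable dilation rate."""
    def __init__(self, channels, dilation):
        super().__init__()
        # Depthwise: one 3x3 filter per channel; padding = dilation keeps the spatial size.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=channels, bias=False)
        # Pointwise: 1x1 conv mixes information across channels.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

class HCConvBranch(nn.Module):
    """Sawtooth dilation schedule (1, 2, 3, 1) to avoid the gridding effect."""
    def __init__(self, channels, dilations=(1, 2, 3, 1)):
        super().__init__()
        self.blocks = nn.Sequential(*[DilatedDWSeparableConv(channels, d) for d in dilations])

    def forward(self, x):                # x: (B, C, H, W)
        return self.blocks(x)

# Example: half of a 96-channel feature map routed through the conv branch.
feats = torch.randn(2, 48, 56, 56)
out = HCConvBranch(48)(feats)            # same shape as the input
```

Reverting to a small dilation at the end of the schedule mirrors the paper's motivation: re-focusing on local detail after the receptive field has been enlarged by the larger rates.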
17
+ }
intro_8K/test_introduction_short_2405.05216v1.json ADDED
@@ -0,0 +1,16 @@
+ {
2
+ "url": "http://arxiv.org/abs/2405.05216v1",
3
+ "title": "FinePOSE: Fine-Grained Prompt-Driven 3D Human Pose Estimation via Diffusion Models",
4
+ "abstract": "The 3D Human Pose Estimation (3D HPE) task uses 2D images or videos to\npredict human joint coordinates in 3D space. Despite recent advancements in\ndeep learning-based methods, they mostly ignore the capability of coupling\naccessible texts and naturally feasible knowledge of humans, missing out on\nvaluable implicit supervision to guide the 3D HPE task. Moreover, previous\nefforts often study this task from the perspective of the whole human body,\nneglecting fine-grained guidance hidden in different body parts. To this end,\nwe present a new Fine-Grained Prompt-Driven Denoiser based on a diffusion model\nfor 3D HPE, named \\textbf{FinePOSE}. It consists of three core blocks enhancing\nthe reverse process of the diffusion model: (1) Fine-grained Part-aware Prompt\nlearning (FPP) block constructs fine-grained part-aware prompts via coupling\naccessible texts and naturally feasible knowledge of body parts with learnable\nprompts to model implicit guidance. (2) Fine-grained Prompt-pose Communication\n(FPC) block establishes fine-grained communications between learned part-aware\nprompts and poses to improve the denoising quality. (3) Prompt-driven Timestamp\nStylization (PTS) block integrates learned prompt embedding and temporal\ninformation related to the noise level to enable adaptive adjustment at each\ndenoising step. Extensive experiments on public single-human pose estimation\ndatasets show that FinePOSE outperforms state-of-the-art methods. We further\nextend FinePOSE to multi-human pose estimation. Achieving 34.3mm average MPJPE\non the EgoHumans dataset demonstrates the potential of FinePOSE to deal with\ncomplex multi-human scenarios. Code is available at\nhttps://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024.",
5
+ "authors": "Jinglin Xu, Yijie Guo, Yuxin Peng",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Given monocular 2D images or videos, 3D Human Pose Estimation (3D HPE) aims to predict the positions of human 1 arXiv:2405.05216v1 [cs.CV] 8 May 2024 body joints in 3D space. It is vital in various applications, including self-driving [50, 56], sports analysis [13, 31, 46], abnormal detection [9, 45], and human-computer interaction [11, 25, 42]. Considering the expensive computational costs of directly obtaining 3D human poses from 2D contents, 3D HPE is usually decomposed into two stages: 1) detecting 2D keypoints in images or videos [5, 7, 24, 39], and 2) mapping 2D keypoints to 3D human poses [6, 10, 35, 48, 52]. In this work, we mainly focus on the second stage, estimating 3D human poses given 2D keypoints. Existing monocular 3D HPE methods [4, 6, 10, 17\u2013 19, 27, 28, 35, 36, 43, 44, 47, 48, 52, 54, 59, 61] usually have three challenges as follows: 1) Uncertainty: the depth ambiguity inherently exists in the mapping from 2D skele- tons to 3D ones (one-to-many); 2) Complexity: flexible human body structure, complex inter-joint relationships, and a high limb freedom degree lead to self-occlusion or rare and complicated poses; 3) Generalizability: current publicly available 3D HPE datasets have limited action classes, and thus, the models trained on such data are prone to overfitting and difficult to generalize to more diverse action classes. To address these issues, we consider improving the 3D HPE model performance by enhancing the input information. We found that existing methods ignore accessible texts and naturally feasible knowledge of humans while they promise to provide the model with more guidance. We explicitly utilize (1) the action class of human poses, (2) kinematic information \u201cspeed\u201d, and (3) the way that different human body parts (e.g., person, head, body, arms, and legs) move in human activities to build fine-grained part-aware prompts for the reconstruction task. Specifically, we incorporate a fine-grained part-aware prompt learning mechanism into our framework to drive 3D human pose estimation via vision- language pre-trained models. It is well known that text prompts play a crucial role in various downstream tasks for vision-language pre-training models (e.g., CLIP [30]). However, manually designing prompt templates is expensive and cannot ensure that the final prompt is optimal for the 3D HPE task. Thus, we create a new fine-grained part-aware prompt learning mechanism that adaptively learns modifiers for different human body parts to precisely describe their movements from multiple granularities, including action class, speed, the whole person, and fine-grained human body parts. This new mechanism, coupled with diffusion models, possesses controllable high-quality generation capability, which is beneficial in addressing the challenges of the 3D human pose estimation task. In this work, we propose a Fine-grained Prompt-driven Denoiser (FinePOSE) based on diffusion models for 3D human pose estimation, in Fig. 1, which is composed of a fine-grained part-aware prompt learning (FPP) block, fine-grained prompt-pose communication (FPC) block, and prompt-driven timestamp stylization (PTS) block. Con- cretely, the FPP block encodes three kinds of information about the human pose, including action class, coarse- and fine-grained parts of humans like \u201cperson, head, body, arms, legs\u201d, and kinematic information \u201cspeed\u201d, and integrates them with pose features for serving subsequent processes. 
Then, the FPC block injects fine-grained part-aware prompt embedding into noise 3D poses to establish fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. To handle 3D poses with different noise levels, the PTS block intro- duces the timestamp coupled with fine-grained part-aware prompt embedding into the denoising process to enhance its adaptability and refine the prediction at each noise level. Our contributions can be summarized as follows: \u2022 We propose a new fine-grained part-aware prompt learning mechanism coupled with diffusion models that possesses human body part controllable high-quality generation ca- pability, beneficial to the 3D human pose estimation task. \u2022 Our FinePOSE encodes multi-granularity information about action class, coarse- and fine-grained human parts, and kinematic information, and establishes fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. \u2022 Extensive experiments illustrate that our FinePOSE ob- tains substantial improvements on Human3.6M and MPI- INF-3DHP datasets and achieves state-of-the-art. More experiments on EgoHumans demonstrate the potential of FinePOSE to deal with complex multi-human scenarios.",
15
+ "main_content": "Diffusion Models. Diffusion models [12, 26, 37, 38] are a kind of generative models that sequentially add a series of noise with different levels to the raw data, gradually transforming it from an original data distribution to a noisy distribution, and subsequently reconstructing the original data by denoising. Diffusion models have strong capabilities in many applications, from 2D image or video generation/editing [1\u20133, 16, 49] to 3D human pose estimation/generation [10, 17, 19, 27, 35, 47, 48, 52, 54, 59]. The 3D HPE task, for example, encounters various difficulties, including occlusions, limited training data, and inherent ambiguity in pose representations. Therefore, diffusion models\u2019 ability to generate high-fidelity 3D human poses makes them more suitable for 3D HPE. 3D Human Pose Estimation. Considering that extracting 2D human skeletons from videos or images requires expensive costs, the 3D human pose estimation task is usually divided into two phases: (1) estimating 2D positions of human joints from images or videos [5, 7, 22, 41], and (2) mapping 2D positions to the 3D space to estimate the 3D positions of human joints [4, 6, 10, 17\u2013 19, 27, 28, 35, 36, 43, 47, 48, 52, 54, 59, 61]. In this work, mate the 3D positions of human joints [4, 6, 10, 17\u2013 19, 27, 28, 35, 36, 43, 47, 48, 52, 54, 59, 61]. In this work, we focus on the second phase. Early, TCN [29] used a 2 Diffusion Process Denoising Process Fine-grained Prompt-driven Denoiser (FinePOSE) Fine-grained Part-aware Prompt Learning (FPP) CLIP Training & Inference Add Noise Contaminated 3D poses: Training Fine-grained Prompt-driven Denoiser Fine-grained Prompt-pose Communication (FPC) Fine-grained Prompt-Pose MHCA PTS Spatial MHSA Temporal MHSA spatial temporal spatial-temporal SpatialTemporal MHSA Fine-grained Part-aware Prompts head arms body legs action class person speed 2D poses: Uncontaminated 3D poses: Reconstructed 3D poses: Figure 2. The architecture of the proposed FinePOSE. In the diffusion process, Gaussian noise is gradually added to the ground-truth 3D poses Y0, generating the noisy 3D poses Yt for the timestamp t. In the denoising process, Yt, X and t are fed to fine-grained prompt-driven denoiser D to reconstruct pure 3D poses \u02c6 Y0. D is composed of a Fine-grained Part-aware Prompt learning (FPP) block, a Fine-grained Prompt-pose Communication (FPC) block, and a Prompt-driven Timestamp Stylization (PTS) block, where FPP provides more precise guidance for all human part movements, FPC establishes fine-grained communications between learnable prompts and poses for enhancing the denoising capability, and PTS integrates learned prompt embedding and current timestamp for refining the prediction at each noise level. fully convolutional network based on dilated temporal convolutions over 2D keypoints to estimate 3D poses in video. SRNet [51] proposed a split-and-recombine approach, leading to appreciable improvements in predicting rare and unseen poses. Anatomy [6] decomposed the task into bone direction prediction and bone length prediction, from which the 3D joint locations can be derived entirely. Recently, MixSTE [52] used temporal and spatial transformers alternately to obtain better spatio-temporal features. MotionBERT [59] proposed a pretraining stage to recover the underlying 3D motion from noisy partial 2D observations. GLAGCN [48] globally modeled the spatio-temporal structure for 3D human pose estimation. 
D3DP [35] proposed the jointlevel aggregation strategy to benefit from all generated poses. Unlike previous methods, our approach proposes a new finegrained part-aware prompt learning mechanism coupled with diffusion models that possess controllable, high-quality generation capability of human body parts, which benefits the 3D human pose estimation task. Prompt Learning. Prompt learning has been widely used in the computer vision community [8, 21, 57, 58]. Typically, CoOp [58] utilized a continuous prompt optimization from downstream data instead of hand-craft design, the pioneering work that brings prompt learning to adapt pre-trained vision language models. CoCoOp [57] extended CoOp by learning image conditional prompts to improve generalization. ProDA [21] learned a prompt distribution over the output embedding space. VPT [8] introduced variational prompt tuning by combining a base learned prompt with a residual vector sampled from an instance-specific underlying distribution. PointCLIPV2 [60] combined CLIP [30] with GPT [20] to be a unified 3D open-world learner. Unlike the above methods, we propose a new fine-grained part-aware prompt learning mechanism, which encodes multi-granularity information about action class, coarseand fine-grained human parts, and kinematic data, and establishes fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. 3. The Proposed Approach: FinePOSE Given a 2D keypoints sequence X \u2208RN\u00d7J\u00d72, constructed by N frames with J joints in each, the proposed approach is formulated to predict the 3D pose sequence Y \u2208RN\u00d7J\u00d73. Considering the high-quality generation capability of the text-controllable denoising process of diffusion models, we develop a Fine-grained Prompt-driven Denoiser (FinePOSE) D for 3D human pose estimation. FinePOSE generates accurate 3D human poses enhanced by three core blocks: Finegrained Part-aware Prompt learning (FPP), Fine-grained Prompt-pose Communication (FPC), and Prompt-driven Timestamp Stylization (PTS) blocks. 3.1. Diffusion-Based 3D Human Pose Estimation Diffusion models are generative models that model the data distribution in the form of p\u03b8(Y0) := R p\u03b8(Y0:T )dY1:T through chained diffusion and reverse (denoising) processes. The diffusion process gradually adds Gaussian noise into the ground truth 3D pose sequence Y0 to corrupt it into an approximately Gaussian noise Yt(t\u2192T) using a variance schedule {\u03b2t}T t=1, which can be formulated as \\la b el {e q1 } q\\ l e f t ( \\mathbf {Y}_{t}\\mid \\mathbf {Y}_{0}\\right ):=\\sqrt {\\bar {\\alpha }_{t}} \\mathbf {Y}_{0}+\\epsilon \\sqrt {1-\\bar {\\alpha }_{t}}, (1) where \u00af \u03b1t :=Qt s=0\u03b1s and \u03b1t :=1\u2212\u03b2t. Afterward, the denoising process reconstructs the uncontaminated 3D poses by a 3 denoiser D. Since the degraded data is well approximated by a Gaussian distribution after the diffusion process, we can obtain initial 3D poses YT by sampling noise from a unit Gaussian. 
Passing YT (t = T) to the denoiser D, we obtain \u02c6 Y0 that is thereafter used to generate the noisy 3D poses \u02c6 Yt \u2212 1 as inputs to the denoiser D at timestamp t\u22121 via DDIM [37], which can be formulated as \\ l a be l { e q :DDIM } \\ m at h bf { Y }_{t\\!-\\!1}=\\sqrt {\\bar {\\alpha }_{t\\!-\\!1}}\\hat {\\mathbf {Y}}_0\\!+\\!\\epsilon _t\\sqrt {1\\!-\\!\\bar {\\alpha }_{t\\!-\\!1}\\!-\\!\\sigma ^2_t}\\!+\\!\\sigma _t\\epsilon , (2) where t is from T to 1, \u03f5 \u223cN(0, I) is standard Gaussian noise independent of Yt, and \\ e psilo n _ t &= \\ l e ft (\\m athb f { Y }_t \\ !\\ !\\ s qrt {\\b a r {\\a lpha } _ t}\\cdot \\hat {\\mathbf {Y}}_0\\right )/\\sqrt {1\\!-\\!\\bar {\\alpha }_t}, \\\\ \\sigma _t&=\\sqrt {\\left (1\\!-\\!\\bar {\\alpha }_{t\\!-\\!1}\\right )/\\left (1\\!-\\!\\bar {\\alpha }_t\\right )}\\cdot \\sqrt {1\\!-\\!(\\bar {\\alpha }_t/\\bar {\\alpha }_{t\\!-\\!1})}, (3b) where \u03f5t is the noise at timestamp t, and \u03c3t controls how stochastic the diffusion process is. 3.2. Fine-grained Prompt-driven Denoiser Fine-grained Part-aware Prompt Learning (FPP). To assist the reconstruction of pure 3D poses \u02c6 Y0 from contaminated 3D poses Yt with additional information, FinePOSE guides the denoising process with regular 2D keypoints X, timestamp t, and fine-grained part-aware prompt embedding P. We design the FPP block to learn P. It encodes three pose-related information in the prompt embedding space, including its action class, coarseand fine-grained parts of humans like \u201cperson, head, body, arms, legs\u201d, and kinematic information \u201cspeed\u201d. Afterward, P is integrated with pose features for subsequent processes. A learnable prompt embedding P = {p}K k=1 is with the shape of K \u00d7 L \u00d7 D, where K denotes the number of text prompts, L indicates the number of tokens in each text prompt, and D is the dimension of token embedding. Since the number of valid tokens is found to be three to four through the text encoder Etx, the first four tokens are taken as representations \u02dc pk for each text. Moreover, since modifiers help precisely describe the movements of human body parts, we design a learnable vector rk \u2208R(Lk\u22124)\u00d7D to wrap the representations as pk. The above can be formulated as \\ t ilde {\\bm {p }}_ k & =\\m ath cal {E } _{\\text {t x }}(\\text {text}_k)[:4],\\ k \\in [1, K],\\\\ \\bm {p}_k&=\\text {Concat}(\\bm {r}_k, \\tilde {\\bm {p}}_k), (4b) where K = 7 and {textk}7 k=1 indicate {person, [Action Class], speed, head, body, arms, legs}. rk is initialized with Gaussian distribution of \u00b5 = 0 and \u03c3 = 0.02, and {Lk}7 k=1 ={7, 12, 10, 10, 10, 14, 14}, which sums to 77 regarding the text embedding dimension of CLIP [30]. In short, the FPP block builds multi-granularity text prompts and learnable modifiers, providing precise guidance for each human body part, as shown in Fig. 2. Fine-grained Prompt-pose Communication (FPC). After obtaining fine-grained part-aware prompt embedding P, we establish fine-grained communications between learned partaware prompts and poses using the FPC block to improve the denoising quality. Specifically, when processing the noised 3D poses Yt, it injects prompt embedding P, 2D keypoints X, and timestamp t within. First, FPC integrates Yt and guidance information (i.e., X, t, and P) by a series of concatenation and addition operations, as Zt = Concat(Yt, X)+P[L]+F(t). 
F is the timestamp embedding network containing a sinusoidal function followed by two Linear layers connected by a GELU non-linearity. The timestep embedding adaptively adjusts the quantity of Gaussian noise additions. Since the denoiser D works iteratively, providing detailed information about the current timestamp t is crucial for D to handle 3D poses containing different noise levels effectively. Then, Zt is encoded by a spatial transformer, where the multi-head self-attention (MHSA) mechanism helps to focus on the fine-grained relationships between joints within each frame, obtaining Zs t. To completely inject prompt embedding P into Zs t, we implement a multi-head cross-attention model, where the query, key, and value are as Q = WQZs t, K = WKP, V = WV P. The value is aggregated with cross-attention A to generate fine-grained prompt-driven pose features Zsp t , achieving fine-grained prompt-pose communication. The mechanism can be formulated as \\mathbf { A }&= \\ tex t {s oft m a x } (\\ m ath b f {Q}\\ o times \\mathbf {K}^\\top /\\sqrt {d}),\\\\ \\mathbf {Z}_t^{sp}&=\\mathbf {A}\\otimes \\mathbf {V},\\ \\tilde {\\mathbf {Z}}_t^{sp}=\\mathcal {P}(\\mathbf {Z}_t^{sp}), (5b) where d = D/H and H is the number of attention heads. P indicates the PTS block that bring timestamp t into the generation process to obtain timestamp stylized output \u02dc Zsp t . On the other hand, to model inter-frame relationships between poses, \u02dc Zsp t is encoded using a temporal transformer via MHSA to obtain \u02dc Zspf t . Finally, we utilize a spatialtemporal transformer accompanied by permutation operations between spatial and temporal dimensions to extract more compact fine-grained prompt-driven pose features from \u02dc Zspf t , which are decoded as the predicted 3D poses \u02c6 Y0. Prompt-driven timestamp Stylization (PTS). As mentioned, providing timestamp embedding to the denoising process is critical for handling 3D poses with different noise levels. Therefore, inspired by Motiondiffuse [53], we introduce the PTS block that explicitly embeds timestamp t by positional embedding [40] and sums it with the learnable prompt embedding P obtained by the FPP block, as v=P[L]+F(t). Given the intermediate output Zsp t of the FPC block, the PTS block calculates \u02dc Zsp t = Zsp t \u00b7 \u03c8w(\u03d5(v))+\u03c8b(\u03d5(v)), where \u03c8b, \u03c8w, \u03d5 are three different linear projections, and (\u00b7) is the Hadamard product. 4 Method N Human3.6M (DET) Human3.6M (GT) Year Detector MPJPE \u2193 P-MPJPE \u2193 Detector MPJPE \u2193 P-MPJPE \u2193 TCN [29] 243 CPN 46.8 36.5 GT 37.8 / CVPR\u201919 Anatomy [6] 243 CPN 44.1 35.0 GT 32.3 / CSVT\u201921 P-STMO [33] 243 CPN 42.8 34.4 GT 29.3 / ECCV\u201922 MixSTE [52] 243 HRNet 39.8 30.6 GT 21.6 / CVPR\u201922 PoseFormerV2 [54] 243 CPN 45.2 35.6 GT 35.5 / CVPR\u201923 MHFormer [19] 351 CPN 43.0 34.4 GT 30.5 / CVPR\u201922 Diffpose [10] 243 CPN 36.9 28.7 GT 18.9 / CVPR\u201923 GLA-GCN [48] 243 CPN 44.4 34.8 GT 21.0 17.6 ICCV\u201923 ActionPrompt [55] 243 CPN 41.8 29.5 GT 22.7 / ICME\u201923 MotionBERT [59] 243 SH 37.5 / GT 16.9 / ICCV\u201923 D3DP [34] 243 CPN 35.4 28.7 GT 18.4 / ICCV\u201923 FinePOSE (Ours) 243 CPN 31.9 25.0 GT 16.7 12.7 (-3.5) (-3.7) (-0.2) (-4.9) Table 1. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the Human3.6M dataset. N: the number of input frames. CPN, HRNet, SH: using CPN [7], HRNet [39], and SH [24] as the 2D keypoint detectors to generate the inputs. 
GT: using the ground truth 2D keypoints as inputs. The best and second-best results are highlighted in bold and underlined formats. 3.3. Training & Inference Training. The contaminated 3D poses Yt are sent to the fine-grained prompt-driven denoiser D to reconstruct the noise-free 3D poses \u02c6 Y0 = D(Yt, X, t, P). The entire framework is optimized by minimizing the MSE loss \u2225Y0 \u2212 \u02c6 Y0\u2225^2. Inference. Since the distribution of YT is nearly an isotropic Gaussian, we sample H initial 3D poses {Y^h_T}_{h=1}^{H} from a unit Gaussian. After passing them to the denoiser D, we obtain H feasible 3D pose hypotheses {\u02c6 Y^h_0}_{h=1}^{H}. Each hypothesis \u02c6 Y^h_0 is used to generate the noisy 3D poses \u02c6 Y^h_{t\u22121} as inputs to the denoiser D for the next timestamp t\u22121. Then, we regenerate {\u02c6 Y^h_0}_{h=1}^{H} using {\u02c6 Y^h_{t\u22121}}_{h=1}^{H} as inputs to the denoiser D for the next timestamp t\u22122. Analogously, this process iterates M times starting from the timestamp T, so each iteration m \u2208 [1, M] uses the timestamp t = T(1 \u2212 m/M). Following Joint-Wise Reprojection-Based Multi-Hypothesis Aggregation (JPMA) in [35], we reproject {\u02c6 Y^h_0}_{h=1}^{H} to the 2D camera plane using known or estimated intrinsic camera parameters and then choose the joints with minimum projection errors relative to the input X, as h' &= \\mathop{\\arg\\min}\\limits_{h\\in [1,H]} \\|\\mathcal{P}_R(\\hat{\\mathbf{Y}}_0^h)[j]-\\mathbf{X}[j]\\|_2, (6a) \\\\ \\hat{\\mathbf{Y}}_0[j] &= \\hat{\\mathbf{Y}}_0^{h'}[j],\\ j\\in [1,J], (6b) where P_R is the reprojection function, j is the index of joints, and h\u2032 indicates the index of the selected hypothesis. JPMA enables us to select joints from distinct hypotheses automatically to form the final prediction \u02c6 Y0. 3.4. Extension to 3D Multi-Human Pose Estimation We append a post-integration step to FinePOSE to handle the multi-human scenario without incurring extra computational cost. Specifically, given a multi-human 2D keypoint sequence Xmul \u2208 R^{C\u00d7N\u00d7J\u00d72}, which involves C human characters, FinePOSE first predicts \u02c6 Y^c_0 for each character c \u2208 [1, C]. Considering that some characters may temporarily leave the camera field of view, their positions in those frames are set to zero to ensure synchronization of all characters\u2019 states in Xmul. Next, we integrate {\u02c6 Y^c_0}_{c=1}^{C} by stacking over the character dimension, obtaining the final prediction \u02c6 Y^C_0 \u2208 R^{C\u00d7N\u00d7J\u00d73}. 4. Experiments 4.1. Datasets and Metrics Human3.6M [14] is a widely used benchmark dataset in human pose estimation tasks, which provides a large-scale collection of accurate 3D joint annotations on diverse human activities. Human3.6M consists of 3.6 million RGB images, captured from multiple camera views, of 11 professional actors performing 15 activities, e.g., walking, running, and jumping. Following previous efforts [19, 29, 34], our FinePOSE is trained on five subjects (S1, S5, S6, S7, S8) and evaluated on two subjects (S9, S11). We calculate the mean per joint position error (i.e., MPJPE) to measure the average Euclidean distance in millimeters between the ground truth and estimated 3D joint positions for evaluation. We also report Procrustes MPJPE (i.e., P-MPJPE), which calculates MPJPE after aligning the estimated poses to the ground truth using a rigid transformation. MPI-INF-3DHP [23] provides synchronized RGB video sequences with accurate 3D joint annotations for 3D human pose estimation. 
It comprises 8 activities conducted by 8 actors in the training set, while the test set encompasses 7 activities. We calculate MPJPE, the percentage of correctly estimated keypoints (i.e., PCK) within a 150mm range, and the area under the curve (i.e., AUC). EgoHumans [15] collects multi-human ego-exo videos covering 7 sports activities. Recently, a subset of 2D to 3D 5 Method / MPJPE \u2193 Human3.6M (DET) Dir. Disc. Eat Greet Phone Photo Pose Pur. Sit SitD. Smoke Wait WalkD. Walk WalkT. Avg TCN [29] 45.2 46.7 43.3 45.6 48.1 55.1 44.6 44.3 57.3 65.8 47.1 44.0 49.0 32.8 33.9 46.8 SRNet [51] 46.6 47.1 43.9 41.6 45.8 49.6 46.5 40.0 53.4 61.1 46.1 42.6 43.1 31.5 32.6 44.8 RIE [32] 40.8 44.5 41.4 42.7 46.3 55.6 41.8 41.9 53.7 60.8 45.0 41.5 44.8 30.8 31.9 44.3 Anatomy [6] 41.4 43.5 40.1 42.9 46.6 51.9 41.7 42.3 53.9 60.2 45.4 41.7 46.0 31.5 32.7 44.1 P-STMO [33] 38.9 42.7 40.4 41.1 45.6 49.7 40.9 39.9 55.5 59.4 44.9 42.2 42.7 29.4 29.4 42.8 MixSTE [52] 36.7 39.0 36.5 39.4 40.2 44.9 39.8 36.9 47.9 54.8 39.6 37.8 39.3 29.7 30.6 39.8 PoseFormerV2 [54] 45.2 MHFormer [19] 39.2 43.1 40.1 40.9 44.9 51.2 40.6 41.3 53.5 60.3 43.7 41.1 43.8 29.8 30.6 43.0 Diffpose [10] 33.2 36.6 33.0 35.6 37.6 45.1 35.7 35.5 46.4 49.9 37.3 35.6 36.5 24.4 24.1 36.9 GLA-GCN [48] 41.3 44.3 40.8 41.8 45.9 54.1 42.1 41.5 57.8 62.9 45.0 42.8 45.9 29.4 29.9 44.4 ActionPrompt [55] 37.7 40.2 39.8 40.6 43.1 48.0 38.8 38.9 50.8 63.2 42.0 40.0 42.0 30.5 31.6 41.8 MotionBERT [59] 36.1 37.5 35.8 32.1 40.3 46.3 36.1 35.3 46.9 53.9 39.5 36.3 35.8 25.1 25.3 37.5 D3DP [34] 33.0 34.8 31.7 33.1 37.5 43.7 34.8 33.6 45.7 47.8 37.0 35.0 35.0 24.3 24.1 35.4 FinePOSE (Ours) 31.4 31.5 28.8 29.7 34.3 36.5 29.2 30.0 42.0 42.5 33.3 31.9 31.4 22.6 22.7 31.9 (-1.6) (-3.3) (-2.9) (-2.4) (-3.2) (-7.2) (-5.6) (-3.6) (-3.7) (-5.3) (-3.7) (-3.1) (-3.6) (-1.7) (-1.4) (-3.5) Table 2. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the Human3.6M dataset using 2D keypoint detectors to generate the inputs. Dir., Disc.,\u00b7 \u00b7 \u00b7 , and WalkT. correspond to 15 action classes. Avg indicates the average MPJPE among 15 action classes. The best and second-best results are highlighted in bold and underlined formats. Method N MPI-INF-3DHP Year PCK\u2191 AUC\u2191 MPJPE \u2193 TCN [29] 81 86.0 51.9 84.0 CVPR\u201919 Anatomy [6] 81 87.9 54.0 78.8 CSVT\u201921 P-STMO [33] 81 97.9 75.8 32.2 ECCV\u201922 MixSTE [52] 27 94.4 66.5 54.9 CVPR\u201922 PoseFormerV2 [54] 81 97.9 78.8 27.8 CVPR\u201923 MHFormer [19] 9 93.8 63.3 58.0 CVPR\u201922 Diffpose [10] 81 98.0 75.9 29.1 CVPR\u201923 GLA-GCN [48] 81 98.5 79.1 27.8 ICCV\u201923 D3DP [34] 243 98.0 79.1 28.1 ICCV\u201923 FinePOSE (Ours) 243 98.9 80.0 26.2 (+0.4) (+0.9) (-1.6) Table 3. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the MPI-INF-3DHP dataset using ground truth 2D keypoints as inputs. N: the number of input frames. The best and second-best results are highlighted in bold and underlined formats. keypoints annotations has been released covering tagging, lego-assembling, and fencing. It contains 105 RGB videos taken by ego cameras. Between 1 and 3 human characters appear in each video, resulting in a total of 238 subsequences. We report the average MPJPE per video. 4.2. Implementation Details We take MixSTE [52] as the backbone of the denoiser D and CLIP as the frozen text encoder Etx. The numbers of MHSA-MLP-LN building blocks of the spatial, temporal, and spatio-temporal transformer in the FPC block are 1, 1, and 3. 
The training epoch in all the experiments below is 100, and the batch size is 4. We adopt AdamW optimizer with the momentum parameters of \u03b21 = 0.9, \u03b22 = 0.999, and the weight decay of 0.1. The learning rate starts from 6e\u22125 and shrinks after each epoch with a factor of 0.993. For fair Method Human3.6M (DET) MPJPE \u2193 P-MPJPE \u2193 w/o Prompt 37.2 29.1 M-Prompt 35.8 28.1 S-Prompt 36.2 28.9 C-Prompt 34.7 27.4 AL-Prompt 34.6 27.4 FinePOSE (Ours) 31.9 25.0 Table 4. Ablation study on different designs of prompt learning in the FPP block. w/o Prompt: without any textual information and learnable prompts. M-Prompt: using the action class to design the prompt manually. S-Prompt: using a learnable prompt combined with the action class. C-Prompt: employing the action class and coarse-grained information to create the prompt. AL-Prompt: only learnable prompts without any manual design. comparisons, we set the number of hypotheses H = 1 and iterations M = 1 during training, and H = 20 and M = 10 during inference, as in D3DP [34]. 4.3. Comparison with the State-of-the-Arts Human3.6M. Tab. 1 reports comparisons between our FinePOSE with state-of-the-art (SOTA) 3D HPE methods on the Human3.6M dataset. FinePOSE significantly achieves new SOTA performance, especially when using detected 2D keypoints as inputs. Compared with existing 3D HPE methods, FinePOSE surpasses the SOTA method D3DP [34] by 3.5mm in MPJPE and 3.7mm in P-MPJPE. When using ground truth 2D keypoints as inputs, FinePOSE also significantly outperforms the SOTA method MotionBERT [59], improving MPJPE by 0.2mm. Tab. 2 provides detailed comparisons between on each action class using 2D keypoint detectors as inputs. For example, our FinePOSE achieves noticeable improvements (43.7mm\u219236.5mm) for the ac6 Method Configuration MPJPE \u2193 P-MPJPE \u2193 FPP FPC PTS Baseline 37.2 29.1 w FPP \u2713 35.3 28.0 w/o FPP \u2713 37.1 29.2 w/o FPC \u2713 \u2713 35.7 27.8 w/o PTS \u2713 \u2713 36.6 29.0 FinePOSE (Ours) \u2713 \u2713 \u2713 31.9 25.0 Table 5. Ablation study on different configurations of FinePOSE on Human3.6M using 2D keypoint detectors as inputs. Baseline: the method without any textual information via prompt learning. w FPP: the method only contains the FPP block and adds P[L] to the input. w/o FPP: the method without the FPP block leads to an infeasible FPC block. w/o FPC: the method without the FPC block. w/o PTS: the method without the PTS block. tion class \u201cPhoto\u201d and decreases average MPJPE by 3.5mm (35.4mm\u219231.9mm). MPI-INF-3DHP. Tab. 3 reports comparisons between our FinePOSE and SOTA 3D HPE methods on the MPI-INF3DHP dataset, using ground truth 2D keypoints as inputs. Compared with the SOTA existing method GLA-GCN [48], FinePOSE decreases MPJPE by 1.6mm and increases the PCK by 0.4% and AUC by 0.9%. Overall, these experimental results demonstrate that our FinePOSE benefits from fine-grained part-aware prompt learning and pose-prompt communications, resulting in higher denoising quality and estimation accuracy. 4.4. Ablation Study We conduct a series of analysis experiments of our FinePOSE on the Human3.6M dataset to investigate the effects on the performance of different prompt learning designs in the FPP block and different blocks in FinePOSE. Effects of Different Designs in FPP. We design various versions of the FPP block for our FinePOSE, including a) w/o Prompt, b) M-Prompt, c) S-Prompt, d) C-Prompt, and e) ALPrompt. 
Specifically, w/o Prompt denotes FinePOSE without introducing textual information and learnable prompts. MPrompt indicates using the action class to design the prompt manually instead of the FPP block. Taking the action class \u201cDirections\u201d as an example, the manually designed prompt is \u201ca person is pointing directions with hands\u201d. There are 15 action classes available in the Human3.6M dataset corresponding to 15 kinds of manually designed prompts. S-Prompt indicates utilizing learnable prompts combined with the action class. C-Prompt indicates employing the action class and coarse-grained information like \u201cperson\u201d and \u201cspeed\u201d to create the prompt. Finally, AL-Prompt means only using learnable prompts without any manual design. We first evaluate the effect of manually designed prompts (i.e., M-Prompt) on Human3.6M. As shown in Tab. 4, compared to w/o Prompt, M-Prompt achieves a decrease of 1.4mm on MPJPE and 1.0mm on P-MPJPE, indicating that Method / MPJPE \u2193 EgoHumans Tag. Lego Fenc. Avg D3DP [35] 30.7 29.0 46.6 35.4 FinePOSE (Ours) 30.0 26.7 46.2 34.3 (-0.7) (-2.3) (-0.4) (-1.1) Table 6. Quantitative comparison with D3DP on the EgoHumans dataset using 2D keypoints as inputs. Tag., Lego, and Fenc. correspond to 3 action classes. Avg indicates the average MPJPE among 3 action classes. manually designing prompts is a practical strategy even though they cannot guarantee the prompt is optimal during the denoising process for the 3D HPE task. To evaluate the effectiveness of S-Prompt, we compare it with w/o Prompt. As shown in Tab. 4, MPJPE and P-MPJPE are reduced by 1.0mm and 0.2mm, respectively, for S-Prompt, which demonstrates that with the help of learnable prompts, integrating textual information can improve the performance on 3D HPE task. While compared to M-Prompt, S-Prompt results in performance degradation, indicating that learnable prompts must be meticulously designed. In addition, we also investigate the impact of manual intervention degrees on 3D HPE performance using two groups of comparative experiments. In the first group, we used only learnable prompts without any textual information and manual intervention, named AL-Prompt, which differs from S-Prompt with the action class. The second group designed a coarse-grained prompt involving action class, \u201cperson\u201d, \u201cspeed\u201d, and corresponding learnable prompts, denoted as C-Prompt. We see that both AL-Prompt and C-Prompt outperform S-Prompt since AL-Prompt is without interference from uncomplete textual information and C-Prompt contains some important textual information like action class, \u201cperson\u201d, and \u201cspeed\u201d, which provide the action subject and kinematic data. Finally, it is observed that our FinePOSE outperforms various versions of prompt learning on both MPJPE and P-MPJPE, indicating the effectiveness of the fine-grained part-aware prompt learning mechanism in FinePOSE. Effects of Different Blocks in FinePOSE. In Tab. 5, we provide different settings of our FinePOSE to evaluate the effects of different blocks for the 3D HPE performance, including Baseline, w FPP, w/o FPP, w/o FPC, and w/o PTS. Specifically, Baseline denotes FinePOSE without introducing textual information and learnable prompts, the same as the configuration of w/o Prompt. w FPP indicates FinePOSE only contains the FPP block without introducing the FPC and PTS blocks and only adds textual information P[L] to the input. 
w/o FPP denotes FinePOSE without the FPP block, leading to the FPC block being infeasible and only utilizing the PTS block. w/o FPC means FinePOSE without the FPC block but using the FPP and PTS blocks. w/o PTS refers to FinePOSE without the PTS block but using the FPP and FPC blocks to integrate textual information for fine-grained 7 SittingDown MotionBERT D3DP FinePOSE WalkDog Sitting Purchases Discussion Photo Posing Figure 3. Qualitative comparisons of our FinePOSE with MotionBERT [59] and D3DP [34] on Human3.6M. The gray skeleton is the ground-truth 3D pose. The blue skeleton represents the prediction of the human left part, and the orange indicates the human right part. The red dashed line represents the incorrect regions of the compared methods, and the blue dashed line indicates the counterparts of FinePOSE. part-aware prompt learning. Compared w FPP and Baseline, we observe that the former can achieve 1.9mm and 1.1mm improvements on MPJPE and P-MPJPE. This is because our FinePOSE contains the FPP block, which adds the prompt embedding P[L] into the input Zt of denoiser D, significantly improving the denoising capability. We observe that the results between w/o FPP and Baseline are almost equivalent. The baseline has already brought timestamp t into the denoising process, while the PTS block refines the prediction at each noise level by reusing the timestamp to the denoising process after the FPP and FPC block. Thus, there is nearly no effect in adding only the PTS block without FPP and FPC blocks to the denoiser. Making a comparison between w/o FPC and w/o FPP, the former achieves a decrease of 1.4mm on both MPJPE and P-MPJPE over w/o FPP, indicating that the FPP block in the denoiser plays a critical role in the fine-grained part-aware prompt learning mechanism. Finally, we observe that FinePOSE achieves a decrease of 4.7mm on MPJPE and 4.0mm on P-MPJPE compared to w/o PTS, indicating the necessity to integrate learned prompt embeddings and timestamps in the PTS block. 4.5. Results on 3D Multi-Human Pose Estimation In real-world applications, the multi-human scenario is more common than the single-human one. However, its complexity hinders existing work from handling it. In Sec. 3.4, we present a post-integration to extend FinePOSE for the multihuman pose estimation task. We implemented the extension using the SOTA method D3DP for a convincing comparison. The experimental results on EgoHumans are reported in Tab. 6, demonstrating that (1) the integration strategy indeed has potential feasibility and (2) FinePOSE has a dominant performance even in the complex multi-human scenario. 4.6. Visualization Fig. 3 shows the visualization results of D3DP [35], MotionBERT [59] and our FinePOSE on Human3.6M. These methods have performed well for actions in which the body, legs, and other parts of the person in the scene are relatively clear. For the actions with simple shapes, e.g., \u201cDiscussion\u201d and \u201cPhoto\u201d, the 3D poses predicted by FinePOSE match better with ground-truth 3D poses than those of D3DP and MotionBERT, especially in the left knee, right arm, and right hip of \u201cDiscussion\u201d and in the left knee of \u201cPhoto\u201d. For the actions with complex shapes, e.g., \u201cSitting\u201d and \u201cSittingDown\u201d, FinePOSE is more accurate at various joints, especially for arms and legs, while the 3D poses predicted by D3DP and MotionBERT differ significantly from groundtruth 3D poses. 5. 
Conclusion and Discussion This work has presented FinePOSE, a new fine-grained prompt-driven denoiser for 3D human pose estimation. FinePOSE was composed of FPP, FPC, and PTS blocks. FPP learned fine-grained part-aware prompts to provide precise guidance for each human body part. FPC established fine-grained communication between learnable part-aware prompts and poses to enhance denoising capability. PTS brought timestamp information to the denoising process, strengthening the ability to refine the prediction at each noise level. Experimental results on two benchmarks demonstrated that FinePOSE surpasses the state-of-the-art methods. We have also extended FinePOSE from single-human scenarios to multi-human ones, exhibiting that our model performs well in complex multi-human scenarios. Limitations. FinePOSE is not designed explicitly for the multi-person scenario. The diffusion model-based 3D HPE method is relatively computationally expensive. 8"
+ }
intro_8K/test_introduction_short_2405.05252v1.json ADDED
@@ -0,0 +1,20 @@
+ {
+ "url": "http://arxiv.org/abs/2405.05252v1",
+ "title": "Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models",
+ "abstract": "Diffusion Models (DMs) have exhibited superior performance in generating\nhigh-quality and diverse images. However, this exceptional performance comes at\nthe cost of expensive architectural design, particularly due to the attention\nmodule heavily used in leading models. Existing works mainly adopt a retraining\nprocess to enhance DM efficiency. This is computationally expensive and not\nvery scalable. To this end, we introduce the Attention-driven Training-free\nEfficient Diffusion Model (AT-EDM) framework that leverages attention maps to\nperform run-time pruning of redundant tokens, without the need for any\nretraining. Specifically, for single-denoising-step pruning, we develop a novel\nranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify\nredundant tokens, and a similarity-based recovery method to restore tokens for\nthe convolution operation. In addition, we propose a Denoising-Steps-Aware\nPruning (DSAP) approach to adjust the pruning budget across different denoising\ntimesteps for better generation quality. Extensive evaluations show that AT-EDM\nperforms favorably against prior art in terms of efficiency (e.g., 38.8% FLOPs\nsaving and up to 1.53x speed-up over Stable Diffusion XL) while maintaining\nnearly the same FID and CLIP scores as the full model. Project webpage:\nhttps://atedm.github.io.",
+ "authors": "Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, Yuchen Liu",
+ "published": "2024-05-08",
+ "updated": "2024-05-08",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.AI",
+ "cs.LG",
+ "eess.IV",
+ "eess.SP"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Diffusion Models (DMs) [9, 29] have revolutionized com- puter vision research by achieving state-of-the-art perfor- mance in various text-guided content generation tasks, in- cluding image generation [28], image editing [12], super resolution [17], 3D objects generation [27], and video gen- eration [10]. Nonetheless, the superior performance of DMs comes at the cost of an enormous computation budget. Al- though Latent Diffusion Models (LDMs) [28, 34] make text-to-image generation much more practical and afford- able for normal users, their inference process is still too slow. For example, on the current flagship mobile phone, *Work was partly done during an internship at Adobe. \u2020Corresponding Author. generating a single 512px image requires 90 seconds [19]. To address this issue, numerous approaches geared at ef- ficient DMs have been introduced, which can be roughly categorized into two regimes: (1) efficient sampling strat- egy [24, 30] and (2) efficient model architecture [19, 38]. While efficient sampling methods can reduce the number of denoising steps, they do not reduce the memory footprint and compute cost for each step, making it still challeng- ing to use on devices with limited computational resources. On the contrary, an efficient architecture reduces the cost of each step and can be further combined with sampling strategies to achieve even better efficiency. However, most prior efficient architecture works require retraining of the DM backbone, which can take thousands of A100 GPU hours. Moreover, due to different deployment settings on various platforms, different compression ratios of the back- bone model are required, which necessitate multiple retrain- ing runs later. Such retraining costs are a big concern even for large companies in the industry. To this end, we propose the Attention-driven Training- free Efficient Diffusion Model (AT-EDM) framework, which accelerates DM inference at run-time without any retraining. To the best of our knowledge, training-free ar- chitectural compression of DMs is a highly uncharted area. Only one prior work, Token Merging (ToMe) [1], addresses this problem. While ToMe demonstrates good performance on Vision Transformer (ViT) acceleration [2], its perfor- mance on DMs still has room to improve. To further en- rich research on training-free DM acceleration, we start our study by profiling the floating-point operations (FLOPs) of the state-of-the-art model, Stable Diffusion XL (SD- XL) [26], through which we find that attention blocks are the dominant workload. In a single denoising step, we thus propose to dynamically prune redundant tokens to accel- erate attention blocks. We pioneer a fast graph-based al- gorithm, Generalized Weighted Page Rank (G-WPR), in- spired by Zero-TPrune [35], and deploy it on attention maps in DMs to identify superfluous tokens. Since SD-XL con- tains ResNet blocks, which require a full number of to- kens for the convolution operations, we propose a novel similarity-based token copy approach to recover pruned to- kens, again leveraging the rich information provided by the 1 arXiv:2405.05252v1 [cs.CV] 8 May 2024 SD-XL @ 6.7 TFLOPs AT-EDM @ 4.1 TFLOPs Figure 1. Examples of applying AT-EDM to SD-XL [26]. Compared to the full-size model (top row), our accelerated model (bottom row) has around 40% FLOPs reduction while enjoying competitive generation quality at various aspect ratios. attention maps. This token recovery method is critical to maintaining image quality. 
We find that naive interpolation or padding of pruned tokens adversely impacts generation quality severely. In addition to single-step token pruning, we also investigate cross-step redundancy in the denoising process by analyzing the variance of attention maps. This leads us to a novel pruning schedule, dubbed as Denoising- Steps-Aware Pruning (DSAP), in which we adjust the prun- ing ratios across different denoising timesteps. We find DSAP not only significantly improves our method, but also helps improve other run-time pruning methods like ToMe [1]. Compared to ToMe, our approach shows a clear im- provement by generating clearer objects with sharper details and better text-image alignment under the same acceleration ratio. In summary, our contributions are four-fold: \u2022 We propose the AT-EDM framework, which leverages rich information from attention maps to accelerate pre- trained DMs without retraining. \u2022 We design a token pruning algorithm for a single de- noising step. We pioneer a fast graph-based algo- rithm, G-WPR, to identify redundant tokens, and a novel similarity-based copy method to recover missing tokens for convolution. \u2022 Inspired by the variance trend of attention maps across de- noising steps, we develop the DSAP schedule, which im- proves generation quality by a clear margin. The schedule also provides improvements over other run-time acceler- ation approaches, demonstrating its wide applicability. \u2022 We use AT-EDM to accelerate a top-tier DM, SD-XL, and conduct both qualitative and quantitative evaluations. No- ticeably, our method shows comparable performance with an FID score of 28.0 with 40% FLOPs reduction relative to the full-size SD-XL (FID 27.3), achieving state-of-the- art results. Visual examples are shown in Fig. 1.",
+ "main_content": "Text-to-Image Diffusion Models. DMs learn to reverse the diffusion process by denoising samples from a normal distribution step by step. In this manner, the diffusion-based generative models enable high-fidelity image synthesis with variant text prompts [4, 9]. However, DMs in the pixel space suffer from large generation latency, which severely limits their applications [36]. The LDM [28] was the first to train a Variational Auto-Encoder (VAE) to encode the pixel space into a latent space and apply the DM to the latent space. This reduces computational cost significantly while maintaining generation quality, thus greatly enhancing the application of DMs. Subsequently, several improved versions of the LDM, called Stable Diffusion Models (SDMs), have been released. The most recent and powerful opensource version is SD-XL [26], which outperforms previous versions by a large margin. SD-XL is our default backbone in this work. Efficient Diffusion Models. Researchers have made enormous efforts to make DMs more efficient. Existing efficient DMs can be divided into two types: (1) Efficient sampling to reduce the required number of denoising steps [22, 30\u201332]. A recent efficient sampling work [24] managed to reduce the number of denoising steps to as low as one. It achieves this by iterative distillation, halving the number of denoising steps each time. (2) Architectural compression to make each sampling step more efficient [11, 19, 36, 38]. A recent work [13] removes multiple ResNet and attention blocks in the U-Net through distillation. Although these methods can save computational costs while maintaining decent image quality, they require retraining of the DM backbone to enhance efficiency, needing thousands of A100 GPU hours. Thus, a trainingfree method to enhance the efficiency of DMs is needed. Note that our proposed training-free framework, AT-EDM, 2 is orthogonal to these efficiency enhancement methods and can be stacked with them to further improve their efficiency. We provide corresponding experimental evidence in Supplementary Material. Training-Free Efficiency Enhancement. Training-free (i.e., post-training) efficiency enhancement schemes have been widely explored for CNNs [14, 33, 39] and ViTs [2, 7, 15, 35]. However, training-free schemes for DMs are still poorly explored. To the best of our knowledge, the only prior work in this field is ToMe [1]. It uses token embedding vectors to obtain pair-wise similarity and merges similar tokens to reduce computational overheads. While ToMe achieves a decent speed-up when applied to SD-v1.x and SD-v2.x, we find that it does not help much when applied to the state-of-the-art DM backbone, SD-XL, whilst our method achieves a clear improvement over it (see experimental results in Section 4). This is mainly due to (1) the significant architectural change of SD-XL (see Supplementary Material); (2) our better algorithm design to identify redundant tokens. Exploiting Attention Maps. We aim to design a method that exploits information present in pre-trained models. ToMe only uses embedding vectors of tokens and ignores the correlation between tokens. We take inspiration from recent image editing works [3, 5, 8, 25], in which attention maps clearly demonstrate which parts of a generated image are more important. This inspires us to use the correlations and couplings between tokens indicated by attention maps to identify unimportant tokens and prune them. 
Specifically, we can convert attention maps to directed graphs, where nodes represent tokens, without information loss. Based on this idea, we develop the G-WPR algorithm for token pruning in a single denoising step. Non-Uniform Denoising Steps. Various existing works [6, 18, 21, 37] demonstrate that denoising steps contribute differently to the quality of generated images; thus, it is not optimum to use uniform denoising steps. OMS-DPM [21] builds a model zoo and uses different models in different denoising steps. It trains a performance predictor to assist in searching for the optimal model schedule. DDSM [37] employs a spectrum of neural networks and adapts their sizes to the importance of each denoising step. AutoDiffusion [18] employs evolutionary search to skip some denoising steps and some blocks in the U-Net. Diff-Pruning [6] uses a Taylor expansion over pruned timesteps to disregard noncontributory diffusion steps. All existing methods either require an intensive training/fine-tuning/searching process to obtain and deploy the desired denoising schedule or are not compatible with our proposed G-WPR token pruning algorithm due to the U-Net architecture change. On the contrary, based on our investigation of the variance of attention maps across denoising steps, we propose DSAP. Its schedule can be determined via simple ablation experiments and 6731 5108 1623 0 1000 2000 3000 4000 5000 6000 7000 8000 U-Net Attn Conv+Res GFLOPs Figure 2. U-Net FLOPs breakdown of SD-XL [26] measured with 1024px image generation. Among components of U-Net (convolution blocks, ResNet blocks, and attention blocks), attention blocks cost the most. it is compatible with any token pruning scheme. DSAP can potentially be migrated to existing efficient DMs to help improve their image quality. 3. Methodology We start our investigation by profiling the FLOPs of the state-of-the-art DM, SD-XL, as shown in Fig. 2. Noticeably, among compositions of the sampling module (U-Net), attention blocks, which consist of several consecutive attention layers, dominate the workload for image generation. Therefore, we propose AT-EDM to accelerate attention blocks in the model through token pruning. AT-EDM contains two important parts: a single-denoising-step token pruning scheme and the DSAP schedule. We provide an overview of these two parts and then discuss them in detail. 3.1. Overview Fig. 3 illustrates the two main parts of AT-EDM: Part I: Token pruning scheme in a single denoising step. Step 1: We obtain the attention maps from an attention layer in the U-Net. We can potentially obtain the attention maps from self-attention or cross-attention. We compare the two choices and analyze them in detail through ablation experiments. Step 2: We use a scoring module to assign an importance score to each token based on the obtained attention map. We use an algorithm called G-WPR to assign importance scores to each token. This is described in Section 3.2. Step 3: We generate pruning masks based on the calculated importance score distribution. Currently, we simply use the top-k approach to determine the retained tokens, i.e., prune tokens with less importance scores. Step 4: We use the generated mask to perform token pruning. We do this after the feed-forward layer of attention layers. We may also perform pruning early before the feedforward layers. We provide ablative experimental results for it in Supplementary Material. Step 5: We repeat Steps 1-4 for each consecutive attention layer. 
Note that we do not apply pruning to the last attention layer before the ResNet layer. 3 Attention Layer Self-Attention Cross-Attention FFN Pruning Module Attention Layer ResNet Layer Refilling Module \u2460Get the attention map \u2462Generate pruning masks \u2463Run-time pruning \u2464Repeat \u2460-\u2463for consecutive layers \u2465Similarity-based copy to fill pruned tokens before passing to ResNet layers Denoising timestep \ud835\udc61 \ud835\udc41 \ud835\udc41\u2212\ud835\udf0f+ 1 \ud835\udc41\u2212\ud835\udf0f 0 Single-Denoising-Step Token Pruning Denoising-StepsAware Pruning Prune Less ... ... ... Prune More Pruning Module Attention Layer Retained token Pruned token Filled token ... Attention Block Graph Signal Strong Weak \u2461Calculate importance scores via G-WPR Figure 3. Overview of our proposed efficiency enhancement framework AT-EDM. Single-Denoising-Step Token Pruning: (1) We get the attention map from self-attention. (2) We calculate the importance score for each token using G-WPR. (3) We generate pruning masks. (4) We apply the masks to tokens after the feed-forward network to realize token pruning. (5) We repeat Steps (1)-(4) for each consecutive attention layer. (6) Before passing feature maps to the ResNet block, we recover pruned tokens through similarity-based copy. DenoisingSteps-Aware Pruning Schedule: In early steps, we propose to prune fewer tokens and to have less FLOPs reduction. In later steps, we prune more aggressively for higher speedup. Step 6: Finally, before passing the pruned feature map to the ResNet block, we need to fill (i.e., try to recover) the pruned tokens. A simple approach is to pad zeros, which means we do not fill anything. The method that we currently use is to copy tokens to corresponding locations based on similarity. This is described in detail in Section 3.2. Part II: DSAP schedule. Attention maps in early denoising steps are more chaotic and less informative than those in later steps, which is indicated by their low variance. Thus, they have a weaker ability to differentiate unimportant tokens [8]. Based on this intuition, we design the DSAP schedule that prunes fewer tokens in early denoising steps. Specifically, we select some attention blocks in the up-sampling and down-sampling stages and leave them unpruned, since they contribute more to the generated image quality than other attention blocks [19]. We demonstrate the schedule in detail in Section 3.3. 3.2. Part I: Token Pruning in a Single Step Notation. Suppose A(h,l) \u2208RM\u00d7N is the attention map of the h-th head in the l-th layer. It reflects the correlations between M Query tokens and N Key tokens. We refer to A(h,l) as A for simplicity in the following discussion. Let Ai,j denote its element in the i-th row, j-th column. A can be thought of as the adjacency matrix of a directed graph in the G-WPR algorithm. In this graph, the set of nodes with input (output) edges is referred to as \u03a6in (\u03a6out). Nodes in \u03a6in (\u03a6out) represent Key (Query) tokens, i.e., \u03a6in = {kj}N j=1 (\u03a6out = {qi}M i=1). Let st K (st Q) denote the vector that represents the importance score of Key (Query) tokens in the t-th iteration of the G-WPR algorithm. In the case of self-attention, Query tokens are the same as Key tokens. Specifically, we let {xi}N i=1 denote the N tokens and s denote their importance scores in the description of our token recovery method. The G-WPR Algorithm. WPR [35] uses the attention map as an adjacency matrix of a directed complete graph. 
It uses a graph signal to represent the importance score distribution among nodes in this graph. This signal is initialized uniformly. WPR uses the adjacency matrix as a graph operator, applying it to the graph signal iteratively until convergence. In each iteration, each node votes for which node is more important. The weight of the vote is determined by its importance in the last iteration. However, WPR, as proposed in [35], constrains the used attention map to be a self-attention map. Based on this, we propose the G-WPR algorithm, which is compatible with both self-attention and cross-attention, as shown in Algorithm 1. The attention 4 from Query qi to Key kj weights the edge from qi to kj in the graph generated by A. In each iteration of the vanilla WPR, by multiplying with the attention map, we map the importance of Query tokens st Q to the importance of Key tokens st+1 K , i.e., each node in \u03a6out votes for which \u03a6in node is more important. For self-attention, st+1 Q = st+1 K since Query and Key tokens are the same. For cross-attention, Query tokens are image tokens and Key tokens are text prompt tokens. Based on the intuition that important image tokens should devote a large portion of their attention to important text prompt tokens, we define function f(A, sK) that maps st+1 K to st+1 Q . One entropy-based implementation is st+1 Q (qi) = f(A, st+1 K ) = PN j=1 Ai,j \u00b7 st+1 K (kj) \u2212PN j=1 Ai,j \u00b7 ln Ai,j (1) where Ai,j is the attention from Query qi to Key kj. This is the default setting for cross-attention-based WPR in the following sections. We discuss and compare other implementations in Supplementary Material. Note that for selfattention, f(A, st+1 K ) = st+1 K . The G-WPR algorithm has an O(M \u00d7 N) complexity, where M (N) is the number of Query (Key) tokens. We employ this algorithm in each head and then obtain the root mean square of scores from different heads (to reward tokens that obtain very high importance scores in a few heads). Algorithm 1 The G-WPR algorithm for both self-attention and cross-attention Require: M, N > 0 is the number of nodes in \u03a6out, \u03a6in; A \u2208 RM\u00d7N; sQ \u2208RM, sK \u2208RN; f(A, sk) maps the importance of Key to that of Query Ensure: s \u2208RM represents the importance score of image tokens s0 Q \u2190 1 M \u00d7 eM t \u21900 while (|st Q \u2212st\u22121 Q | > \u03f5) or (t = 0) do st+1 K \u2190AT \u00d7 st Q st+1 Q \u2190f(A, st+1 K ) st+1 Q \u2190st+1 Q /|st+1 Q | t \u2190t + 1 end while s \u2190st Q Recovering Pruned Tokens. We have fewer tokens after token pruning, leading to efficiency enhancement. However, retained tokens form irregular maps and thus cannot be used for convolution, as shown in Fig. 4. We need to recover the pruned tokens to make them compatible with the following convolutional operations in the ResNet layer. (I) Padding Zeros. One straightforward way to do this is to pad zeros. However, to maintain the high quality of generated images, we hope to recover the pruned tokens as precisely as possible, as if they were not pruned. 
1 2 3 4 Token pruning is not natively compatible with ResNet Similarity-based copy resolves the incompatibility 1 2 3 4 Token pruning 1 2 4 3 Reshape 1 4 ResNet Layer \u00d7 Similaritybased Copy \u221a Not compatible due to the non-square shape Reshape \u2460Attention map averaged cross heads Key Query 1 2 3 4 \u2461Delete rows of pruned tokens \u2462Find the highest attention received for each pruned token \u2463Get the most similar token of pruned tokens 2: 1 3: 4 \u2464Copy retained tokens to fill pruned tokens 1 4 Similarity-based Copy Retained token Pruned token Filled token 2 3 1 2 3 4 1 4 1 2 3 4 \u221a \u221a 1 2 3 4 Token pruning 1 2 4 3 1 2 4 3 1 4 2 3 1 2 4 3 ResNet Layer 2 3 Figure 4. Our similarity-based copy method for token recovering resolves the incompatibility between token pruning and ResNet. Token pruning incurs the non-square shape of feature maps and thus is not compatible with ResNet. To address this issue, we propose similarity-based copy to recover the pruned tokens. It first averages the attention map across heads and deletes the rows of pruned tokens to avoid selecting them as the most similar one. Then, it finds the source of the highest attention received for each pruned token and copies the corresponding retained tokens for recovery. After recovering, the tokens can be translated into a spatially-complete feature map to serve as input to ResNet blocks. (II) Interpolation. Interpolation methods, such as bicubic interpolation, are not suitable in this context. To use the interpolation algorithm, we first pad zeros to fill the pruned tokens and form a feature map of size N \u00d7 N. Then we downsample it to N 2 \u00d7 N 2 and upsample it back to N \u00d7 N with the interpolation algorithm. We keep the values of retained tokens fixed and only use the interpolated values of pruned tokens. Due to the high pruning rates (usually larger than 50%), most tokens that represent the background get pruned, leading to lots of pruned tokens that are surrounded by other pruned tokens instead of retained tokens. Interpolation algorithms assign nearly zero values to these tokens. (III) Direct copy. Another possible method is to use the corresponding values before pruning is applied (i.e., before being processed by the following attention layers) to fill the pruned tokens. The problem with this method is that the value distribution changes significantly after being processed by multiple attention layers, and copied values are far from the values of these tokens if they are not pruned and are processed by the following attention layers. To avoid the effect of distribution shift, we propose the similarity-based copy technique, as shown in Fig. 4. Instead of copying values that are not processed by attention layers, we select tokens that are similar to pruned tokens from the retained tokens. We use the self-attention map to determine the source of the highest attention received for each pruned token and use that as the most similar one. This is based on the intuition that attention from token xa to token xb, Aa,b, is determined by two factors: (1) importance 5 Variance Step Region I Region II Region III Region IV Figure 5. Variance of attention maps in different denoising steps. We divide the denoising steps into four typical regions: (I) Veryearly steps: Variance of attention maps is small and increases rapidly. (II) Mid-early steps: Variance of attention maps is large and increases slowly. (III) Middle steps: Variance of attention maps is large and almost constant. 
(IV) Last several steps. of token xb, i.e., s(xb), and (2) similarity between token xa and xb. If we observe the attention that xb receives, i.e., compare {Ai,b}i\u2208N, since s(xb) is fixed, index i = \u03b7 that maximizes {Ai,b}i\u2208N is the index of the most similar token, i.e., x\u03b7. Finally, we copy the value of token x\u03b7 to fill (i.e., recover) the pruned token xb. 3.3. Part II: Denoising-Steps-Aware Pruning Early denoising steps determine the layout of generated images and, thus, are crucial. On the contrary, late denoising steps aim at refining the generated image, natively including redundant computations since many regions of the image do not need refinement. In addition, early denoising steps have a weaker ability to differentiate unimportant tokens, and late denoising steps yield informative attention maps and differentiate unimportant tokens better. To support this claim, we investigate the variance of feature maps in different denoising steps, as shown in Fig. 5. It indicates that attention maps in early steps are more uniform. They assign similar attention scores to both important and unimportant tokens, making it harder to precisely identify unimportant tokens and prune them in early steps. Based on these intuitions, we propose DSAP that employs a prune-less schedule in early denoising steps by leaving some of the layers unpruned. The Prune-Less Schedule. In SD-XL, each down-stage includes two attention blocks and each up-stage includes three attention blocks (except for stages without attention). The mid-stage also includes one attention block. Each attention block includes 2-10 attention layers. In our prune-less schedule, we select some attention blocks to not perform token pruning. Since previous works [13, 19] indicate that the mid-stage contributes much less to the generated image quality than the up-stages and down-stages, we do not select the attention block in the mid-stage. Based on the ablation study, we choose to leave the first attention block in each down-stage and the last attention block in each upstage unpruned. We use this prune-less schedule for the first \u03c4 denoising steps. We explore setting \u03c4 in different regions shown in Fig. 5 and find \u03c4 = 15 is the optimal choice. We present all the related ablative experimental results in Section 4.4. A detailed description of the less aggressive pruning schedule is provided in Supplementary Material. To further consolidate our intuitions, we also investigate a more aggressive pruning schedule in early denoising steps and find it is inferior to our current approach (see Supplementary Material). 4. Experimental Results In this section, we evaluate AT-EDM and ToMe on SD-XL. We provide both visual and quantitative experimental results to demonstrate the advantages of AT-EDM over ToMe. 4.1. Experimental Setup Common Settings. We implement both our AT-EDM method and ToMe on the official repository of SD-XL and evaluate their performance. The resolution of generated images is 1024\u00d71024 pixels and the default FLOPs budget for each denoising step is assumed to be 4.1T, which is 38.8% smaller than that of the original model (6.7T) unless otherwise noted. The default CFG-scale for image generation is 7.0 unless otherwise noted. We set the total number of sampling steps to 50. We use the default sampler of SD-XL, i.e., EulerEDMSampler. AT-EDM. 
For a concise design, we only insert a pruning layer after the first attention layer of each attention block and set the pruning ratio for that layer to \u03c1. To meet the FLOPs budget of 4.1T, we set \u03c1 = 63%. For the DSAP setting, we choose to leave the first attention block in each down-stage and the last attention block in each up-stage unpruned. We use this prune-less schedule for the first \u03c4 = 15 denoising steps. ToMe. The SD-XL architecture has changed significantly compared to previous versions of SDMs (see Supplementary Material). Thus, the default setting of ToMe does not lead to enough FLOPs savings. To meet the FLOPs budget, it is necessary to use a more aggressive merging setting. Therefore, we expand the application range of token merging (1) from attention layers at the highest feature level to all attention layers, and (2) from self-attention to self-attention, cross-attention, and the feedforward network. We set the merging ratio r = 50% to meet the FLOPs budget of 4.1T. Evaluations. We first compare the generated images with manually designed challenging prompts in Section 4.2. Then, we report FID and CLIP scores of zero-shot image generation on the MS-COCO 2017 validation dataset [20] in Section 4.3. Tested models generate 1024\u00d71024 px images based on the captions of 5k images in the validation set. We provide ablative experimental results and analyze them in Section 4.4 to justify our design choices. We provide more implementation details in Supplementary Material. 6 \u201cUltra realistic illustration of an old man cyborg, cyberpunk, sci-fi fantasy\u201d \u201cclose up of mystic dog, like a phoenix, red and blue colors digital\u201d (a) SD-XL (b) ToMe (d) ToMe (c) Ours (e) Ours (AT-EDM) \u201c15mm wide-angle lens photo of a rapper in 1990 New York holding a kitten up to the camera\u201d w/o DSAP w/ DSAP \u201cA single beam of light enters the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon.\u201d Figure 6. Comparing AT-EDM to the state-of-the-art approach, ToMe [2]. While the full-size SD-XL [26] (Col. a) consumes 6.7 TFLOPs, we compare the accelerated models (Col. b-e) at the same budget of 4.1 TFLOPs. Compared to ToMe, we find that AT-EDM\u2019s token pruning algorithm provides clearer generated objects with sharper details and finer textures, and a better text-image alignment where it better retains the semantics in the prompt (see the fourth row). Moreover, we find that DSAP provides better structural layout of the generated images, which is effective for both ToMe and our approach. AT-EDM combines the novel token pruning algorithm and the DSAP schedule (Col. e), outperforming the state of the art. 4.2. Visual Examples for Qualitative Analysis We use manually designed challenging prompts to evaluate ToMe and our proposed AT-EDM framework. The generated images are compared in Fig. 6. We compare more generated images in Supplementary Material. Visual examples indicate that with the same FLOPs budget, AT-EDM demonstrates better main object preservation and textimage alignment than ToMe. For instance, in the first example, AT-EDM preserves the main object, the face of the old man, much better than ToMe does. AT-EDM\u2019s strong ability to preserve the main object is also exhibited in the second example. ToMe loses high-frequency features of the main object, such as texture and hair, while AT-EDM retains them well, even without DSAP. 
The third example again illustrates the advantage of AT-EDM over ToMe in preserving the rapper\u2019s face. The fourth example uses a relatively 7 24 26 28 30 32 34 36 38 40 0.28 0.29 0.3 0.31 0.32 0.33 FID score CLIP score (Open Clip ViT-g14) [email protected] [email protected] [email protected] AT-EDM\[email protected] Figure 7. FID-CLIP score curves. The used CFG scales are [1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 7.0, 9.0, 12.0, 15.0]. This figure is zoomed in to the bottom-right corner to show the comparison between the best trade-off points. AT-EDM outperforms ToMe by a clear margin. See complete curves in Supplementary Material. complex prompt that describes relationships between multiple objects. ToMe misunderstands \u201da Rembrandt painting of a raccoon\u201d as being a random painting on the easel and a painting of a raccoon on the wall. On the contrary, the image generated by AT-EDM understands and preserves these relationships very well, even without DSAP. As a part of our AT-EDM framework, DSAP is not only effective in ATEDM but also beneficial to ToMe in improving image quality and text-image alignment. When we deploy DSAP in ToMe, we select corresponding attention blocks to not perform token merging, while keeping the FLOPs cost fixed. 4.3. Quantitative Evaluations FID-CLIP Curves. We explore the trade-off between the CLIP and FID scores through various Classifer-Free Guidance (CFG) scales. We show the results in Fig. 7. ATEDM\u2020 does not deploy pruning at the second feature level (see Supplementary Material). It indicates that for most CFG scales, AT-EDM not only lowers the FID score but also results in higher CLIP scores than ToMe, implying that images generated by AT-EDM not only have better quality but also better text-image alignment. Specifically, when the CFG scale equals 7.0, AT-EDM results in [FID, CLIP] = [28.0, 0.321], which is almost the same as the full-size one ([27.3, 0.323], CFG scale=4.0). For comparison, ToMe results in [35.3, 0.320] with a CFG scale of 7.0. Thus, ATEDM reduces the FID gap from 8.0 to 0.7. Various FLOPs Budgets. We deploy ToMe and AT-EDM on SD-XL under various FLOPs budgets and quantitatively compare their performance in Table 1. The FLOPs cost in this table refers to the average FLOPs cost of a denoising step. Table 1 indicates that AT-EDM achieves better image quality than ToMe (lower FID scores) under all FLOPs budgets. When the FLOPs budget is extremely low (less than 50% of the full model), ToMe achieves higher CLIP Table 1. Deploying ToMe and AT-EDM in SD-XL under different FLOPs budgets. We generate all images with the CFG-scale of 7.0, except for SD-XL\u2020, for which we use a CFG-scale of 4.0. Model FID CLIP TFLOPs SD-XL 31.94 0.3284 6.7 SD-XL\u2020 27.30 0.3226 6.7 ToMe-a 58.76 0.2954 2.9 AT-EDM-a 52.00 0.2784 2.9 ToMe-b 40.94 0.3154 3.6 AT-EDM-b 29.80 0.3095 3.6 ToMe-c 35.27 0.3198 4.1 AT-EDM-c 28.04 0.3209 4.1 ToMe-d 32.46 0.3235 4.6 AT-EDM-d 27.23 0.3245 4.5 scores than AT-EDM. When the FLOPs saving is 30-40%, AT-EDM achieves not only better image quality (lower FID scores) but also better text-image alignment (higher CLIP scores) than ToMe. Note that under the same CFG-scale, AT-EDM achieves a lower FID score than the full-size model while reducing FLOPs by 32.8%. In the case that it trades text-image alignment for image quality (via reducing the CFG scale to 4.0), AT-EDM achieves not only a lower FID score but also a higher CLIP score than the full-size model while reducing FLOPs by 32.8%. 
We provide more visual examples under various FLOPs budgets in Supplementary Material. Latency Analysis. SD-XL uses the Fused Operation (FO) library, xformers [16], to boost its generation. The Current Implementation (CI) of xformers does not provide attention maps as intermediate results; hence, we need to additionally calculate the attention maps. We discuss the sampling latency for three cases: (I) without FO, (II) with FO under CI, and (III) with FO under the Desired Implementation (DI), which provides attention maps as intermediate results. Table 2 shows that with FO, the cost of deploying pruning at the second feature level exceeds the latency reduction it leads to. Hence, AT-EDM\u2020 is faster than AT-EDM. Fig. 8 shows the extra latency incurred by different pruning steps shown in Fig. 3. With a negligible quality loss, ATEDM achieves 52.7%, 15.4%, 17.6% speed-up in terms of latency w/o FO, w/ FO under CI, w/ FO under DI, respectively, which outperforms the state-of-the-art work by a clear margin. We present the memory footprint of ATEDM in Supplementary Material. 4.4. Ablation Study Self-Attention (SA) vs. Cross-Attention (CA). G-WPR can potentially use attention maps from self-attention (SAbased WPR) and cross-attention (CA-based WPR). We provide a detailed comparison between the two implementations. We visualize their pruning masks and provide gener8 Table 2. Comparison between sampling latency in different cases. \u2020 means not deploying pruning at the second feature level. Model SD-XL ToMe AT-EDM AT-EDM\u2020 Ave. FLOPs/step 6.7 T 4.1 T 4.1 T 4.5 T w/o FO 31.0s 21.0s 20.3s 22.1s w/ FO under CI 18.0s 17.7s 18.3s 15.6s w/ FO under DI 18.0s 17.7s 16.3s 15.3s 0 500 1000 1500 2000 Step 1 Step 2 Step 3 Step 4 Step 6 Latecncy (ms) AT-EDM AT-EDM\u2020 Figure 8. Latency incurred by different pruning steps shown in Fig. 3. Measured w/ FO under CI. Note that under DI, the latency of Step 1 (get the attention map) is eliminated. (a) SD-XL (b) CA-WPR (c) SA-WPR (d) CA-WPR (e) SA-WPR Generated Image Pruning Mask (Black: Pruned Tokens) Figure 9. Comparison between different implementations of GWPR: CA-based WPR and SA-based WPR. In general, CA-based WPR may remove too many background tokens, making the background not recoverable, while SA-based WPR preserves the image quality better. ated image examples for a visual comparison in Fig. 9. This figure indicates that SA-based WPR outperforms CA-based WPR. The reason is that CA-based WPR prunes too many background tokens, making it hard to recover the background via similarity-based copy. Similarity-based Copy. We provide comparisons between different methods to fill the pruned pixels in Fig. 10, which demonstrate the advantages of our similarity-based copy method. Images generated by bicubic interpolation are quite similar to those generated by padding zeros because interpolation usually assigns near-zero values to pruned tokens that are surrounded by other pruned tokens and can hardly recover them. Direct copy means directly copying corresponding token values before the first pruning layer in the attention block to recover the pruned tokens, where the following attention layers do not process the copied values. Thus, the copied values cannot recover the information in pruned tokens and even negatively affect the retained tokens. 
On the contrary, similarity-based copy uses attention maps and tokens that are retained to recover the pruned to(a) SD-XL (b) Padding Zeros (c) Bicubic Interpolation (d) Direct Copy (e) Similarity-based Copy Generated image Feature map Figure 10. Different methods to recover the pruned tokens. Zero padding (Col. b), bicubic interpolation (Col. c), and direct copy (Col. d) can hardly recover pruned tokens and result in noticeable image degradation with blurry background (incomplete moon). On the other hand, similarity-based copy (Col. e) provides better image quality and keeps the complete moon in the original image. Better viewed when zoomed in. (a) SD-XL (b) 0 Step (d) 15 Steps (e) 30 Steps (f) 45 Steps (c) 5 Steps Figure 11. Comparison between different numbers of early pruneless steps where 0 step is the same as without DSAP. We find that pruning less on the first 15 steps achieves the best quality. kens, providing significantly higher image quality. Denoising-Steps-Aware Pruning. We explore different design choices for DSAP. (1) The prune-less schedule selects one attention block from each down-stage and up-stage in the U-Net and skips the token pruning in it. According to ablation results shown in Supplementary Material, F-L (First-Last) appears to be the best one, i.e., leaving the first attention block of downstages and the last attention block of up-stages unpruned in early denoising steps. (2) We then explore how the number of early prune-less denoising steps affects the generated image quality in Fig. 11. Note that we keep the FLOPs budget fixed and adjust the pruning rate accordingly when we change the number of prune-less steps. This figure shows that the setting of 15 early prune-less steps provides the best image quality. Note that the setting of zero prune-less step is identical to the setting without DSAP, and 5, 15, 30, 45 prune-less steps represents setting the boundary in Region I, II, III, IV of Fig. 5, respectively. The results indicate that placing the boundary between the prune-less and normal schedule in Region II performs best. This meets our expectation because the variance of attention maps becomes high enough to identify unimportant tokens well in Region II. 9 5. Conclusion In this article, we proposed AT-EDM, a novel framework for accelerating DMs at run-time without retraining. AT-EDM has two components: a single-denoising-step token pruning algorithm and a cross-step pruning schedule (DSAP). In the single-denoising-step token pruning, AT-EDM exploits attention maps in pre-trained DMs to identify unimportant tokens and prunes them to accelerate the generation process. To make the pruned feature maps compatible with the latter convolutional blocks, AT-EDM again uses attention maps to reveal similarities between tokens and copies similar tokens to recover the pruned ones. DSAP further improves the generation quality of AT-EDM. We find such a pruning schedule can also be applied to other methods like ToMe. Experimental results demonstrate the superiority of AT-EDM with respect to image quality and text-image alignment compared to state-of-the-art methods. Specifically, on SD-XL, AT-EDM achieves a 38.8% FLOPs saving and up to 1.53\u00d7 speed-up while obtaining nearly the same FID and CLIP scores as the full-size model, outperforming prior art. Acknowledgment This work was supported in part by an Adobe summer internship and in part by NSF under Grant No. CCF2203399."
20
+ }
intro_8K/test_introduction_short_2405.05259v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05259v1",
3
+ "title": "OpenESS: Event-based Semantic Scene Understanding with Open Vocabularies",
4
+ "abstract": "Event-based semantic segmentation (ESS) is a fundamental yet challenging task\nfor event camera sensing. The difficulties in interpreting and annotating event\ndata limit its scalability. While domain adaptation from images to event data\ncan help to mitigate this issue, there exist data representational differences\nthat require additional effort to resolve. In this work, for the first time, we\nsynergize information from image, text, and event-data domains and introduce\nOpenESS to enable scalable ESS in an open-world, annotation-efficient manner.\nWe achieve this goal by transferring the semantically rich CLIP knowledge from\nimage-text pairs to event streams. To pursue better cross-modality adaptation,\nwe propose a frame-to-event contrastive distillation and a text-to-event\nsemantic consistency regularization. Experimental results on popular ESS\nbenchmarks showed our approach outperforms existing methods. Notably, we\nachieve 53.93% and 43.31% mIoU on DDD17 and DSEC-Semantic without using either\nevent or frame labels.",
5
+ "authors": "Lingdong Kong, Youquan Liu, Lai Xing Ng, Benoit R. Cottereau, Wei Tsang Ooi",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.RO"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Semantic AND Segmentation AND Image",
15
+ "gt": "Event cameras, often termed bio-inspired vision sensors, stand distinctively apart from traditional frame-based cam- eras and are often merited by their low latency, high dy- namic range, and low power consumption [28, 44, 76]. The realm of event-based vision perception, though nascent, has rapidly evolved into a focal point of contemporary research [99]. Drawing parallels with frame-based perception and recognition methodologies, a plethora of task-specific ap- plications leveraging event cameras have burgeoned [25]. Event-based semantic segmentation (ESS) emerges as one of the core event perception tasks and has gained in- creasing attention [2, 6, 38, 79]. ESS inherits the challenges of traditional image segmentation [11, 12, 19, 39, 58], while 1 arXiv:2405.05259v1 [cs.CV] 8 May 2024 also contending with the unique properties of event data [2], which opens up a plethora of opportunities for explo- ration. Although accurate and efficient dense predictions from event cameras are desirable for practical applications, the learning and annotation of the sparse, asynchronous, and high-temporal-resolution event streams pose several chal- lenges [47, 49, 61]. Stemming from the image segmentation community, existing ESS models are trained on densely an- notated events within a fixed and limited set of label map- ping [2, 79]. Such closed-set learning from expensive anno- tations inevitably constrains the scalability of ESS systems. An obvious approach will be to make use of the image domain and transfer knowledge to event data for the same vision tasks. Several recent attempts [30, 61, 79] resort to unsupervised domain adaptation to avoid the need for paired image and event data annotations for training. These meth- ods demonstrate the potential of leveraging frame annota- tions to train a segmentation model for event data. How- ever, transferring knowledge across frames and events is not straightforward and requires intermediate representa- tions such as voxel grids, frame-like reconstructions, and bio-inspired spikes. Meanwhile, it is also costly to annotate dense frame labels for training, which limits their usage. A recent trend inclines to the use of multimodal founda- tion models [13, 50, 67, 69, 94] to train task-specific mod- els in an open-vocabulary and zero-shot manner, removing dependencies on human annotations. This paper continues such a trend. We propose a novel open-vocabulary frame- work for ESS, aiming at transferring pre-trained knowledge from both image and text domains to learn better represen- tations of event data for the dense scene understanding task. Observing the large domain gap in between heterogeneous inputs, we design two cross-modality representation learn- ing objectives that gradually align the event streams with images and texts. As shown in Fig. 1, given raw events and text prompts as the input, the learned feature represen- tations from our OpenESS framework exhibit promising re- sults for known and unknown class segmentation and can be extended to more open-ended texts such as \u201cadjectives\u201d, \u201cfine-grained\u201d, and \u201ccoarse-grained\u201d descriptions. To sum up, this work poses key contributions as follows: \u2022 We introduce OpenESS, a versatile event-based seman- tic segmentation framework capable of generating open- world dense event predictions given arbitrary text queries. 
\u2022 To the best of our knowledge, this work represents the first attempt at distilling large vision-language models to assist event-based semantic scene understanding tasks. \u2022 We propose a frame-to-event (F2E) contrastive distilla- tion and a text-to-event (T2E) consistency regularization to encourage effective cross-modality knowledge transfer. \u2022 Our approach sets up a new state of the art in annotation- free, annotation-efficient, and fully-supervised ESS set- tings on DDD17-Seg and DSEC-Semantic benchmarks.",
16
+ "main_content": "Event-based Vision. The microsecond-level temporal resolution, high dynamic range (typically 140 dB vs. 60 dB of standard cameras), and power consumption efficiency of event cameras have posed a paradigm shift from traditional frame-based imaging [25, 60, 77, 108]. A large variety of event-based recognition, perception, localization, and reconstruction tasks have been established, encompassing object recognition [18, 29, 48, 68], object detection [27, 31, 103, 109], depth estimation [17, 36, 42, 62, 65, 70], optical flow [7, 20, 33, 34, 53, 81, 105], intensity-image reconstruction [23, 24, 73, 98, 107], visual odometry and SLAM [43, 56, 72], stereoscopic panoramic imaging [4, 75], etc. In this work, we focus on the recently-emerged task of eventbased semantic scene understanding [2, 79]. Such a pursuit is anticipated to tackle sparse, asynchronous, and hightemporal-resolution events for dense predictions, which is crucial for safety-critical in-drone or in-vehicle perceptions. Event-based Semantic Segmentation. The focus of ESS is on categorizing events into semantic classes for enhancing scene interpretation. Alonso et al. [2] contributed the first benchmark based on DDD17 [5]. Subsequent works are tailored to improve the accuracy while mitigating the need for extensive event annotations [30]. EvDistill [84] and DTL [83] utilized aligned frames to enhance event-based learning. EV-Transfer [61] and ESS [79] leveraged domain adaptation to transfer knowledge from existing image datasets to events. Recently, HALSIE [6] and HMNet [38] innovated ESS in cross-domain feature synthesis and memorybased event encoding. Another line of research pursues to use of spiking neural networks for energy-efficient ESS [10, 49, 63, 90]. In this work, different from previous pursuits, we aim to train ESS models in an annotation-free manner by distilling pre-trained vision-language models, hoping to address scalability and annotation challenges. Open-Vocabulary Learning. Recent advances in visionlanguage models open up new possibilities for visual perceptions [13, 88, 106]. Such trends encompass image-based zero-shot and open-vocabulary detection [26, 52, 89, 96], as well as semantic [35, 51, 55, 97, 100], instance [45, 87], and panoptic [21, 41, 93] segmentation. As far as we know, only three works studied the adaptation of CLIP for event-based recognition. EventCLIP [92] proposed to convert events to a 2D grid map and use an adapter to align event features with CLIP\u2019s knowledge. E-CLIP [102] uses a hierarchical triple contrastive alignment that jointly unifies the event, image, and text feature embedding. Ev-LaFOR [18] designed category-guided attraction and category-agnostic repulsion losses to bridge event with CLIP. Differently, we present the first attempt at adapting CLIP for dense predictions on sparse and asynchronous event streams. Our work is also close to superpixel-driven contrastive learning [46, 74], where pre-processed superpixels are used to 2 \ud835\udc65 \ud835\udc66 Time \u201croad\u201d \u201csidewalk\u201d F2E \u2131!! \"#$ Group Group \u2131 !\" %&' \u2131 !# ()%* \ud835\udc3c%&' \ud835\udc3c+* \ud835\udc3c\"#$ Dense \u201cbuilding\u201d \u2026 \u2026 \u201cdriveable\u201d \u201cwalkable\u201d \u201cmanmade\u201d Calibration Input Encoding Grouping Decoding Contrastive Prompt T2E \ud835\udcab ,! \"#$ \ud835\udcab ,\" %&' \ud835\udc1f%&' \ud835\udc1f\"#$ \u2131!$ $-$ \ud835\udc21\"#$ \ud835\udc21$-$ Figure 2. Architecture overview of the OpenESS framework. 
We distill off-the-shelf knowledge from vision-languages models to event representations (cf. Sec. 3.1). Given a calibrated event Ievt and a frame Iimg, we extract their features from the event network F evt \u03b8e and the densified CLIP\u2019s image encoder F clip \u03b8c , which are then combined with the text embedding from CLIP\u2019s text encoder F txt \u03b8t for open-world prediction (cf. Sec. 3.2). To better serve for cross-modality knowledge transfer, we propose a frame-to-event (F2E) contrastive objective (cf. Sec. 3.3) via superpixel-driven distillation and a text-to-event (T2E) consistency objective (cf. Sec. 3.4) via scene-level regularization. establish contrastive objectives with modalities from other tasks, e.g., point cloud understanding [57], remote sensing [37], medical imaging [82], and so on. In this work, we propose OpenESS to explore superpixel-to-event representation learning. Extensive experiments verify that such an approach is promising for annotation-efficient ESS. 3. Methodology Our study serves as an early attempt at leveraging visionlanguage foundation models like CLIP [69] to learn meaningful event representations without accessing ground-truth labels. We start with a brief introduction of the CLIP model (cf. Sec. 3.1), followed by a detailed elaboration on our proposed open-vocabulary ESS (cf. Sec. 3.2). To encourage effective cross-modal event representation learning, we introduce a frame-to-event contrastive distillation (cf. Sec. 3.3) and a text-to-event consistency regularization (cf. Sec. 3.4). An overview of the OpenESS framework is shown in Fig. 2. 3.1. Revisiting CLIP CLIP [69] learns to associate images with textual descriptions through a contrastive learning framework. It leverages a dataset of 400 million image-text pairs, training an image encoder (based on a ResNet [39] or Vision Transformer [22]) and a text encoder (using a Transformer architecture [80]) to project images and texts into a shared embedding space. Such a training paradigm enables CLIP to perform zero-shot classification tasks, identifying images based on textual descriptions without specific training on those categories. To achieve annotation-free classification on a custom dataset, one needs to combine class label mappings with hand-crafted text prompts as the input to generate the text embedding. In this work, we aim to leverage the semantically rich CLIP feature space to assist open-vocabulary dense prediction on sparse and asynchronous event streams. 3.2. Open-Vocabulary ESS Inputs. Given a set of N event data acquired by an event camera, we aim to segment each event ei among the temporally ordered event streams \u03b5i, which are encoded by the pixel coordinates (xi, yi), microsecond-level timestamp ti, and the polarity pi \u2208{\u22121, +1} which indicates either an increase or decrease of the brightness. Each event camera pixel generates a spike whenever it perceives a change in logarithmic brightness that surpasses a predetermined threshold. Meanwhile, a conventional camera captures gray-scale or color frames Iimg i \u2208R3\u00d7H\u00d7W which are spatially aligned and temporally synchronized with the events or can be aligned and synchronized to events via sensor calibration, where H and W are the spatial resolutions. Event Representations. 
Due to the sparsity, high temporal resolution, and asynchronous nature of event streams, it is common to convert raw events \u03b5i into more regular representations Ievt i \u2208RC\u00d7H\u00d7W as the input to the neural network [25], where C denotes the number of embedding channels which is depended on the event representations 3 themselves. Some popular choices of such embedding include spatiotemporal voxel grids [29, 104, 105], frame-like reconstructions [73], and bio-inspired spikes [49]. We investigate these three methods and show an example of taking voxel grids as the input in Fig. 2. More analyses and comparisons using reconstructions and spikes are in later sections. Specifically, with a predefined number of events, each voxel grid is built from non-overlapping windows as: I^ { e v t}_i = \\sum _{\\mat h bf {e}_ j \\in \\ var epsilon _i} p_j \\delta (\\mathbf {x}_j \\mathbf {x}) \\delta (\\mathbf {y}_j \\mathbf {y}) \\max \\{1 |t^{*}_j t| , 0\\},~ (1) where \u03b4 is the Kronecker delta function; t\u2217 j = (B \u22121) tj\u2212t0 \u2206T is the normalized event timestamp with B as the number of temporal bins in an event stream; \u2206T is the time window and t0 denotes the time of the first event in the window. Cross-Modality Encoding. Let Fevt \u03b8e : RC\u00d7H\u00d7W 7\u2192 RD1\u00d7H1\u00d7W1 be an event-based segmentation network with trainable parameters \u03b8e, which takes as input an event embedding Ievt i and outputs a D1-dimensional feature of downsampled spatial sizes H1 and W1. Meanwhile, we integrate CLIP\u2019s image encoder Fclip \u03b8c : R3\u00d7H\u00d7W 7\u2192 RD2\u00d7H2\u00d7W2 into our framework and keep the parameters \u03b8c fixed. The output is a D2-dimensional feature of sizes H2 and W2. Our motivation is to transfer general knowledge from Fclip \u03b8c to Fevt \u03b8e , such that the event branch can learn useful representations without using dense event annotations. To enable open-vocabulary ESS predictions, we leverage CLIP\u2019s text encoder Ftxt \u03b8t with pre-trained parameters \u03b8t. The input of Ftxt \u03b8t comes from predefined text prompt templates and the output will be a text embedding extracted from CLIP\u2019s rich semantic space. Densifications. CLIP was originally designed for imagebased recognition tasks and does not provide per-pixel outputs for dense predictions. Several recent attempts explored the adaptation from global, image-level recognition to local, pixel-level prediction, via either model structure modification [100] or fine-tuning [51, 71, 97]. The former directly reformulates the value-embedding layer in CLIP\u2019s image encoder, while the latter uses semantic labels to gradually adapt the pre-trained weights to generate dense predictions. In this work, we implement both solutions to densify CLIP\u2019s outputs and compare their performances in our experiments. Up until now, we have presented a preliminary framework capable of conducting open-vocabulary ESS by leveraging knowledge from the CLIP model. However, due to the large domain gap between the event and image modalities, a na\u00a8 \u0131ve adaptation is sub-par in tackling the challenging event-based semantic scene understanding task. 3.3. F2E: Frame-to-Event Contrastive Distillation Since our objective is to encourage effective cross-modality knowledge transfer for holistic event scene perception, it thus becomes crucial to learn meaningful representations for both thing and stuff classes, especially their boundary information. 
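For concreteness, Eq. (1) — which, once the minus signs dropped during extraction are restored, reads I^{evt}_i = sum_j p_j * delta(x_j - x) * delta(y_j - y) * max{1 - |t*_j - t|, 0} with t*_j = (B - 1)(t_j - t_0)/dT — can be sketched in NumPy as below. The function name and array layout are illustrative assumptions, and events are assumed sorted by timestamp.

```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, B, H, W):
    """Build a (B, H, W) voxel grid from one event window, following Eq. (1).

    x, y: pixel coordinates; t: timestamps (sorted); p: polarity in {-1, +1}.
    Each event's polarity is spread over its two nearest temporal bins with
    the bilinear kernel max{1 - |t*_j - bin|, 0}.
    """
    x, y = x.astype(int), y.astype(int)
    voxel = np.zeros((B, H, W), dtype=np.float32)
    # Normalized timestamp t*_j = (B - 1) * (t_j - t_0) / dT in [0, B - 1].
    t_star = (B - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    left = np.clip(np.floor(t_star).astype(int), 0, B - 1)
    right = np.clip(left + 1, 0, B - 1)
    w_right = t_star - left                      # fractional part
    w_left = 1.0 - w_right
    np.add.at(voxel, (left, y, x), p * w_left)
    np.add.at(voxel, (right, y, x), p * w_right)
    return voxel
```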
However, the sparsity and asynchronous nature of event streams inevitably impede such objectives. Superpixel-Driven Knowledge Distillation. To pursue a more informative event representation learning at higher granularity, we propose to first leverage calibrated frames to generate coarse, instance-level superpixels and then distill knowledge from a pre-trained image backbone to the event segmentation network. Superpixel groups pixels into conceptually meaningful atomic regions, which can be used as the basis for higher-level perceptions [1, 54, 85]. The semantically coherent frame-to-event correspondences can thus be found using pre-processed or online-generated superpixels. Such correspondences tend to bridge the sparse events to dense frame pixels in a holistic manner without involving extra training or annotation efforts. Superpixel & Superevent Generation. We resort to the following two ways of generating the superpixels. The first way is to leverage heuristic methods, e.g. SLIC [1], to efficiently groups pixels from frame Iimg i into a total of Mslic segments with good boundary adherence and regularity as Isp i = {I1 i , I2 i , ..., IMslic i }, where Mslic is a hyperparameter that needs to be adjusted based on the inputs. The generated superpixels satisfy I1 i \u222aI2 i \u222a...\u222aIMslic i = {1, 2, ..., H\u00d7 W}. For the second option, we use the recent Segment Anything Model (SAM) [50] which takes Iimg i as the input and outputs Msam class-agnostic masks. For simplicity, we use M to denote the number of superpixels used during knowledge distillation, i.e., {Isp i = {I1 i , ..., Ik i }|k = 1, ..., M} and show more comparisons between SLIC [1] and SAM [50] in later sections. Since Ievt i and Iimg i have been aligned and synchronized, we can group events from Ievt i into superevents {V sp i = {V1 i , ..., Vl i}|l = 1, ..., M} by using the known event-pixel correspondences. Frame-to-Event Contrastive Learning. To encourage better superpixel-level knowledge transfer, we leverage a pretrained image network Fimg \u03b8f : R3\u00d7H\u00d7W 7\u2192RD3\u00d7H3\u00d7W3 as the teacher and distill information from it to the event branch Fevt \u03b8e . The parameters of Fimg \u03b8f , which can come from either CLIP [69] or other pretext task pre-trained backbones such as [8, 15, 64], are kept frozen during the distillation. With Fevt \u03b8e and Fimg \u03b8f , we generate the superevent and superpixel features as follows: \\m a t h bf { f } ^ {ev t} _ i =& ~ \\ frac { 1}{|V ^ {s p } _i| } \\su m _ {l\\i n V ^{sp} _ i}\\m at h cal {P }_{\\o m eg a _e}^{evt}~(~\\mathcal {F}^{evt}_{\\theta _{e}} ~(I^{evt}_i)_l~ )~,\\\\ \\mathbf {f}^{img}_i =& ~\\frac {1}{|I^{sp}_i|}\\sum _{k\\in I^{sp}_i}\\mathcal {P}_{\\omega _f}^{img}~(~\\mathcal {F}^{img}_{\\theta _{f}} ~(I^{img}_i)_k~ )~, (3) where Pevt \u03c9e and Pimg \u03c9f are projection layers with trainable parameters \u03c9e and \u03c9f, respectively, for the event branch and frame branch. In the actual implementation, Pevt \u03c9e and 4 Pimg \u03c9f consist of linear layers which map the D1and D3dimensional event and frame features to the same shape. 
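The superevent/superpixel pooling in Eqs. (2)-(3) amounts to averaging the projected features over each segment. A minimal PyTorch sketch is given below, assuming a dense per-pixel feature map and an integer superpixel label map; for the event branch one would additionally restrict the average to pixels that actually contain events. Names and shapes are our assumptions.

```python
import torch

def pool_by_superpixel(feat, sp_labels, num_sp):
    """Average-pool dense features into per-superpixel embeddings.

    feat:      (D, H, W) projected features (event or frame branch)
    sp_labels: (H, W) integer superpixel id per pixel, in [0, num_sp)
    returns:   (num_sp, D) mean feature per superpixel / superevent
    """
    D = feat.size(0)
    flat_feat = feat.reshape(D, -1).t()                  # (H*W, D)
    flat_ids = sp_labels.reshape(-1).long()              # (H*W,)
    sums = feat.new_zeros(num_sp, D).index_add_(0, flat_ids, flat_feat)
    counts = torch.bincount(flat_ids, minlength=num_sp).clamp(min=1)
    return sums / counts.unsqueeze(1)
```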
The following contrastive learning objective is applied to the event prediction and the frame prediction: \\mathc al {L} _ { F 2 E}( \\ t heta _{ e }, \\om e ga _e , \\ome ga _ f) = \\su m _i \\ l o g \\left [ \\frac {e^{(\\langle \\mathbf {f}^{evt}_i, \\mathbf {f}^{img}_i \\rangle /\\tau _1 )}}{\\sum _{j\\neq i} e^{(\\langle \\mathbf {f}^{evt}_i, \\mathbf {f}^{img}_j \\rangle /\\tau _1 )}} \\right ]~, \\label {eq:f2e} (4) where \u27e8\u00b7, \u00b7\u27e9denotes the scalar product between the superevent and superpixel embedding; \u03c41 > 0 is a temperature coefficient that controls the pace of knowledge transfer. Role in Our Framework. Our F2E contrastive distillation establishes an effective pipeline for transferring superpixellevel knowledge from dense, visual informative frame pixels to sparse, irregular event streams. Since we are targeting the semantic segmentation task, the learned event representations should be able to reason in terms of instances and instance parts at and in between semantic boundaries. 3.4. T2E: Text-to-Event Consistency Regularization Although the aforementioned frame-to-event knowledge transfer provides a simple yet effective way of transferring off-the-shelf knowledge from frames to events, the optimization objective might encounter unwanted conflicts. Intra-Class Optimization Conflict. During the model pretraining, the superpixel-driven contrastive loss takes the corresponding superevent and superpixel pair in a batch as the positive pair, while treating all remaining pairs as negative samples. Since heuristic superpixels only provide a coarse grouping of conceptually coherent segments (kindly refer to our Appendix for more detailed analysis), it is thus inevitable to encounter self-conflict during the optimization. That is to say, from hindsight, there is a chance that the superpixels belonging to the same semantic class could be involved in both positive and negative samples. Text-Guided Semantic Regularization. To mitigate the possible self-conflict in Eq. (4), we propose a text-to-event semantic consistency regularization mechanism that leverages CLIP\u2019s text encoder to generate semantically more consistent text-frame pairs {Iimg i , Ti}, where Ti denotes the text embedding extracted from Ftxt \u03b8t . Such a paired relationship can be leveraged via CLIP without additional training. We then construct event-text pairs {Ievt i , Ti} by propagating the alignment between events and frames. Specifically, the paired event and text features are extracted as follows: \\m a t hbf {h }^{ev t} _i =& ~\\ m athc a l {Q} _{ \\ome ga _q}^{evt}~(\\mathcal {F}^{evt}_{\\theta _{e}} ~(I^{evt}_i) )~, ~~ \\mathbf {h}^{txt}_i = \\mathcal {F}^{txt}_{\\theta _{t}}~(T_{i} )~, (5) where Qevt \u03c9q is a projection layer with trainable parameters \u03c9q, which is similar to that of Pevt \u03c9e . Now assume there are a total of Z classes in the event dataset, the following objective is applied to encourage the consistency regularization: &\\math cal {L} _ { T 2E} (\\t h e ta _{e},\\ o mega _q ) =\\\\ & \\su m _{z=1}^Z \\log \\le f t [ \\fr a c {\\s u m _{T _ i \\in z,I^{evt}_i} e^{(\\langle \\mathbf {h}^{evt}_i, \\mathbf {h}^{txt}_i \\rangle /\\tau _2 )}}{\\sum _{j\\neq i, T_i \\in z,T_i \\not \\in I^{evt}_i} e^{(\\langle \\mathbf {h}^{evt}_j, \\mathbf {h}^{txt}_i \\rangle /\\tau _2 )}} \\right ]~, \\label {eq:t2e} (7) where \u03c42 > 0 is a temperature coefficient that controls the pace of knowledge transfer. 
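Eq. (4) is a superpixel-level InfoNCE objective in which the i-th superevent/superpixel pair is the positive and all other pairs in the batch act as negatives, with temperature tau_1. A standard cross-entropy formulation, equivalent up to normalization and sign conventions, could look as follows; variable names are ours, not the authors' code.

```python
import torch
import torch.nn.functional as F

def f2e_contrastive_loss(f_evt, f_img, tau=0.07):
    """Superpixel-driven contrastive loss (cf. Eq. (4)).

    f_evt: (M, D) superevent embeddings from the event branch
    f_img: (M, D) superpixel embeddings from the frozen image branch
    Matching rows are positives; all other pairs serve as negatives.
    """
    f_evt = F.normalize(f_evt, dim=-1)
    f_img = F.normalize(f_img, dim=-1)
    logits = f_evt @ f_img.t() / tau                     # (M, M) scaled similarities
    targets = torch.arange(f_evt.size(0), device=f_evt.device)
    return F.cross_entropy(logits, targets)
```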
The overall optimization objective of our OpenESS framework is to minimize L = LF 2E + \u03b1LT 2E, where \u03b1 is a weight balancing coefficient. Role in Our Framework. Our T2E semantic consistency regularization provides a global-level alignment to compensate for the possible self-conflict in the superpixel-driven frame-to-event contrastive learning. As we will show in the following sections, the two objectives work synergistically in improving the performance of open-vocabulary ESS. Inference-Time Configuration. Our OpenESS framework is designed to pursue segmentation accuracy in annotationfree and annotation-efficient manners, without sacrificing event processing efficiency. As can be seen from Fig. 2, after the cross-modality knowledge transfer, only the event branch will be kept. This guarantees that there will be no extra latency or power consumption added during the inference, which is in line with the practical requirements. 4. Experiments 4.1. Settings Datasets. We conduct experiments on two popular ESS datasets. DDD17-Seg [2] is a widely used ESS benchmark consisting of 40 sequences acquired by a DAVIS346B. In total, 15950 training and 3890 testing events of spatial size 352 \u00d7 200 are used, along with synchronized gray-scale frames provided by the DAVIS camera. DSEC-Semantic [79] provides semantic labels for 11 sequences in the DSEC [32] dataset. The training and testing splits contain 8082 and 2809 events of spatial size 640 \u00d7 440, accompanied by color frames (with sensor calibration parameters available) recorded at 20Hz. More details are in the Appendix. Benchmark Setup. In addition to the conventional fullysupervised ESS, we establish two open-vocabulary ESS settings for annotation-free and annotation-efficient learning, respectively. The former aims to train an ESS model without using any dense event labels, while the latter assumes an annotation budget of 1%, 5%, 10%, or 20% of events in the training set. We treat the first few samples from each sequence as labeled and the remaining ones as unlabeled. Implementation Details. Our framework is implemented using PyTorch [66]. Based on the use of event representations, we form frame2voxel, frame2recon, and frame2spike settings, where the event branch will adopt E2VID [73], ResNet-50 [39], and SpikingFCN [49], respectively, with an AdamW [59] optimizer with cosine learning rate scheduler. The frame branch uses a pre-trained ResNet50 [8, 9, 15] and is kept frozen. The number of superpixels 5 Table 1. Comparative study of existing ESS approaches under the annotation-free, fully-supervised, and open-vocabulary ESS settings, respectively, on the test sets of the DDD17-Seg [5] and DSEC-Semantic [79] datasets. All scores are in percentage (%). The best score from each learning setting is highlighted in bold. 
Method Venue DDD17 DSEC Acc mIoU Acc mIoU Annotation-Free ESS MaskCLIP [100] ECCV\u201922 81.29 31.90 58.96 21.97 FC-CLIP [97] NeurIPS\u201923 88.66 51.12 79.20 39.42 OpenESS Ours 90.51 53.93 86.18 43.31 Fully-Supervised ESS Ev-SegNet [2] CVPRW\u201919 89.76 54.81 88.61 51.76 E2VID [73] TPAMI\u201919 85.84 48.47 80.06 44.08 Vid2E [30] CVPR\u201920 90.19 56.01 EVDistill [84] CVPR\u201921 58.02 DTL [83] ICCV\u201921 58.80 PVT-FPN [86] ICCV\u201921 94.28 53.89 SpikingFCN [49] NCE\u201922 34.20 EV-Transfer [61] RA-L\u201922 51.90 15.52 63.00 24.37 ESS [79] ECCV\u201922 88.43 53.09 84.17 45.38 ESS-Sup [79] ECCV\u201922 91.08 61.37 89.37 53.29 P2T-FPN [91] TPAMI\u201923 94.57 54.64 EvSegformer [47] TIP\u201923 94.72 54.41 HMNet-B [38] CVPR\u201923 88.70 51.20 HMNet-L [38] CVPR\u201923 89.80 55.00 HALSIE [6] WACV\u201924 92.50 60.66 89.01 52.43 Open-Vocabulary ESS MaskCLIP [100] ECCV\u201922 90.50 61.27 89.81 55.01 FC-CLIP [97] NeurIPS\u201923 90.68 62.01 89.97 55.67 OpenESS Ours 91.05 63.00 90.21 57.21 involved in the calculation of F2E contrastive loss is set to 100 for DSEC-Semantic [79] and 25 for DDD17-Seg [2]. For evaluation, we extract the feature embedding for each text prompt offline from a frozen CLIP text encoder using pre-defined templates. For linear probing, the pre-trained event network Fevt \u03b8e is kept frozen, followed by a trainable point-wise linear classification head. Due to space limits, kindly refer to our Appendix for additional details. 4.2. Comparative Study Annotation-Free ESS. In Tab. 1, we compare OpenESS with MaskCLIP [100] and FC-CLIP [97] in the absence of event labels. Our approach achieves zero-shot ESS results of 53.93% and 43.31% on DDD17-Seg [2] and DSEC-Semantic [79], much higher than the two competitors and even comparable to some fully-supervised methods. This validates the effectiveness of conducting ESS in an annotation-free manner for practical usage. Meanwhile, we observe that a fine-tuned CLIP encoder [97] could generate much better semantic predictions than the structure adaptation method [100], as mentioned in Sec. 3.2. Comparisons to State-of-the-Art Methods. As shown in Tab. 1, the proposed OpenESS sets up several new state-ofthe-art results in the two ESS benchmarks. Compared to the 31.7 32.1 35.9 35.7 35.1 32.9 34.1 38.3 38.3 37.0 29 33 37 41 30.1 32.3 35.7 36.1 34.5 31.1 33.7 37.1 38.1 36.5 29 33 37 41 31.1 32.8 35.6 35.4 35.2 32.6 34.2 37.9 38.3 36.1 29 33 37 41 25 (SAM) 25 (SLIC) 50 (SAM) 50 (SLIC) 100 (SAM) 100 (SLIC) 150 (SAM) 150 (SLIC) 200 (SAM) 200 (SLIC) DINO SwAV MoCoV2 Figure 3. Ablation study on the number of superpixels (provided by either SAM [50] or SLIC [1]) involved in calculating the frameto-event contrastive loss. Models after pre-training are fine-tuned with 1% annotations. All mIoU scores are in percentage (%). previously best-performing methods, OpenESS is 1.63% and 2.21% better in terms of mIoU scores on DDD17-Seg [2] and DSEC-Semantic [79], respectively. It is worth mentioning that in addition to the performance improvements, our approach can generate open-vocabulary predictions that are beyond the closed sets of predictions of existing methods, which is more in line with the practical usage. Annotation-Efficient Learning. We establish a comprehensive benchmark for ESS under limited annotation scenarios and show the results in Tab. 3. 
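At evaluation time, the offline text embeddings described above can be turned into per-pixel open-vocabulary predictions by measuring the cosine similarity between dense event features and the averaged prompt embedding of each class. A minimal sketch under assumed tensor shapes (the text embeddings are taken as precomputed inputs):

```python
import torch
import torch.nn.functional as F

def open_vocab_segment(event_feat, class_text_emb):
    """Zero-shot per-pixel classification against averaged prompt embeddings.

    event_feat:     (D, H, W) dense features from the event branch
    class_text_emb: (Z, D) one embedding per class, e.g. the average of the
                    CLIP text embeddings of that class's prompt templates
    returns:        (H, W) predicted class index per pixel
    """
    feat = F.normalize(event_feat.flatten(1).t(), dim=-1)    # (H*W, D)
    txt = F.normalize(class_text_emb, dim=-1)                # (Z, D)
    scores = feat @ txt.t()                                  # (H*W, Z) cosine similarity
    return scores.argmax(dim=-1).reshape(event_feat.shape[1:])
```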
As can be seen, the proposed OpenESS contributes significant performance improvements over random initialization under linear probing, few-shot fine-tuning, and fully-supervised learning settings. Specifically, using either voxel grid or event reconstruction representation, our approach achieves > 30% relative gains in mIoU on both datasets under liner probing and around 2% higher than prior art in mIoU with full supervisions. We also observe that using voxel grids to represent raw event streams tends to yield overall better ESS performance. Qualitative Assessment. Fig. 4 provides visual comparisons between OpenESS and other approaches on DSECSemantic [79]. We find that OpenESS tends to predict more consistent semantic information from sparse and irregular event inputs, especially at instance boundaries. We include more visual examples and failure cases in the Appendix. Open-World Predictions. One of the core advantages of OpenESS is the ability to predict beyond the fixed label set from the original training sets. As shown in Fig. 1, our approach can take arbitrary text prompts as inputs and generate semantically coherent event predictions without using event labels. This is credited to the alignment between event features and CLIP\u2019s knowledge in T2E. Such a flexible way of prediction enables a more holistic event understanding. Other Representation Learning Approaches. In Tab. 2, we compare OpenESS with recent reconstruction-based [3, 6 Background Building Fence Person Pole Road Sidewalk Vegetation Car Wall Traffic-Sign Event Reconstruction MaskCLIP ESS-Sup OpenESS FC-CLIP GT Figure 4. Qualitative comparisons of state-of-the-art ESS approaches on the test set of DSEC-Semantic [79]. Each color corresponds to a distinct semantic category. GT denotes the ground truth semantic maps. Best viewed in colors and zoomed-in for additional details. Table 2. Comparative study of different representation learning methods applied on event data. OV denotes whether supporting open-vocabulary predictions. All mIoU scores are in percentage (%). The best score from each dataset is highlighted in bold. Method Venue Backbone OV DDD17 DSEC Random ViT-S/16 \u2717 48.76 40.53 MoCoV3 [16] ICCV\u201921 ViT-S/16 \u2717 53.65 49.21 IBoT [101] ICLR\u201922 ViT-S/16 \u2717 49.94 42.53 ECDP [95] ICCV\u201923 ViT-S/16 \u2717 54.66 47.91 Random ViT-B/16 \u2717 43.89 38.24 BeiT [3] ICLR\u201922 ViT-B/16 \u2717 52.39 46.52 MAE [40] CVPR\u201922 ViT-B/16 \u2717 52.36 47.56 Random ResNet-50 \u2717 56.96 57.60 SimCLR [14] ICML\u201920 ResNet-50 \u2717 57.22 59.06 ECDP [95] ICCV\u201923 ResNet-50 \u2717 59.15 59.16 Random ResNet-50 \u2717 55.56 52.86 OpenESS Ours ResNet-50 \u2713 57.01 55.01 Random E2VID \u2717 61.06 54.96 OpenESS Ours E2VID \u2713 63.00 57.21 40, 95, 101] and contrastive learning-based [14, 16] pretraining methods. As can be seen, the proposed OpenESS achieves competitive results over existing approaches. It is worth highlighting again that our framework distinct from prior arts by supporting open-vocabulary learning. 4.3. Ablation Study Cross-Modality Representation Learning. Tab. 4 provides a comprehensive ablation study on the frame-to-event (F2E) and text-to-event (T2E) learning objectives in OpenESS using three event representations. 
We observe that both F2E and T2E contribute to an overt improvement over random initialization under linear probing and few-shot fine-tuning settings, which verifies the effectiveness of our proposed approach. Once again, we find that the voxel grids tend to achieve better performance than other representations. The spike-based methods [49], albeit being computationally more efficient, show sub-par performance compared to voxel grids and reconstructions. Figure 5. Cross-dataset representation learning results comparing OpenESS pre-training using in-distribution (ID) and out-of-distribution (OOD) data between the DDD17-Seg [5] and DSEC-Semantic [79] datasets. Models after pre-training are fine-tuned with 1%, 5%, 10%, and 20% annotations, respectively. Superpixel Generation. We study the utilization of SLIC [1] and SAM [50] in our frame-to-event contrastive distillation and show the results in Fig. 3. 
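As a concrete reference for this ablation, the SLIC superpixels could be produced with scikit-image roughly as follows; the parameter choices are illustrative, with n_segments playing the role of Mslic, and a recent scikit-image version is assumed.

```python
import numpy as np
from skimage.segmentation import slic

def frame_to_superpixels(frame, n_segments=100):
    """Group frame pixels into superpixels with SLIC (M_slic = n_segments).

    frame:   (H, W, 3) RGB image, uint8 or float in [0, 1]
    returns: (H, W) integer superpixel label map starting at 0
    """
    labels = slic(frame, n_segments=n_segments, compactness=10.0, start_label=0)
    return labels.astype(np.int64)
```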
Configuration F2E T2E DDD17-Seg LP 1% 5% 10% 20% Voxel Grid Random 12.30 52.13 54.87 58.66 59.52 frame2voxel \u2713 52.60 55.41 57.07 59.77 60.21 \u2713 54.11 56.77 58.95 60.12 60.99 \u2713 \u2713 55.61 57.58 59.07 61.03 61.78 Reconstruction Random 13.89 45.30 52.03 53.02 54.05 frame2recon \u2713 50.21 50.96 53.67 54.21 54.92 \u2713 52.62 51.63 54.27 55.00 55.17 \u2713 \u2713 53.61 52.02 55.11 55.66 56.07 Spike Random 12.04 10.01 20.02 25.81 26.03 frame2spike \u2713 15.07 14.31 21.77 26.89 27.07 \u2713 16.11 14.67 22.61 27.97 29.01 \u2713 \u2713 16.27 14.89 23.54 28.51 29.98 works pre-trained by DINO [9], MoCoV2 [15], or SwAV [8], the SAM-generated superpixels consistently exhibit better performance for event representation learning. The number of superpixels involved in calculating tends to affect the effectiveness of contrastive learning. A preliminary search to determine this hyperparameter is required. We empirically find that setting M to 100 for DSEC-Semantic [79] and 25 for DDD17-Seg [2] will likely yield the best possible segmentation performance in our framework. Cross-Dataset Knowledge Transfer. Since we are targeting annotation-free representation learning, it is thus intuitive to see the cross-dataset adaptation effect. As shown in Fig. 5, pre-training on OOD datasets also brings appealing improvements over the random initialization baseline. This result highlights the importance of conducting representation learning for an effective transfer to downstream tasks. 6.70 26.62 31.22 33.67 41.31 10.05 27.84 32.79 34.21 42.13 44.26 41.41 44.97 46.25 48.28 0 14 28 42 56 70 LP 1% 5% 10% 20% mIoU (%) Figure 6. Single-modality OpenESS representation learning study on the DSEC-Semantic [79] dataset. The results are from models of random initialization (\u25a0 \u25a1), recon2voxel pre-training (\u25a0 \u25a1), and frame2voxel pre-training (\u25a0 \u25a1), respectively, after linear probing (LP) and annotation-efficient fine-tuning. Framework with Event Camera Only. Lastly, we study the scenario where the frame camera becomes unavailable. We replace the input to the frame branch with event reconstructions [73] and show the results in Fig. 6. Since the limited visual cues from the reconstruction tend to degrade the quality of representation learning, its performance is subpar compared to the frame-based knowledge transfer. 5. Conclusion In this work, we introduced OpenESS, an open-vocabulary event-based semantic segmentation framework tailored to perform open-vocabulary ESS in an annotation-efficient manner. We proposed to encourage cross-modality representation learning between events and frames using frameto-event contrastive distillation and text-to-event semantic consistency regularization. Through extensive experiments, we validated the effectiveness of OpenESS in tackling dense event-based predictions. We hope this work could shed light on the future development of more scalable ESS systems. Acknowledgement. This work is under the programme DesCartes and is supported by the National Research Foundation, Prime Minister\u2019s Office, Singapore, under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. 8 Appendix A . Additional Implementation Details 9 A.1 . Datasets . . . . . . . . . . . . . . . . . . . . 9 A.2 . Text Prompts . . . . . . . . . . . . . . . . . 9 A.3 . Superpixels . . . . . . . . . . . . . . . . . . 10 A.4 . Backbones . . . . . . . . . . . . . . . . . . 14 A.5 . Evaluation Configuration . . . . . . . . . . . 14 B . 
Additional Experimental Results 16 B.1. Annotation-Free ESS . . . . . . . . . . . . . 16 B.2. Annotation-Efficient ESS . . . . . . . . . . 16 C . Qualitative Assessment 16 C.1. Open-Vocabulary Examples . . . . . . . . . 16 C.2. Visual Comparisons . . . . . . . . . . . . . 16 C.3. Failure Cases . . . . . . . . . . . . . . . . . 16 C.4. Video Demos . . . . . . . . . . . . . . . . . 17 D . Broader Impact 17 D.1 . Positive Societal influence . . . . . . . . . . 17 D.2 . Potential Limitation . . . . . . . . . . . . . 18 E . Public Resources Used 18 E.1. Public Datasets Used . . . . . . . . . . . . . 18 E.2. Public Implementations Used . . . . . . . . 18 A. Additional Implementation Details In this section, we provide additional details to assist the implementation and reproduction of the approaches in the proposed OpenESS framework. A.1. Datasets In this study, we follow prior works [2, 38, 47, 79] by using the DDD17-Seg [2] and DSEC-Semantic [79] datasets for evaluating and validating the baselines, prior methods, and the proposed OpenESS framework. Some specifications related to these two datasets are listed as follows. \u2022 DDD17-Seg [2] serves as the first benchmark for ESS. It is a semantic segmentation extension of the DDD17 [5] dataset, which includes hours of driving data, capturing a variety of driving conditions such as different times of day, traffic scenarios, and weather conditions. Alonso and Murillo [2] provide the semantic labels on top of DDD17 to enable event-based semantic segmentation. Specifically, they proposed to use the corresponding gray-scale images along with the event streams to generate an approximated set of semantic labels for training, which was proven effective in training models to segment directly on event-based data. A three-step procedure is applied: i) train a semantic segmentation model on the gray-scale images in the Cityscapes dataset [19]; ii) Use the trained model to label the gray-scale images in DDD17; and iii) Conduct a post-processing step on the generated pseudo labels, including class merging and image cropping. The dataset specification is shown in Tab. 5. In total, there are 15950 training and 3890 test samples in the DDD17Seg dataset. Each pixel is labeled across six semantic classes, including flat, background, object, vegetation, human, and vehicle. For each sample, we convert the event streams into a sequence of 20 voxel grids, each consisting of 32000 events and with a spatial resolution of 352 \u00d7 200. For additional details of this dataset, kindly refer to http://sensors.ini. uzh.ch/news_page/DDD17.html. \u2022 DSEC-Semantic [79] is a semantic segmentation extension of the DSEC (Driving Stereo Event Camera) dataset [32]. DSEC is an extensive dataset designed for advanced driver-assistance systems (ADAS) and autonomous driving research, with a particular focus on event-based vision and stereo vision. Different from DDD17 [5], the DSEC dataset combines data from event-based cameras and traditional RGB cameras. The inclusion of event-based cameras (which capture changes in light intensity) alongside regular cameras provides a rich, complementary data source for perception tasks. The dataset typically features high-resolution images and event data, providing detailed visual information from a wide range of driving conditions, including urban, suburban, and highway environments, various weather conditions, and different times of the day. This diversity is crucial for developing systems that can operate reliably in real-world conditions. 
Based on such a rich collection, Sun et al. [79] adopted a similar pseudo labeling procedure as DDD17-Seg [2] and generated the semantic labels for eleven sequences in DSEC, dubbed as DSEC-Semantic. The dataset specification is shown in Tab. 6. In total, there are 8082 training and 2809 test samples in the DSEC-Semantic dataset. Each pixel is labeled across eleven semantic classes, including background, building, fence, person, pole, road, sidewalk, vegetation, car, wall, and traffic-sign. For each sample, we convert the event streams into a sequence of 20 voxel grids, each consisting of 100000 events and with a spatial resolution of 640 \u00d7 440. For additional details of this dataset, kindly refer to https://dsec.ifi.uzh.ch/dsec-semantic. A.2. Text Prompts To enable the conventional evaluation of our proposed open-vocabulary approach on an event-based semantic segmentation dataset, we need to use the pre-defined class names as text prompts to generate the text embedding. Specifically, we follow the standard templates [69] when generating the embedding. The dataset-specific text prompts defined in our framework are listed as follows. \u2022 DDD17-Seg. There is a total of six semantic classes in 9 Table 5. The specifications of the DDD17-Seg dataset [2]. Training Test Seq dir0 dir3 dir4 dir6 dir7 dir1 # Frames 11785 20051 41071 28411 58650 71680 # Events 5550 1320 6945 1140 995 3890 Resolution 352 \u00d7 200 352 \u00d7 200 # Classes 6 Classes 6 Classes Table 6. The specifications of the DSEC-Semantic dataset [79]. Training Test Seq 00 a 01 a 02 a 04 a 05 a 06 a 07 a 08 a 13 a 14 c 15 a # Frames 939 681 235 701 1753 1523 1463 787 379 1191 1239 # Events 933 675 229 695 1747 1517 1457 781 373 1185 1233 Resolution 640 \u00d7 440 640 \u00d7 440 # Classes 11 Classes 11 Classes the DDD17-Seg dataset [2], with static and dynamic components of driving scenes. Our defined text prompts of this dataset are summarized in Tab. 7. For each semantic class, we generate for each text prompt the text embedding using the CLIP text encoder and then average the text embedding of all text prompts as the final embedding of this class. \u2022 DSEC-Semantic. There is a total of eleven semantic classes in the DSEC-Semantic dataset [79], ranging from static and dynamic components of driving scenes. Our defined text prompts of this dataset are summarized in Tab. 8. For each semantic class, we generate for each text prompt the text embedding using the CLIP text encoder and then average the text embedding of all text prompts as the final embedding of this class. A.3. Superpixels In image processing and computer vision, superpixels can be defined as a scheme that groups pixels in an image into perceptually meaningful atomic regions, which are used to replace the rigid structure of the pixel grid [1]. Superpixels provide a more natural representation of the image structure, often leading to more efficient and effective image processing. Here are some of their key aspects: \u2022 Grouping Pixels. Superpixels are often formed by clustering pixels based on certain criteria like color similarity, brightness, texture, and other low-level patterns [1], or more recently, semantics [50]. This results in contiguous regions in the image that are more meaningful than individual pixels for many applications [13, 57, 67, 94]. \u2022 Reducing Complexity. By aggregating pixels into superpixels, the complexity of image data is significantly reduced [78]. 
This reduction helps in speeding up subsequent image processing tasks, as algorithms have fewer elements (superpixels) to process compared to the potentially millions of pixels in an image. \u2022 Preserving Edges. One of the primary goals of superpixel segmentation is to preserve important image edges. Superpixels often adhere closely to the boundaries of objects in the image, making them useful for tasks that rely on accurate edge information, like object recognition and scene understanding. In this work, we propose to first leverage calibrated frames to generate coarse, instance-level superpixels and then distill knowledge from a pre-trained image backbone to the event segmentation network. Specifically, we resort to the following two ways to generate the superpixels. \u2022 SLIC. The first way is to leverage the heuristic Simple Linear Iterative Clustering (SLIC) approach [1] to efficiently group pixels from frame Iimg i into a total of Mslic segments with good boundary adherence and regularity. The superpixels are defined as Isp i = {I1 i , I2 i , ..., IMslic i }, where Mslic is a hyperparameter that needs to be adjusted based on the inputs. The generated superpixels satisfy I1 i \u222aI2 i \u222a... \u222aIMslic i = {1, 2, ..., H \u00d7 W}. Several examples of the SLICgenerated superpixels are shown in the second row of Fig. 7, where each of the color-coded patches represents one distinct and semantically coherent superpixel. \u2022 SAM. For the second option, we use the recent Seg10 Table 7. The text prompts defined on the DDD17-Seg dataset [2] (6 classes) used for generating the CLIP text embedding. DDD17 (6 classes) # class text prompt 0 flat \u2018road\u2019, \u2018driveable\u2019, \u2018street\u2019, \u2018lane marking\u2019, \u2018bicycle lane\u2019, \u2018roundabout lane\u2019, \u2018parking lane\u2019, \u2018terrain\u2019, \u2018grass\u2019, \u2018soil\u2019, \u2018sand\u2019, \u2018lawn\u2019, \u2018meadow\u2019, \u2018turf\u2019 1 background \u2018sky\u2019, \u2018building\u2019 2 object \u2018pole\u2019, \u2018traffic sign pole\u2019, \u2018traffic light pole\u2019, \u2018traffic light box\u2019, \u2018traffic-sign\u2019, \u2018parking-sign\u2019, \u2018direction-sign\u2019 3 vegetation \u2018vegetation\u2019, \u2018vertical vegetation\u2019, \u2018tree\u2019, \u2018tree trunk\u2019, \u2018hedge\u2019, \u2018woods\u2019, \u2018terrain\u2019, \u2018grass\u2019, \u2018soil\u2019, \u2018sand\u2019, \u2018lawn\u2019, \u2018meadow\u2019, \u2018turf\u2019 4 human \u2018person\u2019, \u2018pedestrian\u2019, \u2018walking people\u2019, \u2018standing people\u2019, \u2018sitting people\u2019, \u2018toddler\u2019 5 vehicle \u2018car\u2019, \u2018jeep\u2019, \u2018SUV\u2019, \u2018van\u2019, \u2018caravan\u2019, \u2018truck\u2019, \u2018box truck\u2019, \u2018pickup truck\u2019, \u2018trailer\u2019, \u2018bus\u2019, \u2018public bus\u2019, \u2018train\u2019, \u2018vehicle-on-rail\u2019, \u2018tram\u2019, \u2018motorbike\u2019, \u2018moped\u2019, \u2018scooter\u2019, \u2018bicycle\u2019 Table 8. The ext prompts defined on the DSEC-Semantic dataset [79] (11 classes) used for generating the CLIP text embedding. 
DSEC-Semantic (11 classes) # class text prompt 0 background \u2018sky\u2019 1 building \u2018building\u2019, \u2018skyscraper\u2019, \u2018house\u2019, \u2018bus stop building\u2019, \u2018garage\u2019, \u2018carport\u2019, \u2018scaffolding\u2019 2 fence \u2018fence\u2019, \u2018fence with hole\u2019 3 person \u2018person\u2019, \u2018pedestrian\u2019, \u2018walking people\u2019, \u2018standing people\u2019, \u2018sitting people\u2019, \u2018toddler\u2019 4 pole \u2018pole\u2019, \u2018electric pole\u2019, \u2018traffic sign pole\u2019, \u2018traffic light pole\u2019 5 road \u2018road\u2019, \u2018driveable\u2019, \u2018street\u2019, \u2018lane marking\u2019, \u2018bicycle lane\u2019, \u2018roundabout lane\u2019, \u2018parking lane\u2019 6 sidewalk \u2018sidewalk\u2019, \u2018delimiting curb\u2019, \u2018traffic island\u2019, \u2018walkable\u2019, \u2018pedestrian zone\u2019 7 vegetation \u2018vegetation\u2019, \u2018vertical vegetation\u2019, \u2018tree\u2019, \u2018tree trunk\u2019, \u2018hedge\u2019, \u2018woods\u2019, \u2018terrain\u2019, \u2018grass\u2019, \u2018soil\u2019, \u2018sand\u2019, \u2018lawn\u2019, \u2018meadow\u2019, \u2018turf\u2019 8 car \u2018car\u2019, \u2018jeep\u2019, \u2018SUV\u2019, \u2018van\u2019, \u2018caravan\u2019, \u2018truck\u2019, \u2018box truck\u2019, \u2018pickup truck\u2019, \u2018trailer\u2019, \u2018bus\u2019, \u2018public bus\u2019, \u2018train\u2019, \u2018vehicle-on-rail\u2019, \u2018tram\u2019, \u2018motorbike\u2019, \u2018moped\u2019, \u2018scooter\u2019, \u2018bicycle\u2019 9 wall \u2018wall\u2019, \u2018standing wall\u2019 10 traffic-sign \u2018traffic-sign\u2019, \u2018parking-sign\u2019, \u2018direction-sign\u2019, \u2018traffic-sign without pole\u2019, \u2018traffic light box\u2019 ment Anything Model (SAM) [50] which takes Iimg i as the input and outputs Msam class-agnostic masks. For simplicity, we use M to denote the number of superpixels used during knowledge distillation, i.e., {Isp i = {I1 i , ..., Ik i }|k = 1, ..., M}. Several examples of the SAM-generated superpixels are shown in the third row of Fig. 7, where each of the color-coded patches represents one distinct and semantically coherent superpixel. We calculate the SLIC and SAM superpixel distributions on the training set of the DSEC-Semantic dataset [79] and show the corresponding statistics in Fig. 8. As can be observed, the SLIC-generated superpixels often contain 11 Frame Superpixel (SLIC) Superpixel (SAM) Figure 7. Examples of superpixels generated by SLIC [1] (the 2nd row) and SAM [50] (the 3rd row). The parameter Mslic in the SLIC algorithm is set to 100. Each colored patch represents one distinct and semantically coherent superpixel. Best viewed in colors. (a) Histogram of SLIC-Generated Superpixels (b) Histogram of SAM-Generated Superpixels Figure 8. The statistical distributions of superpixels generated by SLIC [1] (subfigure a) and SAM [50] (subfigure b). more low-level visual cues, such as color similarity, brightness, and texture. On the contrary, superpixels generated by SAM exhibit clear semantic coherence and often depict the boundaries of objects and backgrounds. As verified in the main body of this paper, the semantically richer SAM superpixels bring higher performance gains in our Frameto-Event Contrastive Learning framework. Meanwhile, we provide more fine-grained examples of the SLIC algorithm using different Mslic, i.e., 25, 50, 100, 150, and 200. The results are shown in Fig. 9. 
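Since SAM returns a set of class-agnostic binary masks rather than a single label map, one simple way to use its output in place of SLIC labels is to flatten the masks into one integer map. A minimal sketch, with overlap handling deliberately simplified (later masks overwrite earlier ones), is given below; it is our illustration, not the pipeline's exact implementation.

```python
import numpy as np

def masks_to_label_map(masks, H, W):
    """Convert class-agnostic binary masks (e.g. from SAM) into one label map.

    masks:   list of (H, W) boolean arrays, one per proposed segment
    returns: (H, W) integer map; pixels covered by no mask keep id 0
    """
    labels = np.zeros((H, W), dtype=np.int64)
    for i, mask in enumerate(masks, start=1):
        labels[mask] = i
    return labels
```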
Specifically, the number of superpixels Mslic should reflect the complexity and detail of the image. For images with high detail or complexity (like those with many objects or textures), a larger Mslic can capture more of this detail. Conversely, for simpler images, fewer superpixels might be sufficient. Usually, more superpixels mean smaller superpixels. Smaller superpixels can adhere more closely to object boundaries and capture finer details, but they might also capture more noise. Fewer superpixels result in larger, more homogeneous regions but may lead to a loss of detail, especially 12 Frame SLIC = 25 SLIC = 100 SLIC = 50 SLIC = 150 SLIC = 200 Figure 9. Examples of superpixels generated by SLIC [1] with different numbers of superpixels Mslic (25, 50, 100, 150, and 200). Each colored patch represents one distinct and semantically coherent superpixel. Best viewed in colors. at the edges of objects. The choice also depends on the specific application. For instance, in object detection or segmentation tasks where boundary adherence is crucial, a higher number of superpixels might be preferable. In contrast, for tasks like image compression or abstraction, fewer superpixels might be more appropriate. Often, the optimal number of superpixels is determined empirically. This involves experimenting with different values and evaluating the results based on the specific criteria of the task or application. In our event-based semantic segmentation task, we choose Mslic = 100 for our Frame-to-Event Contrastive Learning on the DSEC-Semantic dataset [79], and Mslic = 25 on the DDD17-Seg dataset [2]. Since Ievt i and Iimg i have been aligned and synchro13 nized, we can group events from Ievt i into superevents {V sp i = {V1 i , ..., Vl i}|l = 1, ..., M} by using the known event-pixel correspondences. A.4. Backbones As mentioned in the main body of this paper, we establish three open-vocabulary event-based semantic segmentation settings based on the use of three different event representations, i.e., frame2voxel, frame2recon, and frame2spike. It is worth noting that these three event representations tend to have their own advantages. We supplement additional implementation details regarding the used event representations as follows. \u2022 Frame2Voxel. For the use of voxel grids as the event embedding, we follow Sun et al. [79] by converting the raw events \u03b5i into the regular voxel grids Ivox i \u2208RC\u00d7H\u00d7W as the input to the event-based semantic segmentation network. This representation is intuitive and aligns well with conventional event camera data processing techniques. It is suitable for convolutional neural networks as it maintains spatial and temporal relationships. Specifically, with a predefined number of events, each voxel grid is built from non-overlapping windows as follows: I^ { v o x}_i = \\sum _{\\mat hbf {e}_j\\in \\var epsilon _i} p_j \\delta (\\mathbf {x}_j \\mathbf {x}) \\delta (\\mathbf {y}_j \\mathbf {y}) \\max \\{1 |t^{*}_j t| , 0\\},~ \\label {eq:vox} (8) where \u03b4 is the Kronecker delta function; t\u2217 j = (B \u2212 1) tj\u2212t0 \u2206T is the normalized event timestamp with B as the number of temporal bins in an event stream; \u2206T is the time window and t0 denotes the time of the first event in the window. It is worth noting that voxel grids can be memory-intensive, especially for high-resolution sensors or long-time windows. They might also introduce quantization errors due to the discretization of space and time. 
For additional details on the use of voxel grids, kindly refer to https://github.com/uzh-rpg/ess. \u2022 Frame2Recon. For the use of event reconstructions as the event embedding, we follow Sun et al. [79] and Rebecq et al. [73] by converting the raw events \u03b5i into the regular frame-like event reconstructions Irec i \u2208RH\u00d7W as the input to the event-based semantic segmentation network. This can be done by accumulating events over short time intervals or by using algorithms to interpolate or simulate frames. This approach is compatible with standard image processing techniques and algorithms developed for frame-based vision. It is more familiar to practitioners used to working with conventional cameras. In this work, we adopt the E2VID model [73] to generate the event reconstructions. This process can be described as follows: r\\m a t hbf {z}^{re c } _k = &~ ~E _ { \\t ext {e 2vi d}} ( I ^{vox}_k, \\mathbf {z}^{rec}_{k-1}), ~~~ k = 1, ..., N,~ \\\\ I^{rec}_i =&~ ~D_{\\text {e2vid}} (\\mathbf {z}^{rec}),~ (10) where Ivox k denotes the voxel grids as defined in Eq. (8); Ee2vid and De2vid are the encoder of decoder of the E2VID model [73], respectively. It is worth noting that event reconstructions can lose the fine temporal resolution that event cameras provide. They might also introduce artifacts or noise, especially in scenes with fast-moving objects or low event rates. For additional details on the use of event reconstructions, kindly refer to https: //github.com/uzh-rpg/rpg_e2vid. \u2022 Frame2Spike. For the use of spikes as the event embedding, we follow Kim et al. [49] by converting the raw events \u03b5i into spikes Ispk i \u2208RH\u00d7W as the input to the event-based semantic segmentation network. The spike representation keeps the data in its raw form \u2013 as individual spikes or events. This representation preserves the high temporal resolution of the event data and is highly efficient in terms of memory and computation, especially for sparse scenes. The rate coding is used as the spike encoding scheme due to its reliable performance across various tasks. Each pixel value with a random number ranging between [smin, smax] at every time step is recorded, where smin and smax are the minimum and maximum possible pixel intensities, respectively. If the random number is greater than the pixel intensity, the Poisson spike generator outputs a spike with amplitude 1. Otherwise, the Poisson spike generator does not yield any spikes. The spikes in a certain time window are accumulated to generate a frame, where such frames will serve as the input to the event-based semantic segmentation network. It is worth noting that processing raw spike data requires specialized algorithms, often inspired by neuromorphic computing. It might not be suitable for traditional image processing techniques and can be challenging to interpret and visualize. For additional details on the use of spikes, kindly refer to https:// github.com/Intelligent-Computing-LabYale/SNN-Segmentation. To sum up, each event representation has its unique characteristics and is suitable for different applications or processing techniques. Our proposed OpenESS framework is capable of leveraging each of the above event representations for efficient and accurate event-based semantic segmentation in an annotation-free and open-vocabulary manner. Such a versatile and flexible way of learning verifies the broader application potential of our proposed framework. A.5. 
Evaluation Configuration Following the convention, we use the Intersection-overUnion (IoU) metric to measure the semantic segmentation performance for each semantic class. The IoU score can be calculated via the following equation: \\ t ex tt t { I oU } = \\frac {TP}{TP+FP+FN}~, (11) 14 Table 9. The per-class segmentation results of annotation-free event-based semantic segmentation approaches on the test set of DSECSemantic [79]. Scores reported are IoUs in percentage (%). For each semantic class, the best score in each column is highlighted in bold. Method mIoU background building fence person pole road sidewalk vegetation car wall traffic-sign Acc Annotation-Free ESS MaskCLIP [100] 21.97 26.45 52.59 0.20 0.04 4.19 65.76 2.96 48.02 40.67 0.67 0.08 58.96 FC-CLIP [97] 39.42 87.49 69.68 14.39 17.53 0.29 71.76 34.56 71.30 63.19 2.98 0.50 79.20 OpenESS (Ours) 43.31 92.53 74.22 11.96 0.00 0.41 87.32 55.09 74.23 64.25 7.98 8.47 86.18 Table 10. The per-class segmentation results of annotation-efficient event-based semantic segmentation approaches on the test set of DSEC-Semantic [79]. All approaches adopted the frame2voxel representation. Scores reported are IoUs in percentage (%). For each semantic class under each experimental setting, the best score in each column is highlighted in bold. Method mIoU background building fence person pole road sidewalk vegetation car wall traffic-sign Acc Linear Probing Random 6.70 7.85 3.37 0.00 0.00 0.00 38.60 0.00 23.83 0.01 0.00 0.00 37.94 MaskCLIP [100] 33.08 75.04 65.06 4.63 0.00 6.47 77.06 17.07 55.89 52.17 0.69 9.78 76.39 FC-CLIP [97] 43.00 92.53 72.59 12.43 0.02 0.00 88.14 52.84 71.92 64.02 10.54 7.95 86.00 OpenESS (Ours) 44.26 93.64 75.40 11.82 1.16 0.75 90.29 57.96 73.15 65.36 9.69 7.67 87.55 Fine-Tuning (1%) Random 26.62 81.63 33.13 1.77 0.97 7.58 76.81 17.45 51.05 18.64 0.37 3.40 70.04 MaskCLIP [100] 33.89 87.56 53.24 2.34 0.60 8.92 81.71 25.76 59.37 42.56 2.52 8.24 77.79 FC-CLIP [97] 39.12 91.64 59.78 8.93 0.00 7.84 87.58 46.58 66.87 51.30 4.74 5.10 82.12 OpenESS (Ours) 41.41 93.01 74.01 3.21 10.78 14.58 84.50 34.78 69.82 55.12 4.47 11.21 84.41 Fine-Tuning (5%) Random 31.22 77.13 50.32 12.36 1.26 0.00 86.03 41.22 21.48 50.67 2.96 0.04 71.38 MaskCLIP [100] 37.03 91.09 60.52 4.35 11.90 11.73 81.24 23.56 61.77 45.93 2.75 12.45 79.58 FC-CLIP [97] 43.71 92.91 71.21 10.84 0.00 5.60 90.11 57.54 71.30 61.04 11.41 8.81 86.38 OpenESS (Ours) 44.97 93.58 70.18 8.44 18.22 11.01 89.72 57.76 67.44 56.06 9.59 12.70 85.46 Fine-Tuning (10%) Random 33.67 85.79 49.85 6.78 8.00 15.51 80.78 25.72 58.18 29.97 0.82 8.93 76.69 MaskCLIP [100] 38.83 92.34 69.96 3.64 5.85 12.98 82.23 23.61 66.39 53.23 3.47 13.46 82.36 FC-CLIP [97] 44.09 93.62 72.86 10.88 0.00 8.23 89.81 57.05 71.95 60.64 9.58 10.42 86.66 OpenESS (Ours) 46.25 93.92 73.34 8.13 18.61 15.41 89.03 52.56 71.76 61.71 9.99 14.26 86.72 Fine-Tuning (20%) Random 41.31 91.08 67.90 4.68 17.90 17.41 85.11 43.24 66.62 43.95 5.03 11.55 82.99 MaskCLIP [100] 42.40 93.19 72.49 5.52 18.21 16.17 84.29 35.04 69.44 54.47 2.43 15.15 84.09 FC-CLIP [97] 47.77 91.05 70.90 7.04 21.10 14.84 91.13 64.28 71.62 61.73 13.25 18.55 86.95 OpenESS (Ours) 48.28 94.21 74.66 10.49 20.46 16.27 90.15 57.66 73.71 63.95 11.20 18.29 87.57 where TP (True Positive) denotes pixels correctly classified as belonging to the class; FP (False Positive) denotes pixels incorrectly classified as belonging to the class; and FN (False Negative) denotes pixels that belong to the class but are incorrectly classified as something else. 
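As a concrete reading of Eq. (11), the per-class IoU (and its class average, the mIoU discussed next) can be computed from a confusion matrix. The NumPy sketch below uses illustrative names and is not the benchmark's reference implementation.

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Compute IoU_c = TP / (TP + FP + FN) for each class from
    integer label maps `pred` and `gt` of the same shape."""
    pred = pred.ravel(); gt = gt.ravel()
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(gt * num_classes + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm).astype(np.float64)
    fp = cm.sum(axis=0) - tp   # predicted as class c but actually not c
    fn = cm.sum(axis=1) - tp   # actually class c but predicted otherwise
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)

# Toy 2x3 label maps with 3 classes; the class average is the mIoU.
gt   = np.array([[0, 1, 1], [2, 2, 0]])
pred = np.array([[0, 1, 2], [2, 2, 0]])
iou = per_class_iou(pred, gt, num_classes=3)
print(iou, np.nanmean(iou))  # per-class IoU and mIoU
```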
The IoU metric measures the overlap between the predicted segmentation and the ground truth for a specific class. It returns a value between 0 (no overlap) and 1 (perfect overlap). The mean IoU (mIoU) summarizes the per-class IoU values into a single metric that captures the overall performance of the model across all classes. The mIoU of a given prediction is calculated as: \\texttt{mIoU} = \\frac{1}{C}\\sum_{i=1}^C \\texttt{IoU}_i, (12) where C is the number of classes and IoU_i denotes the score of class i. mIoU provides a balanced measure since each class contributes equally to the final score, regardless of its size or frequency in the dataset. A higher mIoU indicates better semantic segmentation performance. A score of 1 would indicate perfect segmentation for all classes, while a score of 0 would imply an absence of correct predictions. In this work, all the compared approaches adopt the same mIoU calculation as in the ESS benchmarks [2, 79]. Additionally, we also report the semantic segmentation accuracy (Acc) for the baselines and the proposed framework. B. Additional Experimental Results In this section, we provide the class-wise IoU scores for the experiments conducted in the main body of this paper. B.1. Annotation-Free ESS The per-class zero-shot event-based semantic segmentation results are shown in Tab. 9. For almost every semantic class, we observe that the proposed OpenESS achieves much higher IoU scores than MaskCLIP [100] and FC-CLIP [97]. This validates the effectiveness of OpenESS for conducting efficient and accurate event-based semantic segmentation without using either the event or frame labels. B.2. Annotation-Efficient ESS The per-class linear probing event-based semantic segmentation results are shown in the first block of Tab. 10 and Tab. 11. Specifically, compared to the random initialization baseline, a self-supervised pre-trained network always provides better features. The quality of representation learning often determines the linear probing performance. The network pre-trained using our frame-to-event contrastive distillation and text-to-event consistency regularization tends to achieve higher event-based semantic segmentation results than MaskCLIP [100] and FC-CLIP [97]. Notably, such improvements are holistic across almost all eleven semantic classes in the dataset. These results validate the effectiveness of the proposed OpenESS framework in tackling the challenging event-based semantic segmentation task. The per-class annotation-efficient event-based semantic segmentation results of the frame2voxel and frame2recon settings under 1%, 5%, 10%, and 20% annotation budgets are shown in Tab. 10 and Tab. 11, respectively. Similar to the findings and conclusions drawn above, we observe clear superiority of the proposed OpenESS framework over the random initialization, MaskCLIP [100], and FC-CLIP [97] approaches. Such consistent performance improvements validate again the effectiveness and superiority of the proposed frame-to-event contrastive distillation and text-to-event consistency regularization. We hope our framework can lay a solid foundation for future works in the established annotation-efficient event-based semantic segmentation setting. C. Qualitative Assessment In this section, we provide sufficient qualitative examples to further attest to the effectiveness and superiority of the proposed framework. C.1. 
Open-Vocabulary Examples The key advantage of our proposed OpenESS framework is its capability to leverage open-world vocabularies from the CLIP text embedding space. Unlike prior event-based semantic segmentation, which relies on pre-defined and fixed categories, our open-vocabulary segmentation aims to understand and categorize image regions into a broader, potentially unlimited range of categories. We provide more openvocabulary examples in Fig. 10. As can be observed, given proper text prompts like \u201croad\u201d, \u201csidewalk\u201d, and \u201cbuilding\u201d, our proposed OpenESS framework is capable of generating semantically meaningful attention maps for depicting the corresponding regions. Such a flexible framework can be further adapted to new or unseen categories without the need for extensive retraining, which is particularly beneficial in dynamic environments where new objects or classes might frequently appear. Additionally, the open-vocabulary segmentation pipeline allows users to work with a more extensive range of objects and concepts, enhancing the user experience and interaction capabilities. C.2. Visual Comparisons In this section, we provide more qualitative comparisons of our proposed OpenESS framework over prior works [79, 100] on the DSEC-Semantic dataset. Specifically, the visual comparisons are shown in Fig. 11 and Fig. 12. As can be observed, OpenESS shows superior event-based semantic segmentation performance over prior works across a wide range of event scenes under different lighting and weather conditions. Such consistent segmentation performance improvements provide a solid foundation to validate the effectiveness and superiority of the proposed frameto-event contrastive distillation and text-to-event consistency regularization. For additional qualitative comparisons, kindly refer to Appendix C.4. C.3. Failure Cases As can be observed from Fig. 10, Fig. 11, and Fig. 12, the existing event-based semantic segmentation approaches still have room for further improvements. Similar to the conventional semantic segmentation task, it is often hard to accurately segment the boundaries between the semantic objects and backgrounds. In the context of event-based semantic segmentation, such a problem tends to be particularly overt. Unlike traditional cameras that capture dense, 16 Table 11. The per-class segmentation results of annotation-efficient event-based semantic segmentation approaches on the test set of DSEC-Semantic [79]. All approaches adopted the frame2recon representation. Scores reported are IoUs in percentage (%). For each semantic class under each experimental setting, the best score in each column is highlighted in bold. 
Method mIoU background building fence person pole road sidewalk vegetation car wall traffic-sign Acc Linear Probing Random 6.22 7.55 5.48 0.00 0.00 0.00 39.79 0.00 15.64 0.01 0.00 0.00 36.60 MaskCLIP [100] 27.09 59.82 62.14 1.60 0.00 4.54 69.71 5.34 47.85 38.51 0.40 8.12 70.59 FC-CLIP [97] 40.08 89.22 69.08 14.62 26.90 0.00 83.14 21.79 69.56 57.78 7.86 0.92 82.70 OpenESS (Ours) 44.08 88.56 61.43 6.05 21.54 12.36 91.43 63.04 64.01 60.52 6.18 9.76 84.48 Fine-Tuning (1%) Random 23.95 76.37 29.59 1.73 0.00 5.75 78.12 9.73 48.96 11.56 0.28 1.38 69.20 MaskCLIP [100] 30.73 79.25 47.26 0.13 1.17 5.04 78.78 19.72 56.13 43.74 1.13 5.70 74.25 FC-CLIP [97] 38.99 87.75 61.48 3.47 4.60 8.06 88.96 55.12 64.41 47.16 3.61 4.23 82.90 OpenESS (Ours) 43.17 87.85 66.15 8.82 21.52 12.41 89.36 55.35 72.45 48.76 3.40 8.81 84.56 Fine-Tuning (5%) Random 30.42 80.25 38.43 5.50 13.45 9.08 83.45 30.88 51.75 19.53 0.16 2.19 73.65 MaskCLIP [100] 36.33 85.80 60.43 2.60 8.70 7.47 83.10 34.04 64.80 39.60 3.07 10.00 80.37 FC-CLIP [97] 43.34 88.28 64.90 6.94 20.96 9.58 91.18 62.35 68.09 52.39 4.93 7.16 84.93 OpenESS (Ours) 45.58 89.11 70.83 10.92 20.21 1.99 91.04 60.76 72.07 67.91 12.90 3.69 86.93 Fine-Tuning (10%) Random 34.11 81.85 46.28 4.87 11.30 10.20 85.32 43.16 55.34 32.72 1.28 2.90 77.48 MaskCLIP [100] 40.13 87.31 62.54 4.93 5.09 12.86 88.30 50.60 64.74 55.21 0.32 9.51 83.52 FC-CLIP [97] 45.35 89.71 69.00 6.64 22.37 8.33 91.20 64.09 69.34 61.73 7.23 9.19 86.29 OpenESS (Ours) 48.94 90.63 71.68 12.41 29.32 9.42 92.53 66.19 73.76 69.03 10.71 12.71 87.84 Fine-Tuning (20%) Random 39.25 87.14 61.80 6.77 3.51 13.19 88.53 56.12 61.95 44.65 1.29 6.84 82.51 MaskCLIP [100] 43.37 89.83 69.80 7.07 8.93 10.67 88.88 52.65 70.71 60.03 3.10 15.39 85.69 FC-CLIP [97] 47.18 91.20 71.39 11.53 24.92 9.60 91.58 63.88 71.52 63.44 7.55 12.36 87.07 OpenESS (Ours) 49.74 91.28 73.43 10.69 27.18 13.85 92.84 67.59 74.20 69.22 10.62 16.21 88.26 synchronous frames, event cameras generate sparse, asynchronous events, which brings extra difficulties for accurate boundary segmentation. Meanwhile, the current framework finds it hard to accurately predict the minor classes, such as fence, pole, wall, and traffic-sign. We believe these are potential directions that future works can explore to further improve the event-based semantic segmentation performance on top of existing frameworks. C.4. Video Demos In addition to the qualitative examples shown in the main body and this supplementary file, we also provide several video clips to further validate the effectiveness and superiority of the proposed approach. Specifically, we provide three video demos in the attachment, named demo1.mp4, demo2.mp4, and demo3.mp4. The first two video demos show open-vocabulary event-based semantic segmentation examples using the class names and open-world vocabularies as the input text prompts, respectively. The third video demo contains qualitative comparisons of the semantic segmentation predictions among our proposed OpenESS and prior works. All the provided video sequences validate again the unique advantage of the proposed openvocabulary event-based semantic segmentation framework. Kindly refer to our GitHub repository1 for additional details on accessing these video demos. D. Broader Impact In this section, we elaborate on the positive societal influence and potential limitations of the proposed openvocabulary event-based semantic segmentation framework. D.1. 
Positive Societal Influence Event-based cameras can capture extremely fast motions that traditional cameras might miss, making them ideal for dynamic environments. In robotics, this leads to better object detection and scene understanding, enhancing the capabilities of robots in the manufacturing, healthcare, and service industries (footnote 1: https://github.com/ldkong1205/OpenESS). In autonomous driving, event-based semantic segmentation provides high temporal resolution and low latency, which is crucial for detecting sudden changes in the environment. This can lead to faster and more accurate responses, potentially reducing accidents and enhancing road safety. Our proposed OpenESS is designed to reduce the annotation budget and training burden of existing event-based semantic segmentation approaches. We believe such an efficient way of learning helps increase the scalability of event-based semantic segmentation systems and in turn contributes positively to society by enhancing safety, efficiency, and performance in various aspects. D.2. Potential Limitation Although our proposed framework is capable of conducting annotation-free and open-vocabulary event-based semantic segmentation and achieves promising performance, there tend to exist several potential limitations. Firstly, our current framework requires the existence of synchronized event and RGB cameras, which might not be maintained by some older event camera systems. Secondly, we directly adopt the standard text prompt templates to generate the text embedding, where a more sophisticated design could further improve the open-vocabulary learning ability of the existing framework. Thirdly, there might still be some self-conflict problems in our frame-to-event contrastive distillation and text-to-event consistency regularization. The design of a better representation learning paradigm on the event-based data could further resolve these issues. We believe these are promising directions that future works can explore to further improve the current framework. E. Public Resources Used In this section, we acknowledge the use of public resources during the course of this work. E.1. Public Datasets Used We acknowledge the use of the following public datasets during the course of this work: \u2022 DSEC (https://dsec.ifi.uzh.ch): CC BY-SA 4.0 \u2022 DSEC-Semantic (https://dsec.ifi.uzh.ch/dsec-semantic): CC BY-SA 4.0 \u2022 DDD17 (http://sensors.ini.uzh.ch/news_page/DDD17.html): CC BY-SA 4.0 \u2022 DDD17-Seg (https://github.com/Shathe/Ev-SegNet): Unknown \u2022 E2VID-Driving (https://rpg.ifi.uzh.ch/E2VID.html): GNU General Public License v3.0 E.2. Public Implementations Used We acknowledge the use of the following public implementations during the course of this work: \u2022 ESS (https://github.com/uzh-rpg/ess): GNU General Public License v3.0 \u2022 E2VID (https://github.com/uzh-rpg/rpg_e2vid): GNU General Public License v3.0 \u2022 HMNet (https://github.com/hamarh/HMNet_pth): BSD 3-Clause License \u2022 EV-SegNet (https://github.com/Shathe/Ev-SegNet): Unknown \u2022 SNN-Segmentation (https://github.com/Intelligent-Computing-LabYale/SNN-Segmentation): Unknown \u2022 CLIP (https://github.com/openai/CLIP): MIT License \u2022 MaskCLIP (https://github.com/chongzhou96/MaskCLIP): 
Apache License 2.0 \u2022 FC-CLIP (https://github.com/bytedance/fc-clip): Apache License 2.0 \u2022 SLIC-Superpixels (https://github.com/PSMM/SLIC-Superpixels): Unknown \u2022 Segment-Anything (https://github.com/facebookresearch/segment-anything): Apache License 2.0 Figure 10. Qualitative examples of the language-guided attention maps generated by the proposed OpenESS framework, given text prompts such as \u201croad\u201d, \u201csidewalk\u201d, and \u201cbuilding\u201d, shown alongside the event reconstruction and the ground truth (class legend: Background, Building, Fence, Person, Pole, Road, Sidewalk, Vegetation, Car, Wall, Traffic-Sign). For each sample, the regions with a high similarity score to the text prompts are highlighted. Best viewed in colors and zoomed-in for additional details. Figure 11. Qualitative comparisons (1/2) among different ESS approaches (MaskCLIP, ESS-Sup, OpenESS, and the ground truth, shown alongside the event reconstruction) on the test set of DSEC-Semantic [79]. Best viewed in colors. Figure 12. Qualitative comparisons (2/2) among different ESS approaches (MaskCLIP, ESS-Sup, OpenESS, and the ground truth, shown alongside the event reconstruction) on the test set of DSEC-Semantic [79]. Best viewed in colors."
17
+ }
intro_8K/test_introduction_short_2405.05330v1.json ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05330v1",
3
+ "title": "Chemo-dynamical Evolution of Simulated Satellites for a Milky Way-like Galaxy",
4
+ "abstract": "The chemical abundances of Milky Way's satellites reflect their star\nformation histories (SFHs), yet, due to the difficulty of determining the ages\nof old stars, the SFHs of most satellites are poorly measured. Ongoing and\nupcoming surveys will obtain around ten times more medium-resolution spectra\nfor stars in satellites than are currently available. To correctly extract SFHs\nfrom large samples of chemical abundances, the relationship between chemical\nabundances and SFHs needs to be clarified. Here, we perform a high-resolution\ncosmological zoom-in simulation of a Milky Way-like galaxy with detailed models\nof star formation, supernova feedback, and metal diffusion. We quantify SFHs,\nmetallicity distribution functions, and the $\\alpha$-element (Mg, Ca, and Si)\nabundances in satellites of the host galaxy. We find that star formation in\nmost simulated satellites is quenched before infalling to their host. Star\nformation episodes in simulated satellites are separated by a few hundred Myr\nowing to supernova feedback; each star formation event produces groups of stars\nwith similar [$\\alpha$/Fe] and [Fe/H]. We then perform a mock observation of\nthe upcoming Subaru Prime Focus Spectrograph (PFS) observations. We find that\nSubaru PFS will be able to detect distinct groups of stars in [$\\alpha$/Fe] vs.\n[Fe/H] space, produced by episodic star formation. This result means that\nepisodic SFHs can be estimated from the chemical abundances of $\\gtrsim$ 1,000\nstars determined with medium-resolution spectroscopy.",
5
+ "authors": "Yutaka Hirai, Evan N. Kirby, Masashi Chiba, Kohei Hayashi, Borja Anguiano, Takayuki R. Saitoh, Miho N. Ishigaki, Timothy C. Beers",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "astro-ph.GA",
9
+ "cats": [
10
+ "astro-ph.GA",
11
+ "astro-ph.HE",
12
+ "astro-ph.IM",
13
+ "astro-ph.SR"
14
+ ],
15
+ "label": "Original Paper",
16
+ "paper_cat": "Diffusion AND Model",
17
+ "gt": "1.",
18
+ "main_content": "\u2217JSPS Research Fellow stellar masses (M\u2217) less than \u2248109 M\u2299are typically categorized as dwarf galaxies. Among them, gas-free dwarf galaxies with M\u2217\u2273105 M\u2299are called classical dwarf spheroidal galaxies (dSphs), while those with M\u2217\u2272105 M\u2299are identified as ultrafaint dwarf galaxies (UFDs). Many of the dwarf galaxies in the Local Group are satellites of the MW or M31; interactions with their more massive hosts could affect the chemodynamical properties of these satellites (Genina et al. 2019; Kvasova et al. 2024). Satellites exhibit a wide variety of star formation histories (SFHs) and chemical abundances. The SFHs of Local Group dwarf galaxies can be derived by colormagnitude diagrams (CMDs, e.g., de Boer et al. 2012a,b; Weisz et al. 2014; Ren et al. 2024). Weisz et al. (2014) comprehensively studied SFHs in the Local Group dwarf arXiv:2405.05330v1 [astro-ph.GA] 8 May 2024 2 Hirai et al. galaxies. They found that more massive systems tend to have more extended SFHs. They also showed that MW or M31 satellites have a shorter duration of star formation than those in the field populations. Chemical abundances reflect the SFHs and nucleosynthesis pathways in satellites (e.g., Tolstoy et al. 2009; Kirby et al. 2010, 2011a,b; Ishigaki et al. 2014; Hill et al. 2019; Sk\u00b4 ulad\u00b4 ottir et al. 2024). Kirby et al. (2011b) analyzed metallicity distribution functions (MDFs) of the MW\u2019s satellites with Keck/DEIMOS with a chemical evolution model. They found that the MDFs of more-luminous systems are well-fit with their Extra Gas Model, which assumes gas infall. However, their best-fit effective yields suggested that gas outflow also played an important role in the chemical evolution of less-luminous systems. Thanks to the difference in the delay times between core-collapse supernovae (CCSNe) and type Ia supernovae (SNe Ia), the ratios of \u03b1-elements (e.g., Mg, Ca, and Si) to Fe are often used as an indicator for the rate of chemical evolution. For example, Hill et al. (2019) reported high-resolution spectroscopy of 99 stars in the Sculptor dSph. They found that the decreasing trend of [\u03b1/Fe]1 toward higher metallicity starts at [Fe/H] = \u22121.8. This metallicity is lower than the start of this trend in the MW, indicating that the chemical evolution of Sculptor dSph proceeded more slowly. Numerical simulations have been performed to understand the SFHs and chemical evolution of dwarf galaxies (e.g., Revaz et al. 2009; Okamoto et al. 2010; Revaz & Jablonka 2012, 2018; Hirai et al. 2015, 2017, 2018, 2019; Jeon et al. 2017; Escala et al. 2018; Simpson et al. 2018; Garrison-Kimmel et al. 2019; Applebaum et al. 2021; Di Cintio et al. 2021; Samuel et al. 2022; Rodr\u00b4 \u0131guez et al. 2022). Di Cintio et al. (2021) found that 25% of their simulated satellite dwarf galaxies exhibit an enhancement of star formation after infall to their host. In contrast, the star formation in satellites with little gas or small pericentric distances is quenched after infall due to ram pressure stripping. Escala et al. (2018) introduced the process of metal diffusion in cosmological zoom-in simulations of the Feedback in Realistic Environment (FIRE) project (Hopkins et al. 2014), and analyzed chemical abundances in their simulated dwarf galaxies. 
They found that the MDFs and intrinsic scatter in [\u03b1/Fe] are similar in satellite and isolated dwarf galaxies, suggesting that internal chemical evo1 [X/Y] = =log(NX/NY) \u2212log(NX/NY)\u2299, where NX and NY are the number densities of elements X and Y, respectively. lution plays a more important role than environmental effects. Ongoing and upcoming surveys will significantly enlarge the number of stars in satellites of the MW with available spectroscopy (e.g., Takada et al. 2014; Cooper et al. 2023; Jin et al. 2023). For example, the Dark Energy Spectroscopic Instrument (DESI) Milky Way Survey will observe 7 million stars with magnitudes 16 < r < 20 at Galactic latitudes |b| > 20\u25e6(Cooper et al. 2023). Their footprint includes 31 Local Group dwarf galaxies. This potentially could yield medium-resolution (R \u223c5, 000) spectroscopy of the member stars in some of these galaxies from their centers to their outskirts. The upcoming Subaru Prime Focus Spectrograph (PFS) will target 7 Local Group dwarf galaxies in their Galactic Archaeology survey (Takada et al. 2014). Thanks to their wide field of view (1.25 square degrees) and massively multiplexed spectroscopic capability (2,394 fibers), they can obtain medium-resolution (R \u223c5, 000) spectroscopy for stars with magnitudes g \u227223 in these galaxies. The Subaru PFS will yield radial velocities, [Fe/H], carbon, \u03b1-elements, and nickel abundance measurements in each galaxy for \u22481,000 to 14,000 stars, more than ten times larger than the current numbers of stars with these measurements. Comparison with cosmological zoom-in simulations and these observations will greatly advance our understanding of the chemo-dynamical properties of dwarf galaxies. This study aims to understand the relationship between star formation and chemical evolution in satellite galaxies. With our high-resolution cosmological zoomin simulation of a MW-like galaxy, we examine SFHs, MDFs, and \u03b1-element abundances in satellites with M\u2217 \u223c105\u2013107 M\u2299, corresponding to the mass ranges of satellite dSphs of the MW. We show how SFHs are reflected in MDFs and \u03b1-element abundances using our simulation. We then evaluate the capability of upcoming surveys to reconstruct the SFHs from the chemical abundances of dwarf galaxies. This paper is organized as follows. Section 2 describes our code, the adopted initial conditions, and the procedures used for carrying out mock observations. In Section 3, we describe the chemo-dynamical properties of our simulated satellites. Section 4 discusses how SFHs are reflected in chemical abundances, and how these can be observed in future surveys. Our conclusions are presented in Section 5. 2. METHODS 2.1. Code We have computed the evolution of satellite galaxies in a cosmological zoom-in simulation of a MW-like Simulated Dwarf Satellites of the Milky Way 3 galaxy performed by Hirai et al. (2022). In this simulation, we adopted the N-body/density-independent smoothed particle hydrodynamics code asura (Saitoh et al. 2008, 2009; Saitoh & Makino 2013, 2016). For cooling and heating calculations, we adopted cloudy ver. 13.05 (Ferland et al. 2013). Gas particles probabilistically form stars if they are in a region with a number density of hydrogen atoms higher than 100 cm\u22123, the temperature is lower than 1,000 K, and there are converging flows (\u2207\u00b7 v < 0, e.g., Hirai et al. 2021). 
Each star particle is treated as a simple stellar population (SSP) with the initial mass function (IMF) of Chabrier (2003) from 0.1 M\u2299 to 100 M\u2299. Star particles with ages less than 10 Myr heat the surrounding gas to 10^4 K (Fujii et al. 2021). We implemented momentum-based supernova feedback following Hopkins et al. (2018a). Metal diffusion was incorporated following Hirai & Saitoh (2017). We adopted the nucleosynthetic yields compiled in the Chemical Evolution Library (celib, Saitoh 2017). CCSNe and SNe Ia are the dominant contributors to the evolution of the [\u03b1/Fe] ratios. For CCSNe, we use the yields of Nomoto et al. (2013) from 13 M\u2299 to 40 M\u2299. Given the mass of the star particle, we integrated the IMF from the maximum stellar mass of the IMF toward lower stellar masses until the cumulative number of stars in the integration range became unity. This approach enabled the tracking of the contribution from CCSNe with different progenitor masses in sufficiently high-resolution simulations. When the stellar particle mass (m\u2217) was 4.5 \u00d7 10^3 M\u2299, the IMF for CCSNe (13\u201340 M\u2299) was divided into 100 bins. For SNe Ia, we assumed a delay-time distribution with a power-law index of \u22121, and a minimum delay time of 40 Myr, following Maoz et al. (2012). We also included the contribution of asymptotic giant branch (AGB) stars for stars with 1 to 8 M\u2299 (Karakas 2010; Doherty et al. 2014). We adopted the solar abundance of Asplund et al. (2009). 2.2. Initial Conditions A MW-like halo was selected from the cosmological simulation with a box size of (36 h^{-1} Mpc)^3. We adopted cosmological parameters of \u2126m = 0.308, \u2126\u039b = 0.692, \u2126b = 0.0484, and H0 = 67.8 km s^{-1} Mpc^{-1} (Planck Collaboration et al. 2016). An initial condition for the zoom-in simulation was generated by music (Hahn & Abel 2011). We used the Amiga Halo Finder (ahf, Gill et al. 2004; Knollmann & Knebe 2009) to find the target halo. Table 1. List of Simulated Satellite Galaxies at z = 0. Columns: Halo ID; Mhalo (M\u2299); M\u2217 (M\u2299); \u27e8[Fe/H]\u27e9 (dex); \u03c3[Fe/H] (dex); d (kpc). Halo 9: 7.5 \u00d7 10^9, 7.5 \u00d7 10^6, \u22121.95, 0.23, 204.2; Halo 12: 4.7 \u00d7 10^9, 2.1 \u00d7 10^7, \u22121.08, 0.58, 148.8; Halo 36: 2.2 \u00d7 10^9, 1.1 \u00d7 10^7, \u22121.43, 0.37, 54.5; Halo 38: 2.5 \u00d7 10^9, 1.3 \u00d7 10^5, \u22121.52, 0.52, 198.7; Halo 40: 2.3 \u00d7 10^9, 3.9 \u00d7 10^6, \u22121.52, 0.46, 57.9; Halo 150: 5.9 \u00d7 10^8, 2.4 \u00d7 10^5, \u22122.53, 0.24, 190.7; Halo 151: 6.0 \u00d7 10^8, 7.7 \u00d7 10^4, \u22122.89, 0.43, 167.2; Halo 167: 5.2 \u00d7 10^8, 3.1 \u00d7 10^4, \u22124.34, 0.14, 206.6; Halo 199: 4.2 \u00d7 10^8, 2.8 \u00d7 10^4, \u22123.42, 0.28, 169.2. Note\u2014From left to right, the columns are the Halo ID, the total halo mass within the virial radius (Mhalo), the total stellar mass (M\u2217), the mean [Fe/H] (\u27e8[Fe/H]\u27e9), the dispersion of [Fe/H] (\u03c3[Fe/H]), and the distance from the center of the central galaxy (d); M\u2217, \u27e8[Fe/H]\u27e9, and \u03c3[Fe/H] are computed within the half-mass radius. In this simulation, the initial masses of each particle in the finest region were 7.2 \u00d7 10^4 M\u2299 for dark matter, 1.3 \u00d7 10^4 M\u2299 for gas, and 4.5 \u00d7 10^3 M\u2299 for stars. We set the gravitational softening length (\u03f5g) to 85 pc for dark matter and 82 pc for gas and stars. 
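As an illustration of the adopted SN Ia delay-time distribution (a power law with index \u22121 and a 40 Myr minimum delay, following Maoz et al. 2012), delay times can be drawn by inverse-transform sampling. This is a minimal sketch under assumed units (Gyr) and an assumed upper cutoff near the age of the universe; it is not code from asura or celib.

```python
import numpy as np

def sample_snia_delay_times(n, t_min=0.04, t_max=13.8, rng=None):
    """Draw n SN Ia delay times (Gyr) from a DTD proportional to t**-1
    between t_min and t_max via inverse-transform sampling.
    For p(t) ~ 1/t the CDF is ln(t/t_min)/ln(t_max/t_min), so
    t = t_min * (t_max/t_min)**u with u uniform in [0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    return t_min * (t_max / t_min) ** u

delays = sample_snia_delay_times(100000, rng=np.random.default_rng(42))
print(delays.min(), delays.max())  # bounded by 0.04 and 13.8 Gyr
# Roughly equal counts per logarithmic bin, as expected for a t**-1 DTD.
print(np.histogram(np.log10(delays), bins=5)[0])
```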
We performed the simulation from z = 100 to 0. In this simulation, we picked out satellites orbiting the central galaxy. We only considered those with a minimum of 104 dark matter and 10 star particles, and made sure that they were not false substructures introduced by the contamination from low-resolution particles. Table 1 lists the simulated satellite galaxies selected for this study. 2.3. Mock Observations We performed mock observations for Subaru PFS (see Section 4.2)2. For the mock observation, we computed the magnitudes of simulated stars. First, SSP particles were divided into individual stars. In this model, stars from 0.1 M\u2299to 100 M\u2299were probabilistically generated from SSP particles, following a Chabrier (2003) IMF. Stars were generated until the total generated stellar mass exceeded the particle\u2019s mass. Then, the magnitudes of each star were computed using the isochrone 2 Sanderson et al. (2020) also discussed in detail mock observations of galaxy simulations. 4 Hirai et al. table obtained from cmd 3.73 (Girardi et al. 2000, and updates thereof). We generated isochrones with ages from 4 Gyr to 13.8 Gyr and [M/H]4 from \u22122.0 to 0.0 based on the PARSEC-COLIBRI stellar-evolutionary tracks (Bressan et al. 2012; Chen et al. 2014, 2015; Tang et al. 2014; Marigo et al. 2017; Pastorelli et al. 2019, 2020). With this tool, we computed apparent V -band magnitudes for stars in Halos 12 and 40. We assume Halos 12 and 40 are located at 147 kpc and 86 kpc from an observer to compare with the Fornax and Sculptor dSphs, respectively (McConnachie 2012). We then applied the Subaru PFS spectral synthesis pipeline (roughly based on Kirby et al. 2010; Escala et al. 2019) to compute observed uncertainties. The pipeline adopts synthetic spectra of K-giants and G-dwarfs for \u22124.0 \u2264[Fe/H] \u2264\u22120.5. We calculated wavelengthdependent continuum signal-to-noise ratios with the Subaru PFS Exposure Time Calculator5 using the simulated stars\u2019 V -band magnitudes, assuming a three-hour exposure in the Subaru PFS median-resolution mode for K giants. Then, we computed uncertainties on [Fe/H] and [\u03b1/Fe] by resampling the synthetic spectra hundreds of times from Gaussian-distributed per-pixel noise inversely proportional to the estimated signal-to-noise ratios. The simulated chemical abundances of stars are varied within those estimated uncertainties. 3. RESULTS 3.1. Structures and Star Formation Histories This paper mainly discusses the chemo-dynamical evolution of Halos 12, 40, and 150, listed in Table 1. The [\u03b1/Fe] as a function of [Fe/H] for Halos 9 and 36 are shown in the Appendix. We select three these simulated dwarf galaxies based on their stellar mass (Halo 12: 2.1 \u00d7 107M\u2299, Halo 40: 3.9 \u00d7 106M\u2299, and Halo 150: 2.4 \u00d7 105M\u2299). These values are similar to those of the Fornax (2.0 \u00d7 107M\u2299), Sculptor (2.3 \u00d7 106M\u2299), and Draco (2.9 \u00d7 105M\u2299) dSphs (McConnachie 2012). Also, Halos 12, 40, and 150 currently contain no gas. Figure 1 shows the stellar mass distribution of Halos 12, 40, and 150 at z = 0. The half-mass (light) radii of these galaxies are 1,334 pc (Halo 12), 874 pc (Halo 40), and 1,346 pc (Halo 150), respectively. The somewhat larger radii than the observed ones (Fornax: 710 pc, Sculptor: 283 pc, Draco: 221 pc, McConnachie 2012) are 3 http://stev.oapd.inaf.it/cgi-bin/cmd 4 [M/H] = log(Z/X) \u2212log(Z/X)\u2299, where X and Z are the mass fractions of hydrogen and metals, respectively. 
5 https://github.com/Subaru-PFS/spt ExposureTimeCalculator due to the spatial resolution of this simulation (\u03f5g = 85 pc). The simulated satellite dwarf galaxies exhibit various SFHs. Figure 2 shows the cumulative SFHs of all satellites listed in Table 1. The SFHs of satellite galaxies are affected by SN feedback, cosmic reionization, and interactions with the host galaxy. This figure shows that more massive satellites tend to have extended SFHs, while less massive halos quench star formation earlier. Star formation in halos with < 109M\u2299(150, 151, 167, and 199) is quenched at < 2 Gyr from the beginning of the simulation by cosmic reionization and SN feedback, while halos with \u2265109M\u2299form stars after the reionization epoch. Gas accreted before reionization in halos with \u2265109M\u2299self-shield the UV background, resulting in them surviving the reionization (e.g., O\u02dc norbe et al. 2015; Wheeler et al. 2019). Hereafter, we focus on three satellites: Halos 12, 40, and 150. The mass and the cosmic infall time also affect the SFHs. Figure 3 shows the orbits (top panels), mass evolution (middle panels), and SFHs (bottom panels) of Halos (a) 12, (b) 40, and (c) 150. Halo 12 has the most recent infall time. The first pericentric passage (5 kpc) of this galaxy is 0.7 Gyr prior to the end of the simulation (Figure 3 (a), top panel). Prior to pericentric passage, this galaxy experienced two star formation events separated by 2.9 Gyr (Figure 3 (a), bottom panel). The first period of star formation starts at 0.1 Gyr and ends at 3.3 Gyr from the beginning of the simulation. During this period, stars are formed along with the accretion of material (Figure 3 (a), middle panel). After SNe expel the gas away from the halo, the infall of the gas forms new stars. This interplay episodically forms stars for 3.2 Gyr. The second star formation event begins when the accretion of a halo brings additional material to the halo at 6.2 Gyr. As with the first period of star formation, it is regulated by SN feedback. The star formation is quenched when feedback from CCSNe from the recent star formation (t \u227210 Myr ago) and SNe Ia from previous star formation (t \u223c1 Gyr ago) expel the gas from the galaxy at 9.5 Gyr. Halo 40 has a shorter total duration of star formation, mainly due to the earlier infall time than that of Halo 12. Halo 40 crosses the main halo\u2019s virial radius (Rvir) at 7.4 Gyr, while Halo 12 experiences its closest pericenter passage at 12.6 Gyr. Due to the early infall, repeated gas removal by ram pressure stripping prevents additional star formation in the later phase. The evolution of gas mass after the first infall is due to our analysis method. The increase in the tidal radius of the halo around the apocenter accretes more diffuse gas around the galaxy, Simulated Dwarf Satellites of the Milky Way 5 \u22124 \u22122 0 2 4 X (kpc) \u22124 \u22122 0 2 4 Y (kpc) (a) \u22125.0 \u22124.5 \u22124.0 \u22123.5 \u22123.0 \u22122.5 Log stellar mass fraction \u22124 \u22122 0 2 4 X (kpc) \u22124 \u22122 0 2 4 Y (kpc) (b) \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 Log stellar mass fraction \u22124 \u22122 0 2 4 X (kpc) \u22124 \u22122 0 2 4 Y (kpc) (c) \u22125.0 \u22124.5 \u22124.0 \u22123.5 \u22123.0 \u22122.5 Log stellar mass fraction Figure 1. Stellar distribution of simulated satellite dwarf galaxies for (a) Halo 12, (b) Halo 40, and (c) Halo 150. The color scale depicts each grid\u2019s log scale stellar-mass fraction. 
Most stars are spherically distributed at the center of their dark matter halo. resulting in the increase of the detected gas mass of this 100 101 log(Time (Gyr)) 0.0 0.2 0.4 0.6 0.8 1.0 Cumulative star formation history Halo 9 Halo 12 Halo 36 Halo 38 Halo 40 Halo 150 Halo 151 Halo 167 Halo 199 Figure 2. Cumulative SFHs of simulated dwarf satellites, as listed in Table 1. Less massive halos (e.g., Halos 151, 167, and 199) tend to quench star formation earlier than more massive halos (e.g., Halos 9, 12, and 36). halo. Although gas mass evolution is shown here, these gas particles are not eligible to form stars. Halo 40 experienced star formation in the first 2.8 Gyr. As shown in the bottom panel of Figure 3 (b), there are five peaks of star formation, separated from 0.40 to 0.97 Gyr. The SFH in this halo is also mainly regulated by SN feedback. As shown in the bottom panel of Figure 3 (b), stars are formed during cosmic reionization. After the star formation is quenched at 0.83 Gyr, an additional gas supply resumes star formation at 1.79 Gyr. Eventually, star formation is halted at 2.76 Gyr. This quenching is mainly caused by the heating by CCSNe from the recent star formation and SNe Ia from the previous star formation, due to their delay times. Since Halo 40 is located at a distance five times larger than the virial radius of the main halo at 2.76 Gyr, ram pressure stripping is unlikely to be the main cause responsible for the suppression of star formation. Halo 150 has the shortest duration of star formation among the halos shown in Figure 3. Figure 3 (c) shows the cosmic time evolution of Halo 150. The top panel shows that this halo experienced at least two pericenter passages. Note that we cannot follow the mass evolution before 4.84 Gyr, because the progenitor halos are undetected by the halo finder. As shown in the bottom panel of Figure 3 (c), the first episode of star formation lasts 0.47 Gyr, and is quenched by cosmic reionization. In this episode, 80% of its stars are formed. The second star formation event occurs at 1.66 Gyr, possibly because of the gas infall, but it is quenched quickly. 6 Hirai et al. 0 1000 Distance (kpc) 106 109 Mass (M ) 0 2 4 6 8 10 12 14 Time (Gyr) 10 3 10 2 SFR (M yr 1) (a) 0 500 1000 Distance (kpc) 106 109 Mass (M ) 0 2 4 6 8 10 12 14 Time (Gyr) 10 3 10 2 SFR (M yr 1) (b) 0 200 Distance (kpc) 105 107 109 Mass (M ) 0 2 4 6 8 10 12 14 Time (Gyr) 10 3 SFR (M yr 1) (c) Figure 3. Cosmic time evolution of (a) Halo 12, (b) Halo 40, and (c) Halo 150. Top sub-panels: The orbital distance (blue) and the time evolution of the virial radius of the main halo (orange). Middle sub-panels: The dark matter (bluesolid) and gas (orange-dashed) mass evolution. Bottom subpanels: star formation histories. The grey line represents the epoch of reionization (z = 8.5). The light-grey shaded region in panel (c) means the halo finder cannot follow the mass evolution. 3.2. Chemical Abundances The MDFs of stellar systems reflect their histories of star formation, gas infall, and gas outflow; Figure 4 shows MDFs of Halos 12, 40, and 150. We also plot the observed MDFs of the Fornax, Sculptor, and Draco dSphs (Kirby et al. 2010). It should be noted that the purpose of our study is not to reproduce the MDFs of the observed dSphs. Rather, we compare simulated and observed MDFs in Section 4.1. The MDF of Halo 12 exhibits a bimodal distribution, reflecting two major star formation events (the bottom panel of Figure 3 (a)). 
All stars with [Fe/H] < \u22121.5 are formed within 3.3 Gyr from the beginning of the simulation. These stars are mainly located in the outskirts of the galaxy. For stars with [Fe/H] < \u22121.5, 28.5% of them are within rh, while 71.5% of stars with [Fe/H] \u2265\u22121.5 are within rh. As shown in the green-dashed line in Figure 4 (a), the fraction of stars with [Fe/H] < \u22121.5 in the MDF is significantly decreased for stars within rh. Stars around [Fe/H] = \u22121.2 and [Fe/H] = \u22120.8 are associated with star formation events around 8.0 Gyr and 9.5 Gyr, respectively. As shown in the middle panel of Figure 3 (a), these stars are formed from gas infall. Figure 4 (b) shows the MDF of Halo 40. The MDF is broadly distributed over \u22123.0 \u2272[Fe/H] \u2272\u22121.0. Stars around [Fe/H] = \u22122.3, \u22121.8, and \u22121.3 reflect star formation at different cosmic times. For [Fe/H] < \u22122.5, all stars formed before 1.0 Gyr from the beginning of the simulation. For stars with \u22122.5 < [Fe/H] < \u22122.0, half of them are formed at t < 1.0 Gyr, while others are formed at 1.7 < t/(Gyr) < 2.3, simultaneously with stars with \u22122.0 < [Fe/H] < \u22121.5. Stars with [Fe/H] > \u22121.5 have younger ages. All of these stars are formed after 2.2 Gyr. Although there is an overlap in the ages of each peak, the peaks in the MDFs indicate star formation at different cosmic times. In Figure 4 (b), we also plot the MDF for stars within rh. Unlike for Halo 12, the MDFs are not largely affected by the spatial distribution of stars. Figure 4 (c) shows the MDF for Halo 150. As shown in Figure 3 (c), Halo 150 exhibits two star formation events. The second peak of star formation produces stars with \u22122.3 < [Fe/H] < \u22122.1, while the first star formation event mainly forms stars with [Fe/H] < \u22124.0. The number fraction of these ultra metal-poor (UMP) stars is 75.6% for all UMP stars and 64.9% for UMP stars within rh. These stars largely affect the median metallicity of this galaxy. The median metallicity is [Fe/H] = \u22124.36 for all stars, but [Fe/H] = \u22122.16 for stars with [Fe/H] > \u22124.0. Stars with different ages clearly differ in the [\u03b1/Fe] vs. [Fe/H] space. Figure 5 (a) shows [\u03b1/Fe], as a function of [Fe/H], in Halo 12. This galaxy has two major Simulated Dwarf Satellites of the Milky Way 7 \u22123.0 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 [Fe/H] 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 df/d[Fe/H] (a) Halo 12 Halo 12 (within rh) Fornax \u22123.0 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 [Fe/H] 0.00 0.05 0.10 0.15 0.20 0.25 df/d[Fe/H] (b) Halo 40 Halo 40 (within rh) Sculptor \u22126 \u22125 \u22124 \u22123 \u22122 \u22121 [Fe/H] 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 df/d[Fe/H] (c) Halo 150 Halo 150 (within rh) Draco Figure 4. Simulated (blue-solid line) and observed (orangedashed line) MDFs for (a) Halo 12 and Fornax, (b) Halo 40 and Sculptor, and (c) Halo 150 and Draco. The green-dashed line represents the MDFs for stars within rh. The simulated data do not include simulated observational errors. Observed data are taken from Kirby et al. (2010). star formation events (Figure 3 (a)). The first event (13.7 Gyr to 10.5 Gyr ago) forms the decreasing trend of [\u03b1/Fe] from [Fe/H] = \u22122.5 to [Fe/H] = \u22121.0. Also, there is roughly a \u223c1 dex scatter in the [\u03b1/Fe] ratios. The episodic star formation creates these features during the first major star formation event. 
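As a concrete illustration of how MDFs like those in Figure 4, and the \u27e8[Fe/H]\u27e9 and \u03c3[Fe/H] entries in Table 1, can be derived from star particles, here is a minimal NumPy sketch; the optional particle-mass weighting, bin width, and toy input are assumptions rather than the paper's exact analysis pipeline.

```python
import numpy as np

def metallicity_distribution(feh, mass=None, bin_width=0.2, lo=-4.0, hi=0.0):
    """Return bin centers and a normalized df/d[Fe/H] histogram,
    optionally weighting each star particle by its mass."""
    edges = np.arange(lo, hi + bin_width, bin_width)
    hist, edges = np.histogram(feh, bins=edges, weights=mass, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

# Toy example: a bimodal [Fe/H] distribution loosely reminiscent of Halo 12.
rng = np.random.default_rng(0)
feh = np.concatenate([rng.normal(-2.0, 0.3, 3000),
                      rng.normal(-1.0, 0.2, 7000)])
centers, mdf = metallicity_distribution(feh)
print(f"<[Fe/H]> = {feh.mean():.2f}, sigma = {feh.std():.2f}")
```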
The first star formation episode (\u226513 Gyr ago) forms the high-\u03b1 ([\u03b1/Fe] > +0.3) component. The interstellar medium (ISM)\u2019s inhomogeneity results in a widely distributed metallicity (\u22123.0 < [Fe/H] < \u22121.5). The low-\u03b1 (\u22120.3 < [\u03b1/Fe] < \u22120.1) and very metal-poor (\u22122.5 < [Fe/H] < \u22122.2) component come from another dwarf galaxy accreted to Halo 12. The subsequent star formation episodes (12.0 Gyr to 10.5 Gyr ago) produce the decreasing trend of [\u03b1/Fe] ratios due to the substantial contribution from SNe Ia. In contrast, the second star formation event (7.6 Gyr to 4.3 Gyr ago) produces an increasing trend of the [\u03b1/Fe] ratios for [Fe/H] > \u22121.5. This trend suggests that stars are preferentially formed from the ejecta of CCSNe. During the second major star formation event, stars are mainly produced at the galaxy\u2019s center. Young stars give rise to CCSNe mainly at the center, while SNe Ia occur in the more extended region due to their delay times; SNe Ia occur in the more distant places relative to the star-forming region. This difference in the spatial distribution results in the formation of stars reflecting the yields of CCSNe. Since Si also exhibits a similar behavior, AGB stars are unlikely to contribute to forming this trend. Figure 5 (b) shows [\u03b1/Fe], as a function of [Fe/H], in Halo 40. From inspection, five peaks of star formation (Figure 3 (b)) produce groups of stars with different [Fe/H] and [\u03b1/Fe] ratios. The first peak of star formation (13.4 Gyr ago) produces stars with [Fe/H] < \u22122.3 and [\u03b1/Fe] > +0.3. Since it is the earliest phase of the star formation, CCSNe are the dominant contributor to the enrichment, resulting in a flat trend of [\u03b1/Fe] as a function of [Fe/H]. A few stars with [Fe/H] > \u22122.0 and [\u03b1/Fe] \u22480.2 are formed from the ejecta of Population III CCSNe. The second peak of star formation (13.0 Gyr ago) forms stars with \u22122.5 < [Fe/H] < \u22122.0 and +0.1 < [\u03b1/Fe] < +0.5. The contribution of SNe Ia from the stars produced in the first peak of star formation makes this second group of stars, with lower [\u03b1/Fe] and higher [Fe/H] than the first group. Subsequent star formation and the contributions of SNe Ia from the previous peaks of star formation produce groups of stars with lower [\u03b1/Fe] and higher [Fe/H]. The third peak of star formation (12.0 Gyr ago) creates groups of stars with \u22122.5 < [Fe/H] < \u22121.7 and \u22120.3 < [\u03b1/Fe] < +0.2. This group has the lowest [\u03b1/Fe] 8 Hirai et al. ratios because of the contribution of SNe Ia from the previous two star formation peaks. The fourth peak of star formation (11.6 Gyr ago) produces stars with the same [Fe/H] range but higher [\u03b1/Fe] ratios (0.0< [\u03b1/Fe] < +0.4). This group of stars reflects the ejecta from CCSNe formed in the third peak of star formation. The final star formation event (11.0 Gyr ago) forms stars with \u22121.5 < [Fe/H] < \u22121.0 and \u22120.2 < [\u03b1/Fe] < +0.2. Because of its short duration (\u223c100 Myr), stars are mainly formed from the ejecta of CCSNe. Figure 5 (c) shows [\u03b1/Fe] as a function of [Fe/H] in Halo 150. Although stars are too few to discuss the trend of the [\u03b1/Fe] ratios, stars formed at different times exhibit distinct differences in [\u03b1/Fe] ratios. Stars formed in \u226513.4 Gyr ago show [\u03b1/Fe] > +0.2, reflecting the yields of CCSNe. Different [\u03b1/Fe] ratios originate from CCSNe with different progenitor masses. 
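The paper identifies these groups of stars with similar [\u03b1/Fe] and [Fe/H] by inspection of Figure 5. Purely as an illustration of how such clumps could be recovered from survey-sized samples, the sketch below fits Gaussian mixture models and selects the component count by BIC; scikit-learn, the component range, and the toy data are assumptions and not part of the paper's analysis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def find_abundance_groups(feh, alpha_fe, max_components=6, seed=0):
    """Fit mixtures with 1..max_components Gaussian components to the
    ([Fe/H], [alpha/Fe]) plane and keep the BIC-preferred model."""
    X = np.column_stack([feh, alpha_fe])
    models = [GaussianMixture(n_components=k, random_state=seed).fit(X)
              for k in range(1, max_components + 1)]
    best = min(models, key=lambda m: m.bic(X))
    return best.n_components, best.predict(X)

# Toy data: three clumps of stars, loosely mimicking Halo 40's groups.
rng = np.random.default_rng(1)
feh = np.concatenate([rng.normal(m, 0.10, 300) for m in (-2.3, -1.8, -1.3)])
afe = np.concatenate([rng.normal(m, 0.05, 300) for m in (0.4, 0.1, -0.1)])
n_groups, labels = find_abundance_groups(feh, afe)
print(n_groups)  # ideally 3 for this toy input
```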
A clear separation of star formation events (1.24 Gyr, Figure 3 (c)) yields stars formed in the second star formation peak with lower [\u03b1/Fe] ratios owing to the contribution of SNe Ia. The dispersion of the [\u03b1/Fe] ratios reflects the degree of the ISM\u2019s inhomogeneity. We quantified the scatter for [\u03b1/Fe] in \u22123 < [Fe/H] < \u22120.5 following Escala et al. (2018). These authors defined the intrinsic scatter as the standard deviation of the distance distribution between stars\u2019 [\u03b1/Fe] ratios and the cubic spline fitting curve for the data. For Halos 12 and 40, the intrinsic scatter of [\u03b1/Fe] is 0.18 dex and 0.16 dex, respectively. These are similar to the estimated intrinsic scatter (Escala et al. 2018) of the Fornax (0.14 dex) and Sculptor dSphs (0.078 dex), meaning that the simulated and observed satellites have ISM inhomogeneity that gives rise to scatter \u22720.2 dex for the [\u03b1/Fe] ratios. The radial metallicity distribution reflects spatial variations in star formation. Star formation in the inner region of Halos 12 and 40 lasts longer than that in the outer region. Figures 6 (a) and (b) show radial [Fe/H] distributions in Halos 12 and 40, respectively. Both galaxies have a negative slope of [Fe/H], as a function of the distance from the center, reflecting the difference in the spatial distribution of the stars with different ages. The youngest stars in these galaxies are located within 3 kpc, while stars with ages of > 13 Gyr have a more extended spatial distribution to 5 kpc. The radial [\u03b1/Fe] distribution exhibits positive slopes (Figures 6 (c) and (d)). Because newer stars located in the center of the galaxies are more affected by SNe Ia, the average [\u03b1/Fe] ratio near the galactic center is lower than in the outskirts. These radial [Fe/H] and [\u03b1/Fe] gradients are caused by old and metal-poor populations in the outskirts. This result highlights the importance \u22123.0 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 [Fe/H] \u22120.50 \u22120.25 0.00 0.25 0.50 0.75 1.00 [\u03b1/Fe] (a) 6 8 10 12 Age (Gyr) \u22123.0 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 [Fe/H] \u22120.50 \u22120.25 0.00 0.25 0.50 0.75 1.00 [\u03b1/Fe] (b) 11.0 11.5 12.0 12.5 13.0 13.5 Age (Gyr) \u22123.0 \u22122.5 \u22122.0 \u22121.5 \u22121.0 \u22120.5 [Fe/H] \u22120.50 \u22120.25 0.00 0.25 0.50 0.75 1.00 [\u03b1/Fe] (c) 12.05 12.06 12.07 12.08 12.09 12.10 Age (Gyr) Figure 5. The \u03b1-element distributions for (a) Halo 12, (b) Halo 40 , and (c) Halo 150. The color bars indicate the ages of the stars. The simulated data do not include simulated observational errors. of measuring the chemical abundances of stars in the outer regions of dwarf satellites. The kinematics of stars also differ among stars with different metallicities. Figures 7 (a) and (b) show the line-of-sight velocities (vlos) as a function of [Fe/H]. We computed vlos assuming that Halos 12 and 40 are located in the equatorial coordinates of Fornax and Sculptor (Hayashi et al. 2020), respectively, i.e., we observed Halos 12 and 40 respectively located in the positions of Fornax and Sculptor dSphs from the position of the Sun in the Milky Way. The dispersion of vlos for [Fe/H] \u2264\u22121.5 is 19.3 km s\u22121 in Halo 12 and 19.2 km s\u22121 in Halo 40. On the other hand, stars with [Fe/H] > \u22121.5 have smaller dispersion: 15.0 km s\u22121 (Halo 12) and 16.8 km s\u22121 (Halo 40). 
These results confirm the existence of kinematical distinct populations in satellites (e.g., Tolstoy et al. 2004; Battaglia et al. 2006). Simulated Dwarf Satellites of the Milky Way 9 0 1 2 3 4 5 Distance from the center (kpc) 3.0 2.5 2.0 1.5 1.0 0.5 [Fe/H] (a) ( 0.22 \u00b1 0.01) dex per kpc 5 6 7 8 9 10 11 12 13 Age (Gyr) 0 1 2 3 4 5 Distance from the center (kpc) 3.0 2.5 2.0 1.5 1.0 0.5 [Fe/H] (b) ( 0.13 \u00b1 0.01) dex per kpc 11.0 11.5 12.0 12.5 13.0 13.5 Age (Gyr) 0 1 2 3 4 5 Distance from the center (kpc) 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 1.0 [ /Fe] (c) (+0.05 \u00b1 0.002) dex per kpc 5 6 7 8 9 10 11 12 13 Age (Gyr) 0 1 2 3 4 5 Distance from the center (kpc) 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 1.0 [ /Fe] (d) (+0.08 \u00b1 0.01) dex per kpc 5 6 7 8 9 10 11 12 13 Age (Gyr) Figure 6. Radial [Fe/H] distributions for (a) Halo 12, (b) Halo 40, and [\u03b1/Fe] distributions for (c) Halo 12, and (d) Halo 40, respectively. The color bars indicate the ages of the stars. The simulated data do not include simulated observational errors. The red line is the least squares linear fit for the data. The slope is shown in each panel. 4. DISCUSSION 4.1. Chemo-dynamical Evolution of Satellites Here, we discuss the chemo-dynamical evolution of the MW\u2019s satellites by comparing simulations and observations. The relationship between orbits and SFHs has been argued to explain the variety of observed SFHs seen in MW\u2019s satellites. Miyoshi & Chiba (2020) computed the orbital motions of MW\u2019s satellites, including Fornax, Leo I, Sculptor, and Draco, with a time-varying gravitational potential based on the Gaia Data Release 2 (Gaia Collaboration et al. 2018) proper motions, and compared them with SFHs. They found that the infall times of classical dSphs coincide well with the peak of the star-formation regions (SFRs), while UFDs had already been quenched before the infall times. Simulated satellites have some similarities to galaxies analyzed by Miyoshi & Chiba (2020). Halo 12 is similar to the Fornax dSph in terms of its stellar mass and SFH. Both galaxies have intermediate age (4\u20138 Gyr) and old (> 10 Gyr) stellar populations. The orbit of Halo 12 is similar to that of Leo I. Both Halo 12 and Leo I experienced one pericenter passage throughout their orbits. Stellar mass, orbits, and SFHs are similar between Halo 40 and the Sculptor dSph. These galaxies formed most stars prior to their infall. Halo 150 is similar to the Draco dSph regarding stellar mass, orbits, and SFHs. These galaxies also comprise old (> 10 Gyr) stellar populations. These results suggest that star formation in intermediate-age and old stars in these galaxies was regulated by SN feedback and gas inflow, as we have argued in Section 3.1. The major difference between our simulation and the MW\u2019s satellites is the star formation after infall. Our simulation does not exhibit enhancement of the SFR at the time of the infall, which has been observed by 10 Hirai et al. 3.0 2.5 2.0 1.5 1.0 0.5 [Fe/H] 0 20 40 60 80 100 120 vlos (km s 1) (a) 3.0 2.5 2.0 1.5 1.0 0.5 [Fe/H] 50 75 100 125 150 175 vlos (km s 1) (b) Figure 7. Line-of-sight velocities (vlos) as a function of [Fe/H] in (a) Halo 12 and (b) Halo 40. The simulated data do not include simulated observational errors. The orangedashed line shows the standard deviation of vlos as a function of [Fe/H]. Miyoshi & Chiba (2020). Di Cintio et al. 
(2021) showed that galaxies should satisfy two conditions to enhance the star formation after infall: (1) galaxies must have cold gas with at least 10\u22122 times the virial mass of the halo at the time of the infall and (2) the pericentric distance should be larger than 10 kpc. None of the galaxies analyzed in this study satisfy these conditions. The strength and treatment of SN feedback highly affect the SFHs and gas outflow of simulated dwarf galaxies. Since galaxy formation simulations cannot resolve the evolution of SN remnants, we need to rely on subgrid feedback models (e.g., Naab & Ostriker 2017; Hopkins et al. 2018a). Revaz & Jablonka (2012) performed isolated dwarf galaxy simulations with different strengths of SN feedback. Their simulations showed that the star formation lasted < 1 Gyr in their strongest feedback case, while stars were continuously formed over 14 Gyr if they adopted a level of feedback 100 times less than the strongest one (also see Hazenfratz et al. 2024). Xu et al. (2022) suggested that the mass-loading factor (the ratio of outflow rate and star formation rate) in dwarf galaxies (M\u2217\u223c104\u2013107M\u2299) observed in extremely metal-poor representatives explored by the Subaru survey project (e.g., Kojima et al. 2020; Matsumoto et al. 2022; Isobe et al. 2023; Nishigaki et al. 2023; Xu et al. 2024) were \u223c10 to 100 times lower than those predicted in galaxy formation simulations. These results highlight the importance of studying the effects of feedback on the SFHs of dwarf galaxies. MDFs reflect the SFHs and gas infall/outflow of dwarf galaxies. Kirby et al. (2011b) showed that Fornax dSph has a narrow MDF with \u03c3 = 0.36 dex. The Leo I dSph also exhibits a similar MDF. Their chemical evolution model suggested that these galaxies experienced gas infall to shape the narrow MDF. Halo 12 also exhibits a narrow MDF (\u03c3 = 0.20 dex) for stars with [Fe/H] > \u22121.5. As described in Section 3, these stars are formed by gas infall. These results suggest that gas infall plays an important role in the chemical evolution of the Fornax and Leo I dSphs. The Sculptor dSph has a broader MDF (\u03c3 = 0.46 dex) than those of the Fornax and Leo I dSphs (Kirby et al. 2013). Kirby et al. (2011b) found that none of their chemical evolution models reproduce Sculptor\u2019s MDF. This problem is resolved if they alter the SFH of the chemical evolution model to a more appropriate choice of parameters for SNe Ia and the SFH (Kirby et al. 2011a; Homma et al. 2015). Homma et al. (2015) interpreted Sculptor\u2019s SFH derived by de Boer et al. (2012a) with a chemical evolution model similar to that of Kirby et al. (2011b). They found that dSphs with a larger fraction of stars formed in the early phase have a more elongated low-metallicity tail of the MDF. Halo 40 in our simulation also exhibits a broad MDF (\u03c3 = 0.46 dex) similar to Sculptor\u2019s MDF. This broad MDF is formed by episodic star formation (Figure 3 (b)), rather than the continuous SFH assumed in the one-zone chemical evolution models (Kirby et al. 2011a; Homma et al. 2015). From inspection of Figure 4 (b), there are at least three distinct peaks in Halo 40\u2019s MDF formed by episodic star formation. If this is the case, upcoming spectroscopic surveys of dSphs could confirm whether or not the Sculptor dSph has an episodic SFH (see Section 4.2). 4.2. 
Prospects for Future Surveys Identifying whether the MW\u2019s satellites have episodic star formation is critical to understanding the effects of SN feedback on their chemo-dynamical evolution and the nature of dark matter (e.g., Aparicio et al. 2001; Bettinelli et al. 2019; Rusakov et al. 2021). Pontzen & Governato (2012) showed that large-scale bulk motion Simulated Dwarf Satellites of the Milky Way 11 of gas caused by episodic star formation transforms the cusped density profile of dark matter to a cored one (also see Mashchenko et al. 2008; Wheeler et al. 2019). The dependence of SFHs on dark matter profiles in observed satellites is not well understood (e.g., Hayashi et al. 2020, 2023). We need additional indicators to identify episodic star formation. As we have found in Figure 5, the episodic star formation creates groups of stars with similar [\u03b1/Fe] and [Fe/H]. We need to search for this feature with observations. Upcoming wide-field spectroscopic surveys will be able to measure chemical abundances for a sufficiently large number of stars to detect signatures of episodic SFH from chemical abundances (e.g., Takada et al. 2014; Cooper et al. 2023). For example, Subaru PFS will measure Fe and \u03b1-element abundances for 14,000 and 6,900 stars in Fornax and Sculptor, respectively. In this subsection, we discuss how the simulated [\u03b1/Fe] vs. [Fe/H] distribution (Figure 5) can be observed by Subaru PFS. Figure 8 shows Subaru PFS mock observations of [\u03b1/Fe] vs. [Fe/H] for Halos 12 and 40. Procedures for the mock observations are described in Section 2.3. Typical observational uncertainties added to the simulated data are \u03c3 \u22480.13 dex and 0.14 dex for the [\u03b1/Fe] and [Fe/H] ratios, respectively. Compared to Figure 5, the scatter in the [\u03b1/Fe] ratios have been increased. Nevertheless, we can still identify groups of stars having similar [\u03b1/Fe] and [Fe/H] associated with episodic star formation. The top panel of Figure 8 compares mock observed abundances of Halo 12 and the Fornax dSph. With Keck/DEIMOS, Kirby et al. (2011a) found scatter in [\u03b1/Fe] ratios and a lack of correlation with [Fe/H] in Fornax. Their results suggested that such scatter could arise from bursty star formation or inhomogeneity of the ISM. Mock observed [\u03b1/Fe] ratios in Halo 12 also exhibit scatter for stars with [Fe/H] > \u22121.5. Due to the observed uncertainties, detailed structures of [\u03b1/Fe] ratios seen in Figure 5 (a) cannot be observed, and these structures are observed as scatter. As we have argued in Section 3.2, the scatter of [\u03b1/Fe] ratios likely come from the enhanced contribution of CCSNe, due to bursty star formation and inhomogeneous chemical abundances in the ISM. This result is consistent with the suggestion by Kirby et al. (2011a). Stars with [Fe/H] < \u22121.5 in Figure 8 (top) highlight the importance of observing the Fornax dSph with a wide-field multiplexed spectrograph. In Figure 4 (a), we have shown that most stars with [Fe/H] < \u22121.5 are located outside of rh. Even after applying observed uncertainties, we can still see the decreasing trend of [\u03b1/Fe] as a function of [Fe/H] and scatter associated with the Figure 8. Subaru PFS mock observations (black dots) of [\u03b1/Fe] vs. [Fe/H] for Halos 12 (top panel) and 40 (bottom panel). Red symbols are the abundances for Fornax (top panel) and Sculptor (bottom panel) observed with Keck/DEIMOS (Kirby et al. 2011a). peaks of episodic star formation. Since the current sample (Kirby et al. 
2011b) is limited to the center of the Fornax dSph (\u2272400 pc), we cannot constrain the chemical evolution in the outskirts of this galaxy. We will be able to investigate the most metal-poor tail of the MDF and [\u03b1/Fe] ratios by obtaining spectroscopy out to the tidal radius (2,078 pc; Irwin & Hatzidimitriou 1995) of the Fornax dSph. There are limitations on the ability of mediumresolution spectroscopy to identify dwarf galaxies accreted to the Fornax dSph with [\u03b1/Fe] ratios. In Figure 5, we find a low-\u03b1 (\u22120.3 < [\u03b1/Fe] < \u22120.1) and very metal-poor (\u22122.5 < [Fe/H] < \u22122.2) component, which is from an accreted dwarf galaxy. However, the distinction of this component is unclear, due to the observed uncertainties in Figure 8 (top). This result suggests that measuring velocity distribution (Figure 7) and high-resolution spectroscopy for chemical abundances of stars on the outskirts is necessary to distinguish accreted components. For example, most stars with [Fe/H] 12 Hirai et al. \u2264\u22122.5 in Halo 12 come from accreted dwarf galaxies. Their line-of-sight velocity dispersion is 22.3 km s\u22121, while that of stars with [Fe/H] > \u22122.5 shows 16.7 km s\u22121 (Figure 7). These difference in velocity dispersion could be measured in future surveys. The bottom panel of Figure 8 compares mock observed [\u03b1/Fe], as a function of [Fe/H], in Halo 40 and the Sculptor dSph. In the mock observation, the groups of stars with similar [\u03b1/Fe] and [Fe/H] formed in episodic star formation. For [Fe/H] < \u22122.0, these groups are typically separated with 0.5 and 0.4 dex in [Fe/H] and [\u03b1/Fe], respectively. However, the number of stars (375) observed in Keck/DEIMOS (Kirby et al. 2011b) is insufficient to identify such groups of stars. With Subaru PFS, we expect to measure [\u03b1/Fe] and [Fe/H] for 6,900 stars in the Sculptor dSph. As shown in this mock observation, in the planned survey we will confirm whether there is episodic star formation occurring every few hundred Myr by identifying chemical clumps. In this subsection, we have shown that [\u03b1/Fe] vs. [Fe/H] measured by medium-resolution spectroscopy for \u22731,000 stars can confirm signatures of episodic star formation in the Fornax and Sculptor dSphs. Thanks to our high-resolution cosmological zoom-in simulation, we can discuss the detailed chemo-dynamical structures of satellite galaxies with \u2273106 M\u2299. However, due to the resolution limit, we cannot constrain the SFHs and chemical abundances of galaxies with \u2272105 M\u2299. The SFHs of poorly resolved galaxies tend to be more bursty, because there are too many synchronized SNe from a star particle (e.g., Hopkins et al. 2018b; GarrisonKimmel et al. 2019). Hopkins et al. (2018b) showed that simulated galaxies should have > 100 star particles to result in a convergence of SFHs. This result means that simulations of MW-like galaxies with a mass resolution of \u223c10 M\u2299is required to resolve SFHs of the smallest satellites (\u2272103 M\u2299). Such simulations could be achieved by resolving the computational scaling issue using deep learning (Hirashima et al. 2023). We expect that a comparison with upcoming wide-field spectroscopic surveys and high-resolution cosmological simulations will improve our capability to reconstruct the chemo-dynamical evolution of satellites from chemical abundances. 5. CONCLUSIONS In this study we performed a high-resolution cosmological zoom-in simulation of a MW-like galaxy. 
With this simulation, we find that the SFHs, MDFs, and [\u03b1/Fe] ratios of three simulated satellite galaxies are similar to the MW\u2019s satellites (Fornax, Sculptor, and Draco dSphs). We also performed a mock observation of medium-resolution spectra using the Subaru PFS spectral synthesis pipeline. In our simulation, we find that star formation in most simulated satellites is quenched before their infall to the host (Figure 3). Star formation episodes in simulated satellites are separated by a few hundred Myr. Such episodic star formation is regulated by SN feedback. For the Fornax-like galaxy (Halo 12), gas infall induces additional star formation at \u22486\u201310 Gyr from the beginning of the simulation. Simulated MDFs reflect SFHs and gas infall/outflow. The narrow MDF for [Fe/H] > \u22121.5 in Halo 12 is formed in the additional star formation due to gas infall (Figure 4 (a)). This feature is similar to the Fornax dSph. In contrast, the Sculptor-mass galaxy (Halo 40) exhibits a broad MDF (Figure 4 (b)). This MDF has at least three distinct peaks formed by episodic star formation. The [\u03b1/Fe] ratios, as a function of [Fe/H], reflect the ages of stars (Figure 5). The oldest stars have [\u03b1/Fe] \u2273+0.4. Subsequent enrichment by SNe Ia decreases the [\u03b1/Fe] ratios. Stars with similar ages formed in episodic star formation events comprise groups with similar [\u03b1/Fe] and [Fe/H]. The bursty star formation and inhomogeneity of the ISM form scattered [\u03b1/Fe] ratios at [Fe/H] > \u22121.5. Our mock observations find that the groups of stars with similar [\u03b1/Fe] and [Fe/H] formed by episodic star formation can be identified by upcoming multiplexed medium-resolution spectra surveys (Figure 8). We can test whether satellites have episodic star formation with [\u03b1/Fe] ratios measured with medium-resolution spectra for \u22731,000 stars. We also find that metal-poor stellar populations can be found in the outskirts of the galaxy. These results indicate that comparison with upcoming spectroscopic surveys and high-resolution cosmological simulations will greatly improve our understanding of the chemo-dynamical evolution of satellite galaxies. Simulated Dwarf Satellites of the Milky Way 13 This work was supported in part by JSPS KAKENHI Grant Numbers JP22KJ0157, JP21H04499, JP21K03614, JP22H01259, JP24K00669, JP20H01895, JP21K13909, JP23H04009, JP22K03688, JP20H05855, MEXT as \u201cProgram for Promoting Researches on the Supercomputer Fugaku\u201d (Structure and Evolution of the Universe Unraveled by Fusion of Simulation and AI; Grant Number JPMXP1020230406), JICFuS, grants PHY 14-30152; Physics Frontier Center/JINA Center for the Evolution of the Elements (JINA-CEE), and OISE-1927130: The International Research Network for Nuclear Astrophysics (IReNA), awarded by the US National Science Foundation. E.N.K. acknowledges support from NSF CAREER grant AST-2233781. Numerical computations and analysis were carried out on Cray XC50 and computers at the Center for Computational Astrophysics, the National Astronomical Observatory of Japan and the Yukawa Institute Computer Facility. This research made use of NASA\u2019s Astrophysics Data System. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Software: AHF (Gill et al. 2004; Knollmann & Knebe 2009), astropy (Astropy Collaboration et al. 2013, 2018), CELib (Saitoh 2017), Cloudy (Ferland et al. 
2013), MUSIC (Hahn & Abel 2011) APPENDIX Figure 9 shows [\u03b1/Fe], as a function of [Fe/H], for Halos 9 and 36, which have sufficient data to plot. Halo 9 has two major star formation events in 2 Gyr. The first burst forms stars with [Fe/H] \u2272\u22122.0, and the second burst produces stars with [Fe/H] \u2273\u22122.0. As a result of Fe enrichment by SNe Ia, the [\u03b1/Fe] ratios decrease toward higher metallicity. The inhomogeneity of the spatial metallicity distribution of the ISM due to CCSNe produces scatter of the [\u03b1/Fe] ratios. On the other hand, Halo 36 has a more extended SFH (Figure 2). The first star formation event creates stars with constant [\u03b1/Fe] \u2248+0.3 due to the contribution from CCSNe. The two subsequent star formation events are influenced by SNe Ia. There is a decreasing trend of [\u03b1/Fe] ratios toward higher [Fe/H]. Similar to Halos 12 and 40, both galaxies have stars with different [\u03b1/Fe] and [Fe/H], depending on their ages. Figure 9 ([\u03b1/Fe] vs. [Fe/H]; color bars: stellar age in Gyr). Same as Figure 5, but for Halos 9 (left) and 36 (right). The color bars indicate the ages of the stars."
19
+ }
intro_8K/test_introduction_short_2405.05380v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05380v1",
3
+ "title": "Excluded volume effects on tangentially driven active ring polymers",
4
+ "abstract": "The conformational and dynamical properties of active ring polymers are\nstudied by numerical simulations. The two-dimensionally confined polymer is\nmodeled as a closed bead-spring chain, driven by tangential forces, put in\ncontact with a heat bath described by the Brownian multiparticle collision\ndynamics. Both phantom polymers and chains comprising excluded volume\ninteractions are considered for different bending rigidities. The size and\nshape are found to be dependent on persistence length, driving force, and bead\nmutual exclusion. The lack of excluded volume interactions is responsible for a\nshrinkage of active rings when increasing driving force in the flexible limit\nwhile the presence induces a moderate swelling of chains. Internal dynamics of\nflexible phantom active rings shows activity-enhanced diffusive behavior at\nlarge activity values while, in the case of self-avoiding active chains, it is\ncharacterized by active ballistic motion not depending on stiffness. The\nlong-time dynamics of active rings is marked by rotational motion whose period\nscales as the inverse of the applied tangential force, irrespective of\npersistence length and beads self-exclusion.",
5
+ "authors": "A. Lamura",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cond-mat.soft",
9
+ "cats": [
10
+ "cond-mat.soft"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Last twenty years registered a growing interest towards active matter [1\u20134]. This is made of out-of-equilibrium interacting units capable of absorbing energy from their environment and transforming it into motion. An interesting example is provided by active polymer-like structures where the presence of active noise and/or internal propulsion, interacting with de- formability, is responsible for intriguing new phenomena, investigated both theoretically and numerically [5\u201318]. Nature provides numerous realizations showing how activity is crucial in determining both structural and dynamical properties. Among others, actin \ufb01laments and microtubules are prototypes of \ufb01lamentous structures, subject to local active forces exerted by biomolecular motors, capable of performing di\ufb00erent activities at the biological level [19, 20]. For example, microtubules placed on kinesin motility assays can undergo active self-organization to obtain more ordered structures such as bundles [21] and rings [22]. Such closed structures are very common and can be observed in chromosomes inside bacteria [23], in DNA and RNA arranging in loops [24, 25], in actomyosin rings [26], and in microtubules on dynein-coated surfaces [27] whose dynamics is greatly a\ufb00ected by the circular form [28]. Very recently some studies have investigated structures and dynamic behaviors of active rings. In three spatial dimensions active Brownian [29] and tangentially driven [30, 31] ring polymer models have been considered. In the former case it is found that the action of local active random forces enhances conformal \ufb02uctuations [29] while in the latter one, the local tangent force causes small rings to swell and large rings to collapse with an arrested dynamics in the case of \ufb02exible rings [30]. Neglecting excluded volume interactions allows an analytical study of the dynamics of semi\ufb02exible active polar ring polymers [31] which reveals that conformations are independent on activity and characterized by a rotational motion. This resembles the tank-treading motion observed for passive rings [32\u201335] and vesicles [36, 37] when subject to an external shear \ufb02ow. The interplay of local polar and long-range activities on the swelling and collapse of \ufb02exible ring polymers has been also considered [38]. In the two-dimensional case very few studies addressed the behavior of active ring poly- mers. Active Brownian models have been adopted to mimic mammalian cells [39] and to investigate the motion of active rings in porous media [40]. Despite of this, the problem is in- teresting since several experiments showed that it is possible to assemble microtubules, on a 2 motor protein-\ufb01xed surface [22, 28, 41], in ring shapes which are characterized by rotational motion [27]. Due to the peculiar dynamic behavior, it appears very engaging to understand such patterns which strongly depend on the topological constraints in two dimensions. This is precisely the aim of the present study where the e\ufb00ects of excluded volume interactions are explicitly considered in the case of active polymer rings. We introduce a discrete model of a closed semi\ufb02exible polymer whose beads are subject to a force tangentially oriented with respect to the polymer backbone. 
Excluded volume interactions are taken into account in order to highlight their roles in the observed dynamics since these forces are known to be relevant in the case of two-dimensional passive rings in the limit of small bending rigidity [42, 43]. Hydrodynamic interactions are ignored due to the strong interaction between rings and substrates in two dimensions thus allowing the use of the free-draining approximation. For this reason the polymer is placed in contact with a Brownian heat bath and its dynamics is numerically studied by using the Brownian version [44] of the multiparticle collision dynamics [45, 46]. We \ufb01nd that the size and shape, measured by the radius of gyration and by the asphericity, respectively, depend on persistence length, excluded volume interactions, and active force. In the limit of \ufb02exible rings, phantom chains decrease in size when increasing activity while rings with conserved topology present a moderate swelling, becoming more roundish in both cases. In the opposite limit of sti\ufb00rings, excluded volume interactions are not crucial in determining conformations which are independent on activity. Flexible phantom active rings show enhanced di\ufb00usive dynamics while self-avoiding active chains display ballistic dynamic behavior not depending on sti\ufb00ness. The long-time dynamics is characterized by a reptation motion for all bending rigidities which, in the case of sti\ufb00rings, resembles the tank-treading motion observed for two-dimensional sheared vesicles [47\u201349]. The rotational period is found to scale as the inverse of the active force. The numerical model for the polymer and the Brownian heat bath is introduced in Sec. II. The results for the conformations and the dynamics are reported in Sec. III. Finally, Sec. IV is devoted to discuss the main \ufb01ndings presenting some conclusions. 3",
15
+ "main_content": "A closed chain of length L is considered in two spatial dimensions. It is composed of N beads, each having mass M, whose internal interactions are due to different contributions. Consecutive beads interact via the harmonic potential Ubond = \u03bah 2 2 N \ufffd i=1 \ufffd i=1 (|ri+1 \u2212ri| \u2212l)2, (1) where \u03bah is the spring constant, ri indicates the position vector of the i\u2212th bead (i = 1, . . . , N) with rN+1 = r1 and r0 = rN, and l is the average bond length. A bending potential is considered to enforce chain stiffness and is given by Ubend = \u03ba N \ufffd i=1 \ufffd i=1 (1 \u2212cos \u03b8i) (2) where \u03ba controls the bending rigidity and \u03b8i is the angle between two consecutive bond vectors. In the following, chain stiffness is characterized in terms of the length Lp = 2\u03bal/kBT which corresponds to the polymer persistence length in the worm-like chain limit [50]. Here kBT is the thermal energy, T is the temperature, and kB is Boltzmann\u2019s constant. Excluded volume interactions between non-bonded beads are modeled by the truncated and shifted Lennard-Jones potential Uex = 4\u01eb \ufffd\ufffd\u03c3 r r \ufffd12 \u2212 \ufffd\u03c3 r r \ufffd6 + 1 4 4 \ufffd \u0398(21/6\u03c3 \u2212r), (3) where \u01eb is the volume-exclusion energy, r is the distance between two non-connected beads, and \u0398(x) is the Heaviside function (\u0398(x) = 0 for x < 0 and \u0398(x) = 1 for x \u22650). This potential avoids chain self-crossings so to preserve the ring topology. Finally, an active force F a i (i = 1, . . . , N) is applied tangentially to the filament at the position of each bead. In the present paper we adopt a push-pull type force [6, 8, 13, 31]. By assuming that molecular motors are homogeneously distributed along a bond, it is reasonable to consider that each bond is subject to a constant force, along its direction, given by f a(ri\u2212ri\u22121)/l(i = 1, . . . , N) [6]. This force has magnitude f a since the bond length |ri\u2212ri\u22121| is constrained to be l by using a very high value of the spring constant \u03bah in (1). The force on each bond is then equally distributed between the adjacent beads so that, say, on the bead i there is a contribution f a(ri \u2212ri\u22121)/(2l) along the inward bond and a contribution 4 f a(ri+1 \u2212ri)/(2l) along the outward bond. The total net force acting on the i-th bead is the sum of these two terms F a i = f a 2l (ri+1 \u2212ri\u22121) , i = 1, . . . , N. (4) The expression (4) is such that the sum of active forces along the discrete ring, PN i=1 F a i , is zero [31]. Moreover, the value of the force (4) depends on the relative positions of the beads i \u22121 and i + 1, varying between 0, when the two consecutive bonds are antiparallel, and f a, when the bonds are parallel. In other studies a constant tangent force, acting on all the beads, has been considered [30, 51, 52]. The strength of the total active force is quanti\ufb01ed by the P\u00b4 eclet number Pe = f aNL/(kBT) [8, 31]. An alternative de\ufb01nition of the P\u00b4 eclet number, Pe\u2217= f al/(kBT) = Pe/N2, being L = Nl, is sometimes used in the literature [30]. Newton\u2019s equations of motion of beads are integrated by the velocity-Verlet algorithm with time step \u2206tp [53, 54]. The ring is kept in contact with a Brownian heat bath which is modeled by making use of the Brownian multiparticle collision (MPC) method [44, 46, 55] where hydrodynamics is ignored. 
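As a concrete illustration of Eqs. (1)-(4), the following minimal NumPy sketch evaluates the bond, bending, and excluded-volume energies together with the per-bead tangential active force for a single ring configuration. It is not the code used for the simulations; the function name, the default parameter values, and the regular-polygon test configuration are illustrative assumptions.

```python
import numpy as np

def ring_energies_and_active_force(r, l=1.0, kappa_h=1.0e4, kappa=10.0,
                                   eps=1.0, sigma=1.0, fa=0.1, kBT=1.0):
    """Evaluate Eqs. (1)-(4) for a closed 2D ring; r is an (N, 2) array of
    bead positions with periodic indexing (bead N+1 = bead 1)."""
    N = len(r)
    bonds = np.roll(r, -1, axis=0) - r                    # bond vectors t_i = r_{i+1} - r_i
    d = np.linalg.norm(bonds, axis=1)

    # Eq. (1): harmonic bond potential
    U_bond = 0.5 * kappa_h * np.sum((d - l) ** 2)

    # Eq. (2): bending potential, theta_i = angle between consecutive bonds
    cos_theta = np.sum(bonds * np.roll(bonds, 1, axis=0), axis=1) / (d * np.roll(d, 1))
    U_bend = kappa * np.sum(1.0 - cos_theta)

    # Eq. (3): truncated and shifted Lennard-Jones between non-bonded beads
    U_ex, rc = 0.0, 2.0 ** (1.0 / 6.0) * sigma
    for i in range(N):
        for j in range(i + 2, N):
            if i == 0 and j == N - 1:                     # beads 1 and N are bonded
                continue
            rij = np.linalg.norm(r[i] - r[j])
            if rij < rc:
                sr6 = (sigma / rij) ** 6
                U_ex += 4.0 * eps * (sr6 ** 2 - sr6 + 0.25)

    # Eq. (4): tangential active force F_i = fa/(2l) * (r_{i+1} - r_{i-1})
    F_active = fa / (2.0 * l) * (np.roll(r, -1, axis=0) - np.roll(r, 1, axis=0))

    Pe = fa * N * (N * l) / kBT                           # Pe = fa N L / kBT with L = N l
    return U_bond, U_bend, U_ex, F_active, Pe

# Regular N-gon with bond length l as a simple test configuration
N, l = 50, 1.0
phi = 2.0 * np.pi * np.arange(N) / N
R0 = l / (2.0 * np.sin(np.pi / N))                        # circumradius giving |r_{i+1} - r_i| = l
ring = R0 * np.column_stack((np.cos(phi), np.sin(phi)))
print(ring_energies_and_active_force(ring)[:3])           # U_bond ~ 0, U_bend ~ 3.9, U_ex = 0 here
```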
Every bead interacts with \u03c1 virtual solvent particles of mass m in order to simulate the interaction with a \ufb02uid volume. Since it is not necessary to keep track of the positions of the solvent particles in the present algorithm [44], it is su\ufb03cient to couple each bead with an e\ufb00ective virtual solvent particle with momentum sampled from a Maxwell-Boltzmann distribution of variance \u03c1mkBT and zero mean. The interaction process proceeds via the stochastic rotation dynamics of the MPC method [46, 56, 57]. The relative velocity of each polymer bead, with respect to the center-of-mass velocity of the bead and its corresponding virtual solvent particle, is randomly rotated by angles \u00b1\u03b1. Collisions are then executed at time intervals \u2206t, with \u2206t > \u2206tp. It has been shown that the evolution equation of the MPC model for the solute particle takes the form of a discretized Langevin equation for which the expression of the friction coe\ufb03cient has been obtained [55]. Simulations are carried out with the choices \u03b1 = 130o, \u2206t = 0.1tu, with time unit tu = p ml2/(kBT), M = \u03c1m with \u03c1 = 5, \u03bahl2/(kBT) = 104, \u03c3/l = 1, N = L/l = 50, and \u2206tp = 10\u22122\u2206t. In some cases, longer rings with N = 100, 200 beads have been also considered. A larger value of the ratio \u03c3/l, which might be experimentally relevant, would cause the overlap of neighboring beads with a smoothing of the interaction potential and, eventually, only minor quantitative changes in the following results. The value of \u03bah is such 5 to ensure that bond length \ufb02uctuations are negligible in any non-equilibrium condition. III. NUMERICAL RESULTS We consider rings with persistence lengths ranging from the \ufb02exible limit (Lp/L = 0) to the sti\ufb00one (Lp/L = 40). The active force f a is varied to access a wide interval of P\u00b4 eclet number (0 \u2264Pe \u22645 \u00d7 104 0 \u2264Pe\u2217\u226420). Finally, in order to incorporate excluded volume e\ufb00ects, the value \u01eb = kBT is used referring to the model as a self-avoiding active ring (SAR). To point up topological e\ufb00ects, a comparison with self-crossing rings is also carried out by setting \u01eb = 0. In this latter case we refer to the model as a phantom active ring (PAR). For the considered set of parameters, the friction coe\ufb03cient \u03be [55] acting on each bead is such that M/\u03be \u22722.0 \u00d7 10\u22126\u03c4r, 8.5 \u00d7 10\u22125\u03c4r for self-avoiding and phantom rings, respectively. This ensures that the dynamics is close to the overdamped one so that inertial e\ufb00ects are negligible for the results in the following. Here and in the rest of the paper, \u03c4r denotes the polymer relaxation time in the passive case and is determined by considering the time decay of the ring-diameter autocorrelation function (see details when discussing Fig. 10). It results to be \u03c4r \u22436.5 \u00d7 104tu, 1.5 \u00d7 103tu for self-avoiding and phantom \ufb02exible rings, respectively, and \u03c4r \u22431.6 \u00d7 105tu when Lp/L = 40 where there are no di\ufb00erences between the two models. Polymers are initialized in a circular shape and equilibrated up to time 106tu, much longer than any polymer relaxation time. Then, data are collected in single runs for every parameter set over time intervals of duration \u224350\u03c4r, and averaged. 
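The Brownian MPC coupling described above can be sketched in a few lines: the routine below performs one collision step for all beads, drawing one effective virtual solvent particle per bead and rotating the bead velocity relative to the bead/solvent centre of mass by \u00b1\u03b1. The function name, argument defaults, and the choice of one random rotation sign per bead are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def brownian_mpc_collision(v, M, rho=5, m=1.0, kBT=1.0, alpha_deg=130.0, rng=None):
    """One Brownian MPC collision step for bead velocities v of shape (N, 2).
    Each bead of mass M couples to an effective virtual solvent particle of mass
    rho*m whose momentum is drawn from a Maxwell-Boltzmann distribution of
    variance rho*m*kBT and zero mean."""
    rng = np.random.default_rng() if rng is None else rng
    N = v.shape[0]
    m_s = rho * m
    p_s = rng.normal(0.0, np.sqrt(m_s * kBT), size=(N, 2))   # virtual solvent momenta
    v_cm = (M * v + p_s) / (M + m_s)                          # bead/solvent centre-of-mass velocities
    dv = v - v_cm                                             # relative velocities

    alpha = np.radians(alpha_deg) * rng.choice([-1.0, 1.0], size=N)
    c, s = np.cos(alpha), np.sin(alpha)
    dv_rot = np.column_stack((c * dv[:, 0] - s * dv[:, 1],
                              s * dv[:, 0] + c * dv[:, 1]))   # 2D rotation by +/- alpha
    return v_cm + dv_rot                                      # post-collision bead velocities

# Usage sketch: applied every collision interval Delta t, in between velocity-Verlet updates
# v = brownian_mpc_collision(v, M=5.0)
```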
In the case of the PAR model with Lp/L = 0.4 at Pe = 2.5 \u00d7 104, averages are obtained from three di\ufb00erent realizations, each of duration up to 150\u03c4r. A. Polymer conformations By varying activity and sti\ufb00ness, rings can attain di\ufb00erent con\ufb01gurations. In order to characterize the observed patterns, the gyration tensor G\u03b1\u03b2 = 1 N N X i=1 \u2206ri,\u03b1\u2206ri,\u03b2 (5) is computed. Here \u2206ri,\u03b1 is the position of the i-th bead in the center-of-mass reference frame of the polymer and the Greek index indicates the Cartesian component. The two eigenvalues 6 \u03bb1 and \u03bb2, with \u03bb1 > \u03bb2, of the tensor (5) are extracted to calculate the gyration radius R2 g = \u03bb1 + \u03bb2 (6) which measures the total size of the ring. The asphericity A = (\u03bb1 \u2212\u03bb2)2 (\u03bb1 + \u03bb2)2 (7) is also computed to provide information about the shape, being 0 \u2264A \u22641 with A = 0 for a circle and A = 1 for a rod. The computed values of \u27e8R2 g\u27e91/2, normalized to the radius of gyration Rc = L/(2\u03c0) of a rigid circle, are depicted versus the P\u00b4 eclet number in Fig. 1 for di\ufb00erent values of the persistence length Lp in the case of SAR and PAR models. The left panel shows data in the \ufb02exible regime corresponding to chains for which the values of the gyration radius in the passive limit, Pe \u21920, are di\ufb00erent for self-avoiding (\ufb01lled symbols) and phantom (empty symbols) rings [43]. The di\ufb00erence in the radii is due to the conserved circular topology in the SAR model thanks to self-avoiding e\ufb00ects. In this model polymers show larger sizes with respect to the PAR model. On the contrary, the bonds of phantom rings overlap to maximize the con\ufb01gurational entropy because of \ufb02exibility [43] thus producing more compact structures. Radii increase with the persistence length in both models while the relative di\ufb00erences reduce. Activity does not produce any signi\ufb01cant change in the radius of gyration up to Pe \u2243103. For values Pe \u2273104, the behavior varies with the considered model and the conformations depend on activity. Some typical con\ufb01gurations are reported in the bottom part of Fig. 1. This latter range of activity is experimentally relevant: For example, in the case of microtubules of length L = 1\u00b5m with N = 10 active motors, each with force f a = 6pN, it would be Pe \u2243104 at room temperature [31, 58]. Phantom rings tend to shrink while self-avoiding rings swell. In the case of fully \ufb02exible chains (Lp/L = 0) when Pe = 5 \u00d7 104, the root mean-square radius of gyration reduces by approximately 25% for PAR model and increases by approximately 15% for SAR model with respect to the values at equilibrium. We note here that the shrinkage of phantom chains in two dimensions is larger compared to the value (\u224310%) found in three dimensions [31] using a similar discrete model for the same P\u00b4 eclet number, thus pointing out the relevance of space dimensionality on conformations. The probability distribution functions P(Rg/Rc) of the radius of gyration are shown for PAR and SAR models with Lp/L = 0 in the panels (a) and (c), respectively, of 7 Fig. 2 for di\ufb00erent values of activity. In both models, the mode of the distribution increases with Pe and the width becomes narrower suggesting that \ufb02uctuations are suppressed by activity (see Movie 1 given in the supplementary material). 
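The conformational observables of Eqs. (5)-(7) follow directly from the bead positions of a single configuration; a minimal NumPy sketch (illustrative only, not the analysis code behind the figures) reads:

```python
import numpy as np

def gyration_and_asphericity(r):
    """Radius of gyration and asphericity of a 2D ring, Eqs. (5)-(7).
    r : (N, 2) array of bead positions of one configuration.
    Returns (Rg, A) with Rg^2 = lambda1 + lambda2 and
    A = (lambda1 - lambda2)^2 / (lambda1 + lambda2)^2."""
    dr = r - r.mean(axis=0)                        # positions in the centre-of-mass frame
    G = dr.T @ dr / len(r)                         # gyration tensor, Eq. (5)
    lam2, lam1 = np.sort(np.linalg.eigvalsh(G))    # lambda1 > lambda2
    Rg = np.sqrt(lam1 + lam2)
    A = (lam1 - lam2) ** 2 / (lam1 + lam2) ** 2
    return Rg, A

# A rigid circle of N beads has Rg equal to Rc = L/(2*pi) and A = 0
N, l = 50, 1.0
phi = 2.0 * np.pi * np.arange(N) / N
circle = (N * l / (2.0 * np.pi)) * np.column_stack((np.cos(phi), np.sin(phi)))
print(gyration_and_asphericity(circle))            # ~ (7.96, 0.0)
```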
By increasing the sti\ufb00ness, the variations of \u27e8R2 g\u27e91/2 with respect to the equilibrium value reduce and become negligible in the case of self-avoiding rings for which a very small contraction (\u22433%) can be appreciated when Lp/L = 0.2. At value of bending rigidity such that Lp/L \u22430.4, the sti\ufb00regime is entered. In the passive limit Pe \u21920, the values of the gyration radius appear indistinguishable at \ufb01xed bending rigidity, irrespective of excluded volume interactions, as a consequence of the mechanical constraints exerted by sti\ufb00ness (see Fig. 1 (b)). The global size of rings increases with sti\ufb00ness to become comparable to that of a rigid ring for very sti\ufb00chains (Lp/L = 40). When active polymers are considered, they show negligible variations in size except in the case of phantom active rings with Lp/L = 0.4. In this latter case, the gyration radius displays a non-monotonic dependence on the P\u00b4 eclet number due to di\ufb00erent conformations which can be assumed by the ring. This is re\ufb02ected in the probability distribution function of Rg, shown in Fig. 2 (b), that becomes multimodal in the cases with Pe = 2.5\u00d7104, 5\u00d7104. Without the topology constraint enforced by excluded volume interactions, activity is able to deform the chain despite its bending rigidity. The interplay with \ufb02uctuations produces di\ufb00erent con\ufb01gurations of variable duration, observable during very long time dynamics. Typical patterns, corresponding to the three peaks of P(Rg/Rc) with Pe = 2.5 \u00d7 104, are illustrated in Fig. 3. In the case of self-avoiding active rings with Lp/L = 0.4, activity does not change the global size. However, distribution functions become skewed (see Fig. 2 (d)) since rings continuously shrink and swell during their dynamics (see Movie 2 given in the supplementary material). This e\ufb00ect reduces when increasing the bending rigidity so that rings behave as rigid circles. Indeed, when Lp/L \u22731, no appreciable di\ufb00erence can be observed in the behavior between PAR and SAR models since self-exclusion becomes irrelevant. This is due to the fact that bonds are separated from each other because of the high bending rigidity of sti\ufb00polymers. More details about the dynamics will be provided in the following Section. In order to gain further insight into the observed patterns of active rings, the equal time bond correlation function is computed. It is de\ufb01ned as \u27e8cos \u03b8(s)\u27e9= \u27e8ti+s \u00b7 ti\u27e9 l2 (8) 8 where ti = ri+1 \u2212ri is the bond vector and s is the contour separation. The closed topology guarantees the property \u27e8cos \u03b8(s)\u27e9= \u27e8cos \u03b8(N \u2212s)\u27e9. Figure 4 depicts the bond correlation function for the persistence lengths Lp/L = 0, 0.4 with Pe = 2.5 \u00d7 104. Flexible phantom rings show a very fast decay at small separations followed by anti-correlation on a distance of about two bonds before reaching complete decorrelation at a contour separation of about 6 bonds. This suggests the presence of small wraps of few beads that favor the contraction in size. In contrast, \ufb02exible self-avoiding active rings manifest a larger directional correlation on short distance due to excluded volume e\ufb00ects that restrict the possible conformations. Owing to the preserved circular topology, the correlation function becomes negative on separations s/N \u22431/2. As already observed, sti\ufb00ness is responsible of increasing the size of rings. 
In the case of self-avoiding active rings with Lp/L = 0.4, this produces a larger correlation between bonds which are strongly anti-correlated on distances s/N \u22431/2 as in the case of rigid passive rings [42]. When considering semi\ufb02exible phantom active rings, the presence of the structure with two interlaced rings, shown in Fig. 3 (c), determines bond anti-correlation at separations s/N \u22431/4 and small correlation at s/N \u22431/2. In order to better evaluate the e\ufb00ect of activity on the shape of active rings, the average asphericity is plotted in Fig. 5 for the \ufb02exible (panel (a)) and sti\ufb00(panel (b)) regimes. In the former case, asphericity presents a non-monotonic dependence on sti\ufb00ness when Pe \u21920, as observed in Ref. [43], with self-exclusion warranting more circular shapes. The e\ufb00ect of activity is to make rings more roundish in both models with the exception of the PAR model with Lp/L = 0.2 when activity favors elongated structures with respect to the passive limit. As far as the bending rigidity is negligible, our results give \u27e8A\u27e9\u22430.26 in the passive case, as predicted in the Gaussian limit [59]. The observed small wraps at high activity favor local back folding so that rings are able to gain even more compact conformations (see Fig. 1 (a)), while reducing their asphericity with respect to the passive case. Once bending rigidity comes into play (at values Lp/L \u22430.2), phantom active rings can still reduce the gyration radius due to self-crossing while assuming a more eccentric elliptical shape. The corresponding probability distributions P(A) are highly skewed with a maximum at A = 0 and long tails, as it can be seen in Fig. 6 (a,c) for \ufb02exible rings (Lp/L = 0). The e\ufb00ect of activity is to increase the height of the maximum of distributions while slightly shortening tails. For sti\ufb00 active rings (Fig. 5 (b)) it is possible to observe that activity induces slightly more elongated shapes with respect to the passive case though this e\ufb00ect reduces when increasing sti\ufb00ness. 9 Only for phantom active rings with Lp/L = 0.4, a non-monotonic dependence on activity is visible due to the observed conformations (see Fig. 3) and the peculiar dynamics, as previously discussed. This is also re\ufb02ected in the probability distributions shown Fig. 6 (b) for Lp/L = 0.4. The distribution P(A) is characterized by a linear decay as far as Pe \u2272104. For larger values of activity longer tails and pronounced shoulders appear in the distribution P(A). In the case of self-avoiding active rings (Fig. 6 (d)), the role played by activity is to produce slightly longer tails while poorly a\ufb00ecting the behavior at small values of A. B. Dynamical behavior In this Section we describe and characterize the dynamical behavior of active rings once the steady state has been reached. When Pe \u22721, there are no e\ufb00ects induced by the applied tangential force and rings behave as in the passive case with di\ufb00usive translational motion of the center of mass (see the following discussion). By increasing activity, rings are set in a slow rotational motion due to the applied force though this rotation is not continuous in time. In order to illustrate and quantify the described behavior, it is useful to consider the ring diameter, de\ufb01ned as Rd = rN/2+1 \u2212r1. The time dependence of the x-component Rdx is reported in Fig. 7 in the case of a \ufb02exible self-avoiding ring at di\ufb00erent values of activity. 
Once Pe \u223co(102), a steady rotation of active rings can be observed. During steady rotation, the vector Rd rotates continuously so that its components oscillate periodically in time. This behavior can be used to infer the characteristic rotation frequency \u03c9. This is determined by a spectral analysis (see the inset of Fig. 7 (d)) of the time series Rdx(t). The computed periods of rotation, T = 2\u03c0/\u03c9, are shown in Fig. 8 for di\ufb00erent persistence lengths and rings of lengths L = 50l, 100l, 200l. It is evident that the period T follows a power-law decay with dependence (Pe/L3)\u22121, irrespective of the bending rigidity and ring average size at high activity. Our results con\ufb01rm what analytically predicted for threedimensional phantom active rings that undergo active tank-treading motion with frequency \u03c9 = (Pe/L3)(2\u03c0lkBT/\u03be) = f a/(Rc\u03be) which is proportional to the tangential velocity f a/\u03be and independent of the e\ufb00ective ring size. [31]. Moreover, here we \ufb01nd evidence that the period is not depending on excluded volume interactions in two dimensions. In the case of the phantom \ufb02exible chain, a compact conformation is observed at Pe \u2243102 and thermal noise deeply in\ufb02uences ring rotation so that the observed spectrum of frequencies is quite 10 broad. Phantom active rings require larger values of activity or of sti\ufb00ness with respect to self-excluding active rings in order to establish a uniform rotational motion. Sizes and shapes of active rings in the steady state show a poor dependence on the applied force as far as Pe \u2272104, as already discussed in the previous Section. However, when entering the regime of experimentally relevant P\u00b4 eclet numbers, rings undergo large morphological deviations with respect to equilibrium. Phantom active rings, despite the initial circular con\ufb01guration, can be driven, going through intermediate structures (see panel (b) of Fig. 3), into more compact con\ufb01gurations (see panel (c) of Fig. 3). Simulations for the PAR model have been conducted at Pe = 2.5\u00d7104 for di\ufb00erent values of the persistence length. It appears that when 0.3 \u2272Lp/L \u22720.45, rings spontaneously assume the double ring conformation with Rg/Rc \u22430.52 (corresponding to the typical value of Rg for the conformation of Fig. 3 (c)). This latter structure can spontaneously disentangle with a lifetime which is longer at Lp/L \u22430.4. This behavior can be observed in the time dependence of the gyration radius and of the asphericity in Fig. 9 for the PAR model with Lp/L = 0.4 at Pe = 2.5 \u00d7 104 on a very long time run of duration 150\u03c4r \u22437 \u00d7 104T. Starting from the initial circular shape, phantom rings can self-cross assuming conformations similar to the one of Fig. 3 (b) with an elongated shape resembling the number eight. This is possible only in a narrow range centered at Lp/L \u22430.4 since the \u201ceight con\ufb01guration\u201d is compatible with this value of the persistence length. Due to thermal \ufb02uctuations, it can happen that one of the two sub-rings moves towards the other one trespassing the mutual crossing point to give the double ring conformation. Despite this costs a strong local bending, the double ring is always observed at Lp/L = 0.4 in all the considered runs at very high P\u00b4 eclet number. 
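In the same spirit as the spectral analysis used for the inset of Fig. 7 (d), the dominant rotation period can be estimated from the time series Rdx(t) with a discrete Fourier transform. The sketch below is only one possible estimator; the sampling interval, the synthetic test signal, and the random seed are assumptions made for illustration.

```python
import numpy as np

def rotation_period(Rdx, dt):
    """Dominant rotation period from the diameter component Rdx(t), taken as the
    peak of its power spectrum; T = 2*pi/omega = 1/f_peak."""
    x = Rdx - Rdx.mean()                        # remove the zero-frequency component
    power = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(len(x), d=dt)        # frequencies in cycles per unit time
    f_peak = freq[1:][np.argmax(power[1:])]     # skip f = 0
    return 1.0 / f_peak

# Synthetic check: a noisy oscillation of period T0 = 250 time units
rng = np.random.default_rng(1)
t = np.arange(50000) * 0.1
T0 = 250.0
signal = np.cos(2.0 * np.pi * t / T0) + 0.2 * rng.standard_normal(t.size)
print(rotation_period(signal, dt=0.1))           # ~ 250
```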
In the case of active rings comprising excluded volume interactions, activity is responsible of inducing temporary elongated con\ufb01gurations as illustrated in Fig. 9 by the peaks of asphericity corresponding to the reduction of the radius of gyration (see also Movie 2 in the supplementary material). In order to further characterize the rotational behavior, it is useful to consider the normalized time-correlation function of the ring diameter \u27e8Rd(t) \u00b7 Rd(0)\u27e9/\u27e8R2 d(0)\u27e9. In the left panel of Fig. 10 the normalized autocorrelation function is plotted for a \ufb02exible self-avoiding ring for di\ufb00erent values of activity. In the passive case, the function exhibits an exponential decay, exp(\u2212t/\u03c4r), which is used to determine the polymer relaxation time \u03c4r. When Pe = 10 no relevant di\ufb00erence can be appreciated with respect to equilibrium on time scales 11 comparable to the relaxation time. The increase of activity is responsible for producing an oscillatory behavior which is modulated in time by the same decay of the passive ring. The damped oscillatory pattern with a shorter period is maintained when the P\u00b4 eclet is further increased. The comparison in the behavior of the autocorrelation function of the ring diameter between the PAR and SAR models is reported in the panel (b) of Fig. 10 for di\ufb00erent bending rigidities with Pe = 103. In the case of \ufb02exible phantom active rings, the correlation function shows an exponential decay since the observed compact structure, due to the lack of any bending rigidity, requires larger values of activity to observe oscillations. On the contrary, self-avoiding active rings present the damped oscillatory behavior thanks to excluded volume e\ufb00ects that preserve the circular topology avoiding any collapse of the chain while rotating. Oscillations are clearly observable in the correlation functions of semi\ufb02exible, both phantom and self-excluding, active rings. The amplitudes are larger in the latter case due to the longer relaxation times and increase with bending rigidity to become indistinguishable between the two models in the limit of sti\ufb00rings. As far as oscillations are well de\ufb01ned, the numerical data of the autocorrelation function are very well approximated (see Fig. 10 (b)) by the theoretical prediction [31] \u27e8Rd(t) \u00b7 Rd(0)\u27e9 \u27e8R2 d(0)\u27e9 \u2248cos(2\u03c0t/T) exp(\u2212t/\u03c4r), (9) where the values of T and \u03c4r, computed in the present simulations, are used. Finally, the beads mean-square displacement (MSD) \u27e8(ri(t) \u2212ri(0))2\u27e9is computed which allows the characterization of the translational motion of ring. Due to the ring topology, the beads MSD is independent of the point location and receives a contribution from the centerof-mass motion, \u27e8\u2206r2 cm(t)\u27e9, and another one from the internal dynamics, \u27e8\u2206r2(t)\u27e9, so that one can write \u27e8(ri(t) \u2212ri(0))2\u27e9= \u27e8\u2206r2 cm(t)\u27e9+ \u27e8\u2206r2(t)\u27e9. Since the sum of all internal and active forces over the whole ring vanish, the center-of-mass motion is purely di\ufb00usive depending only on thermal \ufb02uctuations and not on activity. In this way the quantity \u27e8\u2206r2(t)\u27e9, which is related the beads MSD relative to the center-of-mass MSD, provides information on the ring internal dynamics. 
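The comparison with Eq. (9) can be reproduced from a stored trajectory of the diameter vector; the estimator below averages over time origins, which is an assumption about the analysis since the exact procedure is not specified.

```python
import numpy as np

def diameter_autocorrelation(Rd, max_lag):
    """Normalized autocorrelation <Rd(t) . Rd(0)> / <Rd(0)^2> of the ring-diameter
    vector; Rd is an (M, 2) array sampled at equal time intervals."""
    M = len(Rd)
    norm = np.mean(np.sum(Rd ** 2, axis=1))
    acf = np.empty(max_lag)
    for lag in range(max_lag):
        acf[lag] = np.mean(np.sum(Rd[: M - lag] * Rd[lag:], axis=1)) / norm
    return acf

def eq9_prediction(t, T, tau_r):
    """Analytical approximation of Eq. (9): cos(2*pi*t/T) * exp(-t/tau_r)."""
    return np.cos(2.0 * np.pi * t / T) * np.exp(-t / tau_r)

# Usage sketch: acf = diameter_autocorrelation(Rd_traj, 2000), then compare with
# eq9_prediction(lag_times, T, tau_r) using the measured T and tau_r.
```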
The MSD \u27e8\u2206r2(t)\u27e9for self-avoiding \ufb02exible (Lp/L = 0) and sti\ufb00 active rings (Lp/L = 40) with di\ufb00erent activities are reported in Fig. 11. In the case without any sti\ufb00ness (panel (a)) the sub-di\ufb00usive exponent 0.6 is found in the time range t \u226a\u03c4r when thermal e\ufb00ects prevail on active contributions, as predicted by the Rouse model of two-dimensional \ufb02exible polymers with excluded volume interactions [60]. For large P\u00b4 eclet 12 numbers, Pe \u2273104, an active ballistic time regime is observed with \u27e8\u2206r2(t)\u27e9\u223ct2. For longer times, oscillations, due to the active tank-treading, appear in the MSD which then goes to a plateau when t \u2273\u03c4r. This behavior, due to the mutual repulsion among beads, is di\ufb00erent from what is found when considering \ufb02exible phantom rings. In this case the sub-di\ufb00usive behavior t1/2 holds when t \u226a\u03c4r. The MSD shows the activity-enhanced linear time regime at high values of activity (Pe \u2243104) followed by oscillations at longer times, as predicted in three dimensions [31]. The MSD of sti\ufb00polymers (panel (b)) exhibits an initial time dependence t0.7. The exponent 0.7 slightly underestimates the predicted value 3/4 [61] due to the \ufb01nite ring length [60]. A linear time dependence [62] is then observed at late times when Pe \u22721. Strong activity induces the active ballistic time regime followed in time by oscillations. In this case we \ufb01nd that the numerical values of \u27e8\u2206r2(t)\u27e9are very well described (see Fig. 11 (b)) by the theoretical prediction [31] \u2206r2(t) \u000b /L2 \u2248 h 1 \u2212cos(2\u03c0t/T)e\u2212t/\u03c4ri /(2\u03c02), (10) where the computed values of T and \u03c4r are used. IV. DISCUSSION AND CONCLUSIONS The conformations and dynamics of tangentially-driven active ring polymers have been numerically studied. The discrete closed chain has been con\ufb01ned in two dimensions and coupled to a Brownian heat bath performed by the stochastic implementation of the multiparticle collision dynamics. Both phantom and self-avoiding rings have been considered for di\ufb00erent bending rigidities ranging from the \ufb02exible to the sti\ufb00limit. Excluded volume interactions a\ufb00ect the conformations of \ufb02exible active rings: Polymers with self-excluding beads swell up to 15% while phantom chains shrink down to 25% at high values of the P\u00b4 eclet number, in both cases attaining more circular shapes. No appreciable di\ufb00erence is observed between the two models of active rings in the semi\ufb02exible limit with the exception of phantom rings with persistence length slightly less than half of the total chain length. In this latter case activity can induce more compact conformations since the initial circular topology is not conserved. The observed double ring conformation would be permitted in the presence of excluded-volume interactions in three spatial dimensions, or quasi two-dimensional con\ufb01nement, so that activity might trigger the transition to this 13 conformation in real systems, eventually detectable in experimental setups. The meansquare displacement relative to the center-of-mass MSD allows us to capture the internal dynamics. At intermediate time scales, \ufb02exible phantom active rings shows an activityenhanced di\ufb00usive regime at large activity values. 
This is di\ufb00erent from what observed in the case of self-avoiding active chains for which internal motion is ballistic, independently on sti\ufb00ness. At high values of activity, rings exhibit active tank-treading motion whose period scales as the inverse of the applied tangential force, irrespective of both persistence length and beads self-exclusion. Excluded volume interactions have been shown to play a major role in capturing the phenomenology of two-dimensional \ufb02exible active rings thus it appears very interesting to extend the present study to melts where both interand intra-bead mutual repulsions will be relevant. Moreover, the action of an external shear \ufb02ow would possibly enrich the described picture impacting on the observed conformations and dynamics, as observed for active Brownian linear polymers [63, 64]. This would require to separate out e\ufb00ects of internal and external stresses, as far as timescales are not separable, in order to obtain a complete description of the system. ACKNOWLEDGMENTS Funding from MIUR Project No. PRIN 2020/PFCXPE is acknowledged. This work was performed under the auspices of GNFM-INdAM. Appendix: Movies Description In this Section we provide a brief description of the movies accompanying the paper. \u2022 Movie 1: Flexible self-avoiding active ring The movie illustrates the motion of a \ufb02exible self-avoiding active ring with Lp/L = 0 for Pe = 2.5 \u00d7 104 in the center-of-mass reference frame. Frames are taken at time intervals \u2206t/T \u22430.11 where T is the computed rotational period. To illustrate the clockwise rotation, the beads 1 and N/2 + 1 are colored blue and yellow, respectively. \u2022 Movie 2: Semi\ufb02exible self-avoiding active ring The movie illustrates the motion of a semi\ufb02exible self-avoiding active ring with Lp/L = 14 0.4 for Pe = 2.5 \u00d7104 in the center-of-mass reference frame. Frames are taken at time intervals \u2206t/T \u22430.11 where T is the computed rotational period. To illustrate the clockwise rotation, the beads 1 and N/2 + 1 are colored blue and yellow, respectively. [1] M. C. Marchetti, J. F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Hydrodynamics of soft active matter, Rev. Mod. Phys. 85, 1143 (2013). [2] J. Elgeti, R. G. Winkler, and G. Gompper, Physics of microswimmers\u2013single particle motion and collective behavior: a review, Rep. Prog. Phys. 78, 056601 (2015). [3] C. Bechinger, R. D. Leonardo, H. L\u00a8 owen, C. Reichhardt, G. Volpe, and G. Volpe, Active particles in complex and crowded environments, Rev. Mod. Phys. 88, 045006 (2016). [4] R. G. Winkler, J. Elgeti, and G. Gompper, Active polymers emergent conformational and dynamical properties: A brief review, J. Phys. Soc. Jpn. 86, 101014 (2017). [5] J. Elgeti and G. Gompper, Self-propelled rods near surfaces, EPL 85, 38002 (2009). [6] H. Jiang and Z. Hou, Motion transition of active \ufb01laments: rotation without hydrodynamic interactions, Soft Matter 10, 1012 (2014). [7] A. Ghosh and N. S. Gov, Dynamics of active semi\ufb02exible polymers, Biophys. J. 107, 1065 (2014). [8] R. E. Isele-Holder, J. Elgeti, and G. Gompper, Self-propelled worm-like \ufb01laments: spontaneous spiral formation, structure, and dynamics, Soft Matter 11, 7181 (2015). [9] R. E. Isele-Holder, J. Jager, G. Saggiorato, J. Elgeti, and G. Gompper, Dynamics of selfpropelled \ufb01laments pushing a load, Soft Matter 12, 8495 (2016). [10] T. Eisenstecken, G. Gompper, and R. G. 
Winkler, Internal dynamics of semi\ufb02exible polymers with active noise, J. Chem. Phys. 146, 154903 (2017). [11] K. R. Prathyusha, S. Henkes, and R. Sknepnek, Dinamically generated patterns in dense suspensions of active \ufb01laments, Phys. Rev. E 97, 022606 (2018). [12] O. Duman, R. E. Isele-Holder, J. Elgeti, and G. Gompper, Collective dynamics of self-propelled semi\ufb02exible \ufb01laments, Soft Matter 14, 4483 (2018). [13] S. K. Anand and S. P. Singh, Structure and dynamics of a self-propelled semi\ufb02exible \ufb01lament, Phys. Rev. E 98, 042501 (2018). 15 [14] M. Foglino, E. Locatelli, C. A. Brackley, D. Michieletto, C. N. Likos, and D. Marenduzzo, Non-equilibrium e\ufb00ects of molecular motors on polymers, Soft Matter 15, 5995 (2019). [15] R. G. Winkler and G. Gompper, The physics of active polymers and \ufb01laments, J. Chem. Phys. 153, 040901 (2020). [16] T. Eisenstecken and R. G. Winkler, Path integral description of semi\ufb02exible active Brownian polymers, J. Chem. Phys. 156, 064105 (2022). [17] C. A. Philipps, G. Gompper, and R. G. Winkler, Tangentially driven active polar linear polymers an analytical study, J. Chem. Phys. 157, 194904 (2022). [18] M. Vatin, S. Kundu, and E. Locatelli, Conformation and dynamics of partially active linear polymers, Soft Matter 20, 1892 (2024). [19] N. Hirokawa, Kinesin and dynein superfamily proteins and the mechanism of organelle transport, Science 279, 519 (1998). [20] M. Bachand, A. Trent, B. Bunker, and G. Bachand, Physical factors a\ufb00ecting kinesin-based transport of synthetic nanoparticle cargo, J. Nanosci. Nanotechnol. 5, 718 (2005). [21] R. Kawamura, A. Kakugo, Y. Osada, and J. Gong, Selective formation of a linear-shaped bundle of microtubules, Langmuir 26, 533 (2010). [22] R. Kawamura, A. Kakugo, K. Shikinaka, Y. Osada, and J. P. Gong, Ring-shaped assembly of microtubules shows preferential counterclockwise motion, Biomacromolecules 9, 2277 (2008). [23] F. Wu, A. Japaridze, X. Zheng, J. Wiktor, J. W. J. Kerssemakers, and C. Dekker, Direct imaging of the circular chromosome in a live bacterium, Nat. Commun. 10, 2194 (2019). [24] R. Schaleif, Dna looping, Annu. Rev. Biochem. 61, 192 (1992). [25] T. S. Alexiou, P. V. Alatas, D. G. Tsalikis, and V. G. Mavrantzas, Conformational and dynamic properties of short dna minicircles in acqueous solution from atomistic molecular dynamics simulations, Macromolecules 53, 5903 (2020). [26] S. P. Pearce, M. Heil, O. E. Jensen, G. W. Jones, and A. Prokop, Curvature-sensitive kinesin binding can explain microtubule ring formation and reveals chaotic dynamics in a mathematical model, Bullettin of Mathematical Biology 80, 3002 (2018). [27] M. Ito, A. M. R. Kabir, D. Inoue, T. Torisawa, Y. Toyoshima, K. Sada, and A. Kakugo, Formation of ring-shaped microtubule assemblies through active self-organization on dynein, Polym. J. 46, 220 (2014). [28] J. J. Keya, A. M. R. Kabir, and A. Kakugo, Synchronous operation of biomolecular engines, 16 Biophys. Rev. 12, 401 (2020). [29] S. M. Mousavi, G. Gompper, and R. G. Winkler, Active Brownonian ring polymers, J. Chem. Phys. 150, 064913 (2019). [30] E. Locatelli, V. Bianco, and P. Malgaretti, Active polymer rings: activity-induced collapse and dynamicl arrest, Phys. Rev. Lett. 126, 097801 (2021). [31] C. A. Philipps, G. Gompper, and R. G. Winkler, Dynamics of active polar ring polymers, Phys. Rev. E 105, L062591 (2022). [32] W. Chen, J. Chen, and L. 
An, Tumbling and tank-treading dynamics of individual ring polymers in shear \ufb02ow, Soft Matter 9, 4312 (2013). [33] P. S. Lang, B. Obermayer, and E. Frey, Dynamics of a semi\ufb02exible polymer or polymer ring in shear \ufb02ow, Phys. Rev. E 89, 022606 (2014). [34] M. Liebetreu, M. Ripoll, and C. N. Likos, Trefoil knot hydrodynamic delocalization on sheared ring polymers, ACS Macro Lett. 7, 447 (2018). [35] M. Liebetreu and C. N. Likos, Hydrodynamic in\ufb02ation of ring polymers under shear, Communications Materials 1, 4 (2020). [36] H. Noguchi and G. Gompper, Fluid vesicles with viscous membranes in shear \ufb02ow, Phys. Rev. Lett. 93, 258102 (2004). [37] J. Beaucourt, F. Rioual, T. Seon, T. Biben, and C. Misbah, Steady to unsteady dynamics of a vesicle in a \ufb02ow, Phys. Rev. E 69, 011906 (2004). [38] S. Kumar and S. Thakur, Local polar and long-range isotropic activity assisted swelling and collapse dynamics of an active ring polymer, Macromolecules 56, 5229 (2023). [39] E. F. Teixeira, H. C. M. Fernandes, and L. G. Brunnet, A single active ring model with velocity self-alignment, Soft Matter 17, 5991 (2021). [40] L. Theeyancheri, S. Chaki, T. Bhattacharjee, and R. Chakrabarti, Migration of active rings in porous media, Phys. Rev. E 106, 014504 (2022). [41] L. Liu, E. T\u00a8 uzel, and J. L. Ross, Loop formation of microtubules during gliding at high density, J. Phys.: Condens. Matter 23, 374104 (2011). [42] T. Sakaue, G. Witz, G. Dietler, and H. Wada, Universal bond correlation function for twodimensional polymer rings, EPL 91, 68002 (2010). [43] F. Drube, K. Alim, G. Witz, G. Dietler, and E. Frey, Excluded volume e\ufb00ects on semi\ufb02exible ring polymers, Nano Lett. 10, 1445 (2010). 17 [44] M. Ripoll, R. G. Winkler, and G. Gompper, Hydrodynamic screening of star polymers in shear \ufb02ow, Eur. Phys. J. E 23, 349 (2007). [45] R. Kapral, Multiparticle collision dynamics: Simulations of complex systems on mesoscale, Adv. Chem. Phys. 140, 89 (2008). [46] G. Gompper, T. Ihle, D. M. Kroll, and R. G. Winkler, Multi-particle collision dynamics: A particle-based mesoscale simulation approach to the hydrodynamics of complex \ufb02uids, Adv. Polym. Sci. 221, 1 (2009). [47] R. Finken, A. Lamura, U. Seifert, and G. Gompper, Two-dimensional \ufb02uctuating vesicles in linear shear \ufb02ow, Eur. Phys. J. E 25, 309 (2008). [48] B. Kaoui, J. Harting, and C. Misbah, Two-dimensional vesicle dynamics under shear \ufb02ow: E\ufb00ect of con\ufb01nement, Phys. Rev. E 83, 066319 (2011). [49] A. Lamura, Numerical study of a con\ufb01ned vesicle in shear \ufb02ow at \ufb01nite temperature, Mathematics 10, 3570 (2022). [50] R. G. Winkler, P. Reineker, and L. Harnau, Models and equilibrium properties of sti\ufb00molecular chains, J. Chem. Phys. 101, 8119 (1994). [51] V. Bianco, E. Locatelli, and P. Malgaretti, Globulelike conformation and enhanced di\ufb00usion of active polymers, Phys. Rev. Lett. 121, 217802 (2018). [52] J. P. Miranda-L\u00b4 opez, E. Locatelli, and C. Valeriani, Self-organized states of solutions of active ring polymers in bulk and under con\ufb01nement, J. Chem. Theory Comput. 20, 1636 (2024). [53] W. C. Swope, H. C. Andersen, P. H. Berens, and K. R. Wilson, A computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules: Application to small water clusters, J. Chem. Phys. 76, 637 (1982). [54] M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids (Clarendon Press, Oxford, 1987). [55] N. Kikuchi, C. M. 
Pooley, J. F. Ryder, and J. M. Yeomans, Transport coe\ufb03cients of a mesoscopic \ufb02uid dynamics model, J. Chem. Phys. 119, 6388 (2003). [56] T. Ihle and D. M. Kroll, Stochastic rotation dynamics: A Galilean-invariant mesoscopic model for \ufb02uid \ufb02ow, Phys. Rev. E 63, 020201(R) (2001). [57] A. Lamura, G. Gompper, T. Ihle, and D. M. Kroll, Multiparticle collision dynamics: Flow around a circular and a square cylinder, Europhys. Lett. 56, 319 (2001). [58] B. Rupp and F. N\u00e9d\u00e9lec, Patterns of molecular motors that guide and sort \ufb01laments, Lab Chip 12, 4903 (2012). [59] H. W. Diehl and E. Eisenriegler, Universal shape ratios for open and closed random walks: exact results for all d, J. Phys. A 22, L87 (1989). [60] A. Nikoubashman, A. Milchev, and K. Binder, Dynamics of single semi\ufb02exible polymers in dilute solution, J. Chem. Phys. 145, 234903 (2016). [61] E. Farge and A. C. Maggs, Dynamic scattering from semi\ufb02exible polymers, Macromolecules 26, 5041 (1993). [62] R. G. Winkler, Di\ufb00usion and segmental dynamics of rodlike molecules by \ufb02uorescence correlation spectroscopy, J. Chem. Phys. 127, 054904 (2007). [63] A. Martin-Gomez, G. Gompper, and R. G. Winkler, Active Brownian \ufb01lamentous polymers under shear \ufb02ow, Polymers 10, 837 (2018). [64] A. Panda, R. G. Winkler, and S. P. Singh, Characteristic features of self-avoiding active Brownian polymers under linear shear \ufb02ow, Soft Matter 19, 8577 (2023). FIG. 1. Root-mean-square values of the radius of gyration Rg as function of the P\u00e9clet number Pe in the (a) \ufb02exible regime for Lp/L = 0 (black circles), 0.04 (red squares), 0.1 (blue up triangles), 0.2 (green stars), and in the (b) sti\ufb00 regime for Lp/L = 0.4 (red squares), 0.7 (black circles), 1 (blue up triangles), 4 (green down triangles), 40 (purple stars). Filled and empty symbols correspond to the SAR and PAR models, respectively. Rc = L/(2\u03c0) is the radius of the corresponding rigid ring. Typical conformations of rings at Pe = 2.5 \u00d7 10^4 for the PAR model with Lp/L = 0.04 (c) and the SAR model with Lp/L = 0.04 (d), 0.4 (e). The beads 1 and N/2 + 1 are colored blue and yellow, respectively. FIG. 2. Normalized probability distribution function of the radius of gyration Rg for PAR (upper row) and SAR (lower row) models with Lp/L = 0 (a,c), 0.4 (b,d) and Pe = 0 (black circles), 10^4 (blue up triangles), 2.5 \u00d7 10^4 (green down triangles), 5 \u00d7 10^4 (purple stars). Rc = L/(2\u03c0) is the radius of the corresponding rigid ring. FIG. 3. Typical conformations of a ring with Lp/L = 0.4 at Pe = 2.5 \u00d7 10^4 for the PAR model corresponding to the positions Rg/Rc \u22430.95 (a), 0.80 (b), 0.52 (c) of the three peaks of the corresponding probability distribution of Fig. 2 (b). The beads 1 and N/2+1 are colored blue and yellow, respectively. FIG. 4. Bond correlation function versus the scaled contour separation s/N for Pe = 2.5 \u00d7 10^4 and Lp/L = 0 (black circles), 0.4 (blue squares). 
Filled and empty symbols correspond to the SAR and PAR models, respectively. FIG. 5. Average values of the asphericity A as function of the P\u00e9clet number Pe in the (a) \ufb02exible regime for Lp/L = 0 (black circles), 0.04 (red squares), 0.1 (blue up triangles), 0.2 (green stars), and in the (b) sti\ufb00 regime for Lp/L = 0.4 (red squares), 0.7 (black circles), 1 (blue up triangles), 4 (green down triangles), 40 (purple stars). Filled and empty symbols correspond to the SAR and PAR models, respectively. FIG. 6. Normalized probability distribution function of the asphericity A for PAR (upper row) and SAR (lower row) models with Lp/L = 0 (a,c), 0.4 (b,d) and Pe = 0 (black circles), 10^4 (blue up triangles), 2.5 \u00d7 10^4 (green down triangles), 5 \u00d7 10^4 (purple stars). FIG. 7. Time behavior of the x-component Rdx of the ring-diameter vector Rd for the \ufb02exible, Lp/L = 0.04, self-avoiding ring at Pe = 0 (a), 10 (b), 10^2 (c), 10^3 (d). In the inset of panel (d) the corresponding power spectrum (in arbitrary units) is shown as a function of the frequency \u03c9. Rc = L/(2\u03c0) is the radius of the corresponding rigid ring. FIG. 8. Period T of the rotational motion of active rings, rescaled by the polymer length L, as function of the P\u00e9clet number Pe for Lp/L = 0 (black circles), 0.1 (red squares), 1 (green stars) with L = 50l, for Lp/L = 0.1 (blue triangles up), 1 (purple plus symbols) with L = 100l, and for Lp/L = 0.1 (red diamonds), 1 (black triangles down) with L = 200l. Filled and empty symbols correspond to the SAR and PAR models, respectively. The black line has slope \u22121. FIG. 9. Radius of gyration Rg (a) and asphericity A (b) as functions of time t/\u03c4r, where \u03c4r is the polymer relaxation time, with Lp/L = 0.4 at Pe = 2.5 \u00d7 10^4 for SAR (black line) and PAR (red line) models. Rc = L/(2\u03c0) is the radius of the corresponding rigid ring. The asphericity of the SAR model in panel (b) has been shifted by 1 to avoid overlap with the red curve. FIG. 10. (a) Autocorrelation function of the ring diameter Rd(t) as a function of time t/\u03c4r, where \u03c4r is the polymer relaxation time, for a self-avoiding active ring with Lp/L = 0.04 and Pe = 0 (black line), 10 (red line), 10^2 (blue line), 10^3 (purple line). (b) Autocorrelation function of the ring diameter Rd(t) as a function of time t/T, where T is the computed rotational period, for Pe = 10^3 and Lp/L = 0 (green line), 0.1 (red line), 40 (blue line). Full and dashed lines correspond to the SAR and PAR models, respectively. The black empty circles correspond to the theoretical prediction (9) [31] when Lp/L = 40. FIG. 11. Beads mean-square displacement relative to the center-of-mass mean-square displacement, \u27e8\u2206r^2(t)\u27e9, of self-avoiding (a) \ufb02exible, Lp/L = 0, and (b) sti\ufb00, Lp/L = 40, active rings as function of time t/\u03c4r, where \u03c4r is the polymer relaxation time, for P\u00e9clet numbers Pe = 0 (black line), 10^2 (blue line), 10^3 (red line), 10^4 (green line). Short black lines indicate power laws with the annotated time dependence. In panel (b) the black dashed line corresponds to the theoretical prediction (10) [31] when Pe = 10^4."
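The captions above repeatedly report the radius of gyration Rg normalized by Rc = L/(2\u03c0), the radius of the corresponding rigid ring. The short sketch below is illustrative only and not taken from the paper: it assumes a discretized ring of N beads with unit bond length (so L = N) and checks that a rigid circular ring gives Rg/Rc close to 1.

```python
# Illustrative sketch (not from the paper): Rg / Rc for a ring of N beads.
import numpy as np

def radius_of_gyration(positions):
    """Rg = sqrt( (1/N) * sum_i |r_i - r_cm|^2 ) for bead positions of shape (N, dim)."""
    rel = positions - positions.mean(axis=0)
    return np.sqrt((rel ** 2).sum(axis=1).mean())

N = 100
L = N * 1.0                              # contour length, assuming unit bond length
Rc = L / (2.0 * np.pi)                   # radius of the corresponding rigid ring
theta = 2.0 * np.pi * np.arange(N) / N
ring = Rc * np.column_stack([np.cos(theta), np.sin(theta)])  # perfect circle
print(radius_of_gyration(ring) / Rc)     # ~1.0 for a rigid circular ring
```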
16
+ }
intro_8K/test_introduction_short_2405.05433v1.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05433v1",
3
+ "title": "Robust Reward Placement under Uncertainty",
4
+ "abstract": "Reward placement is a common optimization problem in network diffusion\nprocesses, where a number of rewards are to be placed in a network so as to\nmaximize the total reward obtained as agents move randomly in it. In many\nsettings, the precise mobility network might be one of several possible, based\non parameters outside our control, such as the weather conditions affecting\npeoples' transportation means. Solutions to the reward placement problem must\nthus be robust to this uncertainty, by achieving a high utility in all possible\nnetworks. To study such scenarios, we introduce the Robust Reward Placement\nproblem (RRP). Agents move randomly on a Markovian Mobility Model that has a\npredetermined set of locations but its precise connectivity is unknown and\nchosen adversarialy from a known set $\\Pi$ of candidates. Network optimization\nis achieved by selecting a set of reward states, and the goal is to maximize\nthe minimum, among all candidates, ratio of rewards obtained over the optimal\nsolution for each candidate. We first prove that RRP is NP-hard and\ninapproximable in general. We then develop $\\Psi$-Saturate, a pseudo-polynomial\ntime algorithm that achieves an $\\epsilon$-additive approximation by exceeding\nthe budget constraint by a factor that scales as $O(ln|\\Pi|/\\epsilon)$. In\naddition, we present several heuristics, most prominently one inspired from a\ndynamic programming algorithm for the max-min 0-1 Knapsack problem. We\ncorroborate our theoretical findings with an experimental evaluation of the\nmethods in both synthetic and real-world datasets.",
5
+ "authors": "Petros Petsinis, Kaichen Zhang, Andreas Pavlogiannis, Jingbo Zhou, Panagiotis Karras",
6
+ "published": "2024-05-08",
7
+ "updated": "2024-05-08",
8
+ "primary_cat": "cs.MA",
9
+ "cats": [
10
+ "cs.MA",
11
+ "cs.SI"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "In many graph optimization problems, a stakeholder has to se- lect locations in a network, such as a road, transportation, in- frastructure, communication, or web network, where to place reward-generating facilities such as stores, ads, sensors, or utilities to best service a population of moving agents such as customers, autonomous vehicles, or bots [Zhang and Vorob- eychik, 2016; Ostachowicz et al., 2019; Zhang et al., 2020; Rosenfeld and Globerson, 2016; Amelkin and Singh, 2019]. B C 0.2 D A 0.8 1 1 1 B C 0.2 D A 1 1 0.8 1 2 1/2 2 1/2 A B D 2 1/2 2 1/2 A Figure 1: Moving agent under two settings; sunny and rainy. Each table contains the number of steps and the initial probabilities. Such problems are intricate due to the uncertainty surround- ing agent mobility [Krause et al., 2008; Chen et al., 2016; He and Kempe, 2016; Hor\u02c7 c\u00b4 \u0131k et al., 2022]. For instance, consider outdoor ad placement. We represent the road map as a probabilistic network in which agents move. If every agent follows the same movement pattern regardless of environmental conditions, then the problem of placing ads to maximize expected number of ad views admits a greedy algorithm with an approximation ratio [Zhang et al., 2020]. Still, the problem becomes more involved under malleable environmental conditions that alter movement patterns. As a toy example, Figure 1 shows a probabilistic network. A mov- ing agent randomly starts from an initial location, and takes two steps by the probabilities shown on edges representing street segments, under two environmental settings; sunny and rainy. Suppose a stakeholder has a budget to place an ad- billboard at a single location. Under the sunny setting, the best choice of ad placement is B, since agent certainly pass by that point regardless of its starting position. On the other hand, under the rainy setting, agent will necessarily pass by D within two steps, hence that is most preferable. Under such uncertainty, a rational stakeholder would prefer the location that yields, in the worst case, a reward having the highest ratio to the best feasible one. For instance, if a stakeholder selects B (resp. D), then under the rainy (resp. sunny) setting the expected reward is 0.6. However, the optimal strategy for a risk-averse stakeholder would be the selection of C, as the expected reward is higher in both settings, equal to 0.9. In this paper, we introduce the problem of robust reward placement (RRP) in a network, under uncertainty about the environment whereby an agent is moving according to any of several probabilistic mobility settings. We express each arXiv:2405.05433v1 [cs.MA] 8 May 2024 such setting by a Markov Mobility Model (MMM) noted as \u03c0 \u2208\u03a0. The cumulative reward a stakeholder receives grows whenever agent passes by one of the reward states SR. RRP seeks to select a set of such states S\u2217 R within a budget, that maximizes the worst-case ratio, across all settings \u03a0, of the collected reward F(SR|\u03c0) over the highest reward that can be collected under the same setting F(S\u2217 \u03c0|\u03c0). More formally, S\u2217 R = arg maxSR min\u03c0\u2208\u03a0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) . The max-min ratio objective is used in risk-averse portfolio optimization and ad- vertising [Ordentlich and Cover, 1998; Li and Yang, 2020]. Our Contribution. Our contributions stand as follows: 1. 
We introduce the problem of Robust Reward Place- ment (RRP) over a set of Markov Mobility Models, that has real-world applications across various domains. 2. We study the properties of RRP and show that it is NP- hard (Theorem 1). Due to the additivity and monotonic- ity properties of the reward function (Lemma 3), RRP admits an optimal solution in pseudo-polynomial time under a single setting, i.e. |\u03a0| = 1 (Lemma 4), yet it is inapproximable when |\u03a0| > 1 unless we exceed the budget constraint by a factor O(ln |\u03a0|) (Theorem 2). 3. We equip techniques from robust influence maximiza- tion to develop \u03a8-Saturate, a pseudo-polynomial time algorithm viable to the challenge of RRP. \u03a8-Saturate computes a solution within \u03f5 distance of the optimal, i.e. OPT\u2212\u03f5, while exceeding the budget constraint by a fac- tor O(ln |\u03a0|/ \u03f5) (Lemma 6). 4. We present several heuristics as alternative solutions with the most prominently one based on a dynamic pro- gramming algorithm for the max\u2013min 0\u20131 KNAPSACK problem; to which RRP can be reduced (Lemma 5). We corroborate our theoretical findings with an experimen- tal comparison of our solution vs. a suite of heuristics in both synthetic and real-world data. Due to space constraints, the proofs of Lemmas 5,6 and 9 appear in Appendix A.",
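The max-min ratio objective of RRP can be made concrete with a few lines of code. The sketch below is illustrative only and not part of this dataset entry: it hard-codes the expected rewards quoted for the sunny/rainy toy example in the introduction above (location A is omitted because no value is quoted for it) and selects the placement with the best worst-case ratio.

```python
# Illustrative sketch: the RRP max-min ratio objective on the two-setting toy example.
# F[setting][placement]: expected reward when a single billboard is placed at that
# location under that weather setting (values as quoted in the introduction).
F = {
    "sunny": {"B": 1.0, "C": 0.9, "D": 0.6},
    "rainy": {"B": 0.6, "C": 0.9, "D": 1.0},
}

def worst_case_ratio(placement):
    """min over settings of F(placement | setting) / F(best placement | setting)."""
    ratios = []
    for rewards in F.values():
        best = max(rewards.values())          # F(S*_pi | pi)
        ratios.append(rewards[placement] / best)
    return min(ratios)

print({p: round(worst_case_ratio(p), 2) for p in F["sunny"]})  # {'B': 0.6, 'C': 0.9, 'D': 0.6}
print("robust choice:", max(F["sunny"], key=worst_case_ratio))  # C
```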
16
+ "main_content": "Several robust discrete optimization problems [Kouvelis and Yu, 2013] address uncertainty in decision-making by optimizing a max\u2013min or min\u2013max function under constraints. The robust MINIMUM STEINER TREE problem [Johnson et al., 2000] seeks to minimize the worst-case cost of a tree that spans a graph; the min\u2013max and min\u2013max regret versions of the KNAPSACK problem [Aissi et al., 2009] have a modular function as a budget constraint; other works examine the robust version of submodular functions [Krause and Golovin, 2014; He and Kempe, 2016] that describe several diffusion processes [Adiga et al., 2014; Krause et al., 2008]. To our knowledge, no prior work considers the objective of maximizing the worst-case ratio of an additive function over its optimal value subject to a knapsack budget constraint. 3 Preliminaries Markov Mobility Model (MMM). We denote a discretetime MMM as \u03c0 = (S, I, T , M), where S is a set of n states, I is a vector of n elements in [0, 1] expressing an initial probability distribution over states in S, T is an n \u00d7 n right-stochastic matrix, where T [s, s\u2032] is the probability of transition from state s \u2208S to another state s\u2032 \u2208S, and M is an n \u00d7 K matrix with elements in [0, 1], where K is the maximum number of steps and M[s, k] expresses the cumulative probability that an agent starting from state s \u2208S takes k\u2032 \u2208[k, K] steps. Remarkably, a MMM describes multiple agents and movements, whose starting positions are expressed via initial distribution I and their step-sizes via M. Rewards. Given a MMM, we select a set of states to be reward states. We use a reward vector R \u2208{0, 1}n to indicate whether state s \u2208S is a reward state and denote the set of reward states as SR = {s \u2208S|R[s] = 1}. In each timestamp t, an agent at state s may move to state s\u2032 and retrieve reward R[s\u2032]. For a set of reward states SR with reward vector R, and a given MMM \u03c0, the cumulative reward F(SR|\u03c0) of an agent equals: \ufffd quals: F(SR|\u03c0) = \ufffd k\u2208[K () =\u22a4\ufffd \ufffd k\u2208[K] F\u03c0(SR|k) (1) \u22a4\ufffd k() \ufffd (2) \ufffd k\u2208[K] F\u03c0(SR|k) = R\u22a4\ufffd T k(I \u25e6Mk) \ufffd , (2) R|k) is the expected reward at the kth step, Mk umn of M, and \u25e6denotes the Hadamard product. SR| R\ufffd TI \u25e6M \ufffd where F\u03c0(SR|k) is the expected reward at the kth step, Mk is the kth column of M, and \u25e6denotes the Hadamard product. Note that as K \u2192\u221e, Equation 2 yields the steady-state distribution of the model. Equation 2 is a general formulation of PageRank scores [Brin and Page, 1998] as it considers different initial and step distributions via I and M, respectively. 4 Problem Formulation In this section we model the uncertain environment where individuals navigate and introduce the Robust Reward Placement (RRP) problem over a set of Markov Mobility Models (MMMs), extracted from real movement data, that express the behavior of individuals under different settings. Setting. Many real-life applications generate data on the point-to-point movements of agents over a network, along with a distribution and their total number of steps. Using aggregate statistics on this information, we formulate, without loss of generality, the movement of a population by means of a single agent moving probabilistically over the states of a MMM \u03c0 = (S, I, T , M). 
Due to environment uncertainty, the agent\u2019s movement pattern may follow any of |\u03a0| different settings1 \u03a0 = {\u03c01, \u03c02, . . . , \u03c0|\u03a0|}. Robust Reward Placement Problem. Several allocation problems can be formulated as an optimization problem over a MMM \u03c0, where reward states SR correspond to the placement of resources. Given a budget L and a cost function c : S \u2192N+, the Reward Placement (RP) problem seeks the set of reward states S\u2217 R \u2286S that maximize the cumulative reward F(S\u2217 R|\u03c0) given by a moving agent, that is: S\u2217 R = arg max SR F(SR|\u03c0) s.t. X s\u2208SR c[s] \u2264L. However, in reality the agent\u2019s movements follow an unknown distribution sampled from a set of settings \u03a0 = {\u03c01, \u03c02, . . . , \u03c0|\u03a0|} represented as different MMMs. Under this uncertainty, given a set of MMMs noted as \u03a0, the Robust Reward Placement (RRP) problem seeks a set of reward states SR, within a budget, that maximize the ratio of agent\u2019s cumulative reward over the optimal one, when the model \u03c0 \u2208\u03a0 is unknown. In particular, given a budget L and a cost function c : S \u2192N+, we seek a reward placement S\u2217 R \u2286S such that: S\u2217 R = arg max SR min \u03c0\u2208\u03a0 F(SR|\u03c0) F(S\u2217 \u03c0|\u03c0) s.t. X s\u2208SR c[s] \u2264L, (3) where S\u2217 \u03c0 = arg max SR F(SR|\u03c0) is the optimal reward placement for a given model \u03c0 \u2208\u03a0 within budget L. This formulation is equivalent to minimizing the maximum regret ratio of F(SR|\u03c0), i.e., 1 \u2212F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) . The motivation arises from the fact that stakeholders are prone to compare what they achieve with what they could optimally achieve. The solution may also be interpreted as the optimal placement when the model \u03c0 \u2208\u03a0 in which agents are moving is chosen by an omniscient adversary, i.e. an adversary that chooses the setting \u03c0 after observing the set reward states SR. 5 Hardness and Inapproximability Results In this section we examine the optimization problem of RRP and we show that is NP-hard in general. First, in Theorem 1 we prove that even for a single model (|\u03a0| = 1) the optimal solution cannot be found in polynomial time, due to a reduction from the 0\u20131 KNAPSACK problem [Karp, 1972]. Theorem 1. The RRP problem is NP-hard even for a single model, that is |\u03a0| = 1. Proof. In the 0\u20131 KNAPSACK problem [Karp, 1972] we are given a set of items U, each item u \u2208U having a cost c(u) and, wlog, an integer value F(u) and seek a subset V \u2286U that has total cost P v\u2208V c(v) no more than a given budget L and maximum total value P v\u2208V F(v). In order to reduce 0\u20131 KNAPSACK to RRP, we set a distinct state s \u2208S for 1We use the terms \u201dsetting\u201d and \u201dmodel\u201d interchangeably. each item u \u2208U with the same cost, i.e., S = U, assign to each state a self-loop with transition probability 1, let each state be a reward state, and set a uniform initial distribution of agents over states equal to 1/|S| and steps probability equal to M[s, k] = 1, \u2200k \u2208[1, . . . , F(u)]. For a single setting, an optimal solution to the RRP problem of Equation (3) is also optimal for 0\u20131 KNAPSACK problem, which is NP-hard. Theorem 2 proves that RRP is inapproximable in polynomial time within constant factor, by a reduction from the HITTING SET problem, unless exceeding the budget constraint. Theorem 2. 
Given a budget L and set of models \u03a0, it is NPhard to approximate the optimal solution to RRP within a factor of \u2126(1/ n1\u2212\u03f5), for any constant \u03f5 > 0, unless the cost of the solution is at least \u03b2L, with \u03b2 \u2265ln |\u03a0|. Proof. We reduce the HITTING SET problem [Karp, 1972] to RRP and show that an approximation algorithm for RRP implies one for HITTING SET. In the HITTING SET problem, given a collection of X items, C = {c1, c2, . . . , cX} and a set of M subsets thereof, Bi \u2286C, i \u2208{1, . . . , M}, we seek a hitting set C\u2032 \u2286C such that Bi \u2229C\u2032 \u0338= \u2205\u2200i \u2208{1, . . . , M}. Given an instance of HITTING SET, we reduce it to RRP as follows. First, for each subset Bi we set a MMM \u03c0i (|\u03a0| = M) over the same set of states S = Sl \u222aSr with Sl \u2229Sr = \u2205. For each subset Bi we set a state sl i \u2208Sl and for each item ci we set a state sr i \u2208Sr. We set the initial probabilities I as uniform for all states in Sl, equal to 1/ |Sl| for all models. For each model \u03c0i \u2208\u03a0, there are transition probabilities 1 from each state sl j to state sl i, with i \u0338= j, and uniform transition probabilities from sl i to each state sr j if and only if cj \u2208Bi. States in Sr are absorbing, i.e., each state has a self-loop with probability 1. Figure 2 shows a small example of a HITTING SET instance and its RRP equivalent. We set the cost for absorbing states in Sr to 1 and let each node in Sl have a cost exceeding L. By this construction, if the reward placement SR does not form a hitting set, then it exists at least a subset Bi, such that Bi \u2229SR = \u2205, hence min\u03c0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) = 0. In reverse, if SR forms a hitting set, it holds that min\u03c0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) \u2265 1 |Sr| > 0. Thus, a hitting set exists if and only if min\u03c0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) > 0. In effect, if we obtained an approximation algorithm for RRP by increasing the budget to \u03b2L, for \u03b2 > 1, then we would also approximate, with a budget increased by a factor of \u03b2, the HITTING SET problem, which is NP-hard for \u03b2 < (1 \u2212\u03b4) ln |\u03a0| and \u03b4 > 0 [Dinur and Steurer, 2014]. Hitting Set RRP Figure 2: HITTING SET (left) and RRP reduction (right). 6 Connections to Knapsack Problems In this section, we establish connections between RRP and KNAPSACK problems, which are useful in our solutions. Monotonicity and Additivity. Lemma 3 establishes that the cumulative reward function F(SR|\u03c0) is monotone and additive with respect to reward states SR. These properties are crucial on evaluating the reward function taking advantage of pre-computations. Lemma 3. The cumulative reward F(SR|\u03c0) in Equation (1) is a monotone and additive function of reward states SR. Proof. By Equation (1) we obtain the monotonicity property of the cumulative reward function F(\u00b7|\u03c0). Given a model \u03c0 \u2208\u03a0 and two sets of reward states A \u2286B \u2286S every term of F(A|\u03c0) is no less than its corresponding term of F(B|\u03c0) due to Equation (2). For the additivity property it suffices to show that any two sets of reward states A, B \u2286S satisfy: F(A|\u03c0) + F(B|\u03c0) = F(A \u222aB|\u03c0) + F(A \u2229B|\u03c0). At time t = 0, r0 A + r0 B = r0 A\u2229B + r0 A\u222aB, with rt X being the cumulative reward at time t for the set of reward states X. Assume wlog that the equality holds for time t. 
It suffices to prove that the additivity property holds for t+1. In timestamp t + 1, agent at state s \u2208S moves to s\u2032 \u2208S. We distinguish three cases as follows: 1. If s\u2032 / \u2208A \u222aB then s\u2032 / \u2208A \u2229B, s\u2032 / \u2208A and s\u2032 / \u2208B, thus additivity holds. 2. If s\u2032 \u2208A \u222aB and s\u2032 / \u2208A \u2229B then either s\u2032 \u2208A or s\u2032 \u2208B. Assume wlog that s\u2032 \u2208A, then it holds that: rt+1 A = rt A + T [s, s\u2032], rt+1 A\u222aB = rt A\u222aB + T [s, s\u2032], rt+1 B = rt B and rt+1 A\u2229B = rt A\u2229B. 3. If s\u2032 \u2208A \u2229B then s\u2032 \u2208A and s\u2032 \u2208B. Then, it holds that: rt+1 A = rt A + T [s, s\u2032], rt+1 B = rt B + T [s, s\u2032], rt+1 A\u222aB = rt A\u222aB + T [s, s\u2032], and rt+1 A\u2229B = rt A\u2229B+T [s, s\u2032]. In all scenarios the cumulative reward function is additive. Next, Lemma 4 states that RRP under a single model \u03c0 (|\u03a0| = 1), i.e., the maximization of F(SR|\u03c0) within a budget L, is solved in pseudo-polynomial time thanks to the additivity property in Lemma 3 and a reduction from the 0\u20131 KNAPSACK problem [Karp, 1972]. Lemma 4 also implies that we can find the optimal reward placement with the maximum expected reward by using a single expected setting \u03c0. Lemma 4. For a single model \u03c0 (|\u03a0| = 1) and a budget L, there is an optimal solution for RRP that runs in pseudopolynomial time O(Ln). Proof. For each state si \u2208S we set an item ui \u2208U with cost c(ui) = c[si] and value F(ui) = F({si}|\u03c0). Since the reward function is additive (Lemma 3), it holds that F(SR|\u03c0) = P si\u2208SR F({si}|\u03c0) = P ui\u2208U F(ui). Thus, we can optimally solve single setting RRP in pseudo-polynomial time by using the dynamic programming solution for 0\u20131 KNAPSACK [Martello and Toth, 1987]. In the MAX\u2013MIN 0\u20131 KNAPSACK problem (MNK), given a set of items U, each item u \u2208U having a cost c(u), and a collection of scenarios X, each scenario x \u2208X having a value Fx(u), we aim to determine a subset V \u2286 U that has total cost no more than a given budget L and maximizes the minimum total value across scenarios, i.e., argV max minx P u\u2208V Fx(u). The following lemma reduces the RRP problem to MAX\u2013MIN 0\u20131 KNAPSACK [Yu, 1996] in pseudo-polynomial time. Lemma 5. RRP is reducible to MAX\u2013MIN 0\u20131 KNAPSACK in O(|\u03a0|Ln) time. 7 Approximation Algorithm Here, we introduce \u03a8-Saturate2, a pseudo-polynomial time binary-search algorithm based on the Greedy-Saturate method [He and Kempe, 2016]. For any \u03f5 > 0, \u03a8-Saturate returns an \u03f5-additive approximation of the optimal solution by exceeding the budget constraint by a factor O(ln |\u03a0|/\u03f5). The \u03a8-Saturate Algorithm. Algorithm 1 presents the pseudocode of \u03a8-Saturate. As a first step, in Lines 1-2, the algorithm finds the optimal reward placement S\u2217 \u03c0 for each model \u03c0 \u2208\u03a0; this is needed for evaluating the denominator of the RRP objective value in Equation (3). By Lemma 4, S\u2217 \u03c0 is computed in pseudo-polynomial time using the dynamic programming algorithm for the KNAPSACK problem. Then, in Lines 5-18 the algorithm executes a binary search in the range of the min\u2013max objective ratio (Line 4). 
In each iteration, the algorithm makes a guess \u03b7 of the optimal min\u2013max objective value (Line 6), and then seek a set of reward states SR (Line 7), of minimum cost, with score at least \u03b7 (Line 8), within distance \u03f5 > 0. Finding SR of the minimum cost, implies an optimal solution for the NP-hard RRP problem. In Lines 9-10, \u03a8-Saturate evaluates function min \u0010 \u03b7, F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) \u0011 which, for fixed \u03c0 and \u03b7, is monotone and submodular3, by using the Greedy approximation algorithm of Wolsey [1982]. If the formed solution exceeds the budget constraint, the algorithm decreases the upper bound of the search scope (Lines 12-13), otherwise it increases the lower bound and updates the optimal solution S\u2217 R (Lines 14-16). Finally, it returns the optimal solution found (Line 19). Following an analogous proof to Theorem 3 in work of He and Kempe [2016], we derive Lemma 6 which states that \u03a8-Saturate approximates the optimal value within distance \u03f5 when it exceeds the budget by a factor O(ln |\u03a0|/ \u03f5), i.e., offers a bicriteria approximation solution. Lemma 6. For any constant \u03f5 > 0, let \u03b2 = 1 + ln 3|\u03a0| \u03f5 . \u03a8-Saturate finds a reward placement SR of cost at most \u03b2L with min\u03c0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) \u2265min\u03c0 F (S\u2217 R|\u03c0) F (S\u2217 \u03c0|\u03c0) \u2212\u03f5 = OPT \u2212\u03f5, and S\u2217 R = argSR max min\u03c0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) s.t. P s\u2208SR c[s] \u2264L. Different to the pseudo-polynomial time dynamic programming (Knapsack, Line 2) algorithm we opt, the GreedySaturate algorithm [He and Kempe, 2016] uses a simple 2\u03a8 stands for \u201dpseudo\u201d coming from Greek word \u201d\u03c8\u03b5\u03c5\u03b4\u03ae\u03c2\u201d. 3The minimum of a constant function (\u03b7) and a monotone additive function \u0010 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) , Lemma 3 \u0011 is monotone and submodular. The term F(S\u2217 \u03c0|\u03c0) is constant as it has been computed in Line 2. Algorithm 1 \u03a8-Saturate Algorithm Input: MMMs \u03a0, max steps K, budget L, precision \u03f5, extra size parameter \u03b2. Output: Optimal Reward Placement S\u2217 R of cost at most \u03b2L. 1: for \u03c0 \u2208\u03a0 do 2: S\u2217 \u03c0 \u2190Knapsack(\u03c0, L) 3: end for 4: \u03b7min \u21900, \u03b7max \u21901, S\u2217 R \u2190\u2205 5: while (\u03b7min \u2212\u03b7max) \u2265\u03f5 do 6: \u03b7 \u2190(\u03b7max + \u03b7min)/2 7: SR \u2190\u2205 8: while P \u03c0\u2208\u03a0 min \u0010 \u03b7, F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) \u0011 < (\u03b7 \u00b7 |\u03a0| \u2212\u03b7 \u00b7 \u03f5/3) do 9: s \u2190arg max s\u2208S\\SR P \u03c0\u2208\u03a0 1 c(s) \u0010 min \u0010 \u03b7, F (SR\u222a{s}|\u03c0) F (S\u2217 \u03c0|\u03c0) \u0011 \u2212 min \u0010 \u03b7, F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) \u0011 \u0011 10: SR \u2190SR \u222a{s} 11: end while 12: if P s\u2208SR c[s] > \u03b2L then 13: \u03b7max \u2190\u03b7 14: else 15: \u03b7min \u2190\u03b7 \u00b7 (1 \u2212\u03f5/3) 16: S\u2217 R \u2190SR 17: end if 18: end while 19: return S\u2217 R Greedy algorithm to approximate the optimal reward placement S\u2217 \u03c0 (Lines 1-2). Greedy4 provides an 1/ 2-approximation of the optimal solution for a monotone additive function over a knapsack constraint [Johnson and Garey, 1979]. The reward function is monotone and additive (Lemma 3), thus the following corollary holds. Corollary 7. 
For any constant \u03f5 > 0, let \u03b2 = 1 + ln 3|\u03a0| \u03f5 . Greedy-Saturate finds a reward placement SR of cost at most \u03b2L with min\u03c0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) \u22651 2 min\u03c0 F (S\u2217 R|\u03c0) F (S\u2217 \u03c0|\u03c0) \u2212\u03f5 = 1 2OPT \u2212\u03f5, and S\u2217 R = argSR max min\u03c0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) s.t. P s\u2208SR c[s] \u2264L. Notably, for \u03b2 = 1, \u03a8-Saturate returns an non-constant approximation of the optimal solution within the budget constraint L. In particular, the next corollary holds. Corollary 8. For any constant \u03f5 > 0, let \u03b3 = 1+ln 3|\u03a0| \u03f5 . For \u03b2 = 1, \u03a8-Saturate satisfies the budget constraint and returns an 1 \u03b3 (OPT \u2032\u2212\u03f5) approximation factor of the optimal solution, with OPT \u2032 = maxSR min\u03c0 F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) s.t. P s\u2208SR c[s] \u2264L \u03b3 . It is important to note that the approximation in Corollary 8 is non-constant, meaning that it can be arbitrarily small. This is also derived from the inapproximability result of Theorem 2. However, the corollary indicates that if the optimal value for a smaller budget constraint is non-zero, then \u03a8Saturate for \u03b2 = 1 provides an approximation of the optimal solution within the initial budget constraint L. 4 The algorithm iteratively selects the element, within the budget, that offer the maximal marginal gain divided by its cost. 8 Heuristic Solutions Inspired from previous works on node selection in networks [He and Kempe, 2016; Zhang et al., 2020] and the connection of RRP with Knapsack problems, we propose four heuristic methods. For a single model (|\u03a0| = 1) and under uniform costs (c[s] = c \u2200s \u2208S), these four heuristics find an optimal solution. However, contrary to \u03a8-Saturate algorithm (Lemma 6), they may perform arbitrarily bad in the general multi-model case, even by exceeding the budget constraint. To accelerate the selection process, we use the Lazy Greedy technique that updates values selectively [Minoux, 1978] in all heuristics, except the one using dynamic programming. All Greedy. The All Greedy method optimally solves the RRP problem for each model \u03c0 \u2208\u03a0 separately using the Knapsack dynamic programming algorithm (Lemma 4) and then picks, among the collected solutions, the one yielding the best value of the objective in Equation (3). All Greedy is optimal for a single model with arbitrary cost function. Myopic. A greedy algorithm that iteratively chooses the reward state s\u2217\u2208S, within the budget, that offers the maximal marginal gain ratio to the RRP objective divided by the cost, that is s\u2217= arg max s\u2208S\\SR min \u03c0\u2208\u03a0 \u0010 1 c[s] F (SR\u222a{s}|\u03c0)\u2212F (SR|\u03c0) F (S\u2217 \u03c0|\u03c0) \u0011 . Best Worst Search (BWS). This algorithm initially assigns a score H(s) for each state s \u2208S. This score is defined as the minimum cumulative reward when SR = {s}, that is H(s) = min\u03c0 F({s}|\u03c0). As a final step, BWS iteratively chooses the reward state s\u2217, within the budget, that offers the maximal marginal gain to the score divided by the cost, that is s\u2217= arg max s\u2208S\\SR \u0010 H(SR\u222a{s})\u2212H(SR) c[s] \u0011 . Dynamic Programming (DP-RRP). In Lemma 5 we reduced RRP to MAX\u2013MIN 0\u20131 KNAPSACK (MNK) in pseudo-polynomial time. 
While MNK admits an optimal solution using a pseudo-polynomial time dynamic programming algorithm, its running time grows exponentially with the number of settings |\u03a0| [Yu, 1996]. To overcome this time overhead, we propose a more efficient albeit non-optimal dynamic-programming algorithm for the RRP problem, noted as DP-RRP. For reward placement SR, we denote the cumulative reward for each setting as the following |\u03a0|-tuple: g(SR) = \u0000F(SR|\u03c01), F(SR|\u03c02), . . . , F(SR|\u03c0|\u03a0|) \u0001 . We use an (n + 1) \u00d7 (L + 1) matrix M whose entries are |\u03a0|tuples of the form g(\u00b7). Let min g(SR) = min\u03c0i F(SR|\u03c0i) be the minimum reward, across |\u03a0| settings. We define the maximum of two entries g(SR1) and g(SR2), as arg maxSR\u2208{SR1,SR2} min g(SR), i.e. the one holding the largest minimum reward. We initialize M[\u00b7, 0] = M[0, \u00b7] = (0, 0, . . . , 0) and recursively compute M[i, j] as follows: M[i, j] = max{M[i\u22121, j], M[i\u22121, j\u2212c[i]]+g({i})}, (4) where M[i, j] stands for a solution using the first i states, by some arbitrary order, and j units of budget. In the recursion of Equation (4), the first option stands for not choosing state si as a reward state, while the latter option stands for doing so while paying cost c[i] and gaining the additive reward g({i}). We compute M[n, L] as above in space and time complexity \u0398(|\u03a0|Ln) and backtrack over M to retrieve the selected reward states in the final solution. Note that, for a single model, i.e. |\u03a0| = 1 and arbitrary cost function, Equation (4) returns an optimal solution. Worst Case Performance. While all heuristics approach the optimal solution under a single setting, they may perform arbitrarily bad with multiple settings. In Lemma 9 we prove that this holds even when exceeding the budget constraint, contrariwise to the \u03a8-Saturate algorithm (Lemma 6). Lemma 9. The heuristics for RRP may perform arbitrarily bad even when they exceed the budget constraint from L to \u03b2L, with \u03b2 = 1 + ln 3|\u03a0| \u03f5 and \u03f5 > 0. Extensions. All algorithms work, without any modification, with rewards of arbitrary non-negative values, i.e. R[\u00b7] \u2208R+, and when partial solution is already given. 9 Experimental Analysis In this section we evaluate the running time and performance of algorithms on synthetic and real-world data. We use different problem parameters as shown in Table 1; the default values of all parameters are marked in bold. To satisfy the budget constraint, for the \u03a8-Saturate algorithm we fix \u03b2 = 1 as in Corollary 8 and precision \u03f5 = (|\u03a0| \u00b7 103)\u22121. We set the budget L as a percentage of the total cost P s\u2208S c[s]. To benefit from the additivity property of Lemma 3, we precompute the cumulative reward F({s}|\u03c0) for each state s \u2208S and model \u03c0 \u2208\u03a0. We implemented5 all methods in C++ 17 ran experiments on a 376GB server with 96 CPUs @2.6GHz. 9.1 Generated Data We use two different types of synthetic datasets to represent the stochastic networks (MMMs). In each type we generate the graph and then we sample edge-weights using a normal distribution to make different settings. In more details: Erd\u02dd os-R\u00b4 enyi: We generate 6 directed graphs of different sizes as shown in Table 1. In all datasets, we preserve the same out-degree (default is 6), thus we modify the probability of creating an edge accordingly. 
Scale-Free: We generate 6 directed scale-free graphs of different sizes as shown in Table 1. We use the work of Bollob\u00b4 as et al. [2003] which introduces three parameters to construct the network: p\u03b1 (p\u03b3) is the probability of adding a new node connected to an existing node chosen randomly by its indegree (out-degree), and p\u03b2 is the probability adding an edge 5https://anonymous.4open.science/r/RRP-F6CA Parameter Values n 2500, 5000, 7500, 10000 10000 10000, 12500 |\u03a0| 2,5,10,15,20 K 2,4,6,8,10 L 10%, 25%, 50%, 75% Erd\u02dd os-R\u00b4 enyi \u27e8d\u27e9 3,6,9,12 Scale-Free p\u03b2 0.6, 0.7, 0.8, 0.9 Table 1: Setting Parameters (u, v), with u and v selected by its in-degree and out-degree respectively. In all datasets we tune p\u03b2 (default is 0.8), such that p\u03b1 + p\u03b2 + p\u03b3 = 1 and p\u03b1 = 2p\u03b3. For a fixed graph structure we further generate |\u03a0| = 20 distinct settings corresponding to different models. To do that, we sample edge-weights using 20 different normal distributions (one for each setting) with the same mean value, set as 1/(# of out-neighbors), while we vary the standard deviation. Whenever the sampled value is negative we arbitrary set the edge-weight zero. Each of the graphs is directed and edge-weighted, thus transition probabilities T correspond to the normalized edge-weights. Moreover, we set the initial probabilities I proportional to the sum of out-weights. Finally, we set the cost of each node as the average number of its in-neighbors (rounding down) among the distinct settings. Time and Performance. Figure 3 plots the average preprocessing and running time over 20 runs for all algorithms as the size of graph and the budget increases. Notably, precomputations take superlinear time with respect to the size n as the time complexity needed for the power iteration is O(Kn) in sparse networks, where K is the maximum number of steps. Moreover, the linear growth of the runtime graph size (n) and budget increase, indicates the efficiency of all algorithms except DP-RRP approach whose time complexity is at least quadratic in n (\u0398(|\u03a0|Ln) when L = \u2126(n)). Figure 4 shows how algorithms perform on average over 20 runs as different parameters vary. We observe that, for the Erd\u02dd os-R\u00b4 enyi dataset the \u03a8-Saturate algorithm outperforms all 2.5 5 7.5 10 12.5 \u00b7103 0 2 4 \u00b7104 Pre-Time (msec) Erd\u02dd os-R\u00b4 enyi 2.5 5 7.5 10 12.5 \u00b7103 2 4 \u00b7104 Scale-Free 2.5 5 7.5 10 12.5 \u00b7103 0 0.5 1 \u00b7104 n Time (msec) \u03a8-Saturate AllGreedy Myopic BWS DP-RRP 2.5 5 7.5 10 12.5 \u00b7103 0.5 1 \u00b7104 n 0.25 0.5 0.75 0 1 2 \u00b7104 L Time (msec) 0.25 0.5 0.75 0 1 2 \u00b7104 L Figure 3: Preprocessing and Running Time vs. n, L for Erd\u02dd os-R\u00b4 enyi (left) and Scale-Free (right) datasets. 2.5 5 7.5 10 12.5 \u00b7103 0.82 0.84 0.86 Score \u03a8-Saturate AllGreedy Myopic BWS DP-RRP 1 5 10 15 20 0.6 0.7 0.8 0.9 1 2 4 6 8 10 0.8 0.9 0.25 0.5 0.75 0.78 0.8 0.82 0.84 0.86 0.88 3 6 9 12 0.8 0.85 0.9 Erd\u02dd os-R\u00b4 enyi 2.5 5 7.5 10 12.5 \u00b7103 0.75 0.8 n Score 1 5 10 15 20 0.6 0.8 1 |\u03a0| 2 4 6 8 10 0.7 0.8 0.9 K 0.25 0.5 0.75 0.75 0.8 0.85 0.9 L 0.6 0.7 0.6 0.7 0.7 0.75 0.8 0.85 \u27e8d\u27e9(top) p\u03b2 (bott.) Scale-Free Figure 4: Performance on Erd\u02dd os-R\u00b4 enyi (top) and Scale-Free (bottom) datasets. heuristics techniques in all settings, while at Scale-Free graph the DP-RRP has the best overall performance. 
For a single setting, |\u03a0| = 1, all heuristics find an almost optimal solution. As expected, the performance of all algorithms decreases as we increase the number of models |\u03a0|, since the adversary has a larger pool of models to select from. By analogy, the increase in the number of steps increases the cumulative reward as it expands the feasible movements of the agents leading to a decrease of the min ratio. In contrast, the larger the budget is, the higher score algorithms can achieve. Intuitively, more expensive reward placements offer higher cumulative reward, and consequently min ratio objective is determined by those scores. This is more evident by looking at the increase at the performance on the Scale-Free dataset, as there are fewer nodes with high in-degree. This is experimentally confirmed, as the growth of \u27e8d\u27e9results to random networks with uniform degree-distribution, while simultaneously the cumulative reward increases. In addition, larger p\u03b2 values result in networks with a more skewed power-law in-degree distribution, rendering the problem more intricate. 9.2 Real-World Data To further validate our algorithms, we create graphs using real-world movement data. We gathered movement records from Baidu Map, covering Xuanwu District in Nanjing6 from July 2019 to September 2019. These records consist of sequential Points of Interest (POIs) with timestamps, allowing us to calculate the probability of transitioning between POIs based on the Markovian assumption. Using these probabilities, we construct graphs where nodes represent POIs and edges depict the transition probabilities. Each graph represents a 7-day period, resulting in a total of 13 graphs. The combined dataset contains a total of 51,943 different nodes. Our study also introduces practicality by considering data-driven ad placement in the city. We thus assign costs to POIs based on their visit frequency and a fixed value: c[x] = \u230afrequency(x)/25 + 50\u230b. The initial and steps probabilities follow the same default setup as the synthetic datasets. 6https://en.wikipedia.org/wiki/Nanjing 0.25 0.5 0.75 0 2 4 \u00b7105 L Time (msec) 0.25 0.5 0.75 3 4 5 \u00b710\u22121 L Score Figure 5: Time, Score vs. L for the Xuanwu Dataset. Time and Performance. The preprecessing time for the Xuanwu dataset (N = 51, 943, |\u03a0| = 13) takes 118 seconds. Figure 5 presents the running time and the performance of the algorithms as budget constraint increases. DP-RRP is the most time consuming followed by the \u03a8-Saturate method, while the BWS strategy remains the most efficient solution. The score plot does not follow an uptrend as budget grows, since our min ratio objective function of Equation (3) is not a monotone function of the budget. In terms of effectiveness, DP-RRP consistently outperforms all algorithms in all budget constraints indicating its ability to uncover high quality solutions even for hard scenarios, while the performance of \u03a8-Saturate and other heuristic algorithms fluctuates. 10 Conclusions We introduced the NP-hard problem of Robust Reward Placement (RRP). Assuming an agent is moving on an unknown Markov Mobility Model (MMM), sampled by a set of \u03a0 candidates, RRP calls to select reward states within a budget, that maximize the worst-case ratio of the expected reward (agent visits) over the optimal one. 
We showed that RRP is strongly inapproximable, thus we propose \u03a8-Saturate, an algorithm that achieves an \u03f5-additive approximation by exceeding the budget constraint by a factor O(ln |\u03a0|/\u03f5). We also developed heuristics, with the most prominently one based on a dynamic programming algorithm. Experimental analysis, on both synthetic and real-world data, indicates the effectiveness of \u03a8-Saturate and the dynamic-programming based solution. Acknowledgments Work partially supported by grants from DFF (P.P. and P.K., 9041-00382B) and Villum Fonden (A.P., VIL42117)."
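A small sketch may help make the quantities above concrete. The code below is not part of the paper's release: it evaluates the cumulative reward of Equations (1)-(2) exactly as written, and solves the robust objective of Equation (3) by brute force on randomly generated toy MMMs, assuming uniform costs (so the budget is simply a cardinality constraint) instead of the knapsack dynamic program used in the paper.

```python
# Illustrative sketch: cumulative reward F(S_R | pi) = sum_k R^T T^k (I o M_k)
# and a brute-force version of the robust min-ratio objective, on toy random MMMs.
import itertools
import numpy as np

def cumulative_reward(R, T, I, M):
    """R: {0,1}^n reward vector, T: n x n transition matrix,
    I: initial distribution, M: n x K cumulative step probabilities."""
    n, K = M.shape
    total = 0.0
    for k in range(1, K + 1):
        occ = np.linalg.matrix_power(T, k) @ (I * M[:, k - 1])  # T^k (I o M_k), as in Eq. (2)
        total += R @ occ
    return total

rng = np.random.default_rng(0)
n, K, budget = 6, 4, 2
settings = []
for _ in range(3):                                   # |Pi| = 3 candidate MMMs
    T = rng.random((n, n)); T /= T.sum(axis=1, keepdims=True)
    I = np.full(n, 1.0 / n)
    M = np.sort(rng.random((n, K)))[:, ::-1]         # non-increasing cumulative step probs
    settings.append((T, I, M))

def reward_of(S, setting):
    R = np.zeros(n); R[list(S)] = 1.0
    return cumulative_reward(R, *setting)

placements = list(itertools.combinations(range(n), budget))
best_per_setting = [max(reward_of(S, s) for S in placements) for s in settings]
robust = max(placements,
             key=lambda S: min(reward_of(S, s) / b
                               for s, b in zip(settings, best_per_setting)))
print("robust placement:", robust)
```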
17
+ }
intro_8K/test_introduction_short_2405.05615v1.json ADDED
@@ -0,0 +1,18 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05615v1",
3
+ "title": "Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning",
4
+ "abstract": "Current solutions for efficiently constructing large vision-language (VL)\nmodels follow a two-step paradigm: projecting the output of pre-trained vision\nencoders to the input space of pre-trained language models as visual prompts;\nand then transferring the models to downstream VL tasks via end-to-end\nparameter-efficient fine-tuning (PEFT). However, this paradigm still exhibits\ninefficiency since it significantly increases the input length of the language\nmodels. In this paper, in contrast to integrating visual prompts into inputs,\nwe regard visual prompts as additional knowledge that facilitates language\nmodels in addressing tasks associated with visual information. Motivated by the\nfinding that Feed-Forward Network (FFN) of language models acts as \"key-value\nmemory\", we introduce a novel approach termed memory-space visual prompting\n(MemVP), wherein visual prompts are concatenated with the weights of FFN for\nvisual knowledge injection. Experimental results across various VL tasks and\nlanguage models reveal that MemVP significantly reduces the training time and\ninference latency of the finetuned VL models and surpasses the performance of\nprevious PEFT methods. Code: https://github.com/JieShibo/MemVP",
5
+ "authors": "Shibo Jie, Yehui Tang, Ning Ding, Zhi-Hong Deng, Kai Han, Yunhe Wang",
6
+ "published": "2024-05-09",
7
+ "updated": "2024-05-09",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.CL",
12
+ "cs.LG"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
16
+ "gt": "Recently, the investigation of pre-trained foundation mod- els has achieved remarkable success in the fields of both computer vision and natural language processing (Touvron et al., 2023; OpenAI, 2023; Tang et al., 2024; Radford et al., 2021), thereby fostering advancements in vision-language (VL) models. It has been found that VL models can be 1School of Intelligence Science and Technology, Peking University 2Huawei Noah\u2019s Ark Lab 3National Key Labora- tory of General Artificial Intelligence. Correspondence to: Yunhe Wang <[email protected]>, Zhi-Hong Deng <zh- [email protected]>, Kai Han <[email protected]>. Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). \u001f\u001e\u001d\u001c\u001b\u001a\u0019\u001d\u001e \u0018\u0017\u0016\u0017\u001d\u0015 \u0014\u0015\u001a\u001d\u0013\u001b\u001e \u0012\u0011\u0015\u0010\u000f\u0011\u0010\u001b\u000e \r\u001d\u0013\u001b\f \u000b\u001b\n\u0019\u000e\t\u0015\b\u000f\u0019 \u001f\u001d\u0016\u0017\u0019\u0017\u001d\u0015 \u0014\u0007\u0006\u001b\u0013\u0013\u0017\u0015\u0010 \u001f\u001e\u001d\u001c\u001b\u001a\u0019\u001d\u001e \u0018\u0017\u0016\u0017\u001d\u0015 \u0014\u0015\u001a\u001d\u0013\u001b\u001e \u0012\u0011\u0015\u0010\u000f\u0011\u0010\u001b\u000e \r\u001d\u0013\u001b\f \u000b\u001b\n\u0019\u000e\t\u0015\b\u000f\u0019 \r\u001b\u0007\u001d\u001e\u0005 \u0012\u0012\u0011\r\u0004\u000e\u0003\u0002 \u0001\u001a\u0017\u001b\u0015\u001a\u001b\u007f\u0004 \u0004\u0081\u001b\u001e\u0011\u0010\u001b\u000e\u0004\u001a\u001a \u000b\u001e\u0011\u0017\u0015\u0017\u0015\u0010\u000e\u000b\u0017\u0007\u001b\u000e\u008d\u0006\u0016\u000e\u008f\u000e\u0090\u009d \t\u0015\u00a0\u001b\u001e\u001b\u0015\u001a\u001b\u000e\u000b\u0017\u0007\u001b\u000e\u008d\u0006\u0016\u000e\u008f\u000e\u00ad\u0090\u009d \u0080\u0082\u0083\u0084\u0085 \u0082\u0083\u0090\u0080\u000e\u0016\u0086\u0006\u0011\u0019\u001a\u0087 \u0088\u0083\u0090\u0089\u000e\u0016\u0086\u0006\u0011\u0019\u001a\u0087 \u001f\u001e\u001d\u001c\u001b \u001c\u001d\u001a\u0019\u000e\u0016\u0086\u0006\u0011\u0019\u001a\u0087 \u0018\u001d\u0019\u0019\u000e\u0016\u0086\u0006\u0011\u0019\u001a\u0087 \u008d\u0011\u009d\u000e\u0012\u0012\u0011\u0018\u0004\u008a\u0012\u001d\u008b\u0004 \u008d\u0006\u009d\u000e\r\u001b\u0007\u0018\u001f\u000e\u008d\u008c\u000f\u001e\u0016\u009d \u0012\u001d\u008b\u0004 Figure 1. Illustration of PEFT methods using (a) the conventional input-space visual prompting and (b) our memory-space visual prompting. MemVP outperforms previous paradigms in terms of performance, training speed, and inference speed. efficiently constructed upon off-the-shelf pre-trained vision encoders and language models (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Li et al., 2023; Liu et al., 2023b). The de-facto paradigm to combine them involves projecting the outputs of vision encoders, i.e., image features, to visual prompts within the input space of the language models via linear projection or resampler. Subsequently, the language models concentrate the visual prompts with the text embed- ding tokens and process them as a whole. Nevertheless, the scale of both vision models and lan- guage models is experiencing exponential growth, e.g., ViT- G (Zhai et al., 2022) has 1.8B parameters and LLaMA (Tou- vron et al., 2023) has up to 70B parameters. 
Therefore, both pre-training and fine-tuning their combinations with a vast number of parameters for downstream VL tasks become prohibitively expensive in terms of training and storage resources. To mitigate this challenge, parameter-efficient fine-tuning (PEFT) methods incorporate lightweight mod- ules (e.g., adapters (Houlsby et al., 2019), LoRA (Hu et al., 2022)) into the models, and/or select a small subset of pre- trained parameters (e.g., bias, normalization). During fine- tuning, only these modules and selected parameters are up- dated. Prior studies (Sung et al., 2022; Luo et al., 2023) have demonstrated that, even without resource-intensive VL pre- training, the combinations of vision encoders and language models can still be transferred to downstream VL tasks via PEFT while matching the performance of full fine-tuning. 1 arXiv:2405.05615v1 [cs.CV] 9 May 2024 Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning Although such \u201cinput-space visual prompting & PEFT\u201d paradigm proves efficient for training and storage, its mecha- nism of visual prompts still limits the inference and training efficiency. For instance, the average length of the text inputs is only 6.3 in VQAv2 (Goyal et al., 2017) dataset and 81 in ScienceQA (Lu et al., 2022) dataset, whereas the number of visual tokens can be up to 256 in LLaVA (Liu et al., 2023b). Consequently, in many scenarios, the input tokens of the language models are mostly visual tokens, thereby signifi- cantly amplifying the computation cost during training and inference. In this paper, we aim to explore an alternative manner for integrating the visual information into language models for downstream VL tasks, which is intended to not only be parameter-efficient but also facilitate fast training and inference. Existing research (Geva et al., 2021) has found that, the Feed-Forward Network (FFN) of language models acts as key-value memory that stores factual association as knowledge, e.g., \u201cStrawberries are red\u201d could be such knowledge stored in FFNs. Inspired by this, we infer that the visual information also contains vision-related factual association that is not included in the memory of language models, e.g., the language models do not realize \u201cThe fruits in the image are red\u201d. Therefore, it is necessary to inject such external knowledge into language models to enable them to tackle vision-related tasks. Since FFN is the main carrier of knowledge, we can put the visual information in the memory space of language models, i.e., weights of FFN, instead of input space, thus avoiding extending the input length. Based on this motivation, we propose Memory-Space Visual Prompting (MemVP), a PEFT framework for adapting pre- trained vision encoders and language models to downstream VL tasks. As shown in Figure 1, MemVP first projects the features extracted by vision encoders to the dimension of language models as visual prompts. The position-embeded visual prompts are concatenated with the weight matrices of the fully-connected (FC) layers in each FFN block of the language models. During fine-tuning, we freeze most pa- rameters of the vision encoders and language models, only the VL projection layers and position embeddings are tun- able. Without extending the inputs, MemVP only introduces a very small amount of extra parameters and computation to the language models, and is thus more efficient during training and inference. 
To evaluate the efficiency and effectiveness of MemVP, we conduct experiments across various downstream VL bench- marks, including visual question answering on VQAv2, GQA (Hudson & Manning, 2019), and ScienceQA, and image captioning on COCO Captions (Chen et al., 2015). Additionally, we evaluate MemVP on language models with different scales and architectures, including BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) with an encoder- decoder architecture, as well as decoder-only LLaMA-7B and LLaMA-13B. MemVP demonstrates superior perfor- mance compared to previous PEFT baselines, while achiev- ing remarkable acceleration for both training and inference.",
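The core idea described above, injecting visual prompts into the FFN weights rather than the input sequence, can be sketched in a few lines. The code below is a minimal illustration under stated assumptions and is not the authors' implementation: shapes, the ReLU activation, and the use of the same prompt vectors for the extra keys and values are simplifications; the paper's exact formulation (projection, scaling, position embeddings) differs in detail.

```python
# Minimal sketch (assumptions, not the authors' code) of memory-space visual prompting:
# append k visual prompt vectors to an FFN's weights instead of the token sequence.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def ffn_with_memory_prompts(x, W1, W2, visual_prompts):
    """x: (seq_len, d) text hidden states.
    W1: (D, d) first FC layer ("keys"), W2: (d, D) second FC layer ("values").
    visual_prompts: (k, d) projected, position-embedded image features.
    The prompts act as k extra key rows and k extra value columns, so the
    token sequence seen by attention never grows."""
    K = np.concatenate([W1, visual_prompts], axis=0)      # (D + k, d)
    V = np.concatenate([W2, visual_prompts.T], axis=1)    # (d, D + k)
    return relu(x @ K.T) @ V.T                            # (seq_len, d)

# Toy shapes: hidden size d = 8, FFN width D = 32, k = 4 visual prompts, 5 text tokens.
rng = np.random.default_rng(0)
d, D, k, L = 8, 32, 4, 5
out = ffn_with_memory_prompts(rng.standard_normal((L, d)),
                              rng.standard_normal((D, d)),
                              rng.standard_normal((d, D)),
                              rng.standard_normal((k, d)))
print(out.shape)  # (5, 8)
```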
17
+ "main_content": "2.1. Vision-Language Models In the field of VL learning, many different model architectures have been proposed to meet the requirements of different VL tasks, such as dual-encoder (Radford et al., 2021), fusion-encoder (Tan & Bansal, 2019; Li et al., 2021; Kim et al., 2021; Dou et al., 2022), encoder-decoder (Cho et al., 2021; Wang et al., 2022b; Chen et al., 2023; Wang et al., 2022a; Li et al., 2022; 2023; Liu et al., 2023b), etc. Recently, the rapid advancement of large language models has prompted a growing number of researchers to regard VL tasks as a process of visual-conditioned text generation, and focus on how to involve vision information in off-theshelf pre-trained language models. For example, BLIP (Li et al., 2022) and Flamingo (Alayrac et al., 2022) insert new cross-attention layers into the language models to interact with visual features; Frozen (Tsimpoukelli et al., 2021), LLaVA (Liu et al., 2023b), and PaLI (Chen et al., 2023) use the vision encoder to generate visual prompts as the inputs of language models. BLIP-2 (Li et al., 2023) also uses a large Q-former as resampler to reduce the length of visual prompts. 2.2. Parameter-Efficient Fine-Tuning for VL Alignment PEFT has already been widely studied in the field of vision (Rebuffi et al., 2017; Chen et al., 2022; Zhang et al., 2022; Lian et al., 2022; Jie & Deng, 2023; Jie et al., 2023), language (Houlsby et al., 2019; Pfeiffer et al., 2021; Hu et al., 2022; Zaken et al., 2022; Liu et al., 2021), and multimodality (Sung et al., 2022; Hu et al., 2023; Luo et al., 2023; Zhang et al., 2023b; Jiang & Zheng, 2023; Lu et al., 2023). Particularly, based on the pre-trained vision encoders and language models, the VL models can be trained in a parameter-efficient manner. There are many studies focusing on PEFT of such assembled VL models on downstream tasks. VL-Adapter (Sung et al., 2022) and VL-PET (Hu et al., 2023) project the image features as visual prompts, and fine-tune the projector and PEFT modules inserted in the T5 or BART models. Differently, LLaMA-Adapter (Zhang et al., 2023a) concatenates the visual prompts with the hidden state of LLaMA\u2019s intermediate layers. LaVIN (Luo et al., 2023) inserts adapters in both the vision encoder and LLaMA, and introduces a routing mechanism for adapters. Through PEFT, it becomes possible to train VL models using off-the-shelf uni-modal models with less time and 2 Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning 0 100 200 300 400 Input Length 0.0 0.2 0.4 0.6 0.8 1.0 Training Time (s/batch) 2.6x slower w/ visual prompts w/o visual prompts 0 100 200 300 400 Input Length 0.0 0.2 0.4 0.6 0.8 1.0 1.2 Inference Time (s/batch) 4.8x slower w/ visual prompts w/o visual prompts Figure 2. Training and inference time of LLaMA-7B on a single V100. The training process adopts PEFT in which we only tune LoRA modules. The training batch size and inference batch size are 4 and 16, respectively, to maximize utilization of GPU memory. We also highlight the position when the text token length is 64 w/ and w/o input-space visual prompts. The length of visual prompts is 256 as in LLaVA. We fix the output length to 1. GPU memory. However, it is noteworthy that these studies do not take computation efficiency into account, which is one of the main contributions of our paper. 2.3. Memory of Language Models Geva et al. (2021) discover that the FFN of pre-trained language models is essentially key-value memory which stores factual association. 
Based on this finding, Dai et al. (2022) locate and edit knowledge in language models by replacing certain rows of the matrices of FFN with the embedding of the object. Meng et al. (2022) edit the located factual knowledge by adding new key-value pairs to FFN. Dai et al. (2023) expand the size of FFN with extra keys and values as a knowledge bank. Cheng et al. (2023) replace FFN in language models with differentiable plug-in key-value memory for interpretability. However, current works only focus on pure language models, without exploring the potential of visual information as external factual knowledge. 3. Revisiting Visual Prompts in VL Models Current VL models mostly adopt a common architecture, including a pre-trained vision encoder, a pre-trained language model, and a module that bridges the two components. An efficient bridging module could be one or several FC layers that project the features of the images into the input space of the language model as visual prompts. Although the VL projection of visual prompts is parameter-efficient, it is not computation-efficient enough for training and inference. To obtain fine-grained local visual information, the visual prompts are usually projected from patch features of images, which contain a considerably large number of tokens. For example, LLaVA (Liu et al., 2023b) uses ViT-L/14 as vision encoder, which involves 256 tokens to express each image. The additional visual prompts significantly increase the length of the input sequence, leading to more computation during training and inference. To what extent do the visual prompts affect the computation speed? We show the inference speed across different lengths of input and output on LLaMA-7B in Figure 2. The computational complexity is O(L2d+Ld2) for Multi-Head Self-Attention (MHSA) and O(LdD) for FFN, in which L, d, and D are the length of token sequence, dimension of tokens, and hidden dimension of FFN, respectively. For example, after applying the visual prompts with 256 tokens to LLaMA-7B as in LLaVA, the training and inference latency of the language model part increase to 2.6\u00d7 and 4.8\u00d7 on the text with an input length of 64 and an output length of 1. Are there alternative solutions to use fewer visual tokens? BLIP2 (Li et al., 2023) uses a Q-former as resampler to reduce the number of visual tokens, which compresses the length of visual prompts from 256 to 32. Flamingo (Alayrac et al., 2022) uses a single <image> token as the visual prompt, and insert new resampler and cross-attention to interact with visual features. Although reducing the sequence length, these methods introduce hundreds of millions, or even billions, of new parameters, which necessitate large-scale VL pre-training. Therefore, we have to perform expensive VL pre-training again when switching to new pre-trained vision encoders or language models. Moreover, since the new modules are large, the training process cannot be parameter-efficient enough to reduce memory and time costs. Also, the large new modules still bring considerably more computation. Overall, to obtain VL models that are efficient during both training and inference, we need a new paradigm to concatenate pre-trained vision encoders and language models, which i) introduces negligible new parameters and extra computation; and ii) performs well when PEFT on downstream VL tasks. 4. Memory-Space Visual Prompting 4.1. Preliminary: Reformulation of FFN The standard FFN of language models is composed of two FC layers with non-linear activation in-between. 
Supposing $x \in \mathbb{R}^d$ is an input token of the FFN, the FFN can be formulated as: $\mathrm{FFN}(x) = \phi(xW_1)W_2$, (1) in which $\phi$ is an activation such as ReLU or GELU, and $W_1 \in \mathbb{R}^{d \times D}$ and $W_2 \in \mathbb{R}^{D \times d}$ are the weight matrices of the two FC layers. Note that $W_1$ and $W_2$ can be rewritten as: $W_1 = (k_1, k_2, ..., k_D)$, $W_2 = (v_1, v_2, ..., v_D)^\top$, (2) Figure 3. Overview of the mainstream paradigms to concatenate vision encoder and language model. (a) Concatenating visual prompts with the text tokens as inputs of the language model is not computation-efficient, e.g., LLaVA, VL-Adapter, VL-PET. (b) Using cross-attention layers to incorporate the visual information from visual tokens is not parameter-efficient, e.g., Flamingo, BLIP. (c) Our MemVP injects visual prompts into the FFN blocks of language models, achieving both parameter and computation efficiency. in which $k_i \in \mathbb{R}^d$ and $v_i \in \mathbb{R}^d$ are entries of key and value, respectively. Then, the FFN can be rewritten as $\mathrm{FFN}(x) = \sum_{i=1}^{D} \phi(\langle x, k_i \rangle) \cdot v_i$. (3) Therefore, the FFN can be interpreted as using the input $x$ as a query to compute its similarity with the keys, and gathering the values according to that similarity.
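To make the key-value reading of Eq. (1)-(3) concrete, here is a minimal, self-contained PyTorch check that the two-layer FFN and the explicit retrieval form compute the same thing; the dimensions and random weights are arbitrary illustrative choices, not values from the paper.

```python
import torch

# Verify numerically that FFN(x) = phi(x W1) W2 equals the key-value
# retrieval form sum_i phi(<x, k_i>) * v_i, where the k_i are the columns
# of W1 and the v_i are the rows of W2 (Eq. (1)-(3)).
torch.manual_seed(0)
d, D = 16, 64                       # token dimension and memory size (illustrative)
x = torch.randn(d)                  # a single input token
W1 = torch.randn(d, D)              # keys k_1..k_D stored as columns
W2 = torch.randn(D, d)              # values v_1..v_D stored as rows
phi = torch.nn.GELU()

ffn_out = phi(x @ W1) @ W2          # standard two-layer FFN

# explicit retrieval: match x against every key, gather the weighted values
retrieved = sum(phi(x @ W1[:, i]) * W2[i] for i in range(D))

print(torch.allclose(ffn_out, retrieved, atol=1e-4))  # expected: True
```

This identity is what lets MemVP treat extra key-value entries appended to $W_1$ and $W_2$ as new memory slots, which the following paragraphs formalize.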
Previous work has found that FFN acts as a key-value memory storing factual knowledge (Geva et al., 2021). 4.2. FFN with Visual Prompting As illustrated in Figure 3, in conventional input-space visual prompting, the image features are projected to the prefix of the input as context for text generation. Since increasing the input length leads to inefficiency, we avoid using extra visual tokens, and thus all the visual information needs to be contained in textual tokens. A solution to incorporating visual information is to let the textual tokens retrieve information from the visual features. Previous works like Flamingo and BLIP perform retrieval via cross-attention layers, which can be formulated as XAttn(x) = softmax \u0012xW qW k \u22baZ\u22ba \u221a d \u0013 ZW vW o \u22ba, (4) in which x \u2208 Rd is a textual token and Z = (z1, z2, ..., zn)\u22ba\u2208Rn\u00d7d\u2032 is the visual features. However, the cross-attention layer introduces a large amount of new parameters, i.e., W q/k/v/o, which is far from parameter efficiency and brings considerable additional computation. Note that the cross-attention essentially performs a soft look-up using the query xW q from the key-value pairs (ZW k, ZW v) and outputs the weighted average of the retrieved values. Inspired by the fact that FFN also performs similar retrieval from its key-value memory, we consider a more simplified and efficient retrieval process for visual features: Retrieval(x) = n X i=1 \u03d5(\u27e8x, K(zi)\u27e9) \u00b7 V(zi), (5) in which K(zi), V(zi) \u2208Rd are the key and value corresponding to zi. This formulation shares a similar form with Eq (3). Since the size of FFN\u2019s key-value memory D is usually much larger than the number of visual features n (D = 11008 in LLaMA-7B and n = 256 for ViT-L/14), the computation of retrieving visual features is insignificant. Therefore, we do not introduce new cross-attention layers as in previous work, but perform such retrieval along with FFN instead. From the perspective of FFN, we regard the (K(zi), V(zi)) as new memory entries to complement vision-related knowledge that language models used to lack. The new visual key-value entries are inserted into memory, FFN(x) = D X i=1 \u03d5(\u27e8x, ki\u27e9)\u00b7vi + n X i=1 \u03d5(\u27e8x, K(zi)\u27e9)\u00b7V(zi). (6) As for K and V, they should realize two key functions: i) aligning the dimension between visual feature zi \u2208Rd\u2032 and textual token x \u2208Rd, and ii) identifying the position of each entry in the visual input. We use a projector f, which could be one or several FC layers, to project the visual features to the dimension of the textual token as a visual prompt. The projector is shared between K and V for parameter efficiency. The projected visual features are then added with position embedding, K(zi) = \u03bbf(zi) + pk i , V(zi) = \u03bbf(zi) + pv i , (7) in which \u03bb is a hyperparameter and pk, pv \u2208Rn\u00d7d are position embedding for visual prompts inserted into keys and values, respectively. 4 Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning To implement Eq (6), the position-embedded visual prompts are inserted into the memory as new key-value entries. For the FFN block, the weight matrices are modified to W \u2032 1 = (k1, k2, ..., kD, \u03bbf(z1) + pk 1, ..., \u03bbf(zn) + pk n), W \u2032 2 = (v1, v2, ..., vD, \u03bbf(z1) + pv 1, ..., \u03bbf(zn) + pv n)\u22ba. 
(8) Since the visual prompts are concatenated with the FFN weights which are actually memories, we call the proposed new paradigm memory-space visual prompting (MemVP). Besides the standard FFN above, which is widely used in small and middle-scale language models, large language models usually adopt Gated Linear Units (GLU) to enhance the FFN for better performance. For instance, LLaMA uses SwiGLU in FFN, which is FFN(x) = (SiLU(xW 1) \u2297xW 3)W 2. (9) Supposing W 3 = (g1, ..., gD), Eq (9) can be rewritten as FFN(x) = D X i=1 SiLU(\u27e8x, ki\u27e9) \u00b7 \u27e8x, gi\u27e9\u00b7 vi, (10) where \u27e8x, gi\u27e9can be viewed as matching the query with another key. For FFN using GLU, we simply let the second key entries responding to the visual prompts be x |x|2 2 , i.e., modify W 3 to W \u2032 3 = (g1, g2, ..., gD, x |x|2 2 , ..., x |x|2 2 ), (11) which is equivalent to omitting the second key when looking up the visual knowledge to avoid involving more parameters, i.e., FFN(x) = D X i=1 SiLU(\u27e8x, ki\u27e9) \u00b7 \u27e8x, gi\u27e9\u00b7 vi + n X i=1 SiLU(\u27e8x, \u03bbf(zi) + pk i \u27e9) \u00b7 (\u03bbf(zi) + pv i ). (12) In this paradigm, only the projector and position embedding are newly introduced, which are negligible compared with the large size of the pre-trained models. During fine-tuning, we can freeze the parameters of the vision encoders and language models, and only fine-tune these new parameters. From another perspective, the added key and value entries can be regarded as the two fully connected layers of a visionconditioned adapter for PEFT. Therefore, in practice, we also adopt some design philosophy of adapters (Luo et al., 2023). First, we set the length of position embedding as a hyperparameter to control the number of trainable parameters. We allow the length of position embedding to be longer than the visual prompts, in which case we simply zero-pad the visual prompt to align their lengths. Second, we add another scaling factor to the retrieval results as a hyperparameter to control their magnitude. 4.3. Complexity Analysis We consider a language model layer that is only composed of MHSA and FFN blocks. For simplicity, we omit the bias terms and normalization layers. Let L, d, and n denote the length of token sequence, dimension of tokens, and length of visual prompts, respectively. The FLOPs of MHSA and FFN are 8Ld2 + 4L2d and 16Ld2 respectively. We use FLOPsLM, FLOPsVP, and FLOPsMemVP to denote the FLOPs of a single transformer layer in the language model without visual prompts, with input-space visual prompts, and with memory-space visual prompts, respectively. Then we have FLOPsLM = 4Ld(6d + L). (13) For the previous manner which uses input-space visual prompting, the length of the input sequence becomes L + n. Then, the additional FLOPs of a layer are FLOPsVP \u2212FLOPsLM = 4nd(6d + n + 2L). (14) Whereas for MemVP, the length of the input is unchanged, and only the hidden dimension of FFN is increased. The additional FLOPs of a layer is FLOPsMemVP \u2212FLOPsLM = 4ndL. (15) Since current VL models basically satisfy d >> n, and for VL tasks we have n > L in the most cases, we find that FLOPsVP is multiple times of FLOPsLM, but the difference between FLOPsLM and FLOPsMemVP can be ignored. For other architectures such as encoder-decoder model, MemVP mainly reduces the FLOPs of the encoder part. Overall, MemVP is computation-efficient for VL tasks on various language model architectures. 5. 
Experiments In all the experiments, we follow prior works (Sung et al., 2022; Hu et al., 2023; Luo et al., 2023) adopting a fast and economic adaptation setting, i.e., the resource-intensive VL pre-training stage is not incurred. Although VL pre-training has already been widely used nowadays, our setting has practical significance since it enables low-cost deployment on new foundation models, considering the rapid evolution of language models. 5.1. Experiments on BART & T5 Datasets and Baselines. For visual question answering, we evaluate our method on VQAv2 (Goyal et al., 2017) and GQA (Hudson & Manning, 2019); for image captioning, we evaluate on COCO Captions (Chen et al., 2015). All these tasks are regraded as text generation tasks which directly output the answers in an open-ended space. Note that, different from previous work (Sung et al., 2022; Hu 5 Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning Table 1. Results on VQAv2, GQA, and COCO Captions. \u201cFLOPs\u201d denotes the average FLOPs in language models on test set. We report average performance over three runs on Karpathy test split for VQAv2 and COCO Captions, and on test-dev split for GQA. All the baseline results are reproduced using the official code of VL-PET (Hu et al., 2023). Method #Trainable Params (M/task) VQAv2 GQA COCO Captions Average Score VQA Score FLOPs (G) VQA Score FLOPs (G) CIDEr FLOPs (G) BART-base Full Fine-Tuning 141.16 65.4 4.8 53.1 5.3 110.6 6.4 76.4 Compacter 3.87 64.2 4.9 52.3 5.4 115.3 6.5 77.3 LoRA 3.92 64.8 4.8 52.2 5.3 115.1 6.4 77.4 VL-Adapter 3.87 65.5 4.9 53.7 5.4 114.3 6.5 77.8 VL-PET 3.84 65.3 5.0 53.9 5.5 120.3 6.6 79.8 MemVP (Ours) 3.78 65.2 1.2 55.1 1.8 120.2 2.8 80.2 T5-base Full Fine-Tuning 224.54 64.3 9.4 52.0 10.8 112.6 12.9 76.3 Compacter 6.11 65.5 9.6 53.6 11.0 113.4 13.2 77.5 LoRA 6.05 63.3 9.4 50.8 10.8 113.9 12.9 76.0 VL-Adapter 6.10 65.6 9.6 54.4 11.0 113.4 13.2 77.8 VL-PET 6.07 65.4 9.8 54.6 11.3 121.2 13.4 80.4 MemVP (Ours) 6.00 65.7 2.3 56.0 3.8 120.8 5.8 80.8 Figure 4. Left: Training time, training memory, and inference time of T5-base on VQAv2. The per-GPU batch sizes for training and inference are 64 and 512, respectively. Measured on V100 GPUs. Right: Average score vs. FLOPs of BART-base on the three datasets. The visual prompts of VL-PET are downsampled to reduce the input length. et al., 2023) using a multi-tasks learning setting where the VQA tasks benefit from the concurrently trained captioning data, we fine-tune MemVP and all the baselines on each dataset individually. We compare MemVP with baselines using previous input-space visual prompting, including current state-of-the-art PEFT methods on BART and T5: VLAdapter (Sung et al., 2022) and VL-PET (Hu et al., 2023), as well as representative PEFT methods designed for language models: Compacter (Mahabadi et al., 2021) and LoRA (Hu et al., 2022). We also report the results of fully fine-tuning the language models with input-space visual prompting. Implementation Details. Following previous work (Sung et al., 2022; Hu et al., 2023), we use ResNet-101 pre-trained via CLIP (Radford et al., 2021) to pre-extract image features. The resolution of input images is 224 \u00d7 224. The visual encoder is frozen during fine-tuning, and the PEFT modules are only inserted into the language model. For the language part, we use BART-base (Lewis et al., 2020) and T5-base (Raffel et al., 2020) with encoder-decoder architecture. 
For our MemVP, the grid features before global average pooling are projected to visual prompts via a single FC layer, and the visual prompts are only injected into the FFN blocks of language encoders. Additionally, we also unfreeze the layer normalization of language models. We train on each dataset for 20 epochs with batch size 8 × 64 and report performance on the test set. The hyperparameters of all methods are summarized in the Appendix. Results and Analyses. As shown in Table 1, our MemVP achieves average performance better than the current state-of-the-art PEFT method, VL-PET, and much better than other baselines. However, the FLOPs in the language models of MemVP are only 23%–44% of other baselines. To exhibit the advantage of shorter inputs, we compare the training speed, training memory, and inference speed of all methods on VQAv2 in Figure 4 (left). Compared with VL-PET, MemVP is 1.7× faster during training and 1.4× faster during inference, while using only 56% training memory. Although PEFT only unlocks a small number of parameters for training, the gradient still needs to be propagated back through the whole language model, leading to considerable time and memory consumption. Therefore, the time and memory costs during training and inference are profoundly affected by the FLOPs in language models. MemVP relieves the time and memory burden for fine-tuning by directly reducing FLOPs, suggesting that computation efficiency is also crucial in designing PEFT methods. Furthermore, we compared MemVP with a straightforward strategy to reduce the length of the visual prompt: 2D adap- Figure 5. Visual knowledge locating. The similarity values between bold text tokens and keys of visual knowledge are averaged over all layers. Figure 6. Visual knowledge distortion.
Left: Inputs of model; Middle: Original similarity between text tokens and keys of visual knowledge; Right: Distorted similarity. The values in the red rectangle are set to 0. tive pooling. As illustrated in Figure 4 (right), after pooling the visual prompt, the input-space prompting methods suffer from obvious performance degradation, implying that the fine-grained local information is lost in this process. By contrast, MemVP can use long visual prompts without extending the input, thus outperforming the baselines in terms of efficiency. Visualization. We conduct experiments to verify our main motivation, i.e., the visual information can be inserted into memories of language models as external knowledge. If the model acquires the visual knowledge successfully, we are supposed to observe that i) the visual knowledge related to the text inputs is retrieved, and ii) when the model fails to retrieve the correct knowledge under manual distortion, the model should output the corresponding wrong contents. In Figure 5, we visualize the similarity between queries and keys, i.e., \u03d5(\u27e8x, \u03bbf(zi)\u27e9) in Eq (6), of BART-base finetuned on VQAv2. We find that the text tokens have a high similarity with the keys of related visual knowledge entries, implying that the corresponding values are retrieved. For instance, when asking the model \u201cWhat is in the sky?\u201d, the model retrieves knowledge entries around the plain; when asked \u201cWhat is the color of the sky?\u201d, the model retrieves knowledge entries of the background. Moreover, we find that different words in the input sentence have different preferences, e.g., when asking the model \u201cWhat is the man on the far right side wearing?\u201d, the \u201cman\u201d token retrieves the knowledge entries that contain men, and the \u201cright\u201d token retrieves the entries on the right side of the image. Next, we try distorting the knowledge by editing the querykey similarity. As the example in Figure 6, when asking the model \u201cHow many black horses do you see?\u201d, the model mainly retrieves the entries containing the black horse. Then, we manually block the retrieval of the two most responsive entries by setting \u03d5(\u27e8x, \u03bbf(zi)\u27e9) = 0. As a result, the model outputs \u201c0\u201d since it fails to obtain knowledge about the existence of black horse. Overall, these observations verify that the visual information is actually inserted into memory and direct the outputs of language models. 5.2. Experiments on LLaMA Datasets and Baselines. We use a challenging VQA task, ScienceQA (Lu et al., 2022), to evaluate our method. ScienceQA is a large-scale science question-answering dataset compiled from diverse knowledge domains. We compare MemVP with other LLaMA-based fine-tuned models with input-space visual prompting, including LLaVA (Liu et al., 2023b), LLaMA-Adapter (Zhang et al., 2023a), and LaVIN (Luo et al., 2023). We also provide results of LLaVA equipped with LoRA. All these methods adopt a onestage paradigm, i.e., directly generating the answers endto-end without multi-stage chain-of-thought (CoT) prompting (Zhang et al., 2023c). We adopt the training recipe used by Luo et al. (2023) and train each method for 20 epochs. All these methods use a ViT-L/14 pre-trained via CLIP as the visual encoder. We also report zero-shot results of GPT4 (OpenAI, 2023). Implementation Details. 
Following LLaVA (Liu et al., 2023b), MemVP and LLaVA-LoRA use the 256 patch features before the last layer of ViT-L/14 and project them as visual prompts. Differently, LaVIN and LLaMA-adapter stack 6 global features (i.e., [CLS] tokens of ViT) selected from different intermediate layers as much shorter visual prompts. The projectors of MemVP, LaVIN, and LLaVALoRA are two FC layers with non-linear activation in between. Since LaVIN also inserts adapters in the visual encoder, we adopt a comparable strategy on MemVP and LLaVA-LoRA for a fair comparison. Specifically, we introduce parallel adapters to the FFN of the vision encoder following previous work (Chen et al., 2022). Moreover, since LLaMA has much more layers and larger dimension than BART and T5, we also share the position embedding of MemVP across different layers for parameter efficiency. For the samples that do not have image inputs, we simply set the visual prompts of MemVP to zero tensors, and only insert the position embedding. Results and Analyses. As shown in Table 2, our MemVP significantly outperforms all the baseline PEFT methods on both LLaMA-7B and LLaMA-13B. LLaVA-LoRA performs better than LaVIN and LLaMA-Adapter, indicating that VL 7 Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning Table 2. Accuracy on ScienceQA test set. Question categories: NAT = natural science, SOC = social science, LAN = language science, TXT = w/ text context, IMG = w/ image context, NO = no context, G1-6 = grades 1-6, G7-12 = grades 7-12. \u2020 denotes our reproduced results. Other results are quoted from their original papers. Method #Trainable Params Language Model VL Pre-Train Subject Context Modality Grade Average NAT SOC LAN TXT IMG NO G1-6 G7-12 Human 90.23 84.97 87.48 89.60 87.50 88.10 91.59 82.42 88.40 GPT-4 (0-shot) GPT-4 84.06 73.45 87.36 81.87 70.75 90.73 84.69 79.10 82.69 LLaVA 7B Vicuna-7B \u221a 89.84 LLaVA 13B Vicuna-13B \u00d7 85.81 LLaVA 13B Vicuna-13B \u221a 90.36 95.95 88.00 89.49 88.00 90.66 90.93 90.90 90.92 PEFT methods LLaMA-Adapter 1.8M LLaMA-7B \u00d7 84.37 88.30 84.36 83.72 80.32 86.90 85.83 84.05 85.19 LLaVA-LoRA\u2020 4.4M LLaMA-7B \u00d7 91.70 94.60 86.09 91.25 90.28 88.64 91.52 89.65 90.85 LaVIN 3.8M LLaMA-7B \u00d7 89.25 94.94 85.24 88.51 87.46 88.08 90.16 88.07 89.41 MemVP (Ours) 3.9M LLaMA-7B \u00d7 94.45 95.05 88.64 93.99 92.36 90.94 93.10 93.01 93.07 LaVIN 5.4M LLaMA-13B \u00d7 90.32 94.38 87.73 89.44 87.65 90.31 91.19 89.26 90.50 MemVP (Ours) 5.5M LLaMA-13B \u00d7 95.07 95.15 90.00 94.43 92.86 92.47 93.61 94.07 93.78 Table 3. Training and inference time. Measured on 8\u00d7A800 GPUs without memory-saving or speed-up techniques (e.g., flash attention). The per-GPU batch size is 4 for training and 64 for inference. Method Length of Visual Prompt #Trainable Params Training Time (s/batch) Inference Time (s/batch) LLaVA-LoRA 7B 256 4.4M 0.49 3.42 LaVIN 7B 6 3.8M 0.39 2.06 MemVP 7B 256 3.9M 0.28 1.88 MemVP 13B 256 5.5M 0.46 3.07 Table 4. Ablation experiments on ScienceQA. \u201cAverage\u201d and \u201cIMG\u201d denote the accuracy on the whole test set and on the IMG subset, respectively. Settings Average IMG #Trainable Params (M) MemVP 7B 93.07 92.36 3.9 w/o visual prompts 85.33 76.05 3.3 visual features: local \u2192global 89.01 84.18 3.9 position embedding: add \u2192concat 89.79 86.07 3.9 insert visual prompts in keys only 91.94 90.23 3.9 insert visual prompts in values only 92.78 92.36 3.9 models benefit from the local visual information in longer visual prompts. 
Notably, MemVP also beats LLaVA, a fully fine-tuned model with VL pre-training, on average results as well as 7 out of 8 subsets. Besides, we also compare the training and inference speed of different PEFT methods in Table 3. In spite of the long visual prompts, MemVP is still 1.4\u00d7 faster than LaVIN during training, since the routing mechanism of LaVIN delays the training speed. LLaVALoRA, which also uses local visual prompts in input space, is 1.75\u00d7 and 1.8\u00d7 slower than MemVP in training and inference, respectively. Overall, memory-space prompting exhibits remarkable advantage in computation efficiency. To demonstrate the effectiveness of the components of MemVP, we conduct comprehensive ablation experiments. As in Table 4, when we insert the position embedding without adding visual prompts into the language model, its performance on IMG subset degrades significantly, since the language model cannot obtain the visual knowledge. We note that using global features as in LaVIN leads to a drop in performance due to the loss of local information. We also attempt to concatenate the position embedding with visual prompts instead of adding to them, where the visual prompts will not acquire hard-coded position information but the number of trainable parameters keeps unchanged. The degraded performance indicates the importance of position information for visual prompts since the text inputs may be location-related. When only inserting visual prompts in keys or values, the model performs worse in both cases. 6. Conclusion & Limitation In this paper, we revisit the current mainstream \u201cinput-space visual prompting & PEFT\u201d paradigm for efficiently bridging pre-trained vision encoders and language models, and point out its remaining inefficiency in terms of computation due to the extended inputs. Motivated by previous findings that the FFNs of language models serve as knowledge memories, we propose MemVP, a memory-space visual prompting method that inserts visual prompts into the FFN weights. Experiments on both small-scale and large-scale language models indicate that MemVP is both parameter-efficient and computation-efficient. Compared to previous state-ofthe-art PEFT methods, it achieves competitive performance while enabling faster training and inference, and less memory overhead, providing an economic solution for model adaptation on downstream VL tasks. However, MemVP still has limitations. Its main contribution lies in reducing the length of the input sequence, but the impact of input length on inference speed primarily occurs during the prefill stage, i.e., the generation of the first token. Therefore, for the generation of long texts (e.g., detailed captioning), MemVP\u2019s advantage in inference speed will be 8 Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning diminished, which could be improved in future work. Impact Statement The method proposed in this paper is based on pre-trained models, especially large language models (LLMs). It may inherit the drawbacks of LLMs such as inherent biases and misinformation, or generate copyright-violating material such as verbatim snippets from non-free content."
18
+ }
intro_8K/test_introduction_short_2405.05691v1.json ADDED
@@ -0,0 +1,17 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05691v1",
3
+ "title": "StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework",
4
+ "abstract": "Thanks to the powerful generative capacity of diffusion models, recent years\nhave witnessed rapid progress in human motion generation. Existing\ndiffusion-based methods employ disparate network architectures and training\nstrategies. The effect of the design of each component is still unclear. In\naddition, the iterative denoising process consumes considerable computational\noverhead, which is prohibitive for real-time scenarios such as virtual\ncharacters and humanoid robots. For this reason, we first conduct a\ncomprehensive investigation into network architectures, training strategies,\nand inference processes. Based on this in-depth analysis, we tailor each\ncomponent for efficient high-quality human motion generation. Despite the\npromising performance, the tailored model still suffers from foot skating, which\nis a ubiquitous issue in diffusion-based solutions. To eliminate footskate, we\nidentify foot-ground contact and correct foot motions along the denoising\nprocess. By organically combining these well-designed components together, we\npresent StableMoFusion, a robust and efficient framework for human motion\ngeneration. Extensive experimental results show that our StableMoFusion\nperforms favorably against current state-of-the-art methods. Project page:\nhttps://h-y1heng.github.io/StableMoFusion-page/",
5
+ "authors": "Yiheng Huang, Hui Yang, Chuanchen Luo, Yuxi Wang, Shibiao Xu, Zhaoxiang Zhang, Man Zhang, Junran Peng",
6
+ "published": "2024-05-09",
7
+ "updated": "2024-05-09",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.MM"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Human motion generation aims to generate natural, real- istic, and diverse human motions, which could be used for animating virtual characters or manipulating humanoid robots to imitate vivid and rich human movements with- out long-time manual motion modeling and professional *Corresponding authors: ManZhang ([email protected]) and Junran Peng([email protected]) Table 1. StableMoFusion achieves superior performance on mo- tion generation compared to other state-of-the-art methods. Lower FID and higher R Precision mean, the better. Method FID\u2193 R Precision (top3)\u2191 MDM [28] 0.544 0.611 MLD [3] 0.473 0.772 MotionDiffuse [37] 0.630 0.782 ReMoDiffuse [38] 0.103 0.795 StableMoFusion (Ours) 0.098 0.841 Figure 1. Comparison of the inference time costs on motion gen- eration. The closer the model is to the origin, the better. skills[1, 4, 37]. It shows great potential in the fields of ani- mation, video games, film production, human-robot interac- tion and etc. Recently, the application of diffusion models to human motion generation has led to significant improve- ments in the quality of generated motions [3, 28, 37]. arXiv:2405.05691v1 [cs.CV] 9 May 2024 Despite the notable progress made by diffusion-based motion generation methods, its development is still hin- dered by several fragmented and underexplored issues: 1) Lack of Systematic Analysis: these diffusion-based mo- tion generation work usually employ different network ar- chitectures and training pipelines, which hinders cross- method integration and the adoption of advancements from related domains. 2) Long Inference Time: due to the time- consuming iterative sampling process, most existing meth- ods are impractical for applications with virtual characters and humanoid robots, where real-time responsiveness is crucial. 3) Footskate Issue: foot skating (footskate) in gen- erated motions remains a major concern. This significantly undermines the quality of generated motions and limits their practical applicability. Therefore, in order to fill these research gaps and en- hance the effectiveness and reliability of diffusion-based motion generation in practical applications, our study con- ducts a comprehensive and systematic investigation into network architectures, training strategies, and inference process. Our investigation is specifically directed towards text conditional motion generation, as text prompts are ar- guably the most promising format for practical application and the most convenient input modality among various con- ditional signals. Ultimately, we present a robust and ef- ficient framework for diffusion-based motion generation, called StableMoFusion, as illustrated in Figure 2. In StableMoFusion, we use Conv1D UNet with AdaGN and linear cross-attention as the motion-denoising network, and improve its generalization capability with GroupNorm tweak. During training, two effective strategies were em- ployed to enhance the network\u2019s ability to generate mo- tion. During inference, we use four training-free acceler- ation tricks to achieve efficient inference. Furthermore, we present a footskate cleanup method based on a mechanical model and optimization. Extensive experiments demonstrate that StableMoFu- sion achieves an excellent trade-off between text-motion consistency and motion quality compared to other state-of- the-art methods, as shown in Table 1. 
Meanwhile, Stable- mofusion\u2019s efficient inference process notably reduces the minimum number of iterations required for generation from 1000 to 10, as well as shorter inference times than meth- ods of about the same performance, achieving an average inference time of 0.5 seconds on the Humanm3D test set, as shown in Figure 1. In addition, our footskate cleanup method within diffusion framework sizably solves the foot skating problem of motion generation as shown in Sec- tion 5.4. Our major contributions can be summarized as follows: \u2022 We perform a systematic evaluation and analysis on the design of each component in the diffusion-based mo- tion generation pipeline, including network architectures, training strategies, and inference process. \u2022 We propose an effective mechanism to eliminate foot skating which is a comment issue in current methods. \u2022 By consolidating these well-designed components, we present a robust and efficient diffusion-based motion gen- eration framework named StableMoFusion. Extensive experiments demonstrate its superiority in text-motion consistency and motion quality.",
16
+ "main_content": "2.1. Motion Diffusion Generation In recent years, the application of diffusion models to human motion generation has led to significant improvements in the quality of generated motions. MotionDiffuse [37] softly fuses text features into diffusion-based motion generation through cross-attention. MDM [28] experimented with the separate Transformer encoder, decoder, GRU as denoising networks, respectively. PyhsDiff [35] incorporates physical constraints to generate more realistic motions; Prior MDM [24] uses diffusion priors to allow the model to be applied to specific generative tasks; MLD [3] utilizes the latent space of VAE to speed up diffusion generation; ReMoDiffuse [38] uses a retrieval mechanism to enhance the motion diffusion model. All of these methods use Transformer-based network structure, while MoFusion [4] and GMD [13] use Conv1D UNet for motion diffusion generation. Our work towards a more robust and efficient diffusionbased motion generation framework through a comprehensive investigation into network architectures, training strategies, and inference process. It also addresses the practical application challenges of long inference time and footskate phenomenon. 2.2. Training-Free Sampling To reduce the inference time with a trained network, there have been many advanced samplers to accelerate DDPM [8]. Song et al. [26] show that using Stochastic Differential Equation (SDE) for sampling has a marginally equivalent probability Ordinary Differential Equations (ODE). And then, DDIM [25] constructs a class of non-Markovian diffusion processes that realize skip-step sampling. PNDN [16] uses pseudo numerical to accelerate the deterministic sampling process. DEIS [39] and DPMSolver [17] improve upon DDIM by numerically approximating the score functions within each discretized time interval. Meanwhile, several work have focused on speeding up stochastic sampling. For example, Gotta Go Fast [11] utilizes adaptive step sizes to speed up SDE sampling, and Lu et al. [18] converts the higher-order ODE solver into an SDE sampler to address the instability issue. 2 While these samplers have demonstrated efficacy in image generation, their impact on motion diffusion models remains unexplored. In this work, we evaluate them to find the most appropriate one for motion generation. 2.3. Footskate Cleanup In order to generate realistic motions in computer animation, various methods have been developed to improve footskate issue. Edge [29] embeds the foot contact term into the action representation for training and applies Contact Consistency Loss as a constraint to keep the physical plausibility of motion. RFC [34], Drop [10] and Physdiff [35] uses reinforcement learning to constrain the physical states of actions, such as ground force reaction and collision situations to get a realism motion. UnderPressure [19] and GroundLink [7] respectively collect foot force datasets during motion. UnderPressure [19] also utilizes this dataset to train a network capable of predicting vertical ground reaction forces. Based on this, UnderPressure proposes a foot skating cleanup method. 3. Preliminaries The pipeline of Diffusion model [8] involve three interconnected processes: a forward process that gradually diffuses noise into sample, a reverse process that optimizes a network to eliminate the above perturbation, and an inference process that utilizes the trained network to iteratively denoise noisy sample. 
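As a concrete illustration of this three-stage pipeline, the following is a minimal PyTorch sketch of the forward noising step and the sample (x0)-prediction training loss whose closed forms are given in Eq. (1)-(2) and Algorithm 1 below; the linear beta schedule matches the 0.0001-0.02 range reported in the implementation details, while the stand-in denoiser and tensor shapes are illustrative assumptions (conditioning on text and timestep is omitted for brevity).

```python
import torch

# Forward process: diffuse a clean motion x0 to x_t in closed form,
# then train a denoiser to recover x0 from x_t (x0-prediction objective).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # linear variance schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)        # \bar{alpha}_t

def q_sample(x0, t, eps):
    """x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    ab = alphas_bar[t].view(-1, 1, 1)                 # broadcast over (frames, feats)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps

feat_dim = 64                                          # placeholder feature size
denoiser = torch.nn.Sequential(                        # stand-in for G(x_t, t, c)
    torch.nn.Linear(feat_dim, 256), torch.nn.SiLU(), torch.nn.Linear(256, feat_dim))

x0 = torch.randn(8, 196, feat_dim)                     # batch of motion sequences
t = torch.randint(0, T, (8,))                          # random diffusion timesteps
eps = torch.randn_like(x0)
x_t = q_sample(x0, t, eps)
loss = torch.mean((denoiser(x_t) - x0) ** 2)           # Eq. (2): ||G(x_t,t,c) - x0||^2
loss.backward()
```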
Specifically, a motion denoising network is first trained to predict the original motion x0 from the noisy motion xt: randomly select a ground-truth motion x0 and a diffusion timestep t \u223cU[0, T], T being the maximum timestep. And then the noisy motion xt after t-step diffusion is gained by Equation 1, xt = \u221a\u00af \u03b1tx0 + \u221a 1 \u2212\u00af \u03b1t\u03f5 (1) where \u03f5 is a Gaussian noise. \u221a\u00af \u03b1t and \u221a1 \u2212\u00af \u03b1t are the strengths of signal and noise, respectively. When \u221a\u00af \u03b1t is small enough, we can approximate xt \u223cN(0, I). Next, given a motion-denoising model G\u03b8(xt, t, c) for predicting the original sample, parameterized by \u03b8, the optimization can be formulated as follows: min \u03b8 Et\u223cU[0,T ],x0\u223cpdata||G\u03b8(xt, t, c) \u2212x0||2 2 (2) In the inference process, a trained motion-denoising network can progressively generate samples from noise with various samplers. For instance, DDPM [8] iteratively denoise the noisy data from t to a previous timestep t\u2032, as shown in Algorithm 1. Algorithm 1 Inference Given a text prompt c xt \u223cN(0, I) for t = T to 1 do e x0 = G(xt, t, c) \u03f5 \u223cN(0, I) if t > 1, else \u03f5 = 0 xt\u22121 = \u221a\u00af \u03b1t\u22121\u03b2t 1\u2212\u00af \u03b1t e x0 + \u221a\u03b1t(1\u2212\u00af \u03b1t\u22121) 1\u2212\u00af \u03b1t xt + \u02dc \u03b2t\u03f5 end for return x0 4. Method Through comprehensive exploratory experiments conducted on diffusion-based motion generation, we propose a novel diffusion framework, named StableMoFusion, as illustrated in Figure 2, to facilitate robust and efficient motion generation. This section begins with our investigation on the architecture of motion-denoising networks. Next, we discuss several training strategies pivotal for enhancing model performance in Section 4.2. Subsequently, we introduce our improvements in the inference process in Section 4.3, tailored to enable efficient inference. Lastly, we discuss and present a solution to the footskate issue in Section 4.4. 4.1. Model Architecture Most existing work use Transformer [30]-based architectures as the motion-denoising network; however, it remains questionable whether these architectures are best for diffusion-based motion generation. In this subsection, we will present three new network architectures fine-tuned for the motion generation task: Conv1D UNet [4, 13], Diffusion Transformer (DiT) [20] and the latest Retentive Network (RetNet) [27]. 4.1.1 Conv1D UNet Baseline We chose the Conv1D UNet with AdaGN [5] and skip connections in GMD [13] as the Conv1D UNet baseline and modify the structure to a canonical Unet structure, which consist of four downsampling stages. The motion length n is successively reduced from Nto \u230aN/8\u230b, and then the corresponding up-sampling phase is used to upsample. There are two residual Conv1D blocks for each down-sampling or up-sampling stage, with a single block shown as Figure 3 (a). Block Adjustment We introduce Residual Linear MultiHead Cross-Attention after each block to effectively integrate textual cues, and dropout is incorporated into the original Conv1D block to enhance model generalization, as shown in Figure 3 (b). In the baseline block, text prompts 3 ... ... the person is dancing the waltz. N Random Mask Forward Process Reverse Process Inference Process the person is dancing the waltz. the person is is waving Text Encoder Timestep Encoder timestep t Text prompts ... 
Figure 2. Overview of StableMoFusion, which is composed of a diffusion forward process, a reverse process on CondUNet1D motion-denoising network, and an efficient inference. The colors of the arrows indicate different stages: blue for training, red for inference, and black for both. Figure 3. Visualization of the block structure and their adjustments of Conv1D UNet, DiT and RetNet. Pink blocks indicate structures that have been added or modified. are encoded with timesteps and integrated into motion coding using a simple formula: x · (1 + scale) + shift. However, this approach doesn't effectively incorporate textual cues into motion sequences because it applies uniform operations across the entire sequence. In diffusion pipelines, noise uniformly affects the entire sequence, resulting in consistent mappings between motion frames and timesteps. However, since each frame's motion corresponds to distinct textual cues, a straightforward "scale and shift" approach is insufficient for injecting textual information. Our solution employs an attention mechanism to dynamically focus each motion frame on its associated textual information. Residual connections help mitigate potential computation biases introduced by cross attention.
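The following is a minimal PyTorch sketch of the adjusted Conv1D block of Figure 3(b): Conv1D + GroupNorm + Mish, a timestep-conditioned scale-and-shift (AdaGN), dropout, and a residual multi-head cross-attention that lets each motion frame attend to the text tokens. Layer sizes, the exact ordering of operations, and the use of standard nn.MultiheadAttention in place of the paper's linear cross-attention are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class CondConv1DBlockSketch(nn.Module):
    """Illustrative block: conv + GroupNorm + Mish, AdaGN scale/shift from the
    timestep embedding, dropout, and residual cross-attention over text tokens."""

    def __init__(self, channels=512, text_dim=256, time_dim=512, groups=8, p_drop=0.1):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.norm1 = nn.GroupNorm(groups, channels)
        self.norm2 = nn.GroupNorm(groups, channels)
        self.act = nn.Mish()
        self.drop = nn.Dropout(p_drop)
        self.to_scale_shift = nn.Linear(time_dim, 2 * channels)
        self.text_proj = nn.Linear(text_dim, channels)
        self.xattn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)

    def forward(self, x, t_emb, text_tokens):
        # x: (B, C, L) motion, t_emb: (B, time_dim), text_tokens: (B, N, text_dim)
        h = self.act(self.norm1(self.conv1(x)))
        scale, shift = self.to_scale_shift(t_emb).chunk(2, dim=-1)
        h = h * (1 + scale.unsqueeze(-1)) + shift.unsqueeze(-1)   # timestep AdaGN
        h = self.drop(self.act(self.norm2(self.conv2(h))))
        h = h + x                                                 # conv residual
        q = h.transpose(1, 2)                                     # (B, L, C) queries
        kv = self.text_proj(text_tokens)                          # text keys/values
        attn, _ = self.xattn(q, kv, kv)
        return (q + attn).transpose(1, 2)                         # residual cross-attn

block = CondConv1DBlockSketch()
out = block(torch.randn(2, 512, 196), torch.randn(2, 512), torch.randn(2, 20, 256))
print(out.shape)  # torch.Size([2, 512, 196])
```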
4 GroupNorm Tweak We rearranged the data before and after applying Group Normalization, as depicted in Figure 3 (b), to minimize the impact of padded data during network forward propagation. When testing the adapted Conv1D UNet on datasets like KIT-ML with varying sequence lengths, we noticed a significant performance drop. This suggests that the model struggles with datasets containing extensive padding. Further investigation revealed that implementing Group Normalization within the baseline block caused this issue. Since Conv1D operates along the temporal dimension, directly applying Group Normalization to the input disrupted the differentiation between padded and non-padded data, affecting loss computation and gradient descent. 4.1.2 Diffusion Transformer Baseline To explore the effectiveness of the DiT structure for motion generation, we replace the Vision Transformer used for images in the DiT with self-attention used for motion data as the baseline, with the basic block structure shown in Figure 3 (c). For text-to-motion generation, we embed text prompts via the CLIP [22] encoder and project them into token concatenated with motion embeddings for self-attention. It scales and shifts the motion embedding before and after each autoregressive computation using timestep, which ensures the motion denoising trajectory closely aligned with the timestep. Block Adjustment We have also tried to incorporate Linear Multi-Head Cross-Attention into the DiT framework, as shown in Figure 3 (d). This adjustment allows for a more nuanced fusion of textual cues with motion dynamics than fusing all the text information into the one-dimensional text embedding in baseline, which enhances the coherence and relevance of generated motion sequences. 4.1.3 Retentive Network Baseline Our RetNet baseline follows a straightforward implementation similar to MDM, where the timesteps encoding is concatenated with the textual projection to form tokens, which are then fed along with motion embeddings into RetNet, with its basic block shown in Figure 3 (e). RetNet incorporates a gated multi-scale retention mechanism, which enhances information retention and processing capabilities, thereby enabling nuanced comprehension and generation of motion sequences. Through our investigation, we aim to ascertain the feasibility of leveraging RetNet for motion generation tasks. Block Adjustment To further integrate textual information, we also employ Linear Multi-Head Cross-Attention between retention and FFN, as shown in Figure 3 (f). By segregating temporal and textual features, our approach aims to preserve the distinct characteristics of each modality and allow the model to independently learn and leverage relevant cues for motion generation. This separation enhances the model\u2019s interpretability and flexibility, enabling it to better capture the intricacies of both temporal dynamics and semantic context. 4.1.4 Final Model Architecture Ultimately, we choose the Conv1D UNet with block adjustment and GroupNorm tweak as the motion-denoising model of StableMoFusion, as shown in Figure 2. We call this network as CondUNet1D. Both DiT and RetNet use the idea of attention to activate the global receptive field in the temporal dimension, which benefits the modeling of longrange dependency. The receptive field of Conv1D UNet is mainly in the convolution kernel window, promoting a coherent and smooth transition between frames. We tend to prioritize smoother generation in current applications of motion generation. 
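One plausible way to realize the GroupNorm tweak described for the Conv1D block in Section 4.1.1 is sketched below: the motion tensor is rearranged so that normalization statistics are computed per frame over channels only, preventing padded frames from contaminating the statistics of real frames. This is a hedged reading of the described rearrangement; the exact layout used in the released code may differ.

```python
import torch
import torch.nn as nn

class FramewiseGroupNorm(nn.Module):
    """Rearrange (B, C, L) so GroupNorm statistics are computed independently
    for every frame; zero-padded frames then cannot skew the statistics of
    the valid frames that precede them."""

    def __init__(self, groups: int, channels: int):
        super().__init__()
        self.gn = nn.GroupNorm(groups, channels)

    def forward(self, x):                                # x: (B, C, L)
        B, C, L = x.shape
        h = x.permute(0, 2, 1).reshape(B * L, C, 1)      # one "sample" per frame
        h = self.gn(h)
        return h.reshape(B, L, C).permute(0, 2, 1)       # back to (B, C, L)

x = torch.randn(2, 512, 196)
x[:, :, 150:] = 0.0                                      # simulate padded frames
print(FramewiseGroupNorm(8, 512)(x).shape)               # torch.Size([2, 512, 196])
```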
In our StableMoFusion, we set the base channel and channel multipliers of UNet to 512 and [2,2,2,2] respectively. For text encoder, we leverage pre-trained CLIP [22] token embeddings, augmenting them with four additional transformer encoder layers, the same as MotionDiffuse [37], with a latent text dimension of 256. For timesteps encoder, it is implemented using position encoding and two linear layers, the same as StableDiffusion [23], with a latent time dimension of 512. 4.2. Training Strategies Recent research has shown that key factors in the training strategies of the diffusion model affect the learning pattern and its generative performance [2]. In this subsection, we will analyze the impact of two empirically valid training strategies on diffusion-based motion generation: exponential moving average and classifier-free guidance. 4.2.1 Exponential Moving Average Exponential Moving Average (EMA) calculates a weighted average of a series of model weights, giving more weight to recent data. Specifically, assume the weight of the model at time t as \u03b8t, then the EMA formulated as: vt = \u03b2 \u00b7 vt\u22121 + (1 \u2212\u03b2) \u00b7 \u03b8t, where vt denotes the average of the network parameters for the first t iterations (v0 = 0), and \u03b2 is the weighted weight value. During the training of the motion-denoising network, the network parameters change with each iteration, and the motion modeling oscillates between text-motion consistency and motion quality. Therefore, the use of EMA can smooth 5 out the change process of these parameters, reduce mutations and oscillations, and help to improve the stability ability of the motion-denoising model. 4.2.2 Classifier-Free Guidance To further improve the generation quality, we use Classifier-Free Guidance (CFG) to train the motiondenoising generative model. By training the model to learn both conditioned and unconditioned distributions (e.g., setting c = \u2205for 10% of the samples), CFG ensures that the models can effectively capture the underlying data distribution across various conditions. In inference, we can tradeoff text-motion consistency and fidelity using s: Gs (xt, t, c) = G (xt, t, \u2205) + s \u00b7 (G (xt, t, c) \u2212G (xt, t, \u2205)) (3) This ability to balance text-motion consistency and fidelity is crucial for producing varied yet realistic outputs, enhancing the overall quality of generated motion. 4.3. Efficient Inference Time-consuming inference time remains a major challenge for diffusion-based approaches. To address this problem, we improve inference speed by integrating four effecient and training-free tricks in the inference process: 1) efficient sampler, 2) embedded-text cache, 3) parallel CFG computation, and 4) low-precision inference. 4.3.1 Efficient Sampler We integrate the SDE variant of second-order DPMSolver++ sampler (SDE DPM-Solver++ 2M) into diffusionbased motion generation to reduce denoising iterations. DPM-Solver is a high-order solver for diffusion stochastic differential equations (SDEs), which implies additional noise will be introduced during the iterative sampling. Thereby, stochasticity of its sampling trajectories helps to reduce the cumulative error [33], which is crucial for the realism of generated motion. In addition, we adopt the Karras Sigma [12] to set discrete timesteps. This method leverages the theory of constant-velocity thermal diffusion to determine optimal timesteps, thereby maximizing the efficiency of motion denoising within a given number of iterations. 
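A minimal sketch of the classifier-free guidance combination of Eq. (3) at inference time, with the conditional and unconditional branches batched into a single forward pass in the spirit of the parallel CFG computation described in Section 4.3.3; the denoiser stub and the null-condition embedding are illustrative assumptions.

```python
import torch

def cfg_denoise(denoiser, x_t, t, cond_emb, null_emb, scale=2.5):
    """Guided x0 estimate: G_s = G(x_t,t,null) + s * (G(x_t,t,c) - G(x_t,t,null)).
    Both branches are evaluated in one batched call (parallel CFG)."""
    x_in = torch.cat([x_t, x_t], dim=0)                  # duplicate the batch
    t_in = torch.cat([t, t], dim=0)
    c_in = torch.cat([cond_emb, null_emb], dim=0)        # conditional / unconditional
    out_cond, out_null = denoiser(x_in, t_in, c_in).chunk(2, dim=0)
    return out_null + scale * (out_cond - out_null)

# toy stand-in denoiser ignoring t and c, just to show the call pattern
denoiser = lambda x, t, c: x * 0.5
x_t = torch.randn(4, 196, 64)
t = torch.full((4,), 10)
cond = torch.randn(4, 256)
null = torch.zeros(4, 256)
print(cfg_denoise(denoiser, x_t, t, cond, null).shape)   # torch.Size([4, 196, 64])
```

The default guidance scale of 2.5 mirrors the value reported in the implementation details; larger scales trade diversity for tighter text-motion consistency.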
4.3.2 Embedded-text Cache We integrate the Embedded-text Cache mechanism into the inference process to avoid redundant calculations. In diffusion-based motion generation, the text prompt remain unchanged across iterations, resulting in same embedded text in each computation of the denoising network. Specifically, we compute the text embedding initially and subsequently utilize the embedded text directly in each network Figure 4. Red: the foot joints as 0th frame. Green: the corresponding keypoints. At 5th frame, the offset of red and green points indicate the footskate phenomenon. forward, thereby reducing computational redundancy and speeding up inference. 4.3.3 Parallel CFG Computation We implement the inference process of CFG in parallel to speed up the single iteration calculation while maintaining model generation performance. Due to the CFG mechanism Equation 3, in each iterative step during inference, it is necessary to execute a conditional and an unconditional denoising, respectively, using the trained motion network, and then sum up the results. 4.3.4 Low-precision Inference We utilize half-precision floating point (FP16) computation during inference to accelerate processing. Newer hardware supports enhanced arithmetic logic units for lowerprecision data types. By applying parameter quantization, we convert FP32 computations to lower-precision formats, effectively reducing computational demands, parameter size, and memory usage of the model. 4.4. Footskate Reduction Figure 4 shows an example for the foot skating phenomenon. The motion frame rate is 20. The two frames in the figure have a time difference of 0.25s. Based on our life experience, it is difficult to complete a motion and return to the original pose within 0.25s. Although the foot postures in the two frames remain unchanged, there are changes in the positions of the joints, as observed from the variations in joint position values and their distances relative to red points. For this motion, what we expect is that the feet are anchored at the same point. Typically, choosing the foot position of middle frames during foot skating as the fixed point minimizes the impact on the adjacent frames. The key to eliminating foot skating is to first identify the foot joints and frame ranges where foot skating occurs, and then anchor those keypoints at their positions p in the intermediate frames. We formulate this constraint as a loss 6 term shown in Equation 4 where j indicates joint and f is frame ranges. Lfoot = Jskating X j Fskating X f (Pj \u2212p) (4) Jskating contains all the joints where foot skating may occur, specifically including right ankle, right toes, left ankle and left toes. Fskating is a collection of all frame ranges of the joint j. Pj means the positions of joint j. We incorporate Equation 4 to a gradient descent algorithm to correct foot skating motion. Following UnderPressure [19], we use vertical ground reaction forces (vGRFs) to identity foot joint j and its skating frames f. The vGRFs predition model of UnderPressure V23 requires motion of a predefined 23 joints skeleton S23, which is different from our motion data. In our work, we utilize HumanML3D[6] with 22 skeletal joints S22 and KIT-ML [21] motion with 21 skeletal joints. The subsequent foot skating cleanup primarily focused on HumanML3D. We transferred the pre-trained weights of V23 to our own model V \u03b8 22 using the constraints Equation 5, enabling us to directly predict the vertical ground reaction forces for HumanML3D motions. 
P is keypoints of HumanML3D motion. PS23 is the result of retargeting P to skeleton S23. min \u03b8 \u2225V \u03b8 22(P) \u2212V23(PS23)\u22252 2 (5) L = \u03c9qLpose + \u03c9fLfoot + \u03c9tLtrajectory + \u03c9vLvGRFs (6) Lfoot = Lfoot(P, \u02c6 P, V23, PS23) (7) LvGRFs = LvGRFs(P, \u02c6 P, V \u03b8 22) (8) Drawing inspiration from UnderPressure [19], we use foot contact loss Lfoot to fix contact joints, pose loss Lpose and trajectory loss Ltrajectory to to keep the semantic integrity of motion, vGRFs loss LvGRF s to keep valid foot pose. Our supplementary material provides detailed definitions of these loss terms. The final definition of our loss function is as Equation 6 [19] where \u03c9q, \u03c9f, \u03c9t, \u03c9v are weights of its loss item. P is keypoints of footskating motion and \u02c6 P is the result keypoints after footskate cleanup. Through our method, the footskate cleanup process can be generalized to various skeletal motions. In a few cases, motion corrected by Equation 6 may occurs unreasonable or unrealistic poses. The diffusion model trained on a large amount of motion data learns the prior knowledge of real motions and has the ability to correct the invalid motions. Therefore, we use our pretrained diffusion model to correct such cases. Motivated by OmniControl [32] and Physdiff [35], we incorporates footskate cleaning method into the diffusion denoising process, denote as StableMoFusion\u2217. 5. Experiments 5.1. Dataset and Evaluation Metrics We use HumanML3D [6] and KIT-ML [21] dataset for our experiments. HumanML3D Dataset contains 14,646 motions and 44,970 motion annotations. KIT Motion Language Dataset contains 3,911 motions and 6,363 natural language annotations. The evaluation metrics can be summarized into four key aspects: 1) Motion Realism: Frechet Inception Distance (FID), which evaluates the similarity between generated and ground truth motion sequences using feature vectors extracted by a pre-trained motion encoder [6]. 2) Text match: R Precision calculates the average top-k accuracy of matching generated motions with textual descriptions using a pretrained contrastive model [6]. 3) Generation diversity: Diversity measures the average joint differences across generated sequences from all test texts. Multi-Modality quantifies the diversity within motions generated for the same text. 4) Time costs: Average Inference Time per Sentence (AITS) [3] measures the inference efficiency of diffusion models in seconds, considering generation batch size as 1, without accounting for model or data loading time. In all of our experiments, FID and R Precision are the principal metrics we used to conduct our analysis and draw conclusions. 5.2. Implements Details For training, we use DDPM [8] with T = 1, 000 denoising steps and variances \u03b2t linearly from 0.0001 to 0.02 in the forward process. And we use AdamW with an initial learning rate of 0.0002 and a 0.01 weight decay to train the sample-prediction model for 50,000 iterations at batch size 64 on an RTX A100. Meanwhile, learning rate reduced by 0.9 per 5,000 steps. On gradient descent, clip the gradient norm to 1. For CFG, setting c = \u2205for 10% of the samples. For inference, we use the SDE variant of second-order DPM-Solver++ [18] with Karras Sigmas [12] in inference for sampling 10 steps. The scale for CFG is set to 2.5. 5.3. 
Quantitative Results We compare our StableMoFusion with several state-of-theart models, including T2M [6], MDM [28], MLD [3], MotionDiffuse [37], T2M-GPT [36], MotionGPT [9], ReMoDiffuse [38], M2DM [14] and fg-T2M [31]. on the HumanML3D [6] and KIT-ML [21] datasets in Table 2 and Table 3, respectively. Most results are borrowed from their own paper and we run the evaluation 20 times and \u00b1 indicates the 95% confidence interval. Our method achieves the state-of-the-art results in FID and R Precision (top k) on the HumanML3D dataset, and also achieves good results on the KIT-ML dataset: the best R Precision (top k) and the second best FID. This 7 Table 2. Quantitative results on the HumanML3D test set. The right arrow \u2192means the closer to real motion the better. Red and Blue indicate the best and the second best result. Method FID \u2193 R Precision\u2191 Diversity \u2192 Multi-modality \u2191 top1 top2 top3 Real 0.002\u00b1.000 0.511\u00b1.003 0.703\u00b1.003 0.797\u00b1.002 9.503\u00b1.065 T2M [6] 1.067\u00b1.002 0.457\u00b1.002 0.639\u00b1.003 0.743\u00b1.003 9.188\u00b1.002 2.090\u00b1.083 MDM [28] 0.544\u00b1.044 0.320\u00b1.005 0.498\u00b1.004 0.611\u00b1.007 9.599\u00b1.086 2.799\u00b1.072 MLD [3] 0.473\u00b1.013 0.481\u00b1.003 0.673\u00b1.003 0.772\u00b1.002 9.724\u00b1.082 2.413\u00b1.079 MotionDiffuse [37] 0.630\u00b1.001 0.491\u00b1.001 0.681\u00b1.001 0.782\u00b1.001 9.410\u00b1.049 1.553\u00b1.042 GMD [13] 0.212 0.670 9.440 T2M-GPT [36] 0.116\u00b1.004 0.491\u00b1.003 0.680\u00b1.003 0.775\u00b1.002 9.761\u00b1.081 1.856\u00b1.011 MotionGPT [9] 0.232\u00b1.008 0.492\u00b1.003 0.681\u00b1.003 0.778\u00b1.002 9.528\u00b1.071 2.008\u00b1.084 ReMoDiffuse [38] 0.103\u00b1.004 0.510\u00b1.005 0.698\u00b1.006 0.795\u00b1.004 9.018\u00b1.075 1.795\u00b1.043 M2DM [14] 0.352\u00b1.005 0.497\u00b1.003 0.682\u00b1.002 0.763\u00b1.003 9.926\u00b1.073 3.587\u00b1.072 Fg-T2M [31] 0.243\u00b1.019 0.492\u00b1.002 0.683\u00b1.003 0.783\u00b1.002 9.278\u00b1.072 1.614\u00b1.049 StableMoFusion (Ours) 0.098\u00b1.003 0.553\u00b1.003 0.748\u00b1.002 0.841\u00b1.002 9.748\u00b1.092 1.774\u00b1.051 Table 3. Quantitative results on the KIT-ML test set. The right arrow \u2192means the closer to real motion the better. Red and Blue indicate the best and the second best result. 
Method FID \u2193 R Precision\u2191 Diversity \u2192 Multi-modality \u2191 top1 top2 top3 Real Motion 0.031\u00b1.004 0.424\u00b1.005 0.649\u00b1.006 0.779\u00b1.006 11.08\u00b1.097 T2M [6] 2.770\u00b1.109 0.370\u00b1.005 0.569\u00b1.007 0.693\u00b1.007 10.91\u00b1.119 1.482\u00b1.065 MDM [28] 0.497\u00b1.021 0.164\u00b1.004 0.291\u00b1.004 0.396\u00b1.004 10.847\u00b1.109 1.907\u00b1.214 MLD [3] 0.404\u00b1.027 0.390\u00b1.008 0.609\u00b1.008 3.204\u00b1.027 10.80\u00b1.117 2.192\u00b1.071 MotionDiffuse [37] 1.954\u00b1.062 0.417\u00b1.004 0.621\u00b1.004 0.739\u00b1.004 11.10\u00b1.143 0.730\u00b1.013 T2M-GPT [36] 0.514\u00b1.029 0.416\u00b1.006 0.627\u00b1.006 0.745\u00b1.006 10.921\u00b1.108 1.570\u00b1.039 MotionGPT [9] 0.510\u00b1.016 0.366\u00b1.005 0.558\u00b1.004 0.680\u00b1.005 10.35\u00b1.084 2.328 \u00b1.117 ReMoDiffuse [38] 0.155\u00b1.006 0.427\u00b1.014 0.641\u00b1.004 0.765\u00b1.055 10.80\u00b1.105 1.239\u00b1.028 M2DM [14] 0.515\u00b1.029 0.416\u00b1.004 0.628\u00b1.004 0.743\u00b1.004 11.417\u00b1.97 3.325\u00b1.37 Fg-T2M [31] 0.571\u00b1.047 0.418\u00b1.005 0.626\u00b1.004 0.745\u00b1.004 10.93\u00b1.083 1.019\u00b1.029 StableMoFusion (Ours) 0.258\u00b1.029 0.445\u00b1.006 0.660\u00b1.005 0.782\u00b1.004 10.936\u00b1.077 1.362\u00b1.062 demonstrates the ability of StableMoFusion to generate high-quality motions that align with the text prompts. On the other hand, while some methods excel in diversity and multi-modality, it\u2019s crucial to anchor these aspects with accuracy (R-precision) and precision (FID) to strengthen their persuasiveness. Otherwise, diversity or multimodality becomes meaningless if the generated motion is bad. Therefore, our StableMoFusion achieves advanced experimental results on two datasets and shows robustness in terms of model performance. Figure 5. Visualization comparison results before and after our footskate cleanup. The red bounding box shows details of skating feet. 8 5.4. Qualitative Result Figure 5 shows the visual results of our footskate cleanup method, StableMoFusion\u2217. The red bounding box of footskate motion clearly has multiple foot outlines, whereas ours shows only one. The comparison graph shows the effectiveness of our method for cleaning footskate. Directly applying the footskate cleanup method of UnderPressure [19] to our motion would result in motion distortion, while our method effectively avoids such deformation. In our supplementary material, we will further present a comparison between our method and the UnderPressure method by videos to illustrate it. 5.5. Inference Time We calculate AITS of StableMoFusion and ReMoDiffuse [38] with the test set of HumanML3D[6] on Tesla V100 as MLD [3] does, the other results of Figure 1 are borrowed from [3]. For instance, MDM [28] with CFG requires 24.74s for average inference; MotionDiffuse [37] without CFG uses condition encoding cache and still requires 14.74s of average inference. While the MLD [3] reduces the average inference time to 0.217s by applying DDIM50 in latent space, we find this approach lacks the ability to edit and control motion by manipulating the model input. To tackle this, we employ 1) efficient sampler, 2) embedded-text cache, 3) parallel CFG computation, and 4) low-precision inference to reduce iteration counts and network latency. As shown in Figure 1, our StableMoFusion significantly shortens the inference time and achieves higher performance within the original motion space. 
However, it remains incontrovertible that StableMoFusion\u2019s inference speed trails behind that of MLD, and fails to meet the industry\u2019s real-time standard with an average inference time of 0.5s. Thus, our future work will focus on acceleration: the inference time of StableMoFusion is currently tied to the computation of the network, and we will further investigate how to scale down the model and how to reduce single-step latency in inference. 5.6. Ablation 5.6.1 Network Architecture We evaluate and compare all the architectures mentioned in Section 4.1 with the same training and inference pipeline. For a fair comparison, all methods use the real motion length from the ground truth to clip generated motion and seed(0) for one evaluation. As Table 4 show, each network enhancement in cross-attention has demonstrated performance enhancements, elucidating its pivotal role in augmenting model efficacy and effectiveness. Among them, Conv1D UNet achieves the best generation performance. And fine-tuning Conv1D UNet\u2019s GroupNorm can effectively improve its performance on the KIT-ML dataset, reducing the FID by about 64%. It also proves that the GoupNorm tweak on UNet is mainly useful for the dataset with dispersed length distributions, such as KIT-ML dataset. Table 4. Comparison of various architectures and adjustments. Dataset Network FID \u2193 R Precision (top3) \u2191 HumanML3D Conv1D UNet basline 0.245 0.780 + cross-attention 0.074 0.821 + GroupNorm Tweak 0.089 0.840 DiT baseline 0.884 0.711 + cross-attention 0.113 0.787 RetNet baseline 1.673 0.740 + cross-attention 0.147 0.853 KIT-ML Conv1D UNet+ cross-attention 0.658 0.756 + GroupNorm Tweak 0.237 0.780 5.6.2 Effective Inference By using the SDE variant of second-order DPM-Solver++ with Karras sigma, the inference process of diffusion-based motion generation is able to significantly reduce the minimum number of iterations required for generation from 1000 to 10 while enhancing the quality of generated motions, approximately 99% faster than the original inference process, as shown in Table 5. The application of embedded text caching and parallel CFG further reduces the average inference time by about 0.3s and 0.15s, respectively. Our experiments also show that reducing the computational accuracy of the motiondenoising model by half, from FP32 to FP16, does not adversely affect the generation quality. This suggests that 32bit precision is redundant for motion generation task. Table 5. The progressive effect of each efficient and training-free trick of StableMoFusion in inference process. Method FID\u2193 R Precision (top3)\u2191 AITS\u2193 Inference Steps\u2193 base (DDPM1000) 1.251 0.760 99.060 1000 + Efficient Sampler 0.076 0.836 1.004(-99%) 10 + Embedded-text Cache 0.076 0.836 0.690(-31%) 10 + Parallel CFG 0.076 0.836 0.544(-21%) 10 + FP16 0.076 0.837 0.499(-8%) 10 6. Conclusion In this paper, we propose a robust and efficient diffusionbased motion generation framework, StableMoFusion, which uses Conv1DUNet as a motion-denoising network and employs two effective training strategies to enhance the network\u2019s effectiveness, as well as four training-free tricks to achieve efficient inference. Extensive experimental results show that our StableMoFusion performs favorably against current state-of-the-art methods. Furthermore, we 9 propose effective solutions for time-consuming inference and footskate problems, facilitating diffusion-based motion generation methods for practical applications in industry."
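As a concrete illustration of the parallel CFG and half-precision tricks ablated in Table 5, the snippet below evaluates the conditional and unconditional branches of Equation 3 in one batched forward pass under FP16 autocast. The `denoiser(x, t, cond)` signature is a placeholder, and `text_emb` / `null_emb` stand for the cached text embeddings computed once before the sampling loop.

```python
import torch

@torch.no_grad()
def cfg_denoise(denoiser, x_t, t, text_emb, null_emb, scale: float = 2.5):
    """One guidance step of Eq. 3 with both branches evaluated in a single batch."""
    x_in = torch.cat([x_t, x_t], dim=0)
    t_in = torch.cat([t, t], dim=0)
    c_in = torch.cat([text_emb, null_emb], dim=0)      # conditional + "empty" text
    with torch.autocast("cuda", dtype=torch.float16):  # low-precision inference
        cond, uncond = denoiser(x_in, t_in, c_in).float().chunk(2, dim=0)
    return uncond + scale * (cond - uncond)
```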
+ }
intro_8K/test_introduction_short_2405.05707v1.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "url": "http://arxiv.org/abs/2405.05707v1",
+ "title": "LatentColorization: Latent Diffusion-Based Speaker Video Colorization",
+ "abstract": "While current research predominantly focuses on image-based colorization, the\ndomain of video-based colorization remains relatively unexplored. Most existing\nvideo colorization techniques operate on a frame-by-frame basis, often\noverlooking the critical aspect of temporal coherence between successive\nframes. This approach can result in inconsistencies across frames, leading to\nundesirable effects like flickering or abrupt color transitions between frames.\nTo address these challenges, we harness the generative capabilities of a\nfine-tuned latent diffusion model designed specifically for video colorization,\nintroducing a novel solution for achieving temporal consistency in video\ncolorization, as well as demonstrating strong improvements on established image\nquality metrics compared to other existing methods. Furthermore, we perform a\nsubjective study, where users preferred our approach to the existing state of\nthe art. Our dataset encompasses a combination of conventional datasets and\nvideos from television/movies. In short, by leveraging the power of a\nfine-tuned latent diffusion-based colorization system with a temporal\nconsistency mechanism, we can improve the performance of automatic video\ncolorization by addressing the challenges of temporal inconsistency. A short\ndemonstration of our results can be seen in some example videos available at\nhttps://youtu.be/vDbzsZdFuxM.",
+ "authors": "Rory Ward, Dan Bigioi, Shubhajit Basak, John G. Breslin, Peter Corcoran",
+ "published": "2024-05-09",
+ "updated": "2024-05-09",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "With the rapid increase in the popularity of streaming video in recent years, today\u2019s media consumers have become ac- customed to high-definition and vibrant video experiences, in color and on demand. However, there are also many substantial video archives with content that remains available in black and white only. Unlocking the potential of these archives, and infusing them with color, presents an exciting opportunity to engage with modern audiences, and breathe new life into classic movies and television episodes. By seamlessly blending cutting-edge technology with classic content, we not only enhance the visual appeal for contem- porary viewers but also ensure that the historical significance of these timeless works are faithfully maintained. A. TRADITIONAL COLORIZATION Colorizing black-and-white multimedia is a formidable chal- lenge characterized by its inherent complexity. It presents a \u2018one-to-many\u2019 scenario, wherein multiple feasible coloriza- tion outcomes can be derived for a single black-and-white video, as illustrated by recent research [1]. Traditional approaches for video colorization are manual and labor-intensive, demanding the dedicated efforts of in- terdisciplinary teams comprised of skilled colorists and roto- scoping animators, artists and historians. These teams invest extensive hours to ensure the production of a convincing and coherent end result. The intricacies of colorization are particularly difficult in the realm of videos, where the sheer volume of frames per second amplifies the complexity [2]. 1 arXiv:2405.05707v1 [cs.CV] 9 May 2024 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 1a. \"Sherlock Holmes and the Woman in Green\" (1945) black-and-white frames. FIGURE 1b. \"Sherlock Holmes and the Woman in Green\" (1945) LatentColorization output frames. Therefore, automation of the video colorization process is highly desirable. B. AUTOMATIC COLORIZATION Automatic video colorization can be seen as a means to significantly reduce the cost traditionally associated with manually colorizing/restoring vintage movies, an expensive proposition that is often limited to organizations with sub- stantial budgets. Since the labor costs associated with expert colorists are a significant barrier, manual colorization has also been largely limited to popular films or TV shows (e.g., Doctor Who), with numerous other works (social history movies, documentaries, films by lesser-known directors, etc.) omitted where the cost-benefit analysis could not justify their colorization. As a consequence, various research efforts have tackled the need to automate aspects of the colorization process. These efforts span from earlier methods such as histogram matching [3], to more recent interactive approaches such as scribble-based systems [4] and exemplar-based approaches [5], as well as more recent developments in terms of deep learning-based colorization [6]. While the results still lag behind those that can be achieved of an experienced human colorizer, the automated approaches referred to above have made significant advancements in terms of their accuracy over prior systems. In terms of the state-of-the-art, one current benchmark for automatic video colorization is held by Wan et al. [7]. However, it is important to note that their approach not only colorizes but also restores videos, making it a difficult bench- mark for systems that are focused solely on colorization. 
De- Oldify [6], provides colorized outputs without image restora- tion, and therefore can be more easily compared against colorization-only approaches such as the one presented in this paper. Recent research [8] has shown the advantages of self- supervised learning methodologies for colorization, remov- ing the resource-intensive need for creating and curating manually labelled datasets for training models. Constructing custom labelled datasets can be a resource-intensive and time-consuming endeavor, particularly when dealing with video content which has both static- and motion-related information. C. RESEARCH CONTRIBUTION Driven by the recent increase in the adoption of diffusion models [9]\u2013[11], the field of generative modelling has pro- duced a variety of contributions including Stable Diffusion [12], Imagen [13], and DALL\u2022E 2 [14] which have gained attention in both research and the mainstream media. Within the context of video colorization, the majority of techniques are based on GAN-based methods [15], [16], as well as the utilization of transformer-based approaches [17] such as those featured in [7], [18], [19]. Notably, Saharia et al. [20] propose leveraging diffusion models for various image-to-image tasks, including colorization. This paper introduces an innovative approach to video- based colorization, employing a latent-based denoising dif- fusion model. Our method demonstrates improvements over the state-of-the-art DeOldify [6] method, across a range of standard evaluation metrics including Power Signal to Noise Ratio (PSNR), Structural Similarity (SSIM), Fr\u00e9chet Incep- tion Distance (FID), Fr\u00e9chet Video Distance (FVD), and Naturalness Image Quality Evaluator (NIQE). Furthermore, we provide comparative results for Blind/Referenceless Im- age Spatial Quality Evaluator (BRISQUE). It is also worth noting that our method yields an average improvement of approximately 18% when FVD is employed as the evaluation metric. This result is also collaborated by our user study where LatentColorization is preferred 80% of the time to the 2 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization previous state-of-the-art. We introduce a novel system for achieving temporal con- sistency in video colorization through the application of a latent diffusion model. A sample visual, before and after, is given in Figures 1a and 1b. To summarise, the unique contributions of our proposed work are as follows: \u2022 We apply fine-tuned latent diffusion models to the au- tomatic video colorization task with exemplar frame conditioning. \u2022 We ensure temporal consistency in automatic video colorization through the use of our autoregressive con- ditioning mechanism. \u2022 We build a novel end-to-end video colorization applica- tion. \u2022 We achieve state-of-the-art performance over a range of datasets, metrics and evaluations. The structure of this paper is as follows: In \u00a72, we examine related work. \u00a73 provides an in-depth description of our methodology. Then, \u00a74 presents the results of our evalua- tions, which are further examined in \u00a75. Conclusions are given in \u00a76, and we outline our future research directions in \u00a77.",
+ "main_content": "ABSTRACT While current research predominantly focuses on image-based colorization, the domain of video-based colorization remains relatively unexplored. Most existing video colorization techniques operate on a frameby-frame basis, often overlooking the critical aspect of temporal coherence between successive frames. This approach can result in inconsistencies across frames, leading to undesirable effects like flickering or abrupt color transitions between frames. To address these challenges, we harness the generative capabilities of a fine-tuned latent diffusion model designed specifically for video colorization, introducing a novel solution for achieving temporal consistency in video colorization, as well as demonstrating strong improvements on established image quality metrics compared to other existing methods. Furthermore, we perform a subjective study, where users preferred our approach to the existing state of the art. Our dataset encompasses a combination of conventional datasets and videos from television/movies. In short, by leveraging the power of a fine-tuned latent diffusion-based colorization system with a temporal consistency mechanism, we can improve the performance of automatic video colorization by addressing the challenges of temporal inconsistency. A short demonstration of our results can be seen in some example videos available at https://youtu.be/vDbzsZdFuxM. INDEX TERMS Artificial intelligence, artificial neural networks, machine learning, computer vision, video colorization, latent diffusion, image colorization Generative adversarial networks, commonly referred to as GANs [15], have emerged as a common technology in the enhancement of existing video content, in domains including sign-language addition [21], low-light enhancement [22], and video colorization [5]. GAN-based methods have also been extensively used for image colorization [23]\u2013[30]. For example, Isola et al. proposed Pix2Pix [23], which has performed well on various benchmarks, including the FID-5K benchmark using the ImageNet Val dataset. In the context of video colorization, DeOldify [6] and more recently GCP [31] stand out as two of the more prominent GAN-based approaches. DeOldify [6] is a self-attention-based Generative Adversarial Network (GAN) [32]. It incorporates NoGAN training [33] and adheres to a Two Time Scale Update Rule [34]. While DeOldify is capable of generating credible colorizations, it has a tendency to produce somewhat subdued or less vibrant colors, characteristic of GAN-based systems. GCP [31] leverages color priors encapsulated in a pretrained Generative Adversarial Networks (GAN) for automatic colorization. Specifically, they \u201cretrieve\u201d matched features (similar to exemplars) via a GAN encoder and then incorporate these features into the colorization process with feature modulations. Other works, such as [35]\u2013[37], have also made contributions to the field of video colorization. It is important to note that GANs, due to their reliance on multiple loss functions, are challenging to train, susceptible to mode collapse, and often encounter convergence issues [38]\u2013[40]. Furthermore, only certain GAN-based automatic colorization systems consider temporal consistency, such as Zhao et al. [41]. This means that the systems that do not account for temporal consistency do not maintain coherence across successive frames, which is a crucial aspect of video colorization. 
Video Colorization with Hybrid Generative Adversarial Network (VCGAN) [41] is an end-to-end recurrent colourization network that prioritises temporal consistency in automatic video colorization. DeepRemaster, as introduced by Iizuka et al. in their work [42], is a Convolutional Neural Network (CNN)-based colorization system. As well as colorization, it also performs super-resolution, noise reduction, and contrast enhancement. Its performance makes it a suitable benchmark for comparison in our work. Transformers, known for their success in diverse machine learning domains, including Natural Language Processing (NLP) and Computer Vision (CV), have achieved stateof-the-art results in various low-resolution computer vision tasks, exemplified by their second-place ranking on the FID5K benchmark using the ImageNet Val dataset. However, the computational complexity of their self-attention mechanism scales significantly with higher image resolutions, presenting a challenge for handling high-resolution images [19], [43]. While ongoing research efforts aim to mitigate this challenge, it remains an open area of investigation. Unlike GANs, transformers exhibit greater resilience to mode collapse, thanks to their distinctive attention mechanism. Kumar et al. have introduced ColTran [18], a transformerbased image colorization model that operates through a threestep process. Initially, it colorizes a low-resolution version of the image, as it leverages self-attention, which is computationally demanding for high-resolution photos. Subsequently, it upscales the image and then the colors, yielding highresolution colorized images. ColTran excels in producing vibrant colorizations, yet it falls short of catering to the specific demands of video colorization, leading to inconsistencies in video colorizations. B. DIFFUSION MODELS Diffusion models, as initially introduced by Sohl-Dickstein et al. [9], operate by learning how to reconstruct data from noise. They encompass two distinctive stages: Forward Diffusion Process: In this phase, Gaussian noise is incrementally incorporated into the data through a stepwise progression spanning multiple timesteps. This gradual introduction of noise gradually transforms the original information until the desired level of diffusion or alteration is attained. Reverse Diffusion Process: Subsequently, a learning model is employed to reverse this diffusion process, effectively reconstructing the original data [44], as illustrated in Figure 2. Unlike Generative Adversarial Networks (GANs), diffusion models are resilient to mode collapse, and they have demonstrated success across various domains, including 3 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 2. Diagram of the Diffusion Process: This diagram illustrates the operation of the diffusion model in both the forward and backward processes. In the forward process, it visually portrays the incremental addition of Gaussian noise to the input image x0 until it becomes visually indistinguishable from Gaussian noise xT (top). Subsequently, it showcases the learned backward diffusion process, where the model gradually removes the Gaussian noise from xT to return to the original image x0 (bottom). video generation [45], [46], audio generation [47], [48], and image generation [12], [14], [45]. An illustration of the application of diffusion models to still-image colorization can be found in Palette [49], a diffusion model tailored for a variety of image-to-image tasks. 
Palette attains the top position on the leader-board in the FID5K benchmark using the ImageNet Val dataset. Concurrently, Liu et al. [50] are engaged in research focused on the challenge of achieving temporally consistent video colorization, employing pre-trained diffusion models. A distinction lies in their approach as they utilize text-based conditioning for their system. In contrast, our methodology relies on exemplar frames as the conditioning input. This strategic choice was made based on our belief that using an image for conditioning provides a higher degree of expressive control compared to text-based approaches. A challenge with diffusion models is their demanding computational requirements during both the training and testing phases. Nevertheless, ongoing research endeavors are actively addressing this issue [51]\u2013[53]. Several approaches have emerged to mitigate this challenge: Down-sampling and Super-resolution: Works such as Make-A-Video [54] tackle this issue by initially downsampling the resolution of images in the diffusion process and subsequently restoring the resolution using a superresolution algorithm. Latent Diffusion: Another approach, exemplified by Latent Diffusion [12], modifies the diffusion process to operate in the latent space of a trained autoencoder, as opposed to the pixel space. This results in reductions in both inference and training times due to the reduced dimensionality of the data inputted into the diffusion process. The work presented in this paper represents the first instance, where the video colorization task is tackled through the use of an image-to-image latent diffusion model employing exemplar frames. III. METHODOLOGY A. DESIGN CONSIDERATIONS For achieving temporally consistent video colorization, there are two popular methods: Implicit Temporal Consistency: In this approach, the notion of ensuring explicit temporal consistency is considered unnecessary. The belief is that with a sufficiently accurate system and reasonably similar input (e.g., consecutive frames in a video sequence), the colorized output should naturally exhibit similarity and relative consistency. As a result, temporal consistency is managed implicitly. Explicit Temporal Consistency: This project aligns with the second methodology, which emphasizes the explicit addressing of temporal consistency. Rather than relying on the system to learn it implicitly, this approach involves conditioning for temporal consistency explicitly. The advantages of this approach include reduced training time, decreased data requirements, and a lower computational load. However, it necessitates more intricate system engineering to explicitly convey the requirements to the system. Within the realm of implicit temporal consistency methodologies, several approaches are prevalent, with three of the most common being: Optical Flow-Based: Optical flow-based colorization methods operate by conditioning the system to maintain color consistency over time. However, it is worth noting that a limitation of this approach is the potentially high computational cost associated with calculating optical flow, making it less practical in certain applications [55]. Exemplar-Based: Exemplar-based methods involve providing the system with a reference image to guide its col4 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization orization process. This typically entails human intervention or a database retrieval algorithm with a collection of reference images [56]. 
Hybrid-Based: Some methods adopt a hybrid approach by combining different methodologies to harness the benefits of multiple systems simultaneously. This strategy, as seen in works like [5], [57], seeks to leverage the strengths of various techniques to enhance overall performance. FIGURE 3. Comparison of 3 consecutive frames with different operations applied: First Row (Ground Truth): This row showcases the original, unaltered images, representing the ground truth reference. Second Row (Diffusion Model): In the second row, you can observe the colorization output generated by our original diffusion model. Third Row (Diffusion Model with Post-Processing): Here, the output of the diffusion model is presented with an additional post-processing procedure applied to enhance the results. Fourth Row (LatentColorization): The final row displays the results obtained from LatentColorization . B. DATA PROCESSING We use the following datasets as part of our experiments: GRID Dataset: The GRID dataset [58] is a collection of video recordings featuring individuals speaking. It encompasses high-quality facial recordings of 1,000 sentences spoken by each of 34 talkers, with a distribution of 18 males and 16 females, resulting in a total of 34,000 sentences. Lombard Grid Dataset: An extension of the GRID dataset, the Lombard Grid dataset [59], includes 54 talkers, each contributing 100 utterances. Among these 54 talkers, 30 are female, and 24 are male, expanding the dataset\u2019s diversity. Sherlock Holmes Movies Dataset: This dataset is a collection of professionally colorized frames extracted from \u2019Sherlock Holmes and the Woman in Green,\u2019 \u2019Sherlock Holmes Dressed to Kill,\u2019 \u2019Sherlock Terror by Night,\u2019 and \u2019Sherlock Holmes and the Secret Weapon.\u2019 These diverse datasets provide a foundation for our research in the field of speaker video colorization and temporally consistent diffusion models. Our dataset consisted of 10,000 frames allocated for training the model, with an additional 700 frames reserved for testing purposes. Each frame was uniformly resized to 128x128 pixels. To ensure the generalizability of our model, the training and testing frames were derived from distinct subjects, mitigating the risk of artificially inflated performance measures that would not extend to real-world scenarios. By conducting tests on benchmark datasets, we were able to compare our approach against previous methods. Furthermore, testing on the Sherlock Holmes-related data provided a valuable means of comparing our results to expert human colorizations. Additionally, training on open-domain videos underscores the potential of these resources in advancing the field of video colorization. C. SYSTEM OVERVIEW 1) Image Diffusion Based Set Up In our initial exploration, we considered adopting a setup akin to Palette [49], incorporating our temporal consistency mechanism and initial frame biasing, which will be elaborated on in Section III-D. However, we observed sub-optimal performance from this configuration, as the system\u2019s outputs exhibited undesired residual speckled noise, as illustrated in Fig. 3. To address the speckled noise in the diffusion colorization outputs, we explored two approaches: Non-Linear Means (nlmeans) Clustering: We initially applied the nlmeans clustering algorithm [60] to the images to mitigate the noise. However, this method relies on a hyperparameter that dictates the filter\u2019s strength. 
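For reference, this kind of non-local means post-filtering can be applied per colorized frame with OpenCV; the filter strength below is a placeholder rather than a value used in this work.

```python
import cv2

def nlmeans_filter(frame_bgr, strength=10):
    """Non-local means smoothing of a colorized frame; a larger `strength`
    removes more speckled noise but also more fine detail."""
    return cv2.fastNlMeansDenoisingColored(frame_bgr, None, strength, strength, 7, 21)
```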
A stronger filter results in smoother images but may inadvertently remove high-quality details, such as hair and facial features. Conversely, a weaker filter may leave more residual speckled noise unfiltered. Overlaying Colorized Output with Black-and-White Inputs: As an alternative, we experimented with overlaying the colorized output with the original black-and-white inputs. This approach yielded superior results compared to the nlmeans filter, and it required less parameter tuning for filter strength. We opted to proceed with this approach, referred to as \u2019Diffusion Filtered\u2019. Despite our efforts to optimize noise reduction while preserving critical details, the final output quality still fell short of our improved approach, LatentColorization, which we will detail in the following section. Consequently, our final experiments did not incorporate the Palette-based approach [49]. 2) Latent Diffusion Based Set Up Inspired by Latent Diffusion [12], we devised LatentColorization. LatentColorization comprises three core components: an autoencoder, a latent diffusion model, and a conditioning mechanism, as visually represented in Fig 4a. 5 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 4a. The system architecture during training is depicted in the diagram, illustrating the key elements of the network and their interactions: Image Encoder: This component is responsible for encoding the input frames into embedding representations. It generates the ground truth embedding ZGT , the embedding of the current black-and-white frame ZBW , and the embedding of the previous color frame ZP . Denoising Unet: This is a critical part of the architecture, responsible for denoising and refining the embeddings generated by the Image Encoder that have passed through the forward diffusion process. Conditioning Mechanism: The conditioning mechanism is integral to the network, providing contextual information and conditioning signals to guide the colorization process. It takes into account various embeddings, including ZBW , ZP , and ZT , which represent the black and white input frame, the output of the model at the previous timestep, and the noisy frame to be denoised. Image Decoder: This component is responsible for decoding the predicted frames from their embedding representations. The architecture\u2019s design and interactions are essential for the model\u2019s training process, ensuring that it learns to generate accurate and temporally consistent colorizations over multiple timesteps. The latent diffusion model follows a two-step process, commencing with the forward diffusion phase (formulated in Eqn. 1). During this phase, Gaussian noise is systematically introduced to the data, incrementally transforming it until it becomes indistinguishable from Gaussian noise. During the second phase, the learned backward diffusion process is applied. This is where a neural network is trained to learn the original data distribution, and to draw samples from it by reconstructing the data from Gaussian noise. We represent formulations of this process with conditioning in Eqn.3 and without conditioning, as in Eqn.2. The forward diffusion process, as defined by [9], can be represented by the following formula: q(xt|xt\u22121) = N(xt; \u00b5t = p 1 \u2212\u03b2xt\u22121, \u03a3t = \u03b2tI) (1) In this formulation, the probability distribution q(\u00b7) of the image at each timestep xt, given the previous timestep xt\u22121, is characterized as a normal distribution N. 
This distribution is centred around a mean equal to the previous timestep xt\u22121, with noise incorporated. The magnitude of this noise is determined by the noise scheduler \u03b2 at time t and is further modulated by the identity matrix I. The noise scheduler \u03b2 typically follows a linear pattern, as exemplified in [44], or a cosine pattern, as demonstrated in [61]. The backward diffusion process, in accordance with [10], can be defined as follows: p\u03b8(xt\u22121|xt) = N(xt\u22121; \u00b5\u03b8(xt, t), \u03a3\u03b8(xt, t)) (2) In this definition, the probability distribution p\u03b8(\u00b7) of the slightly denoised image xt\u22121, given the noisier image xt, is characterized as a normal distribution N(). This distribution has a mean denoted as \u00b5 and a variance represented by \u03a3, both of which are learned and parameterized by the neural network indicated by \u03b8. The diffusion process can be conditioned using the following equation: p\u03b8(x0:T |y) = p\u03b8(xt) T Y t=1 p\u03b8(xt\u22121|xt, y) (3) In this equation, the probability density function p\u03b8 is akin to the unconditioned diffusion process, but conditioning is introduced at each timestep of the diffusion process, denoted as p\u03b8(xt\u22121|xt, y). In our specific scenario, the conditions encompass the previous frame, the grayscale frame, and the current frame during training, as illustrated in Fig.4a. During inference, the conditions consist of the previous frame, the grayscale frame, and noise, as indicated in Fig.4b. For a visual representation of our network architecture during training and inference, as well as a breakdown of where each equation is utilized, please refer to Fig 4a and Fig 4b. Additionally, for a more in-depth explanation of these equations and their derivation, you can explore the references provided in [9], [10], [62]. In the training process, the current frame ground truth, the current frame in black and white, and the previous frame are fed into the image encoder. These images are compressed into their respective embeddings, namely ZGT , ZBW , and ZP . The chosen autoencoder for this purpose is a Vector 6 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 4b. During inference, the system architecture remains largely consistent with the training phase, with one significant difference: Gaussian Noise in Place of Ground Truth Frame: Instead of the ground truth frame, the system introduces Gaussian noise as input during the testing phase. This alteration simulates real-world scenarios where the model must colorize frames without the ground truth. The rest of the architecture, including the Image Encoder, Denoising Unet, Conditioning Mechanism, Image Decoder, and their interactions, remains unchanged. This design allows the model to assess its performance under conditions that more closely resemble practical, ground truth-free scenarios. Quantized Variational AutoEncoder (VQ-VAE), as detailed in [63]. During the forward diffusion process, the current frame\u2019s ground truth embedding ZGT has noise applied to it based on the noise timestep, resulting in ZT . Simultaneously, the ground truth black and white embedding ZBW and the previous frame embedding ZP are concatenated. The noised embedding ZT is then denoised using the Unet and conditioned on ZBW and ZP . During the backward diffusion process, the neural network learns to predict the noise that was added during the forward diffusion process at time step T. 
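A minimal sketch of this conditioned training step is given below, using the standard closed-form expression for sampling the noised latent directly from Z_GT. The schedule values mirror the hyperparameter setup reported later (200 training steps, linear 1.5e-3 to 0.0195), while the encoder outputs, the UNet interface and the latent shapes are assumptions rather than the exact LatentColorization implementation.

```python
import torch
import torch.nn.functional as F

T = 200                                           # training diffusion steps
betas = torch.linspace(1.5e-3, 0.0195, T)         # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(unet, z_gt, z_bw, z_prev):
    """Noise Z_GT, then predict that noise conditioned on Z_BW and Z_P.

    z_gt, z_bw, z_prev: (B, 3, h, w) latents of the colour ground-truth frame,
    its grayscale version and the previous colour frame (9 channels in total).
    """
    b = z_gt.shape[0]
    t = torch.randint(0, T, (b,), device=z_gt.device)
    noise = torch.randn_like(z_gt)
    a_bar = alphas_bar.to(z_gt.device)[t].view(b, 1, 1, 1)
    z_t = a_bar.sqrt() * z_gt + (1.0 - a_bar).sqrt() * noise  # closed-form forward process
    pred = unet(torch.cat([z_t, z_bw, z_prev], dim=1), t)     # conditioning by concatenation
    return F.mse_loss(pred, noise)                            # learn to predict the added noise
```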
Denoising the noise embedding ZT using the predicted noise results in ZT \u22121. We use a simple mean square error loss between the predicted noise, vs the actual noise added to the embedding in order to train the network. By employing the previous frame as conditioning, temporal consistency between frames is ensured throughout the video sequence, resulting in coherent colorization. During inference, the same process as the training scheme is followed, with the exception that the model is fed pure Gaussian noise representing frame ZT . The denoising is then repeated T times, after which the denoised embedding is passed through to the image decoder in order to produce the predicted frame. This process is depicted explicitly in Fig.4b. The system can be used in two different ways. First, it can be employed in an entirely end-to-end manner, where no additional guidance from the user is needed. In this setup, the system serves as an image colorization tool for the first frame. Then, this initial colorized frame is used in an auto-regressive fashion to guide the colorization of subsequent frames in the video clip. Second, the system can be used interactively, allowing the user to manually colorize the initial frame. This manual colorization becomes the condition for initiating the colorization of the following frames. This second approach provides control over the colorization process but requires the user to provide the initial colorization. D. TEMPORAL CONSISTENCY Temporal consistency was maintained through an autoregressive conditioning mechanism, where the current video frame was conditioned on the previous frame and the grayscale version of the current frame. This approach ensured that colorization remained consistent across the video frames. For a detailed illustration, refer to Fig 4a and Eqn 4. This mechanism is similar to the approaches used in other studies such as [64] and [65], where models were conditioned with information from the previous frame to guarantee temporal consistency in the context of video generation. Essentially, maintaining consistent colors throughout a video sequence becomes more achievable when the model can \"remember\" the colors from the previous frame. Ct = f(Ct\u22121..n, Gt)\u2200t \u2208T (4) In Eqn 4, we denote the color image as C \u2286RL,a,b, the grayscale image as G \u2286RL, and f() represents the colorization function performed by the neural network. Here, n signifies the length of the conditioning window frame, T is the total length of the video, and t indicates a specific moment within the video sequence. This equation describes how the colorization process is conditioned on both the previous frame and the grayscale version of the current frame, ensuring temporal consistency across the video frames. Throughout the video sequence, we maintain temporal consistency by providing the colorizer with the previous frame as a reference. However, a challenge arises at the 7 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization beginning of the sequence, denoted as t0, where there is no previous frame available for conditioning. To address this, we introduce an initial colorized frame at t0. This initial frame is advantageous because it introduces an element of user preference, which can be highly practical. It effectively reduces the video colorization task to that of coloring a single image, which then serves as the starting point for colorizing the entire video with a bias towards the initial frame. 
This approach offers flexibility and aligns with humancentric AI concepts for video colorization. We refer to this approach as \"initial frame biasing\". Additionally, it provides a clear method for evaluating the system, as ground truth is available for the initial frame, making traditional referencebased metrics such as PSNR, SSIM, FID, and FVD effective for assessment. It also allows for a user study where one can compare performance against the ground truth. E. HYPERPAREMETER AND TRAINING SET UP The hyperparameters used in the experiment are detailed in Table 1. The experiment employed the ADAM optimizer [66], with most of the values being adopted from the specifications of Stable Diffusion [12]. Any additional hyperparameters were determined through a process of empirical testing. An image size of 128x128 pixels required a 4x decrease in processing time as opposed to 256x256 pixels. Training at 256x256 pixels takes 165 minutes per epoch on an NVIDIA RTX 2080, whereas training at 128x128 pixels takes 38 minutes per epoch. Using 200 Diffusion steps for training and 50 for testing resulted in good performance. Input channels must be nine to account for the conditioning, three channels for color previous frame, three channels for the image and three channels for the black-and-white current frame. Having a batch size of 256 and a learning rate of 1.25e\u22127 resulted in convergence and reasonably fast training times. Train Test Image Size 128x128 128x128 Total Frames 10000 700 Diffusion Steps 200 50 Noise Schedule Linear Linear Linear Start 1.5e \u221203 1.5e \u221203 Linear End 0.0195 0.0195 Input Channels 9 9 Inner Channels 64 64 Channels Multiple 1, 2, 3, 4 1, 2, 3, 4 Res Blocks 2 2 Head Channels 32 32 Drop Out 0 0 Batch Size 256 8 Epochs 350 Learning Rate 1.25e\u22127 TABLE 1. The hyperparameter setup provides the values used for both training and testing IV. EVALUATION The performance evaluation of the colorization process combines both qualitative and quantitative assessments to gauge its success. Following similar colorization studies [7] our work is compared on standard metrics. The key metrics used for this evaluation are as follows: Power Signal to Noise Ratio (PSNR): This metric measures the quality of colorized images by comparing them to the corresponding ground truth images. It quantifies the difference between the pixel values of the colorized and ground truth images. Higher PSNR values indicate better performance. Structural Similarity Index (SSIM): SSIM evaluates the structural similarity between colorized images and ground truth images. It considers not only pixel values but also the structure and patterns in the images. Higher SSIM values indicate greater similarity to the ground truth. Fr\u00e9chet Inception Distance (FID): FID assesses the distance between the distribution of features extracted from colorized images and real images. Lower FID values indicate closer similarity to real images. Fr\u00e9chet Video Distance (FVD): FVD is a video-specific metric that measures the difference between generated and real videos by comparing the mean and covariance of their features. Lower FVD values represent better video colorization quality. Naturalness Image Quality Evaluator (NIQE): NIQE is a referenceless metric that quantifies the naturalness of colorized images using statistical measures. Lower NIQE values indicate more natural-looking images. 
Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE): BRISQUE is another referenceless metric that evaluates the quality of colorized images. It learns the characteristics of natural images and quantifies the deviation from these characteristics. Lower BRISQUE values represent better image quality. Mean Opinion Score (MOS): MOS is a weighted average of survey participants\u2019 perceived quality of an image or video. Higher MOS score represents a higher opinion of the subjective quality of the media. A combination of these quantitative metrics and visual inspection, see Fig 7, allows for a comprehensive assessment of the colorization process, enabling objective and subjective evaluation of its performance. Evaluating colorization is a very subjective task, and therefore, as well as the metrics used, a survey was conducted to obtain a subjective measure of our performance. This survey was conducted in a similar manner to the survey conducted by Wu et al. [31]. 8 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization A. QUALITATIVE ANALYSIS The qualitative results in Figure 7 visually compare the colorization performance of different methods, including DeOldify [6], ColTran [18], DeepRemaster [42], GCP [31], VCGAN [41], LatentColorization without temporal consistency enabled, LatentColorization, and the ground truth. These comparisons are based on image sequences from the GRID [58] and Lombard Grid [59] datasets. Additional qualitative results can also be seen in our appendices. This visual assessment allows for a direct comparison of how well LatentColorization performs in relation to other state-of-theart methods. Based on the qualitative analysis of the results in Figure 7, the following conclusions can be drawn: DeOldify [6] produces consistent colorizations, but they tend to appear dull and have a halo effect around the subject. ColTran generates colorful images, but it suffers from inconsistencies throughout the sequence. DeepRemaster [42] provides produces dull, conservative colorizations. GCP [31] produces colorful, consistent colorizations, but they are not faithful to the ground truth. VCGAN [41] seems to mostly apply a blueish filter to the frames. LatentColorization w/o TC produces colorization similar to the ground truth. It is difficult to visually distinguish between LatentColorization w/o TC, LatentColorization and the ground truth itself. LatentColorization impressively colorizes the sequence, maintaining faithfulness to the original, vibrancy in color, and overall consistency. Overall, LatentColorization appears to outperform the other methods in terms of fidelity to the original, colorfulness, and consistency. B. QUANTITATIVE ANALYSIS Quantitative evaluation is an essential aspect of assessing the quality and performance of colorization methods. It helps provide an objective measure of how well these methods perform. By evaluating colorizations both frame by frame and as a video sequence, you can gain insights into the strengths and weaknesses of each approach and determine how well they maintain consistency and quality throughout the sequence. This quantitative assessment complements the qualitative analysis and provides a more comprehensive understanding of the colorization results. Table 2 provides a quantitative evaluation of the colorization methods, considering various image metrics. 
It is a useful way to compare the performance of DeOldify [6], DeepRemaster [42], ColTran [18], GCP [31], VCGAN [41] , LatentColorization without temporal consistency mechanism, LatentColorization, and human colorization. By assessing metrics such as PSNR, SSIM, FID, FVD, NIQE, and BRISQUE, you can analyze the quality, similarity, and naturalness of the colorized images. This comparison enables a more data-driven and objective assessment of how well each method performs. The results presented in Table 2 indicate that LatentColorization performs well across all of the referenced and nonreferenced metrics, surpassing the state-of-the-art DeOldify [6] by an average of \u02dc =18% in terms of FVD. This performance showcases the effectiveness of LatentColorization in achieving high-quality and consistent video colorization results. Comparing LatentColorization against human-level colorization is an important evaluation. Using non-reference image quality assessment metrics like NIQE and BRISQUE to assess the relative performance when no ground truth is available is a valuable approach. These metrics provide insights into how closely the colorization generated by LatentColorization aligns with human-expert colorization in terms of image quality. The results in Table 2 show that LatentColorization outperforms human colorization according to NIQE and BRISQUE, which indicates that the colorizations produced by LatentColorization are of high quality when assessed using these non-reference metrics. The other methods also perform well on BRISQUE and NIQE scores relative to the Human Colorized version of the video. Colorization is a subjective matter, and therefore, these metrics must be paired with a user survey to evaluate the systems\u2019 performances. C. SURVEY A survey was conducted to get a more subjective view of the performance of LatentColorization. This study aimed to evaluate the difference in performance between our proposed approach, LatentColorization, and its closest competitor in our experiments, DeOldify [6]. Thirty-two participants were shown three sets of three videos and were asked a question on each set. Each dataset had an associated video set. The survey questions can be seen in our appendices. For the Grid [58] dataset, the participants were shown three versions of the same video taken from the dataset side-byside. One video version had been colorized by LatentColorization, the other by DeOldify [6], and the third was the ground truth. The ground truth video was labelled as such, whereas the LatentColorization and DeOldify [6] versions of the video were anonymous. To distinguish the LatentColorization version of the video from the DeOldify version [6] they were labelled with 1 and 2. After the participants had watched the videos, they were asked which video they thought was closer to the ground truth. The purpose of this question (question 1) was to differentiate in a head-to-head competition in which the colorization system was able to produce outputs which were similar to the ground truth colors of the video. For the Lombard Grid [59] dataset, the participants were shown three versions of an example video taken from the dataset shown side-by-side. Again, one version was colorized by LatentCololorization, the other by DeOldfiy [6], and the third was the ground truth. In contrast to the previous question, the ground truth video was anonymous this time, and the three videos were titled 1,2 and 3. 
After the participants watched the video, they were asked to rank the three videos in terms of which one looked the most realistic. Therefore, this question (question 2) acted as a visual turning test where 9 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization Dataset Method PSNR \u2191 SSIM \u2191 FID \u2193 FVD \u2193 NIQE \u2193 BRISQUE \u2193 Grid [58] DeOldify [6] 28.07 0.79 52.67 520.75 44.04 32.47 DeepRemaster [42] 27.7 0.77 108.68 927.91 51.19 41.44 ColTran [18] 28.08 0.76 91.76 759.32 49.69 34.55 GCP [31] 27.74 0.75 109.75 1555.53 48.44 33.76 VCGAN [41] 27.86 0.83 67.79 951.28 44.24 37.16 LatentColorization w/o TC 29.63 0.89 20.92 350.35 46.4 33.73 LatentColorization 30.88 0.9 22.26 241.94 41.46 34.68 Lombard Grid [59] DeOldify [6] 30.69 0.93 17.63 396.2 46.43 33.1 DeepRemaster [42] 30.09 0.95 32.9 1382.56 52.36 35.97 ColTran [18] 29.96 0.89 37.7 1583.94 51.25 29.71 GCP [31] 29.86 0.91 85.09 432.31 48.73 33.65 VCGAN [41] 30.2 0.96 72.17 2146.79 50.72 31.01 LatentColorization w/o TC 30.35 0.92 18.41 490.89 44.53 33.84 LatentColorization 30.71 0.93 17.01 375.34 45.79 34.61 Sherlock Holmes Movies DeOldify [6] 42.07 41.15 DeepRemaster [42] 62.36 42.98 ColTran [18] 47.15 37.52 GCP [31] 49.87 41.95 VCGAN [41] 49.84 39.86 Human colorized 48.43 39.78 LatentColorization w/o TC 47.13 38.49 LatentColorization 46.24 41.11 Overall DeOldify [6] 29.19 0.86 40.47 520.85 45.22 35.14 DeepRemaster [42] 28.90 0.86 70.79 1155.24 55.60 40.13 ColTran [18] 29.02 0.83 64.73 1171.63 49.36 33.93 GCP [31] 29.80 0.83 97.42 993.92 49.01 36.45 VCGAN [41] 29.03 0.9 69.98 1549.04 48.27 36.01 Human colorized 48.43 39.78 LatentColorization w/o TC 29.99 0.91 19.67 420.62 46.02 35.35 LatentColorization 30.80 0.92 19.64 308.64 44.50 36.80 TABLE 2. The quantitative comparisons provide a detailed evaluation of different colorization methods across various datasets. These methods include DeOldify, DeepRemaster, ColTran, GCP , VCGAN, Human Colorized, LatentColorization without Temporal Consistency and LatentColorization. The evaluation criteria encompass several metrics, including PSNR, SSIM, FID, FVD, NIQE, and BRISQUE. By comparing these metrics on individual datasets and a combined dataset (consisting of GRID, Lombard Grid, and Sherlock Holmes Movies), the study aims to assess and compare the performance of these colorization methods. This information allows for an evaluation of how LatentColorization compares to other state-of-the-art methods in various scenarios. humans were tested to see if they could tell the difference between a colorization and a ground truth video. The idea behind this is that the better the performance of the colorization system, the more difficult it should be to distinguish between the colorization system and the ground truth. For the Sherlock Holmes dataset, the participants were shown three versions of an example video from the dataset side-by-side. One version had been colorized by LatentColorization, the other by DeOldify [6], and the third was the human-colorized version. This time, the human-colorized version of the video was labelled, and the LatentColorization and DeOldify [6] versions were left anonymous. After the participants had watched the clips, they were asked which of the automatically colorized versions of the clip was closer to the human-colorized version. The purpose of this question (question 3) then was to determine the relative performance of LatentColorization, DeOldify [6] with respect to human expert colorizations. 
We then collated the survey results and analysed them. The results can be seen visually in fig. 5. The X-axis represents the Mean Opinion Score (MOS) for each question\u2019s methods. The Y axis indicates the relevant question. The color-coded bars represent each of the methods. The mean opinion score was calculated for each method for each question. For Question 1 and Question 2, the mean opinion score is simply the tally of each of the votes as it compares two methods. For Question 3, the mean opinion score is the sum of the ratings for each method divided by the number of methods. Interpreting the graph, we can see that overall LatentColorization was preferred to DeOldify [6]. For question 1, DeOldify [6] received seven votes, and LatentColorization received 25 votes, indicating a preference for LatentColorization on this question. For question 2, the ground truth received the highest MOS score of 28.00, followed by LatentColorization at 20.00 and DeOldiy [6] at 13.67. Summarising this result, the ground truth was preferred most of the time, followed by LatentColorization and finally DeOldify [6]. For question 3, LatentColorization was chosen 26 times out of 31, indicating a strong preference for LatentColorization. D. ABLATION STUDY An ablation study was undertaken to evaluate the impact of the temporal consistency mechanism on the LatentColorization system. The results for both LatentColorization and LatentColorization without temporal consistency mechanism are recorded in Table 2. LatentColorization refers to the version of LatentColorization with the temporal consistency mechanism enabled, and LatentColorization w/o TC refers 10 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 5. The graph of the results of the survey. Each group represents a particular question. The X-axis represents the Mean Opinion Score (MOS) for each question\u2019s methods. The Y axis indicates the relevant question. The color-coded bars represent each of the methods. to the version of LatentColorization where the temporal consistency metric has been disabled. The results of LatentColorization and LatentColorization without temporal consistency appear similar. The main difference is that the FVD values for LatentColorization are roughly 10% lower than LatentColorization without temporal consistency\u2019s FVD values. As a result of this observation, it can be deduced that the temporal consistency mechanism is indeed improving the video quality of the output and, therefore, ensuring temporal consistency. E. FAILURE CASES There were also instances where the system failed to colorize faithfully to the ground truth. This particularly occurred for out-of-distribution data where the videos were from a different domain than speaker videos 6. LatentColorization fails to apply realistic colors to the bedroom scene. It initially manages to separate the walls from the bed as it colorizes the walls blue and the bed orange, see Frame 1. As time passes, see Frame 2 and Frame 3; LatentColorization tends towards a dull grey color. This indicates that LatentColorization is sensitive to the domain that the video is from, and when it does not recognize the contents of a video, it resorts to drab, dull colors. V. DISCUSSION In this section, we discuss our model\u2019s results compared to other approaches from the field and an overview highlighting the main limitations associated with our approach. A. 
MODEL COMPARISONS The comparison between LatentColorization and nonautoregressive models like ColTran [18] provides insights into the importance of the autoregressive nature of the system FIGURE 6. The comparison of three frames from the system taken from out-of-distribution data. The top row is the black-and-white version of the video, the middle frame is the output of LatentColorization, and the bottom row is the ground truth. It can be seen that LatentColorization has failed to colorize faithfully to the ground truth. in the context of video colorization. Fig 7 demonstrates the difference in consistency between the two approaches. The frames colorized by LatentColorization appear more consistent throughout the video sequence, while those generated by ColTran [18] exhibit more variation. This suggests that the autoregressive nature of LatentColorization, where each frame is conditioned on the previous ones, plays a role in maintaining temporal consistency and ensuring that the colorization is coherent across the entire video. In contrast, nonautoregressive approaches like ColTran [18] may struggle to achieve the same level of consistency in colorized sequences. The qualitative assessment of the colorizations in Fig 7 highlights the differences in colorfulness among LatentColorization, and DeOldify [6]. LatentColorization produces colorful results. In contrast, DeOldify [6] appears grey, suggesting that it may suffer from a lack of color diversity. This observation is consistent with the idea that GANs, which DeOldify [6] is based on, can be susceptible to mode collapse, where they produce limited and less diverse color variations. This observation also correlated with the survey results where LatentColorization was preferred to DeOldify [6] 80% of the time. DeepRemaster [42] Vs LatentColorization: DeepRemaster [42] has struggled with the colorization of this material and has resorted to very bland, dull colors, unlike LatentColorization. GCP [31] Vs LatentColorization: it can be seen that LatentColorization is closer to the ground truth than GCP [31]. GCP has produced colorful output, but it is different in color from the ground truth. It has not succumbed to the mode collapse of its GAN-based architecture, especially on the Lombard Grid [59] dataset. This could potentially be a result of its retrieval mechanism. VCGAN [41] Vs LatentColorization: it can be seen that LatentColorization is closer to the ground truth than VCGAN [41]. VCGAN has produced a blue filter type effect on the 11 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization FIGURE 7. The qualitative comparison of colorization results from various systems, including DeOldify [6], ColTran [18], DeepRemaster [42], GCP [31], VCGAN [41], LatentColorization without the temporal consistency mechanism enabled (LatentColorization w/o TC), LatentColorization and the ground truth, for both the GRID [58] dataset (left) and the Lombard Grid [59] dataset (right) is shown. In the GRID [58] dataset, DeOldify\u2019s [6] colorization, depicted in the first row, exhibits desaturated colors and a halo effect around the subject. ColTran [18], in the second row, produces more colorful results but lacks consistency throughout the sequence. DeepRemaster [42] produces dull, conservative colorizations. GCP [31] produces colorful, consistent colorizations, but they are not faithful to the ground truth. VCGAN [41] produces drab, monotone colorizations. 
LatentColorization w/o TC produces colorization similar to the ground truth. It is difficult to visually distinguish between LatentColorization w/o TC, LatentColorization and the ground truth itself. The ground truth, represents the original color frames. Similar observations can be made for the Lombard Grid [59] dataset. These visual comparisons demonstrate that LatentColorization consistently delivers colorization results that closely match the original colors, making it a promising technique for automatic video colorization tasks. frames. LatentColorization Vs LatentColorization without temporal consistency: has been investigated in the ablation study. Essentially, it is difficult to visually differentiate between the two, and the main difference can be seen quantitatively in their relative FVD scores. The quantitative evaluation, as shown in Table 2, indicates that LatentColorization achieved scores on the NIQE and BRISQUE metrics that are close to human-level colorization. In summary, these results suggest that LatentColorization, in this experiment, is comparable to human-level colorization in terms of the assessed quality metrics. This highlights the effectiveness of the LatentColorization method in generating high-quality colorized videos. This evaluation also correlates with our survey, where LatentColorization received a higher preference from the subjects than DeOldify [6]. The survey also shows a tendency of the users to prefer the ground truth videos over both LatentColorization and DeOldify [6]. 12 Ward et al.: LatentColorization: Latent Diffusion-Based Speaker Video Colorization B. LIMITATIONS One of the main limitations of our approach is that the datasets we use are specific to speaker videos, causing our model to perform more poorly on out of domain data. We intend to address this in our future work by training our model on a more diverse dataset capturing a wide range of scenarios. Our model also exhibits poor performance when it compares to inference speed. For instance, colorizing a fivesecond clip at fifty diffusion steps takes roughly one hundred and fifty seconds on an Nvidia 2080 with 8GB of VRAM. One of the main drawbacks to a diffusion modelbased system is its inference time, as the model must sample each frame equal to the number of diffusion steps chosen. Real-time colorization is beyond the scope of this work, but generally, real-time is not a requirement in applications. Ethical concerns which must also be considered. Two main worries associated with this type of technology are potential misuse and bias. Defining the potential misuse of colorization systems is a difficult task with various nuances. Opponents of colorization believe that it is unnecessary and defaces the original work. Proponents retort that it makes the material more approachable to wider audiences [67]. In addition to the ethical considerations surrounding the potential issue of these systems, there are also concerns regarding the bias of these systems [68]. Through experimentation it has been found that these systems can be susceptible to tending towards outputs which are similar to the data that they were trained on. As datasets can be biased, the models can also inherit this bias and therefore output inaccurate results. In colorization systems, this can present itself in such manners as incorrect skin colors or incorrect color uniforms which may give a distorted view of history. VI. 
CONCLUSION In conclusion, our work demonstrates the effectiveness of diffusion-based models, particularly the LatentColorization method, in achieving results comparable to the state of the art across multiple datasets. Notably, the system performs comparably to human-level colorization on the \u2019Sherlock Holmes Movie\u2019 dataset, indicating its practical significance and the potential for application-specific video colorization. The use of a latent diffusion model and the incorporation of a temporally consistent colorization approach contribute to realistic and convincing colorization results, making the process more accessible and reducing the reliance on traditional human-driven colorization methods. This research provides insights into the potential of diffusion models for video colorization and opens up opportunities for further developments in this field. VII. FUTURE WORK Expanding on our research, adapting the system to work with various video styles, types, and content would be a promising direction for future work, enabling a broader assessment of the approach\u2019s applicability in the context of general video colorization. These endeavors will further enhance the practicality and versatility of our research in the field of automatic video colorization."
16
+ }
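The referenced metrics reported for this entry (the Table 2 figures in its main_content) reduce to per-frame scores averaged over each clip. The Python sketch below shows one way such a number could be computed for PSNR; it is an illustration only, the stand-in frame arrays, the 8-bit data range and all function names are assumptions and do not come from the paper's pipeline.

import numpy as np

def frame_psnr(pred: np.ndarray, target: np.ndarray, data_range: float = 255.0) -> float:
    # PSNR of one frame pair in dB; higher means the frames are closer.
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((data_range ** 2) / mse)

def video_psnr(pred_frames, gt_frames) -> float:
    # Average PSNR over all frames of a clip.
    return float(np.mean([frame_psnr(p, g) for p, g in zip(pred_frames, gt_frames)]))

# Stand-in frames; a real evaluation would decode the colorized and ground-truth videos.
rng = np.random.default_rng(0)
gt = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
pred = [np.clip(f.astype(int) + rng.integers(-5, 6, f.shape), 0, 255).astype(np.uint8) for f in gt]
print(f"mean PSNR: {video_psnr(pred, gt):.2f} dB")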
intro_8K/test_introduction_short_2405.05791v1.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05791v1",
3
+ "title": "Sequential Amodal Segmentation via Cumulative Occlusion Learning",
4
+ "abstract": "To fully understand the 3D context of a single image, a visual system must be\nable to segment both the visible and occluded regions of objects, while\ndiscerning their occlusion order. Ideally, the system should be able to handle\nany object and not be restricted to segmenting a limited set of object classes,\nespecially in robotic applications. Addressing this need, we introduce a\ndiffusion model with cumulative occlusion learning designed for sequential\namodal segmentation of objects with uncertain categories. This model\niteratively refines the prediction using the cumulative mask strategy during\ndiffusion, effectively capturing the uncertainty of invisible regions and\nadeptly reproducing the complex distribution of shapes and occlusion orders of\noccluded objects. It is akin to the human capability for amodal perception,\ni.e., to decipher the spatial ordering among objects and accurately predict\ncomplete contours for occluded objects in densely layered visual scenes.\nExperimental results across three amodal datasets show that our method\noutperforms established baselines.",
5
+ "authors": "Jiayang Ao, Qiuhong Ke, Krista A. Ehinger",
6
+ "published": "2024-05-09",
7
+ "updated": "2024-05-09",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Robots often encounter unfamiliar objects in ever-changing unstructured environments such as warehouses or homes [31]. These scenarios require systems capable of manipulating objects based on their complete shape and occlusion relationships rather than their visibility or category [2, 7, 33]. However, most state-of-the-art amodal segmentation methods [1, 8, 15, 32], which are usually constrained by the need for class-specific data, struggle to generalize to unseen objects and are susceptible to misclassification. Diffusion probabilistic models specialize in capturing and reproducing complex data dis- tributions with high fidelity [11], making them well-suited for generating the invisible parts of unknown objects. In contrast to traditional convolutional networks that often struggle with the complexity of occlusions [10, 27], diffusion models proficiently reconstruct ob- jects through their iterative refinement process. This process is particularly advantageous for inferring occluded object regions, as it progressively recovers the occluded parts based on visible context and learned possible object shapes. Additionally, while current amodal segmentation methods typically overlook the uncertainty in the shape of the hidden part, diffusion models inherently sample from the learned distribution [25, 38], providing multi- ple plausible hypotheses for the occluded shape. Given these capabilities, diffusion models present a fitting approach for advancing the field of amodal segmentation. We introduce a novel diffusion model for sequential amodal segmentation that does not rely on object categories. Our approach transcends traditional single or dual-layer prediction limitations [12, 17, 22] by enabling the simultaneous segmentation of unlimited object layers in an image. In addition, our framework generates multiple plausible amodal masks for each arXiv:2405.05791v1 [cs.CV] 9 May 2024 2 Layer 1 Layer 2 Ground Truth Prediction Cumulative Mask Image Layer 3 Layer 4 Image Layer 3 Layer 4 Ground Truth Prediction Image Layer 3 L Ground Truth Prediction Layer 4 Ground Truth Pr Layer 3 Figure 1: The cumulative mask and amodal mask predictions for an input image. Our method can generate reliable amodal masks layer by layer and allows multiple objects per layer. object from a single input image, contrasting with prior approaches that depend on multiple ground truths to achieve varied results [9, 25, 34]. Tailored to the amodal task, our method requires only a single ground truth per object during training to capture the diversity of occlusions, overcoming the limitations of existing amodal datasets that typically provide only one annotation per object and neglect the variability in invisible regions. Our framework takes an RGB image as input and sequentially predicts the amodal masks for each object, as illustrated in Fig. 1. The iterative refinement process of our proposed algorithm, inspired by human perception mechanisms for invisible regions [28], leverages preceding identified items to infer subsequent occluded items. Specifically, it employs a cu- mulative mask, which aggregates the masks of previously identified objects. This strategy allows the model to maintain a clear record of areas already segmented, directing its focus to- ward unexplored regions. By focusing the prediction effort on uncertain or occluded regions, our approach improves the accuracy and reliability of the amodal segmentation process. 
We validate our approach through comprehensive ablation studies and performance bench- marking across three amodal datasets, demonstrating its superiority in handling complex sequential amodal segmentation challenges. The main contributions of our work are: \u2022 A new sequential amodal segmentation method capable of predicting unlimited layers of occlusion, enabling occlusion modelling in complex visual scenes. \u2022 Occluded shape representation which is not based on labelled object categories, en- hancing its applicability in diverse and dynamic settings. \u2022 A diffusion-based approach to generating amodal masks that captures the uncertainty over occluded regions, allowing for diverse segmentation outcomes.",
15
+ "main_content": "1 The University of Melbourne Parkville, 3010, Australia 2 Monash University Clayton, 3800, Australia Abstract To fully understand the 3D context of a single image, a visual system must be able to segment both the visible and occluded regions of objects, while discerning their occlusion order. Ideally, the system should be able to handle any object and not be restricted to segmenting a limited set of object classes, especially in robotic applications. Addressing this need, we introduce a diffusion model with cumulative occlusion learning designed for sequential amodal segmentation of objects with uncertain categories. This model iteratively refines the prediction using the cumulative mask strategy during diffusion, effectively capturing the uncertainty of invisible regions and adeptly reproducing the complex distribution of shapes and occlusion orders of occluded objects. It is akin to the human capability for amodal perception, i.e., to decipher the spatial ordering among objects and accurately predict complete contours for occluded objects in densely layered visual scenes. Experimental results across three amodal datasets show that our method outperforms established baselines. The code will be released upon paper acceptance. Amodal segmentation with order perception requires segmentation of the entire objects by including both visible and occluded regions while explicitly resolving the layer order of 3 all objects in the image. Establishing layering of objects allows for a comprehensive understanding of the scene and the spatial relationships between objects, which is essential for tasks such as autonomous driving, robot grasping, and image manipulation [2, 14, 40]. Current amodal segmentation methods mainly assess occlusion states of individual objects [6, 22, 26, 30] or between pairs [2, 12, 37], but tend to ignore the global order in a complex scene, such as the relationship between independent groups. While some work [1, 40] has begun to address amodal segmentation with perceptible order, they fall short for class-agnostic applications due to design constraints on category-specific dependencies. Class-agnostic segmentation aims to detect masks without relying on pre-learned categoryspecific knowledge. It is vital for scenarios where comprehensive labelling is resourceintensive or when encountering unseen categories [23, 31]. However, amodal segmentation approaches usually depend on predefined class labels and thus have limited ability to handle unknown objects [15, 19]. While there are a few methods which consider the class-agnostic amodal segmentation, [2] is for RGB-D images with depth data rather than RGB images, [5] relies on the bounding box of the object as an additional input to predict amodal masks, [41] treats amodal masks prediction and ordering as separate tasks thus designs the methods individually, and other requires additional inputs for prediction such as visible mask [20, 39] Segmentation with diffusion models has recently attracted interest as its ability to capture complex and diverse structures in an image that traditional models might miss [4, 16, 35, 36]. Particularly in medical imaging, diffusion models are used to generate multiple segmentation masks to simulate the diversity of annotations from different experts [9, 25, 34, 38]. However, these methods are designed for the visible part of images and do not adequately address the diversity of predictions required for the hidden part of objects. 
In summary, our approach addresses sequential amodal segmentation with two key improvements: First, a novel segmentation technique capable of globally predicting occlusion orders, offering a comprehensive understanding of object occlusion relationships in a scene. Second, a diffusion-based model to provide diverse predictions for amodal masks, especially for the occluded portions. This model uniquely employs cumulative occlusion learning that utilises all preceding masks to provide vital spatial context, thus boosting its ability to segment occluded objects. 3 Problem Definition Our goal is to amodally segment multiple overlapping objects within an image without object class labels, while determining the occlusion order of these objects. Specifically, the task requires inferring complete segmentation masks of all objects, including both the visible and occluded portions, and assigning a layering order to these segments. For a given RGB image I, the goal of our sequential amodal segmentation approach is two-fold. First, to produce a collection of amodal segmentation masks {Mi}N i=1, where each mask Mi represents the full extent of the corresponding object Oi within the scene\u2014this includes both visible and occluded regions. Second, to assign a layer ordering {Li}N i=1 to these objects based on their mutual occlusions, thereby constructing an occlusion hierarchy. The layer variable Li adheres to the occlusion hierarchy defined by [1]. The bi-directional occlusion relationship Z(i, j) indicates if Oi is occluded by Oj, given by: Z(i, j) = ( 1, if object Oi is occluded by object O j, 0, otherwise. (1) 4 The set Si comprises indices of those objects occluding Oi, is defined by Si = { j|Z(i, j) = 1}. Subsequently, the layer ordering Li for each object Oi is computed based on: Li = ( 1, if Si = / 0, 1+max j\u2208Si Lj, otherwise. (2) The ultimate goal is to derive an ordered sequence of amodal masks \u03c4 = \u27e8M1,...,MN\u27e9 that correctly represents the object layers in image I. 4 Methodology The architecture of our proposed model is shown in Fig. 2. Details on the architectural components, the cumulative guided diffusion model and the cumulative occlusion learning algorithm are discussed in Sections 4.1 and 4.2, respectively. ,QSXW\u0003,PDJH 3UHGLFWLRQV &XPXODWLYH\u00032FFOXVLRQ\u0003/HDUQLQJ ,QSXW\u0003,PDJH &XPXODWLYH\u00030DVN 'LIIXVLRQ\u00030RGHO 'LIIXVLRQ\u00030RGHO 'LIIXVLRQ\u00030RGHO 'LIIXVLRQ\u00030RGHO 2XWSXW \u0011\u0003\u0011\u0003\u0011 \u0011\u0003\u0011\u0003\u0011 \u0011\u0003\u0011\u0003\u0011 Figure 2: Architecture of our model. Our model receives an RGB image as input and predicts multiple plausible amodal masks layer-by-layer, starting with the unoccluded objects and proceeding to deeper occlusion layers. Each layer\u2019s mask synthesis receives as input the cumulative occlusion mask from previous layers, thus providing a spatial context for the diffusion process and helping the model better segment the remaining occluded objects. 4.1 Diffusion-based Framework Denoising diffusion probabilistic models (DDPM) are popular generative models that provide powerful frameworks for learning complex data distributions [11]. Building on the improved DDPMs [21], we introduce a novel approach that extends the capabilities of diffusion models to the domain of amodal segmentation, which involves segmenting visible regions while inferring the shapes of occluded areas. 
This is distinct from existing diffusion models that focus primarily on visible image features, where additional understanding of occlusion structure in an image makes it a unique challenge. Cumulative mask. We introduce the cumulative mask\u2014a critical innovation that incorporates the spatial structures of objects, facilitating the understanding of both visible and occluded object parts. The cumulative mask aggregates the masks of all objects which are in front of (and potentially occluding) the current layer. Specifically, the cumulative mask for an object Oi with layer order Li encompasses the masks of all objects with a layer order lower than Li, thereby representing the cumulative occlusion up to that layer. For each object Oi with its amodal mask Mi and layer order Li, the cumulative mask CMi is formalized as: CMi = [ { j|L j<Li} Mj, (3) 5 MT Mt Mt-1 M0 \u2295 . . . . . . Figure 3: Cumulative guided diffusion. The diffusion process is informed by the input image and the dynamically updated cumulative mask at each depth layer. The diffusion only perturbs the amodal masks, maintaining the contextual and spatial integrity of the image and the corresponding cumulative mask unaltered. where S denotes the union operation, CMi is the cumulative mask for object Oi, Mj are the masks of objects with a lower layer order Lj than that of Oi, reflecting the cumulative occlusion encountered up to object Oi. CM = \u2205denotes no prior occlusion and is used for the fully visible objects in L1. Cumulative guided diffusion. We enhance DDPMs [11, 21] to address the unique challenge of understanding occluded regions for amodal segmentation. The diffusion process is informed by a static representation of the input image and the cumulative mask from previous layers. The diffusion process generates an amodal mask for the current layer\u2019s objects, which is then added to the cumulative occlusion mask to generate the next layer. Fig. 3 illustrates the proposed cumulative guided diffusion process. Following the standard DDPMs implementation [11], the diffusion process is modelled as a Markov chain. The forward process q at time t evolves from the previous step t \u22121 is: q(xt|xt\u22121) := N(xt;\u221a\u03b1txt\u22121,(1\u2212\u03b1t)I), (4) where xt is the noisy data at t, \u03b1t is the scheduler which determines the noise variance at each step, and I is the identity matrix. The reverse process, which is a learned neural network parameterized by \u03b8, endeavours to reconstruct the original data from its noisy version, thus performing denoising: p\u03b8(xt\u22121|xt) := N(xt\u22121;\u00b5\u03b8(xt,t),\u03a3\u03b8(xt,t)), (5) where the parameters of mean and variance are \u00b5\u03b8 and \u03a3\u03b8. As proven in Ho et al. [11], xt\u22121 can be computed from xt: xt\u22121 = 1 \u221a\u03b1t (xt \u22121\u2212\u03b1t \u221a1\u2212\u00af \u03b1t \u03b5\u03b8(xt,t))+\u03c3tz (6) where z \u223cN(0,I), \u00af \u03b1t := \u220ft s=1 \u03b1s, \u03b5\u03b8 is a neural network function that learns noise prediction, and \u03c3t is the standard deviation schedule. We inform our model with the input image and its dynamically updated cumulative mask at each depth layer. This allows the model to recover the occluded objects progressively based on previously learned context. 
We achieve this by concatenating a given image I, 6 the cumulative mask CMi and amodal mask Mi for objects in layer Li along the channel dimension, and define: Xi := I \u2295CMi \u2295Mi (7) The forward processing of q adds noise only to the amodal masks, keeping the input image and the corresponding cumulative mask unaltered. For a given image I and cumulative mask CMi, we only add noise to the amodal mask Mi: Mi,t = \u221a\u00af \u03b1tMi + p 1\u2212\u00af \u03b1t\u03b5, \u03b5 \u223cN(0,I) (8) Since we can define Xi,t := I \u2295CMi \u2295Mi,t, Equation 6 is modified as, Mi,t\u22121 = 1 \u221a\u03b1t (Mi,t \u22121\u2212\u03b1t \u221a1\u2212\u03b1t \u03b5\u03b8(Xi,t,t))+\u03c3tz (9) where z \u223cN(0,I). The reverse process aims to reconstruct the noise-free amodal mask from its noisy counterpart, effectively denoising the mask at each timestep as t decreases. The neural network\u2019s parameters are trained to minimize the difference, measured by the Kullback-Leibler divergence, between the forward and reverse distributions across all timesteps. The loss function is expressed as: L(\u03b8) = Et,Mi,\u03b5 h \u2225\u03b5 \u2212\u03b5\u03b8(\u221a\u00af \u03b1tMi + p 1\u2212\u00af \u03b1t\u03b5,t)\u22252i , (10) where \u03b5 is the true noise, and \u03b5\u03b8 is the model-predicted noise. The training process optimizes \u03b8 by minimizing the mean squared error between the true and predicted noise, facilitating a precise recovery of the amodal mask through the reverse diffusion sequence. During inference, the model utilizes the learned reversal mechanism to generate multiple plausible amodal masks by sampling from a standard Gaussian distribution and conditioning on each object\u2019s unique context: M(k) gen,i = f\u03b8(N(0,I),I,CMi), k = 1,...,K, (11) where f\u03b8 represents the trained generative function of the model, and M(k) gen,i is the k-th generated amodal mask prediction for the object Oi. This process allows the generation of multiple plausible occlusion masks for each object layer. Through the specialized design of cumulative guided diffusion, our framework is able to address the intricacies of the sequential amodal segmentation task, uncovering the full scope of objects in complex, multi-layered visual scenes. 4.2 Cumulative Occlusion Learning Lack of spatial contextual awareness of surrounding objects in amodal segmentation can yield inaccurate or incomplete scene interpretations. To address this, we propose the cumulative occlusion learning algorithm, which employs a hierarchical procedure that learns to predict amodal masks in an order-aware manner. It operates by accumulating visual information, where the history of observed data (previous segmentation masks) influences the perception of the current data (the current object to segment). This strategy is akin to human perception, where the understanding of a scene is constructed incrementally as each object is identified and its spatial relation to others is established. 
7 Algorithm 1 Training Algorithm for cumulative occlusion learning Input: Image I with number of N layers Output: Ordered sequence of amodal masks \u03c4 = \u27e8\u02c6 M1, \u02c6 M2,..., \u02c6 MN\u27e9 Initialize CM0 to a blank mask Initialize the ordered sequence \u03c4 as an empty list for i = 1 to N do Input to model: I, CMi\u22121 Predict amodal mask \u02c6 Mi for objects in layer Li Update CMi \u2190CMi\u22121 \u222aMi (Ground Truth) Append \u02c6 Mi to the sequence \u03c4 end for Perform a final prediction \u02c6 MN+1 with I and CMN assert \u02c6 MN+1 is a blank mask return \u03c4 Training. We initiate with an empty cumulative mask (CM0) and an image I with N layers. The model proceeds iteratively, predicting the amodal mask \u02c6 Mi for each layer while updating the cumulative mask using ground truth amodal masks to ensure the accuracy of the spatial context during training. Note that the diffusion is applied solely to the amodal mask predictions, while the image I and the cumulative mask CM remain intact. This cumulative strategy enhances accuracy by incorporating occlusion context into each layer in the learning process, enhancing the model\u2019s spatial understanding. Alg. 1 shows the complete training process. Notably, we introduce a predictive step for a layer N +1, which trains the model to expect a blank mask after all object layers have been identified and segmented. This ensures that the model learns to identify the last layer with any partially-visible objects and does not continue to hallucinate fully-occluded objects behind these. Inference. Different from training, the inference phase needs to operate without available ground truth. Thus, it selects the most probable amodal mask from multiple predictions generated by the diffusion model to update the cumulative mask. Inference commences with an image I and aims to reconstruct an ordered sequence of amodal masks by layer. For each layer, a set of K diffusion-generated amodal mask predictions are evaluated to select the most representative amodal mask \u02c6 Mi for that layer. The selection criterion is based on the minimum absolute difference from each mask to the mean of non-null predictions, while ensuring spatial continuity between consecutive layers. The selected mask is then utilized to update the cumulative mask for subsequent layers\u2019 predictions. The process continues iteratively for an image I until a stopping criterion is met. The stopping criteria are established to avoid over-generation of invalid predictions when (1) reaching the maximum number of layers, or (2) all predicted masks are empty or the predicted object pixels of the selected mask are below a threshold area. Alg. 2 shows the complete inference process, where the stopping criteria Nmax and Areamin are determined by the maximum number of layers and the minimum object area present in the corresponding training data, respectively. Strategies for using ground truth or predicted cumulative mask. Our model leverages the ground truth cumulative mask as input during training, while inference uses the predicted masks from previous layers to build the cumulative mask. A common idea is to utilize the predicted cumulative mask in training, mirroring the inference setup. However, this complicates the early stages of training, when all of the predicted masks (and thus the cumulative mask) are similar to random noise. 
We conducted experiments in which we introduced controlled noise into the cumulative mask during training, to simulate the types of errors which occur during inference, but the results showed that this did not noticeably change the trained model\u2019s performance (see Sec. 5.4). Therefore, the model presented here uses the ground truth cumulative mask during training. 8 Algorithm 2 Inference Algorithm for cumulative occlusion learning Input: Image I, Maximum number of layers Nmax, Minimum object pixel area Areamin Output: Ordered sequence of amodal masks \u03c4 = \u27e8\u02c6 M1, \u02c6 M2,...\u27e9 Initialize CM0 to a blank mask Initialize the ordered sequence \u03c4 as an empty list; Initialize i = 1 while i \u2264Nmax do Generate K mask predictions { \u02c6 M1 i , \u02c6 M2 i ,..., \u02c6 MK i } Compute mean map Mi from non-null \u02c6 M j i Select \u02c6 Mi with minimum || \u02c6 Mk i \u2212Mi|| Enforce spatial integrity: if \u02c6 Mi \u2229\u02c6 Mi\u22121 = \u2205, reassign \u02c6 Mi to the same layer as \u02c6 Mi\u22121 if \u02c6 Mi is null or \u02c6 Mi.area < Areamin then Break end if Update CMi \u2190CMi\u22121 \u222a\u02c6 Mi Append \u02c6 Mi to \u03c4 i \u2190i+1 end while return \u03c4 In summary, cumulative occlusion learning allows the network to learn a robust internal representation of class-agnostic amodal object shape through occlusion and learns to recognize the depth layer ordering of objects in scenes. This approach means the model can any arbitrary number of layers of occlusions, because it automatically learns to recognise when all visible objects have been segmented. Moreover, by preserving the input image and cumulative mask unaltered during the diffusion perturbations, our model maintains the fidelity of the contextual information, which is crucial for generating accurate amodal predictions. 5 Experiments and Discussions 5.1 Datasets We focus on amodal datasets highly relevant to robotics applications. Intra-AFruit, ACOM and MUVA [1, 15] include objects such as fruits, vegetables, groceries, and everyday products, effectively simulate the kind of visual clutter and occlusion challenges encountered in industrial robotics, making them ideal for our study. We enhanced these three datasets tailored for novel sequential amodal segmentation tasks, with layer structure annotations and class-agnostic masks. The training and test images in these datasets are sourced directly from the corresponding partitions of the original dataset. All images have been downsampled to a resolution of 64 \u00d7 64 pixels for computational efficiency. To eliminate indistinguishable or misleading ground truth data, we excluded images with post-downsampling visible object areas under 10 pixels. Intra-AFruit [1] dataset contains ten classes of fruits and vegetables. We limited the original test set to a random subset of 3,000 images to enhance experimental efficiency. The reprocessed dataset includes 187,204 training and 3,000 test images, with each image potentially containing up to five layers. ACOM [1] dataset contains ten classes of common objects with synthetically generated annotations. The reprocessed dataset includes 9,378 training and 2,355 test images with up to five layers. MUVA [15] dataset contains twenty categories of supermarket items. To avoid compression distortion of non-square images, we cropped square images using the shortest edge and aligned the crop to the leftmost or centre, which follows object distribution rules to preserve more objects. 
The reprocessed dataset includes 5,582 training and 1,722 test images with up to seven layers. 9 5.2 Implementation Details We set the timestep T=1,000 with a linear noise schedule for all the diffusion models. The models were trained using the AdamW optimizer [18] at a learning rate of 0.0001 and a batch size of 256. The other hyperparameters of the diffusion models follow the implementation in [21]. All experiments were implemented using the PyTorch framework and trained/tested on one A100 GPU. Evaluation metrics. The performance of class-agnostic segmentation is generally measured by comparing predicted masks with ground truth annotations [3, 24, 29]. We adopted two commonly used metrics: intersection over union (IOU) and average precision (AP). 5.3 Architecture Analysis Number of generated amodal masks. Our proposed method enables the generation of multiple amodal masks for each object, thus enabling the capture of uncertainty and allowing for the diversity of reasonable configurations of the occluded parts without the need for diverse training annotations for each image (see Fig. 4 (a)). This is particularly useful for amodal tasks considering occluded areas, where manual annotation is very expensive and synthetic images often provide only the sole ground truth. Images GT mask Prediction 1 Prediction 2 Prediction 3 Layer 1 Layer 2 Layer 3 Layer 1 Layer 2 Ground Truth Prediction CPLM Image Layer 3 Layer 4 Ground Truth Prediction 1 Prediction 2 Prediction 3 Image Layer 1 Layer 2 Ground Truth Prediction Cumulative Mask Image Layer 3 Layer 4 Ground Truth Prediction 1 Prediction 2 Prediction 3 Image Image Layer 3 Layer 4 Ground Truth Prediction Image Layer 3 Layer 4 Ground Truth Prediction Layer 4 Ground Truth Prediction Layer 3 (a) (b) Figure 4: (a) Our approach considers the diversity of possible amodal masks, especially for occluded regions (indicated by dashed circles). (b) Example of misjudgement of the order of occluded objects in adjacent layers. Layer 3\u2019s prediction reflects Layer 4\u2019s ground truth and vice versa. This can also be a challenge for human perception. Metric Layer 1 Layer 2 Layer 3 Layer 4 Layer 5 Ensemble IOU AP IOU AP IOU AP IOU AP IOU AP k=3 57.1 57.8 44.8 45.4 28.8 30.0 12.2 14.2 1.9 3.6 k=5 56.7 57.5 44.3 44.9 28.8 29.7 12.7 14.3 2.3 3.7 k=7 56.8 57.5 44.7 45.4 29.4 30.0 12.6 14.1 2.6 3.6 k=9 56.9 57.7 44.4 45.1 29.5 30.2 12.9 14.2 2.4 3.7 Table 1: Ablation study for generating different numbers of masks during inference. While an arbitrary number of masks could be generated, we need to set a reasonable number for inference. Tab. 1 shows the performance of generating different numbers of masks for each layer during inference on the ACOM dataset, where the IOU and AP do not vary much, but the computation increases dramatically with more masks. Considering the computational efficiency, we generated 3 masks per layer in subsequent experiments. Selection of cumulative mask. The inference process could give multiple predictions for each layer, so there might be two options to update the cumulative mask for a given layer: (1) use one most plausible prediction for that layer. Here, we choose the prediction with the 10 minimum absolute difference from the mean of all predictions as the one. (2) use the mean of all predictions for that layer to form a mean mask. 
While the mean mask more explicitly takes into account all predictions, the risk is that when a prediction incorrectly gives an object that does not belong in that layer, the mean mask reacts to that as well. For example, a previous prediction showing an object in the next layer may cause the next prediction to ignore that object, because the object is already included in the given mean mask. Therefore, in the inference process, the cumulative mask employs the most representative amodal mask (with the minimum absolute difference from the mean mask) rather than directly using the mean mask of all predictions for that layer. This avoids confusion due to the simultaneous prediction of objects in different layers. Tab. 2 shows the superiority of our mask selection method over using the mean mask for occluded layers on ACOM dataset. Choice of L1 L2 L3 L4 L5 Cumulative Mask AP Mean mask 57.7 43.1 27.9 10.4 2.8 Select mask 57.8 (+0.1) 45.4 (+2.3) 30.0 (+2.1) 14.2 (+3.8) 3.6 (+0.8) Table 2: For predicting occluded objects (Layer L>1), the mask we selected is more suitable for constructing cumulative masks than using the mean mask directly. Failure analysis. A common challenge arises from errors in sequential prediction, particularly determining which of two objects is in front of the other when the overlapping region is occluded by a third object. This may lead to objects being predicted in incorrect layers, as illustrated in Fig. 4 (b). Synthetic images can amplify this challenge due to fewer spatial cues (such as height in the image plane or scene semantics) to disambiguate occluded object order. Our cumulative occlusion learning mitigates the impact of these errors by considering the cumulative mask for all preceding layers. We demonstrate the robustness of our method to such failures through noise introduction experiments in the next section. 5.4 Noise Introduction Experiment in Cumulative Mask Our model leverages the ground truth cumulative mask as input during training, while inference uses the predicted masks from previous layers to build the cumulative mask, as described in Sec. 4.2. A common idea is to utilize the predicted cumulative mask in training, mirroring the inference setup. However, this complicates the early stages of training, when all of the predicted masks (and thus the cumulative mask) are similar to random noise. To bridge the gap between training and inference, we conducted experiments in which we introduced controlled noise into the cumulative mask during training, to simulate the types of errors which occur during inference. The experiment was designed to mimic common types of inference errors, such as continuous prediction errors due to layer dependencies or over-segmentation due to boundary ambiguity. This was achieved by selectively omitting instances from a random layer in the cumulative mask while keeping the input RGB image and the prediction mask unchanged. These experiments also simulate and seek to understand the impact of sequential prediction errors on the model\u2019s performance. By introducing noise into the cumulative mask during training, we effectively create scenarios where the model must handle instances segmented into the wrong layer, as happens when the model makes sequential prediction errors. 11 Specifically, instances from a randomly chosen layer (excluding the fully visible layer) are excluded from the cumulative mask. 
Mathematically, selecting a random layer index irand from [2, n], the perturbed version of the cumulative mask, denoted as P, is derived by: P = CM \u2212Mirand (12) Where CM is the original cumulative mask, and Mi is the ground truth mask of the ith layer instance (i \u2208[2,n]). The subtraction here is a pixel-wise binary operation. During training, the model will replace CM with P as input at a specified noise level ratio. Noise 0% 5% 10% 15% 20% 0% 5% 10% 15% 20% Layer AP IOU 1 57.8 51.7 56.6 56.0 57.6 57.1 50.3 55.8 55.3 56.9 2 45.4 37.5 44.1 40.2 40.3 44.8 35.5 43.2 38.8 39.2 3 30.0 24.6 28.0 24.9 23.5 28.8 21.9 26.8 22.4 20.8 4 14.2 10.7 12.1 10.3 9.2 12.2 7.9 10.3 8.0 6.5 5 3.6 3.3 3.4 3.2 2.9 1.9 1.9 2.2 1.7 1.0 Table 3: Comparison at different noise levels, evaluated with AP and IOU. Noise-free training results in the highest AP across the layers, and the highest IOU for the first four layers and the second highest for the fifth layer. Tab. 3 illustrates the model\u2019s performance in terms of AP and IOU across different layers and noise levels. It was observed that the highest AP was achieved with 0% noise for all layers. Similar to AP, the IOU results also showed that the highest performance was generally observed with 0% noise, except for the 5th layer, where a slight increase was noted at 10% noise level. Overall, this suggests that adding noise in training has very limited benefit. On the contrary, training without noise achieves the best performance in terms of AP or IOU in the vast majority of cases. The results of the experiment provide insight into the model\u2019s robustness to errors in the sequential segmentation process and validate the effectiveness of our cumulative occlusion learning approach. By focusing on the cumulative mask for all preceding layers, our approach avoids the cascading effects of sequential prediction errors, ensuring more reliable performance even in complex occlusion scenarios. Despite the theoretical appeal of mimicking inference conditions during training, the results indicate that using ground truth cumulative masks remains the more effective approach. This strategy consistently yielded superior results across most metrics and layers, showing its suitability to our model training process. Based on these findings, our training strategy uses the ground truth cumulative masks. 5.5 Comparisons with Other Methods We benchmark against DIS [34], a leading diffusion-based segmentation method. For comparison, we trained distinct DIS models for each layer under the same iterations and evaluated the segmentation results separately for each layer. Tab. 4 comprehensively compares our method and the improved DIS across different layers on three amodal datasets. The performance of the MUVA dataset after five layers is omitted because the performance of both models approaches zero. 
The superiority of our method is particularly evident in deeper layers, where our method maintains reasonable performances, whereas DIS shows a marked 12 Layer 1 2 3 4 5 Dataset Method IOU / AP IOU / AP IOU / AP IOU / AP IOU / AP Intra-AFruit DIS 89.5 / 90.7 81.6 / 82.6 52.4 / 52.6 9.8 / 12.4 0.5 / 2.0 Ours 94.3 / 94.7 87.4 / 88.2 76.2 / 77.3 26.7 / 27.6 7.2 / 7.4 ACOM DIS 31.6 / 34.8 26.6 / 28.7 1.6 / 10.2 0.2 / 6.0 0.1 / 2.5 Ours 57.1 / 57.8 44.8 / 45.4 28.8 / 30.0 12.2 / 14.2 1.9 / 3.6 MUVA DIS 68.2 / 71.5 19.3 / 27.3 0.1 / 8.6 0.2 / 3.4 0 / 0.5 Ours 77.0 / 79.3 48.7 / 51.2 25.4 / 27.8 8.5 / 9.9 1.0 / 1.1 Table 4: Comparison with a diffusion-based segmentation model [34] without cumulative occlusion learning. Our method exhibits great improvement in complex, deeper-layer scenes. Dataset Intra-AFruit ACOM MUVA Method Supervision Framework AP w/ Layer AP w/o Layer AP w/ Layer AP w/o Layer AP w/ Layer AP w/o Layer PointRend Supervised CNN-based N/A 70.9 N/A 22.0 N/A 38.9 AISFormer Supervised Transformer-based N/A 70.4 N/A 34.9 N/A 49.7 PLIn Weakly supervised CNN-based 42.2 78.9 3.9 17.0 16.3 47.3 Ours Supervised Diffusion-based 84.6 92.6 45.4 65.5 53.1 55.7 Table 5: Comparison with category-specific segmentation models. PointRend [13], AISFormer [32] and PLIn [1] are trained on category-specific data, whereas our models are trained using class-agnostic data. We evaluate the models by focusing solely on the segmentation quality, disregarding any category information. decline, especially in the MUVA dataset. These results highlight the robustness of cumulative occlusion learning in handling layered occlusions across various datasets, particularly in more complex scenarios involving multiple layers of object occlusion. Due to the lack of class-agnostic amodal segmentation methods with layer perception, we compare against category-specific methods like PLIn for amodal segmentation with occlusion layer prediction [1], AISFormer for amodal segmentation without layer perception [32], and PointRend for modal segmentation [13]. We trained these comparison models using category-labelled amodal masks to meet their requirement for category-specific learning, while our model is trained on data without category labels. For evaluation, we ignore category label accuracy for the comparison models, reporting only segmentation accuracy. We present the AP results considering two scenarios in Tab. 5: with layer prediction, where segmentation precision is contingent on correct layer assignment, and without layer prediction, where segmentation is recognized irrespective of layer placement. Despite being trained on class-agnostic data, our method surpasses category-specific models trained on category-labelled data. Furthermore, Fig. 5 visually demonstrates our method\u2019s superiority in amodal mask segmentation. Our approach provides plausible masks even for heavilyoccluded objects, showcasing its enhanced segmentation capability in complex scenes involving multiple layers of object occlusion. We provide more visualisations of our model\u2019s predictions for the Intra-AFruit [1], MUVA [15] (Fig. 7), (Fig. 6) and ACOM [1] (Fig. 8) test sets. As we can see from the figures, our model performs robustly with different objects and different levels of occlusion. 
13 Image Ground Truth (a) Ours (b) DIS (c) CIMD (d) PLIn (e) PointRend Figure 5: Comparison of predictions on Intra-AFruit (top) and MUVA (bottom) test image by (b) DIS [34] (c) CIMD [25] (d) PLIn [1] (e) PointRend [13] and (a) ours, where (b) and (c) are diffusion-based methods. Dashed circles indicate objects that missed being predicted. Others fail to segment objects or provide less plausible amodal masks compared to ours. Layer 1 Layer 2 Layer 3 Layer 4 Cumulative Mask Ground Truth Cumulative Mask Amodal Mask Amodal Mask Prediction Prediction Ground Truth Image (a) Cumulative Mask Ground Truth Cumulative Mask Amodal Mask Amodal Mask Prediction Prediction Ground Truth Image (b) Layer 5 Figure 6: Visualisation of the prediction of our model on the Intra-AFruit [1] test set. Each layer\u2019s amodal mask synthesis receives the cumulative mask of the previous layers as input, thus providing a spatial context for the prediction and helping to segment the remaining occluded objects better. We can see that our model can predict amodal masks and occlusion layers well for multiple objects in a given image. 14 Layer 1 Layer 2 Layer 3 Layer 4 Cumulative Mask Ground Truth Cumulative Mask Amodal Mask Amodal Mask Prediction Prediction Ground Truth Image (a) Cumulative Mask Ground Truth Cumulative Mask Amodal Mask Amodal Mask Prediction Prediction Ground Truth Image (b) Figure 7: Visualisation of the prediction of our model on the MUVA [15] test set. Layer 1 Layer 2 Layer 3 Cumulative Mask Ground Truth Cumulative Mask Amodal Mask Amodal Mask Prediction Prediction Ground Truth Image (a) Cumulative Mask Ground Truth Cumulative Mask Amodal Mask Amodal Mask Prediction Prediction Ground Truth Image (b) Figure 8: Visualisation of the prediction of our model on the ACOM [1] test set. 15 6 Conclusion The task of sequential amodal segmentation is essential for understanding complex visual scenes where objects are frequently occluded. Our proposed method, leveraging cumulative occlusion learning with mask generation based on diffusion models, allows robust occlusion perception and amodal object segmentation over unknown object classes and arbitrary numbers of occlusion layers. We demonstrate in three publicly-available amodal datasets that the proposed method outperforms other layer-perception amodal segmentation and diffusion segmentation methods while producing reasonably diverse results. Future work will aim to augment efficiency and maintain output quality through super-resolution techniques and learned compression methods like VAEs. These advances will optimize our downsampling strategy, enabling a more efficient application to high-resolution datasets."
16
+ }
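The layer ordering (Eqs. 1-2) and cumulative mask (Eq. 3) described in this entry's introduction can be made concrete with a short sketch. The occlusion matrix Z, the boolean mask array and every function name below are assumptions chosen for illustration, not the paper's code.

import numpy as np

def layer_orders(Z: np.ndarray) -> np.ndarray:
    # L[i] = 1 if nothing occludes object i, otherwise 1 + max layer of its occluders (Eqs. 1-2).
    # Assumes the occlusion relation is acyclic, as in the paper's layer definition.
    n = Z.shape[0]
    L = np.zeros(n, dtype=int)  # 0 means "not yet computed"

    def order(i: int) -> int:
        if L[i] == 0:
            occluders = np.flatnonzero(Z[i])
            L[i] = 1 if occluders.size == 0 else 1 + max(order(j) for j in occluders)
        return L[i]

    for i in range(n):
        order(i)
    return L

def cumulative_mask(masks: np.ndarray, L: np.ndarray, layer: int) -> np.ndarray:
    # Union of all amodal masks strictly in front of the given layer (Eq. 3).
    front = masks[L < layer]
    return front.any(axis=0) if len(front) else np.zeros(masks.shape[1:], dtype=bool)

# Toy example: object 0 is unoccluded, object 1 is occluded by 0, object 2 by both 0 and 1.
Z = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 1, 0]])
masks = np.zeros((3, 4, 4), dtype=bool)
masks[0, :2] = True
masks[1, 1:3] = True
masks[2, 2:] = True
L = layer_orders(Z)
print("layer orders:", L)                                                           # [1 2 3]
print("pixels already covered in front of layer 3:", cumulative_mask(masks, L, 3).sum())  # 12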
intro_8K/test_introduction_short_2405.05846v1.json ADDED
@@ -0,0 +1,17 @@
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05846v1",
3
+ "title": "Could It Be Generated? Towards Practical Analysis of Memorization in Text-To-Image Diffusion Models",
4
+ "abstract": "The past few years have witnessed substantial advancement in text-guided\nimage generation powered by diffusion models. However, it was shown that\ntext-to-image diffusion models are vulnerable to training image memorization,\nraising concerns on copyright infringement and privacy invasion. In this work,\nwe perform practical analysis of memorization in text-to-image diffusion\nmodels. Targeting a set of images to protect, we conduct quantitive analysis on\nthem without need to collect any prompts. Specifically, we first formally\ndefine the memorization of image and identify three necessary conditions of\nmemorization, respectively similarity, existence and probability. We then\nreveal the correlation between the model's prediction error and image\nreplication. Based on the correlation, we propose to utilize inversion\ntechniques to verify the safety of target images against memorization and\nmeasure the extent to which they are memorized. Model developers can utilize\nour analysis method to discover memorized images or reliably claim safety\nagainst memorization. Extensive experiments on the Stable Diffusion, a popular\nopen-source text-to-image diffusion model, demonstrate the effectiveness of our\nanalysis method.",
5
+ "authors": "Zhe Ma, Xuhong Zhang, Qingming Li, Tianyu Du, Wenzhi Chen, Zonghui Wang, Shouling Ji",
6
+ "published": "2024-05-09",
7
+ "updated": "2024-05-09",
8
+ "primary_cat": "cs.CR",
9
+ "cats": [
10
+ "cs.CR",
11
+ "cs.CV"
12
+ ],
13
+ "label": "Original Paper",
14
+ "paper_cat": "Diffusion AND Model",
15
+ "gt": "Diffusion probabilistic models [14, 39] have shown impressive ca- pability in the generation of images [31, 32], videos [7], 3D point cloud [24], etc. These techniques lay the foundation for commercial systems or communities such as Stable Diffusion [32], Midjour- ney [4], DALL\u00b7E 2/3 [3, 31] and Imagen [33], which have attracted millions of active users. The popularity of diffusion models can be attributed to the hierarchical denoising procedure, which offers high stability when trained on billions of data [36] and scalability to multimodal conditional generation. The large-scale dataset used to train the state-of-the-art text- to-image generation models, e.g., the open-source image-caption dataset LAION-5B [36], are widely acknowledged to contain content that will raise concerns about copyright and privacy. For example, as reported, LAION-5B could refer to photographers\u2019 work without authorization [12] and private medical photographs were also found therein [1]. With the uncurated data for training, diffusion models are likely to generate content that infringes the copyright of creators or exposes private information. Caption: Mothers influence on her young hippo Training Image Generated Images Caption: Emma Watson to play Belle in Disney's <i>Beauty and the Beast</i> Figure 1: Examples of memorized images in Stable Diffusion. The right four random samples are all the same as the corre- sponding training image in the first column. In this work, we focus on the problem of memorization in text- to-image diffusion models, a worst case of training data misuse. Memorization in text-to-image diffusion models is a failure of gen- eration that, when input with certain prompt but different random seeds, a model always rigidly generates the same data as those in its training set. This type of generation is regarded as failed because a probabilistic generative model is supposed to generate novel and diversified images. Figure 1 illustrates two examples of memorization in Stable Diffusion. Memorization in text-to-image diffusion models is not only a technical problem analogous to mode collapse as Generative Adversarial Networks (GAN) [6], but also a prejudice to the interests of image owners. In terms of copyright protection, even the model developers are authorized to train their model with copyrighted images, the image owners will never expect their images to be replicated to arbitrary users as this would cause arXiv:2405.05846v1 [cs.CR] 9 May 2024 indisciplinable dissemination. In past years, text-to-image models have been facing lawsuits for generating derivative images that mimic the style of artists. However, compared to derivative gener- ations whose legality is still in pending [35], exact replication of copyrighted images is undisputedly intolerable. For privacy preser- vation, a series of works [16, 27] have proposed to use synthetic data in place of real data to prevent sharing of private information. For this goal, potential memorization should also be carefully circum- vented. The existence of memorization in text-to-image models was first demonstrated by Carlini et al. [8] and Somepalli et al. [40, 41]. They studied the most popular open-source text-to-image diffusion model Stable Diffusion [32] and discovered prompts that trigger the model to generate training images. Although text-to-image diffusion models are found to be vulner- able to memorization, a practical analysis method is still a challeng- ing problem. 
First of all, existing analysis methods [8, 40, 41, 46] are all prompt-based: They first generate massive candidate images using captions from the original training set and then detect risky generations of low diversity [8], search for generated images highly similar to training images [40, 41] or detect prompts with high prediction errors [46]. The prompt-based analysis methods are un- able to determine whether an arbitrary image is memorized or not. Actually they are unaware of which images might be memorized only after memorization has been discovered. Besides, for the other images whose training captions seem not trigger memorization phenomena, their safety against memorization is still uncertain and hard to be analyzed by existing methods, because it is impossible to exhaustively test all prompts. To this end, a practical analysis method is expected to be image-based rather than prompt-based. Second, a practical analysis method requires quantitative measure- ment of memorization. Previous works focus on the discovery of memorized images and lack accurate description of memorization for each instance. Quantitative measurement of memorization not only provides strong evidence for the security risks of memorized images, but allows model developers to responsibly claim safety for normal images to their owners. To cope with the challenges, we consider a practical scenario where the model developers predefine a target set of copyrighted or privacy-preserving images. They aim to perform a security anal- ysis on the target images to decide whether they are memorized by the model and to quantify the extent to which they are memo- rized. Based on the analysis, developers are able to claim the safety against memorization for the target images to their data providers, or discover memorized images in advance and fix the vulnerability. To perform the security analysis, we first formally define image memorization in diffusion models and identify three conditions to say an image is memorized, named similarity, existence and probability. The similarity condition means that generated images should be exactly alike a target image. As mentioned before, this condition reflects the worst case misuse of training data and poses a significant security threat. Instead of calculating the similarity between generated images and target images, we utilize the model\u2019s prediction error as a metric to recognize image replications. This metric is as effective as previous metrics in recognition of image replication. It also enables us to invert the model to find inputs that cause replication, based on which we conduct analysis for the other two conditions. The existence condition requires that there exist a prompt to trigger the replication of a target image. We propose a prompt inversion algorithm to analyze this condition and verify by contradiction the existence of such prompt. The probability condition is fulfilled when a target image are frequently replicated at sampling time. We propose to measure the condition by comparing model\u2019s prediction error on the target image to those of a safe model. If the target image would be replicated with high probability, a significant distribution shift away from the error distribution of the safe model can be observed. We verify by contradiction that the unconditional diffusion models trained on large-scale data are safe from memorization and thus utilized as the safe model. 
We conduct comprehensive experiments on Stable Diffusion to demonstrate the effectiveness of our analysis method. In summary, we make the following contributions in this paper: \u2022 We perform a more practical analysis on the memorization in text-to-image diffusion models. Our analysis method is image-based and does not need to collect massive prompts, which is more reliable than prompt-based analysis methods. \u2022 We provide a formal definition of memorization in text-to- image diffusion models and identify three conditions of it. We then propose effective metrics and algorithms to measure each condition and ultimately quantify the extent to which the target images are memorized. \u2022 We demonstrate the viability of our analysis method through detailed experiments on Stable Diffusion, which reveals the intrinsic properties of memorization in text-to-image diffu- sion models.",
+ "main_content": "Diffusion probabilistic models [14, 39] are a class of latent variable models consisting of a hierarchy of denoising autoencoders. The encoder is not learned but replaced by a manually designed diffusion process. Given an input image $x_0$ (see footnote 1) and a total of $T$ steps, the diffusion process is modeled as a Markov chain that gradually adds Gaussian noises $\epsilon_{0:T-1}$ to the input image $x_0$ according to a weight schedule $\alpha_{1:T}$: $$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \qquad q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{\alpha_t}\, x_{t-1},\ (1-\alpha_t) I\big), \qquad q(x_t \mid x_0) = \mathcal{N}\big(x_t;\ \sqrt{\bar{\alpha}_t}\, x_0,\ (1-\bar{\alpha}_t) I\big), \qquad \bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i. \quad (1)$$ $\bar{\alpha}_t$ gradually decreases to almost zero in the last step $T$, so that $x_T$ is close to pure Gaussian noise. The process of generating an image $x_0$ is the reverse of the diffusion process and also a Markov chain, starting at $x_T \sim \mathcal{N}(0, I)$: $$p(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t). \quad (2)$$ (Footnote 1: In this paper we intentionally use $x$ and $x_0$ interchangeably to denote an image; in contexts related to the diffusion process we use $x_0$, and otherwise $x$.) If the diffusion process is divided into sufficiently many steps, each reverse step $p_\theta(x_{t-1} \mid x_t)$ can be approximated by a Gaussian transformation that is trained to match the corresponding diffusion step $q(x_{t-1} \mid x_t, x_0)$. This is implemented by minimizing the following objective: $$\mathcal{L} = \mathbb{E}_{t, x_0, \epsilon_0}\big[ \lVert \epsilon_0 - \epsilon_\theta(x_t, t) \rVert_2^2 \big], \quad (3)$$ where $\epsilon_\theta$ is a neural network that predicts the added noise $\epsilon_0$ and $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon_0$. 
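To make the training objective above concrete, the following is a minimal PyTorch-style sketch of the forward noising step and the denoising loss in Equation (3). The network interface eps_model, the schedule tensor alpha_bar, and all shapes are illustrative assumptions rather than the paper's actual implementation.

```python
import torch

def ddpm_training_loss(eps_model, x0, alpha_bar):
    """One Monte Carlo estimate of Eq. (3): ||eps - eps_theta(x_t, t)||^2.

    eps_model : callable (x_t, t) -> predicted noise (hypothetical interface)
    x0        : clean images, shape (B, C, H, W)
    alpha_bar : cumulative product of alphas, shape (T,)
    """
    B = x0.shape[0]
    T = alpha_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)        # uniformly sampled timesteps
    eps = torch.randn_like(x0)                              # the Gaussian noise to be added
    a_bar = alpha_bar[t].view(B, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps    # forward diffusion q(x_t | x_0)
    eps_pred = eps_model(x_t, t)                            # network predicts the added noise
    return ((eps - eps_pred) ** 2).flatten(1).sum(dim=1).mean()
```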
After training, the vanilla sampling procedure starts with a random Gaussian noise $x_T \sim \mathcal{N}(0, I)$ and removes the predicted noise stepwise by $$x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \Big( x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t) \Big) + \sigma_t \mathcal{N}(0, I), \qquad \text{where } \sigma_t = \frac{\sqrt{(1-\alpha_t)(1-\bar{\alpha}_{t-1})}}{\sqrt{1-\bar{\alpha}_t}} \text{ when } t > 1 \text{ and } \sigma_1 = 0.$$ The vanilla sampling algorithm is extremely slow to generate an image, as it must invoke the network $\epsilon_\theta$ $T$ times (e.g., 1000 steps in Stable Diffusion). To mitigate the problem, a variety of efficient sampling algorithms have been proposed, such as the DDIM sampler [42], the PLMS sampler [22], etc. 2.2 Conditional Diffusion Model Diffusion models can be extended to conditional variants that generate images under the guidance of some input condition, e.g., an object class or a textual prompt. Text-to-image models are conditional diffusion models that allow users to input prompts indicating the desired content of the generated images. There are mainly two types of guidance, i.e., Classifier Guidance [9] and Classifier-Free Guidance [15]. Classifier Guidance additionally trains a classifier on the noisy image $x_t$ to predict its coupled condition $c$ and utilizes the gradients from the classifier to guide the sampling. Most diffusion models, like Stable Diffusion, choose Classifier-Free Guidance because it does not need to train an extra classifier. Classifier-Free Guidance implicitly trains two models, an unconditional model $\epsilon_\theta(x_t, t)$ and a conditional model $\epsilon_\theta(x_t, t, c)$. The two models share parameters, and the unconditional model is trained by randomly replacing the input condition $c$ with null (for textual conditions, the unconditional model is always given an empty string). At sampling time, the predicted noise is a linear combination of the unconditional prediction and the conditional prediction: $$\hat{\epsilon}_\theta(x_t, t, c) = \epsilon_\theta(x_t, t) + \gamma \big( \epsilon_\theta(x_t, t, c) - \epsilon_\theta(x_t, t) \big), \quad (4)$$ where a larger hyperparameter $\gamma$ results in generated images more consistent with the input condition. 2.3 Text-To-Image Diffusion Model An instance of conditional diffusion models, which we study in this work, is the text-to-image diffusion model. To obtain a semantically meaningful condition $c$, the input prompt is first tokenized and projected into a sequence of continuous token embeddings $e = [e_0, e_1, ..., e_{N-1}]$, where $N$ is the number of tokens. The token embeddings are further encoded into the condition $c$ by a pre-trained image-text model, for example CLIP [29], or a language model, for example T5 [30]. 
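As a quick illustration of the guidance combination in Equation (4) above, the classifier-free guidance step can be written as a small helper; eps_model and its calling convention are assumed for illustration only.

```python
import torch

def classifier_free_guidance(eps_model, x_t, t, cond, null_cond, gamma: float):
    """Combine conditional and unconditional noise predictions as in Eq. (4)."""
    eps_uncond = eps_model(x_t, t, null_cond)   # prediction with an empty-prompt condition
    eps_cond = eps_model(x_t, t, cond)          # prediction with the text condition
    return eps_uncond + gamma * (eps_cond - eps_uncond)
```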
Depending on the specific modeling, the condition \ud835\udc50is either incorporated into the middle layers of the noise prediction network \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc61,\ud835\udc50) via cross-attention [32, 33], or concatenated with a sequence of image tokens, modeling \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc61,\ud835\udc50) autoregressively as a single stream [31]. (a) Exactly alike generation (b) Derivative generation Figure 2: Examples of exactly alike generation and derivative generation. Among the advanced text-to-image diffusion models, Stable Diffusion is open-sourced both in implementation and its training data, therefore we will utilize it for our study. To generate high-resolution images, Stable Diffusion first train an autoencoder which encodes an image \ud835\udc65into a lower-dimensional representation \ud835\udc67= E(\ud835\udc65) perceptually equivalent to the data space. The diffusion model is trained in the reduced space. At sampling time, after generating a latent \ud835\udc67\u2032, a high-resolution image \ud835\udc65\u2032 is obtained via the pre-trained decoder \ud835\udc65\u2032 = D(\ud835\udc67\u2032). 3 DEFINITION OF MEMORIZATION We first formalize the definition of memorization and then make comparisons to existing ones: Definition. A training sample \ud835\udc65is memorized if, at sampling time, there exists a prompt, under whose guidance the model will generate samples that are exactly alike \ud835\udc65with a significant probability. Exactly alike does not mean verbatim same or byte-by-byte match in the file system. It is still on the perception level but excludes even a minor transformation such as change in view point and component recombination. Exactly alike training sample \ud835\udc65, existence of a prompt and significant probability are three conditions to say a training sample is memorized. For brevity, we call them the similarity, existence and probability conditions. Existing works cover the three conditions to varying degrees. Carlini et al. [8] provide a strict definition of memorization that a training image is eidetic memorized if it has at most \ud835\udc58instances in the training set and is extractable from the model via some prompts. We both count it as memorization if the generated samples are exactly alike or eidetic to training ones (Figure 2a). Other works [40, 41, 46, 48] do not give a formal definition and discuss a wider scope of memorization in the form of derivative generation, such as partial copy and style-like copy (Figure 2b). Restricting memorization to the most extreme case \"exactly alike\" has several advantages over a wider scope. First, lawsuits against derivative actions in image generation models are still in very early stages [35]. It takes time to render decisions on its legality. In contrast, \"exactly alike\" memorization is by no means allowed if the related images are copyrighted or private. Second, from a technical perspective, diffusion models are inherently trained to replicate training samples pixel by pixel, as in Equation 4. Therefore, \"exactly alike\" memorization is not only defined at the problem level, but also possible to find evidence in the model itself. This allows us to utilize the internal statistics of the model to measure its memorization problem, rather than relying on external models to match training images and generate images, which is less reliable due to potential risks such as adversarial attack [49]. 
The existence condition is not a concern for previous works as they analyze memorization in a prompt-based way such that the condition is always satisfied. For our image-based analysis, the condition is important to be able to expose realistic risks, as discussed later. As for the probability condition, Carlini et al. do not involve the probability condition explicitly in the definition but in their membership inference attack designed to detect abnormal prompts, which motivates us in our definition. Other works [40, 41, 46, 48] do not place an emphasis on probability. The probability condition is critical for analyzing memorization; as we will show later, any samples can be extracted from diffusion models, but not all are memorized. 4 RECOGNIZING IMAGE REPLICATION We begin the measurement of memorization in diffusion models with a preliminary investigation on the recognition of image replication, which aims to decide the condition that a generated image \ud835\udc65\u2032 replicates the target image \ud835\udc650 (the similarity condition). Effective recognition is the basis for further measurement. Existing works adopted a \"tiled\" \ud835\udc592 distance [8] or SSCD [40, 41] (a pre-trained model for copy detection) representations to calculate the similarity between \ud835\udc65\u2032 and \ud835\udc650. Wen et al. [48]\u2019s metric was designed to detect abnormal prompts and could not be used to identify a replication of \ud835\udc650. Nevertheless, to have an in-depth understanding of training image replication and accurate recognition, a more intrinsic and informative metric is necessary. 4.1 Methodology Suppose that the input prompt is represented as \ud835\udf11(\ud835\udc52), where \ud835\udc52= [\ud835\udc520,\ud835\udc521, ...,\ud835\udc52\ud835\udc41\u22121] is a sequence of token embeddings and \ud835\udf11is a text encoder. To generate an image, a random Gaussian noise \ud835\udf160 \u223cN (0, \ud835\udc3c) is sampled and follows an iterative denoising process as introduced in Section 2.1. Besides the initial noise \ud835\udf160, the vanilla sampling algorithm of diffusion models adds a different Gaussian noise at each step. Therefore, the generated image is determined by an array of noises. However, in practice more efficient samplers are utilized, e.g., DDIM sampler [42] and PLMS sampler [22], which only sample once at the beginning and then follow a deterministic denoising process. If the same initial noise is used, then the generated image will be exactly the same. We adopt DDIM sampler [42] in our experiments, therefore only consider the initial noise. To recognize whether a noise-prompt pair (\ud835\udf160,\ud835\udc52) can replicate the target image \ud835\udc650, we find it strongly correlated with the model\u2019s prediction error when we utilize \ud835\udf160 to blur \ud835\udc670 = E(\ud835\udc650). 
Instead of the default $\epsilon_0$-prediction error, we consider a more direct and effective $z_0$-prediction error: $$\mathcal{L}(x_0, \epsilon_0, e) = \mathbb{E}_t \big[ \lVert z_0 - z_\theta(z_t, t, \varphi(e)) \rVert_2^2 \big] = \mathbb{E}_t \Big[ \Big\lVert z_0 - \frac{z_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_\theta(z_t, t, \varphi(e))}{\sqrt{\bar{\alpha}_t}} \Big\rVert_2^2 \Big] = \mathbb{E}_t \Big[ \frac{1-\bar{\alpha}_t}{\bar{\alpha}_t} \lVert \epsilon_0 - \epsilon_\theta(z_t, t, \varphi(e)) \rVert_2^2 \Big], \quad (5)$$ where $z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon_0$. The $z_0$-prediction error is equivalent to a reweighted $\epsilon_0$-prediction error. The weight term $\frac{1-\bar{\alpha}_t}{\bar{\alpha}_t}$ increases with larger $t$, which favors more accurate predictions in earlier sampling steps (later steps in the diffusion process correspond to earlier steps in the generation process). The intuition is that if the diffusion model can accurately predict $z_0$ out of the $\epsilon_0$-blurred $z_t$ at all steps (especially early sampling steps), then the sampling trace starting at $\epsilon_0$ will head towards $z_0$ and finally generate $x_0 = \mathcal{D}(z_0)$. Note that $\mathcal{L}(x_0, \epsilon_0, e)$ only performs single-point detection (a single noise $\epsilon_0$ and a single prompt $e$) and cannot be readily used to analyze memorization. Aligning the starting point. In Stable Diffusion, the timestep schedule is discrete over a range of 1000 steps. The noisy image $z_T = \sqrt{\bar{\alpha}_T}\, z_0 + \sqrt{1-\bar{\alpha}_T}\, \epsilon_0$ at the last step differs only slightly from the Gaussian noise $\epsilon_0$, with a Signal-to-Noise Ratio (SNR) of 0.0047. However, we have found that this minor difference can exert significant influence over the generation results, i.e., the images generated from $z_T$ and from $\epsilon_0$ can differ. The gap between $z_T$ and $\epsilon_0$ is not constrained during diffusion model training; thus the behavior of generation from $\epsilon_0$ cannot be fully captured by the related loss function. To eliminate the inconsistency, we generate using $z_T = \sqrt{\bar{\alpha}_T}\, z_0 + \sqrt{1-\bar{\alpha}_T}\, \epsilon_0$, a practice also used in image editing works [26]. This is equivalent to sampling from a biased Gaussian distribution $\mathcal{N}(\sqrt{\bar{\alpha}_T}\, z_0, (1-\bar{\alpha}_T) I)$. 4.2 Experiment Setup The correlation between our proposed metric $\mathcal{L}(x_0, \epsilon_0, e)$ and the replication of $x_0$ through $(z_T, e)$ can be verified through a pair of bidirectional experiments. 4.2.1 The Ability of $\mathcal{L}(x_0, \epsilon_0, e)$ to Recognize Replication. 
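Before turning to the experiments, here is a minimal sketch of how the metric in Equation (5) can be estimated by Monte Carlo over a subset of timesteps. The latent-space noise predictor eps_model(z_t, t, cond) and the schedule alpha_bar are assumed names, not the paper's code.

```python
import torch

@torch.no_grad()
def z0_prediction_error(eps_model, z0, eps0, cond, alpha_bar, num_steps=50):
    """Monte Carlo estimate of Eq. (5): the reweighted eps-prediction error.

    z0   : latent of the target image, shape (C, H, W)
    eps0 : candidate initial noise, same shape as z0
    cond : encoded prompt phi(e)
    """
    T = alpha_bar.shape[0]
    ts = torch.linspace(0, T - 1, num_steps).long()           # uniformly sampled timesteps
    err = 0.0
    for t in ts:
        a_bar = alpha_bar[t]
        z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps0  # eps0-blurred latent
        eps_pred = eps_model(z_t[None], t[None], cond)[0]
        weight = (1.0 - a_bar) / a_bar                         # favors accurate early sampling steps
        err = err + weight * ((eps0 - eps_pred) ** 2).sum()
    return err / len(ts)
```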
This experiment evaluates whether, given a realistic dataset $\{(x_0^i, \epsilon_0^i, e^i, y^i)\}_{i=1}^{M}$, where $y^i = 1$ indicates replication and $y^i = 0$ otherwise, $\mathcal{L}(x_0, \epsilon_0, e)$ is able to accurately recognize replications. We use Stable Diffusion V1.4 for evaluation. To build the dataset, we collect a set of 78 memorized image-prompt pairs found by Webster [46]. Each image is augmented with an additional BLIP-generated [20] prompt. The BLIP-generated prompts provide adequate non-replication samples. This results in 156 image-prompt pairs. For each pair, we randomly sample 50 different Gaussian noises and then manually annotate $y^i$ for each sample $(x_0^i, \epsilon_0^i, e^i)$. Finally, we build a dataset consisting of 7800 samples, in which replication occurs in 3645 samples. An accurate estimation of $\mathcal{L}(x_0, \epsilon_0, e)$ requires a traversal of 1000 steps for Stable Diffusion. For efficiency, we uniformly sample 50 steps. Following Wen et al. [48], the detection performance is measured by the Area Under Curve (AUC) of the Receiver Operating Characteristic (ROC) and the True Positive Rate at a False Positive Rate of 1% (TPR@1%FPR). Table 1: Recognition results of replication (AUC / TPR@1%FPR). Sample level: Tiled $\ell_2$ [8] 0.999 / 0.973; SSCD [40, 41] 1.000 / 0.999; $z_0$-prediction error 0.999 / 0.986. Image level: Tiled $\ell_2$ [8] 1.000 / 1.000; SSCD [40, 41] 1.000 / 1.000; $z_0$-prediction error 1.000 / 0.999. [Figure 3: The $z_0$-prediction error over timesteps (left) and its distribution (right) for replication and normal samples, at (a) image level and (b) sample level; (a) presents the error distribution of one example image.] 4.2.2 The Ability of $\mathcal{L}(x_0, \epsilon_0, e)$ to Generate Replication. The effectiveness of $\mathcal{L}(x_0, \epsilon_0, e)$ can also be presented in reverse: it can be shown that a small value of $\mathcal{L}(x_0, \epsilon_0, e)$ is sufficient for generating replications. We study this effect in a tough setting. For the unmemorized (normal) images from LAION-Aesthetics V2 6.5+, a subset of Stable Diffusion's training set with predicted aesthetics scores no less than 6.5, it is generally of low probability to sample an $\epsilon_0 \sim \mathcal{N}(0, I)$ that replicates $x_0$ [40]. However, we are able to invert a feasible $\epsilon_0^*$ that replicates the original $x_0$ by minimizing $\mathcal{L}(x_0, \epsilon_0, e)$: $$\epsilon_0^* = \arg\min_{\epsilon_0} \mathcal{L}(x_0, \epsilon_0, e). \quad (6)$$ The ability of $\mathcal{L}(x_0, \epsilon_0, e)$ to trigger such a rare event yields strong evidence for its correlation with replication. For all experiments, we use the Adam optimizer with an initial learning rate of 0.1 and without weight decay. We use a batch size of 32 (timesteps) and train for a total of 1K iterations. 4.3 Results 4.3.1 The Ability of $\mathcal{L}(x_0, \epsilon_0, e)$ to Recognize Replication. 
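A minimal sketch of the noise inversion in Equation (6) follows, keeping gradients so that Adam can update the initial noise. The hyperparameters mirror the ones reported in the text (learning rate 0.1, no weight decay, 1K iterations), while the loss interface and all names are assumptions rather than the paper's implementation.

```python
import torch

def invert_noise(loss_fn, z0, cond, steps=1000, lr=0.1):
    """Eq. (6): find an initial noise eps0* whose sampling trace replicates the target latent z0.

    loss_fn : callable (z0, eps0, cond) -> differentiable scalar, e.g. a batched
              estimate of the z0-prediction error of Eq. (5)
    """
    eps0 = torch.randn_like(z0, requires_grad=True)
    opt = torch.optim.Adam([eps0], lr=lr, weight_decay=0.0)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(z0, eps0, cond)   # Monte Carlo estimate over a batch of timesteps
        loss.backward()
        opt.step()
    return eps0.detach()
```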
The performance is evaluated in sample-level and image-level. The samplelevel takes the 7800 samples all together for evaluation. The Imagelevel evalution calculates AUC and TPR@1%FPR respectively for each image and average them. Table 1 presents the recognition results. All the metrics achieve almost perfect performance. Figure 3 shows the distribution of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) for replication samples and normal ones. For each individual sample, there is a clear margin between replication and normal samples across most timesteps (Figure 3a), particularly in later steps. While sample-level distribution shows a large overlap between replication and normal samples (Figure 3b). This indicates that there is not a universal criterion for recognizing replication for all images. What\u2019s more, the normal samples present \ud835\udc670-prediction error with a larger variance (Figure 3 right), which indicates that the normally generated images are more diversified than the memorized generations. 4.3.2 The Ability of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) to Generate Replication. We invert the initial noise \ud835\udf160 for each image with different input prompts, including their training caption, a BLIP-generate caption and an empty string. As shown in Figure 4, for either memorized training images or randomly sampled normal images, for either original training captions, BLIP-genearted new captions or empty captions, minimizing L(\ud835\udc650,\ud835\udf160,\ud835\udc52) produces successful inversion of the input noise \ud835\udf160 that leads to replication of \ud835\udc650. It demonstrates that L(\ud835\udc650,\ud835\udf160,\ud835\udc52) is a strong indicator for training image replication. Compared to normal images, the inversion for memorized images presents relatively more authentic reconstruction, which indicates that memorized images are easier to replicate. Condition 1: similarity. The \ud835\udc670-prediction error meets the similarity condition. We directly utilize internal prediction errors of diffusion models as an indicator of the similarity between the generated image and target image. We believe that based on the model\u2019s own function for comparison is more reliable than using a coarse metric [8] or external independently trained models [40, 41]. 5 TRIGGER THE MEMORIZATION Recognizing image replication works after the deployment of diffusion models to prevent possible leakage of training images. The developers of an image generation model also have strong motivation to perform safety analysis on a target set of sensitive images during development of their model. This acts as a proactive defense against memorization. The main goal of the safety analysis against memorization is to determine whether the target images are memorized and to measure the extent to which they are memorized. As a straightforward approach, searching for prompts that are prone to generate the target images is not feasible for safety measurement because it is random and laborious. Instead, we propose an inversion-based analysis without the need to access any prompts. The safety analysis against memorization is accomplished in two steps. First, for each target image, we attempt to invert an input prompt that triggers the model\u2019s memorization behavior on it. We verify by contradiction that if an image is safe, then it is impossible to invert a prompt that triggers its memorization. 
Second, we perform an analysis on the unconditional diffusion model and find that the unconditional diffusion model trained on Training Caption: Mothers influence on her young hippo. BLIP Caption: There are two hippos standing next to each other near a body of water. Inverse Images Training Caption: A girl reading poster by Johann Georg Meyer. BLIP Caption: Painting of a girl reading a book in a corner of a room. Inverse Images Training Caption: Emma Watson to play Belle in Disney's <i>Beauty and the Beast</i>. BLIP Caption: A close up of a woman with a black shirt and tie. Training Caption: Spring Thaw Charles White. BLIP Caption: Painting of a stream in a snowy forest with trees and snow. Training Image Training Image Figure 4: The results of noise inversion for memorized images (left) and normal images (right). In each block, the leftmost image is the training image. The right three are generated images using inverted noises, along with training caption, BLIP-generated caption and an empty string \"\". Overall, each image can be sucessfully inverted by minimizing the \ud835\udc670-prediction error, while memorized images are easier to invert and of higher fidelity. large-scale data is safe from memorization. It thus serves as a guard for measuring the safety of the conditional text-to-image model. In this section, we elaborate how to trigger the memorization of an image. The measurement of memorization is descirbed in the next section. 5.1 Methodology To answer the question that if a target image could be memorized, we attempt to search for a prompt that triggers the generation of the target image. This can be done by minimizing the expectation of conditional prediction error with respect to the input token embeddings \ud835\udc52, \ud835\udc52\u2217= arg min \ud835\udc52 E \ud835\udf160\u223cN(0,\ud835\udc3c) [L(\ud835\udc650,\ud835\udf160,\ud835\udc52)]. (7) However, this straightforward prompt inversion causes overestimation of memorization. Indeed, we are always able to invert an optimal \ud835\udc52\u2217that reduces the prediction error of any target image \ud835\udc650 to a desired low level. As a result, the image appears to be \"memorized\". This is because the pre-trained vocabulary embeddings V only distribute as a finite number of spots in the infinite large embedding space. A valid \ud835\udc52\u2217that reflects the memorization of \ud835\udc650 should not only lead to a low level of prediction error but also be close to the manifold of vocabulary embeddings V. The condition can be fulfilled by adding a regularizer R(\ud835\udc52, V) to Equation 7, \ud835\udc52\u2217= arg min \ud835\udc52 E \ud835\udf160\u223cN(0,\ud835\udc3c) [L(\ud835\udc650,\ud835\udf160,\ud835\udc52)] + \ud835\udf06R(\ud835\udc52, V), (8) where \ud835\udf06is a hyperparameter to control the weight of regularizer. Condition 2: existence. The regularizer meets the existence condition. It works as an adversary to the expectation of conditional prediction error: A target image \ud835\udc650 is memorized if and only if the contradiction between them can be solved. If the regularized objective is not optimizable for a target image, then we can claim that the image is safe from memorization. The reliability of making (a) t-SNE. (b) \ud835\udc592-norm. Figure 5: Pre-trained token embeddings do not present a regular distribution. such a claim is estabilished on the trust in the optimizers utilized to minimize Equation 8. 
For deep neural networks, we believe that modern optimizers [18, 23] are capable of taking responsibility. It is challenging to accurately constrain the distance of token embeddings \ud835\udc52to the manifold of pre-trained vocabulary embeddings, because the pre-trained vocabulary embeddings do not present a regular distribution, as shown in Figure 5a for CLIP (CLIP is used as the text encoder of Stable Diffusion). We devise two regularizers that constrain the \ud835\udc592-norm of optimized token embeddings \ud835\udf16\u2217. This is motivated by the observation that minimizing the prediction error without regularization for normal images typically produces token embeddings with sufficiently large \ud835\udc592-norm. Therefore, the first regularizer equals an \ud835\udc592-norm regularizer R1(\ud835\udc52, V) = \u2225\ud835\udc52\u22252 2. R1 seems irrelevant to the vocabulary V but takes advantage of the fact that pre-trained vocabulary embeddings have relatively small \ud835\udc592-norm (see Figure 5b). Another regularizer R2 adds a term to R1 that encourages the learned token embeddings to be as close to any of the pre-trained vocabulary embeddings as possible, R2(\ud835\udc52, V) = \u2225\ud835\udc52\u22252 2 + 1 \ud835\udc41 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=0 H (\ud835\udc52\ud835\udc56, V), (9) where H (\ud835\udc52\ud835\udc56, V) is the entropy calculated over the probabilistic distribution on the inner-product distance between \ud835\udc56-th token and the vocabulary. This regularizer enables to search for realistic hard prompts. 5.2 Experiment Setup We use the 78 memorized images and 100 randomly sampled normal images from LAION as the target image set. For all experiments, we do not access training captions of the target images. We use the Adam optimizer with an initial learning rate of 0.01 without decay. The \ud835\udc592-norm regularization is implemented by Adam\u2019s inner weight decay. \ud835\udf06is set to 0.01. We use a batch size of 16 and optimize for a total of 500 iterations. Each image is resized and center cropped to 512 \u00d7 512 without augmentations. 5.3 Results Note that a prompt \ud835\udc52is composed of \ud835\udc41token embeddings, each of which represents a token. Stable Diffusion\u2019s text encoder by default uses a maximum length of 77 tokens, in which the first and last tokens are padded tokens indicating the start and end of a prompt. The rest 75 tokens are free to optimize. Figure 6: Examples of generated images using optimized token embeddings. 4 images are randomly generated for each optimization and all examples present the problem of memorization. The last row exhibits an example of partial memorization, where not all generations collapse to the same image. Through adjusting the number of tokens to optimize from 1 to 75, we find that out of the 78 memorized images discovered by Webster [46], the memorization of 66 images can be triggered by optimizing only 1 token, 2 images can be triggered by optimizing 2 tokens, the other 10 images are only partially memorized images, no matter how many tokens are optimized, as illustrated in Figure 6. In contrast, the memorization of normal images cannot be triggered with regularization. Figure 7 shows training statistics of memorized images and normal images, it can be seen that the prediction error and regularization term can be simultaneously optimized to small values for memorized images. 
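A rough sketch of the regularized prompt inversion in Equation (8) with the R2 regularizer of Equation (9) is given below. The entropy term, the loss interface and the CLIP-style embedding shapes are simplified assumptions; in particular, the paper implements the l2 penalty through Adam's weight decay, whereas the penalty is written out explicitly here.

```python
import torch
import torch.nn.functional as F

def invert_prompt(loss_fn, z0, vocab, n_tokens=1, lam=0.01, steps=500, lr=0.01):
    """Eq. (8): optimize continuous token embeddings e that trigger replication of z0.

    loss_fn : callable (z0, eps0, e) -> differentiable prediction error (Eq. 5 style)
    vocab   : pre-trained vocabulary embeddings V, shape (|V|, d)
    """
    d = vocab.shape[1]
    e = torch.randn(n_tokens, d, requires_grad=True)
    opt = torch.optim.Adam([e], lr=lr, weight_decay=0.0)
    for _ in range(steps):
        eps0 = torch.randn_like(z0)                 # fresh noise each step (expectation over eps0)
        pred_err = loss_fn(z0, eps0, e)
        # R2: small l2-norm plus low entropy of the token-to-vocabulary similarity
        sim = e @ vocab.t()                         # inner products to every vocabulary embedding
        probs = F.softmax(sim, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
        reg = (e ** 2).sum() + entropy
        loss = pred_err + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return e.detach()
```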
In contrast, for normal images, only the $\ell_2$-norm of the token embeddings is minimized, while the prediction error remains high. It demonstrates that for normal (unmemorized) images, the contradiction between reducing prediction errors and aligning the learned tokens to the pre-trained tokens is unsolvable. Therefore, for the target images to protect, if we cannot optimize token embeddings that follow the pre-trained token-embedding distribution while reducing the prediction error, then we can claim that the images are not memorized. [Figure 7: $z_0$-prediction errors and $\ell_2$-norm of token embeddings during training. Memorized images present low values for both prediction errors and $\ell_2$-norm of token embeddings at the end of training. Prompt inversion for normal images can only optimize the $\ell_2$-norm of token embeddings, while prediction errors remain high across the whole training process.] Algorithm 1: Hard prompt inversion for memorization. Input: the target image $x_0$, encoder $\mathcal{E}$, token embeddings $e = [e_0, e_1, ..., e_{N-1}]$, vocabulary embeddings $V$, weight $\lambda$, number of candidate tokens $k$, optimization steps $M$, batch size $B$, learning rate $\gamma$, diffusion timesteps $T$. Output: optimal hard prompt $\hat{t} = [\hat{t}_0, \hat{t}_1, ..., \hat{t}_{N-1}]$. (1) $z_0 = \mathcal{E}(x_0)$; $err = +\infty$. (2) For $i = 1$ to $M$: sample a batch of $B$ noises $\epsilon \sim \mathcal{N}(0, I)$ and timesteps $t \sim U(1, T)$; set $z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$; compute $g = \nabla_e \big[ \mathcal{L}(z_t, t, e) + \lambda R_2(e, V) \big]$; update $e = e - \gamma g$. (3) Sample a test set $\epsilon \sim \mathcal{N}(0, I)$, $t \sim U(1, T)$ and set $z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$. (4) For each $[t_0, t_1, ..., t_{N-1}] \in \text{top-}k(e_0 V^T) \times \text{top-}k(e_1 V^T) \times ... \times \text{top-}k(e_{N-1} V^T)$: set $e' = V(t_0, t_1, ..., t_{N-1})$; if $\mathcal{L}(z_t, t, e') < err$, update $err = \mathcal{L}(z_t, t, e')$ and $\hat{t} = [t_0, t_1, ..., t_{N-1}]$. (5) Return $\hat{t}$. [Figure 8: Results of hard prompt inversion for memorized images. Our inversion algorithm is able to invert a training image to a prompt that triggers its memorization, e.g., \"limits businesses podcast\" for the image with training caption \"The no woman limits business podcast\", and \"mothers hippo influence\" for the image captioned \"Mothers influence on her young hippo\", whereas the prompts found by AUTOPROMPT and PEZ do not trigger memorization.] 
Existing hard prompt tuning methods AUTOPROMPT and PEZ are not effective in analyzing memorization. For the valid token embeddings that successfully trigger the memorization of some images, there is still a gap between the learned continuous token embeddings and discrete tokens. Simple regularizer, e.g., \ud835\udc592-norm regularizer as we used, does not provide a guarantee that the learning continuous token embeddings can be projected to realistic tokens. This is challenging because there are infinite number of points in the continuous embedding space, a subset of which have lower error than a possible hard prompt. The token embeddings could be over-optimized to areas that produce lower error but do not correspond to any tokens prompts. What\u2019s more, existing hard prompt tuning methods based on greedy algorithm are not applicable to search prompts that trigger the memorization of target images, because we have observed that prompts that trigger memorization do not necessarily have greedy property. To solve the problem, we propose a simple but effective algorithm to optimize hard prompts that trigger memorization, as in Algorithm 1. Algorithm 1 performs brute-force search in the Cartesian product of \ud835\udc41sets, each of which contains \ud835\udc58candidate tokens with smallest distance to the learned token embeddings. The optimal prompt is the one with minimal prediction error. The effectiveness of the algorithm heavily relies on the initialization, a common problem in hard prompt tuning [38, 47]. We repeat Algorithm 1 for a maximum of 20 runs with different initialization. We compare our algorithm with two hard prompt tuning algorithms AUTOPROMPT [38] and PEZ [47]. The number of tokens to optimize is set to 3. For the 20 inverted prompts, we choose the one with the lowest prediction error for illustration. Figure 8 illustrates 2 successful inversions. Our hard prompt inversion algorithm successfully inverts a prompt that trigger the memorization. It reflects that the memorization is only determined by a few key tokens (3 tokens in the example). It also reflects that the prompts that cause training image replication are not unique. The positions of the key tokens could be different. As shown in the example, the three words \"limits\", \"business\" and \"podcast\" are respectively the 3rd, 4th and 6th. It has no influence to shift them to the head of the prompt, as inverted by us. However, the order of tokens does not always have no effect. Permuting the prompt to \"businesses limits podcast\" would fail to trigger memorization. This explains why the hard prompt inversion is sensitive to initialization states. It is hard to constrain the position of inverted tokens simply by gradient descent. In contrast, AUTOPROMPT and PEZ do not work in prompt inversion for memorization. It demonstrates that inverting prompt for memorization is more difficult than semantic understanding tasks as their original applications. We have observed that the prompts that trigger memorization does not have greedy-solvable property, therefore they cannot be found by AUTOPROMPT and PEZ. Specifically, we initialize the prompt to \"limits business <|endoftext|>\" for AUTOPROMPT and PEZ, and run them to search for the third token \"podcast\". If it is greedy-solvable, AUTOPROMPT and PEZ would leave the first two words unchanged and find the last word \"podcast\". However, they gradually change the first two words and do not converge. 
Due to the dilemma, continuous token embeddings are adopted in subsequent measurement. Although the continuous token embeddings do not strictly meet the existence condition for potential memorized images, we would like to clarify that it is reasonable to use them for measurement for two reasons. Firstly, for potential memorized images, continuous token embeddings inverted with regularization are sufficient to indicate that memorization has happened. Secondly, for normal images, it is meaningless to invert hard prompts for them. Projecting the optimized token embeddings to hard prompts anyway will introduce additional error into measurement. 6 MEASURE THE MEMORIZATION We have discussed how to recognize the replication of a training image \ud835\udc650 given a pair of noise and prompt (\ud835\udf160,\ud835\udc52), and how to verify the existence of a prompt to trigger memorization of a training image. In this section, we focus on the measurement of memorization and describe how the measurement meets the last \ud835\udc5d\ud835\udc5f\ud835\udc5c\ud835\udc4f\ud835\udc4e\ud835\udc4f\ud835\udc56\ud835\udc59\ud835\udc56\ud835\udc61\ud835\udc66 condition. Given previous results, an intuitive method to measure the memorization would be first determining a threshold of the \ud835\udc670-prediction error L(\ud835\udc650,\ud835\udf160,\ud835\udc52) (Section 4) for recognizing replications and then estimate the probability of that L(\ud835\udc650,\ud835\udf160,\ud835\udc52) is no larger than the threshold when the inverted prompt \ud835\udc52\u2217(Section 5) is input. However, the intuitive method is difficult to implement. As demonstrated by Figure 3, there is not a universal threshold applicable to every image, hence a unique threshold must be determined for each image. To accurately locate the threshold, we can either take the upper bound of L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) or the lower bound of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) for all normal prompts \ud835\udc52. Both options are difficult to implement, because the upper bound of L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) is prone to overestimation (not strictly \ud835\udc592 bounded) and the lower bound of L(\ud835\udc650,\ud835\udf160,\ud835\udc52) requires to evaluate all potential prompts, which is laborsome. Instead, we avoid deciding the boundary of replication and novel generation but propose an indirect measurement of memorization by comparing the distribution of L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) to the distribution of a safe model. Then the measurement of memorization equals how much threat an inverted prompt has introduced into the safe model. Motivated by previous observations [40], we find the unconditional diffusion model trained on large-scale data is safe from memorization and thus could be utilized as the safe model. For the remainder of this section, we first verify the safety of unconditional diffusion model and then describe the measurement. 6.1 Unconditional Model The unconditional model is part of the text-to-image model and used as penalty at sampling time (see Section 2.2). It can be safe from memorization for the following reasons. First, the unconditional model is trained to maximize the likelihood of data distribution without any outer guidance (empty string in Stable Diffusion). The memorization can only happen when the unconditional model frequently generates a certain image, a form of representation space collapse. 
However, one of the advantages of diffusion models is their stability in training, where no such collapse has been discovered. Second, under the observation that memorization is caused by overfitting to an image-prompt pair [41], the unconditional model has no chance to overfit, because its training data consists of image-null pairs, which form a many-to-one correspondence. Last, Somepalli et al. [40] have found that when the amount of training data is large enough, unconditional diffusion models do not replicate training images, but only generate similar ones. [Figure 9: Results of noise inversion for memorized images in Stable Diffusion. Left: training image. Middle: generated image without regularization. Right: generated image with regularization. Even with regularization, memorized images can be successfully inverted.] [Figure 10: The distribution of p-value, mean and variance of noises inverted for memorized images in Stable Diffusion using their training captions, without regularization (Unreg) and with regularization (Reg). The minimum p-value and the desired values of mean and variance are plotted as green dashed lines. Without regularization, straightforward noise inversion drives the inverted noises far from the standard Gaussian distribution. The regularized noise inversion successfully circumvents the over-optimization problem.] 6.1.1 Methodology. It is intractable to estimate the probability that the model replicates $x_0$, as this requires finding all the potential $\epsilon_0^*$ and accumulating the probability within their \"exactly alike\" boundary. Therefore, it is impossible to estimate the safety of unconditional diffusion models directly by probability. We instead verify the safety of unconditional diffusion models against memorization by contradiction, based on noise inversion that replicates a target image $x$ (Equation 6). In practice, it was shown that massive sampling from $\mathcal{N}(0, I)$ to generate $x$ with the unconditional model does not work [40]. Noise inversion seems to provide an approach, but we will demonstrate that noises found in this way cannot be drawn from $\mathcal{N}(0, I)$. [Figure 11: Results of noise inversion in unconditional Stable Diffusion and a diffusion model trained on FFHQ, for (a) memorized images in LAION, (b) normal images in LAION and (c) normal images in FFHQ. Each block contains three images: left: training image, middle: generated image without regularization, right: generated image with regularization. For unconditional models, the training images, even memorized ones, cannot be replicated under normality regularization, which means that unconditional models have little probability of memorizing their training data.] [Figure 12: Unconditional models' distribution of p-value, mean and variance of inverted noises for the same three image groups. Memorized images in text-to-image Stable Diffusion cannot be replicated by its unconditional part.] Directly minimizing $\mathcal{L}(x, \epsilon)$ leads to over-optimization: even for memorized image-prompt pairs, the noise $\epsilon^*$ obtained by minimizing $\mathcal{L}(x, \epsilon)$ is far from $\mathcal{N}(0, I)$, although a wealth of normal noises (noises that could plausibly be drawn from $\mathcal{N}(0, I)$) is available. This confounds our verification: there might exist normal noises that replicate $x$, and we might simply over-optimize and miss them. To avoid this interference factor, we assume that the noise $\epsilon$ to be optimized is drawn from another Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$ with parameters $\mu$ and $\sigma^2$. Motivated by the prior matching in the Variational AutoEncoder (VAE) [19], we invert $\mu$ and $\sigma^2$ with a regularized objective: $$\mu^*, (\sigma^2)^* = \arg\min_{\mu, \sigma^2} \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)} \big[ \mathcal{L}(x, \mu + \sigma \epsilon) \big] + D_{KL}\big( \mathcal{N}(\mu, \sigma^2) \,\Vert\, \mathcal{N}(0, I) \big), \qquad D_{KL}\big( \mathcal{N}(\mu, \sigma^2) \,\Vert\, \mathcal{N}(0, I) \big) = \frac{1}{2} \sum_i \big( \mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1 \big). \quad (10)$$ The regularization term measures the distance between the Gaussian distribution from which the noise is drawn and the standard Gaussian distribution. Through this reparameterization trick, we do not directly optimize $\epsilon$ but the distribution it follows. In this way, the prediction error of the diffusion model $\mathcal{L}(x, \mu + \sigma \epsilon)$ and the regularization term become two adversaries. The contradiction between them can be solved iff noises drawn from a distribution close to the standard Gaussian distribution simultaneously have low prediction errors (indicating memorization). This constraint can be satisfied by the memorized image-prompt pairs in conditional text-to-image models, as shown in our experiments. However, for unconditional models, it cannot be solved, which demonstrates that unconditional models are safe from memorization. 6.1.2 Experiment Setup. Apart from Stable Diffusion's unconditional model, we additionally investigate an unconditional diffusion model trained on the human face dataset FFHQ [17, 32], consisting of 70000 images. For Stable Diffusion, we perform the noise inversion for the 78 memorized images and 100 normal images randomly sampled from its training set. The input prompt is fixed to an empty string. For the model trained on FFHQ, 100 randomly sampled training images are used for the experiments. We perform the Kolmogorov-Smirnov hypothesis test (KS test) on the optimized $\epsilon^* \sim \mathcal{N}(\mu^*, (\sigma^2)^*)$ to decide whether $\epsilon^*$ can be drawn from a standard Gaussian distribution. The null hypothesis is set to \"$\epsilon^*$ is drawn from a standard Gaussian distribution\" and the significance level is set to 0.05 for all experiments. In a Kolmogorov-Smirnov test, if the calculated $p$-value is less than 0.05, the null hypothesis is rejected and otherwise accepted. 
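The following is a minimal sketch of the KL-regularized noise inversion in Equation (10), using the VAE-style reparameterization of the noise distribution. The loss interface and shapes are assumptions, not the paper's code.

```python
import torch

def invert_noise_distribution(loss_fn, x, steps=500, lr=0.1):
    """Eq. (10): optimize the mean and variance of the noise distribution instead of the noise itself."""
    mu = torch.zeros_like(x, requires_grad=True)
    log_var = torch.zeros_like(x, requires_grad=True)        # parameterize sigma^2 through its log
    opt = torch.optim.Adam([mu, log_var], lr=lr)
    for _ in range(steps):
        std = (0.5 * log_var).exp()
        eps = torch.randn_like(x)
        noise = mu + std * eps                               # reparameterization trick
        pred_err = loss_fn(x, noise)                         # diffusion prediction error L(x, mu + sigma*eps)
        kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1.0).sum()
        loss = pred_err + kl
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mu.detach(), log_var.exp().detach()
```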
For each learned Gaussian distribution N (\ud835\udf07\u2217, (\ud835\udf0e2)\u2217), we randomly sample 1000 samples from it and take the average \ud835\udc5d-value over the 1000 samples. For optimization, Adam optimizer is used with an initial learning rate of 0.1 following cosine Distribution Shift Unconditional Conditional Figure 13: The introduction of a prompt shifts the unconditional error distribution. decay without weight decay. We use a batch size of 32 and train for a total of 500 iterations. 6.1.3 Results. We first demonstrate the effectiveness of our regularized noise inversion (Equation 10) to circumvent over-optimization through a study on memorized images in Stable Diffusion. For each image, we adopt their training prompt that will trigger memorization. Figure 9 shows the generation results using optimized noise \ud835\udf16\u2217. Whether regularized or not, memorized images are easy to reproduce. Figure 10 exhibits the \ud835\udc5d-value, mean and variance of the inverted noises by unregularized (Equation 6) and regularized (Equation 10) optimizations. It can be observed that inversion via our regularized objective produces normally distributed noises with high \ud835\udc5d-value of KS test, zero mean and unit variance. It effectively circumvents the over-optimization problem, which can be then utilized to measure the safety of unconditional models. For unconditional models, we perform noise inversion using Equation 10, with or without the KL-divergence regularization term. The results can be found in Figures 11 and 12. For unconditional models, it fails to reproduce training images on both models when the normality of noises is constrained. However, without normality regularization, as in Figure 12, the optimized noises present lower \ud835\udc5d-values, which indicates that they cannot be drawn from the standard Gaussian distribution with high probability. The results demonstrate that unconditional models are more safe to protect their training images from replication. Note that compared to Stable Diffusion trained on LAION, the diffusion model trained on FFHQ presents better normality for the inverted noises. This might be attributed to its limited number of training data (70000) embedded into a large latent space R3\u00d764\u00d764. In contrast, Stable Diffusion is trained on 2 billions of data with a slightly larger latent space R4\u00d764\u00d764. The large contrast between the number of training data and the dimensionality of latent space \"leaves more space to memorize one instance\", which can be observed in Figure 12c that noises inverted on FFHQ tend to have larger variance than those on LAION. 6.2 Measurement 6.2.1 Methodology. As discussed in Section 6.1, unconditional diffusion model trained on large-scale data is safe from memorization. Therefore, the unconditional error L(\ud835\udc650,\ud835\udf160) represents a safe distribution when \ud835\udf160 is sampled from the standard Gaussian distribution. 0.025 0.050 0.075 0.100 0.125 0.150 0.175 Conditional Unconditional (a) Memorized Image 0.26 0.27 0.28 0.29 0.30 Conditional Unconditional (b) Normal Image Figure 14: Example of prediction error distributions for a memorized image and a normal image in Stable Diffusion. It can then serve as a guard to measure the safety against memorization of any conditional error distribution L(\ud835\udc650,\ud835\udf160,\ud835\udc52) when some prompt \ud835\udc52is introduced. 
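A small sketch of the normality check described above: draw samples from the learned Gaussian and run a Kolmogorov-Smirnov test against the standard normal, averaging the p-values. It assumes, as an illustration, that the per-element noise values are pooled into a single one-dimensional sample for scipy's kstest.

```python
import numpy as np
from scipy import stats

def ks_normality_pvalue(mu, var, n_draws=1000, seed=0):
    """Average KS-test p-value for samples drawn from N(mu, var) against N(0, 1)."""
    rng = np.random.default_rng(seed)
    pvals = []
    for _ in range(n_draws):
        sample = mu + np.sqrt(var) * rng.standard_normal(mu.shape)
        _, p = stats.kstest(sample.ravel(), "norm")   # null hypothesis: sample comes from N(0, 1)
        pvals.append(p)
    return float(np.mean(pvals))   # normality is rejected if this falls below 0.05
```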
We consider the worst-case conditional error distribution L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) where \ud835\udc52\u2217is obtained through Equation 8. We then measure the extent to which \ud835\udc650 is memorized as the distribution shift of prediction errors from unconditional to the worst-case conditional, as illustrated in Figure 13. Distribution shift. The distribution shift can be calculated by the Wasserstein distance between unconditional error distribution and the worst-case conditional error distribution. Wasserstein distance measures the minimal cost to convert unconditional error distribution to conditional error distribution. Wasserstein distance is suitable for measurement of memorization because it takes into consideration the amount of errors that are lowered by introducing a prompt. The larger the Wasserstein distance is, the lower the prediction error has been reduced, and to more extent the target image is memorized. We denote the measure by M(\ud835\udc650). The distributions of L(\ud835\udc650,\ud835\udf160) and L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217) are estimated using the Monto Carlo method. Condition 3: probability. The measurement based on the distribution shift meets the probability condition of memorization. We do not directly calculate the probability of memorization but calculate a correlated measure by referring to the safe unconditional model. Through this way, we avoid to determine an absolute threshold to distinguish between replicating and normal generations. According to Chebyshev\u2019s inequality, the probability that unconditional prediction errors deviates from its mean by more than \ud835\udc58\ud835\udf0eis at most 1/\ud835\udc582. Therefore, when a prompt is input instead of an empty string, the larger the distribution of the prediction errors is shifted towards the original rare case, the more probable that memorization has been triggered. 6.2.2 Experiment Setup. Based on the prompt inversion results, the extent to which a target image is memorized M(\ud835\udc650) can be estimated by the Wasserstein distance between the unconditional error distribution L(\ud835\udc650,\ud835\udf160) and worst-case conditional error distribution L(\ud835\udc650,\ud835\udf160,\ud835\udc52\u2217). For any image, we invert a sequence of token embeddings \ud835\udc52\u2217as in Equation 8. All the 75 free tokens are optimized. We calculate M(\ud835\udc650) for the 78 memorized images and 100 randomly sampled normal images. 1000 Gaussian noises are randomly sampled to estimate each error distribution. The probability density function is calculated with 2000 bins over the range [0, 0.4]. Memorized Normal 0.0 0.2 0.4 0.6 0.8 Figure 15: Memorization measured by Wasserstein distance for memorized images and normal images. Memorized images present a significant higher level of memorization. Figure 16: Examples of images that are memorized to relatively smaller extent. Original training images are in the first column. 6.2.3 Results. Figure 14 shows an example of the prediction error distribution for both memorized and normal images. The conditional error distribution of memorized images shows an obvious gap to the unconditional error distribution. However, the conditional error distribution of normal images get entangled in its unconditional error distribution. Figure 15 illustrates the Wasserstein distance distribution of all test images. Memorized images present significantly larger Wasserstein distances compared to normal images. 
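The distribution-shift measure described above can be computed directly from two sets of Monte Carlo error samples. A minimal sketch using scipy's one-dimensional Wasserstein distance follows, with errors_uncond and errors_cond standing in for the estimated unconditional and worst-case conditional prediction errors; larger values indicate a larger shift and hence stronger memorization.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def memorization_measure(errors_uncond, errors_cond):
    """M(x0): shift of the prediction-error distribution caused by the inverted prompt e*.

    errors_uncond : prediction errors L(x0, eps0) for eps0 ~ N(0, I), e.g. 1000 draws
    errors_cond   : prediction errors L(x0, eps0, e*) for the same kind of draws
    """
    return wasserstein_distance(np.asarray(errors_uncond), np.asarray(errors_cond))
```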
Recall that there are partially memorized images in the test set. We find that these images correspond to lower distances than completely memorized images, as shown in Figure 16. This demonstrates the effectiveness of our measurement in quantifying the extent to which an image is memorized, beyond simply distinguishing memorized images from normal ones. 7 RELATED WORK 7.1 Memorization in Image Generation Models Memorization has previously raised concerns in image generation models, e.g., GANs and VAEs, mainly in the setting of unconditional generation. There have been studies on training algorithms [34] and evaluation metrics [13] that improve the generalization ability of GANs so that they do not simply copy from training data. It has been shown that small dataset size [10] or overly long training [43] can cause memorization in GANs. Van der Burg et al. [44] measure memorization in VAEs as the change in probability when one sample is removed from the training set. For diffusion models, Vyas et al. [45] propose a copyright protection method to prevent replication of sensitive training images: the model is trained to match a safe model that is not trained on the sensitive data. Carlini et al. [8] and Somepalli et al. [40, 41] demonstrate that memorization also occurs in text-to-image diffusion models. Memorized images are found among numerous generated samples by membership inference attacks or by searching for the most similar training images using image retrieval models. Webster [46] provides more efficient attacks to extract training images from text-to-image models. Subsequently, Wen et al. [48] focus on detecting abnormal prompts that trigger the generation of training images. Compared to these works, we perform a practical analysis of training image memorization with no need to access any prompts. Our analysis is not only able to find memorized images, but also provides a quantitative measurement and allows developers to claim safety on normal images. 7.2 Inversion of Diffusion Models Inversion techniques in diffusion models are widely studied, mainly for image editing [11, 26, 52]. Through inversion, the object, style, and concept contained in the source images can be compressed into latent noises or input token embeddings. The inverted latent noises or input token embeddings are then utilized to generate novel images that preserve the desired content. We leverage analogous inversion techniques to analyze training image memorization in diffusion models. Instead of utility, we focus more on the regularity of the inverted signals, which is essential for identifying memorized images. In this sense, memorized images are a class that is \"naturally\" invertible. 8 DISCUSSION AND CONCLUSION In this work, we perform a practical analysis of memorization in text-to-image diffusion models. Our analysis targets a set of images and measures the extent to which they are memorized, without the need to collect massive numbers of prompts. We first provide a formal definition of training image memorization and identify three conditions under which an image is said to be memorized. We show that the model's internal prediction error is a strong indicator of training image replication. Based on this, the existence of a prompt that triggers memorization is analyzed by inverting an array of token embeddings. We highlight the importance of regularization in inversion-based verification. 
Furthermore, we propose a regularized noise inversion method to verify that unconditional diffusion models trained on large-scale data are safe from memorization. Based on this verification, the extent to which an image is memorized is measured by the distribution shift between the unconditional and conditional errors. In practice, developers of text-to-image diffusion models can leverage our analysis method to perform a safety analysis on a set of sensitive training images. Our method enables developers to discover potential memorization risks and fix them in a timely manner, or to responsibly claim safety against memorization to their data providers. We study the security of training data in diffusion models in terms of memorization. In the future, analyses with a wider scope are also urgently needed. Defense against unsafe derivative generation. The majority of images generated by diffusion models are more than copies of training images, which we call derivative generation. Among them, considerable ethical threats have been found [2, 25, 28], including bias, pornography, violence, etc. Through editing methods and a few example images, diffusion models can also be used to generate variants of personal photos [11] and to imitate artists' works [21, 37]. In the past, most efforts to circumvent unsafe generation were put into training data cleaning [2, 5] and toxic content detection. Nonetheless, it is still possible to induce unsafe generations via visual synonyms [2] or seemingly innocuous prompts [51]. Towards a more reliable defense against unsafe derivative generation, improving the interpretability of diffusion models' latent space would be beneficial. We do not yet have a full understanding of the semantic structure of diffusion models' latent space [50]. Devising training algorithms that align human ethical notions with diffusion models' latent space would be an interesting direction. Limitations. Our work has two limitations. First, although our hard prompt inversion algorithm is more effective than existing hard prompt tuning methods in analyzing memorization, it does not work for all memorized images, especially those requiring more key tokens to trigger. In practice, it would provide stronger evidence if an example hard prompt were available. We hope that more stable and effective hard prompt inversion algorithms for analyzing memorization can be devised. Second, we only provide a security analysis method for unconditional and text-to-image diffusion models. A more comprehensive investigation of other types of conditional models and corresponding regularization methods should also be conducted. Despite these limitations, we believe our method provides a practical security analysis tool for developers to optimize their models."
17
+ }
intro_8K/test_introduction_short_2405.05852v1.json ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05852v1",
3
+ "title": "Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control",
4
+ "abstract": "Embodied AI agents require a fine-grained understanding of the physical world\nmediated through visual and language inputs. Such capabilities are difficult to\nlearn solely from task-specific data. This has led to the emergence of\npre-trained vision-language models as a tool for transferring representations\nlearned from internet-scale data to downstream tasks and new domains. However,\ncommonly used contrastively trained representations such as in CLIP have been\nshown to fail at enabling embodied agents to gain a sufficiently fine-grained\nscene understanding -- a capability vital for control. To address this\nshortcoming, we consider representations from pre-trained text-to-image\ndiffusion models, which are explicitly optimized to generate images from text\nprompts and as such, contain text-conditioned representations that reflect\nhighly fine-grained visuo-spatial information. Using pre-trained text-to-image\ndiffusion models, we construct Stable Control Representations which allow\nlearning downstream control policies that generalize to complex, open-ended\nenvironments. We show that policies learned using Stable Control\nRepresentations are competitive with state-of-the-art representation learning\napproaches across a broad range of simulated control settings, encompassing\nchallenging manipulation and navigation tasks. Most notably, we show that\nStable Control Representations enable learning policies that exhibit\nstate-of-the-art performance on OVMM, a difficult open-vocabulary navigation\nbenchmark.",
5
+ "authors": "Gunshi Gupta, Karmesh Yadav, Yarin Gal, Dhruv Batra, Zsolt Kira, Cong Lu, Tim G. J. Rudner",
6
+ "published": "2024-05-09",
7
+ "updated": "2024-05-09",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV",
11
+ "cs.AI",
12
+ "cs.CL",
13
+ "cs.LG",
14
+ "cs.RO",
15
+ "stat.ML"
16
+ ],
17
+ "label": "Original Paper",
18
+ "paper_cat": "Diffusion AND Model",
19
+ "gt": "As general-purpose, pre-trained \u201cfoundation\u201d models [Rom+22; Tou+23; Bro+20; Ope23; Liu+23; Ala+22; Che+22] are becoming widely available, a central question in the field of embodied AI has emerged: How can foundation models be used to construct model representations that improve generalization in challenging robotic control tasks [Bro+22; Zit+23; Sha+23]? Robotic control tasks often employ pixel-based visual inputs paired with a language-based goal specification, making vision-language model representations particularly well-suited for this setting. However, while vision-language representations obtained via Contrastive Language-Image Pre-training [CLIP; Rad+21]\u2014a state-of-the-art method\u2014have been suc- cessfully applied to a broad range of computer vision tasks, the use of CLIP representations has been shown to lead to poor downstream performance for robotic control. This short- coming has prompted the development of alternative, control-specific representations for embodied AI [Nai+22; Ma+23] but has left other sources of general-purpose pre-trained vision-language representations\u2014such as text-to-image diffusion models\u2014largely unex- plored for control applications. *Equal Contribution. 1 arXiv:2405.05852v1 [cs.CV] 9 May 2024 \u0003\t\u0006\u0005\u0004\u0002\b\u0001\u0000\u0007 \f\n\u0003\u0003 \u0018\t\u0017\t\u0001\u0014\u0012\u0011\u0010\u0001\t\u000e\u000e\u000f\b\r \u0001\u0005\r!\u0005\u0014#\u000f\u0006\u001b\u001a\t\r 2\u0001\u0005\u000e\u0010\u0014-\u0001\t\u0007\u000f\u001b\u0006\u000f\b\r 97\u00056\t3\u00054 \u00124\u0005\u0000B\u0005\u0006\u000f\b\r\u0014=B\u000f\u0006\t OSQRPINKMLJMQMHGPGSFH X\u00043\t\u0006 \n]\\ [\t\u001b\b\u0007\t\u0001 \n]\\ \u0012\r\u001b\b\u0007\t\u0001 ef9-\u0014f\u0005\r6B\u00056\t\u0014\u0012\r\u001b\b\u0007\t\u0001 \u007f-\u000f\u001b!\u000f\r6\u0014B\u0010\u0014\u0005\u0014!\t\u0006\u0006\u0000\tq \u0088\u0086\u0085\u0087\u0082\u0084 \u0083\u0082\u0081\u0082\u0080 =\u0006\u0005\u0093\u0000\t\u0014[\u000f\u0017\u0017B\u000e\u000f\b\r \f\u0001\u000f6\u000f\r\u0005\u0000\u001497\u00056\t [\t\r\b\u000f\u000e\t\u0007\u001497\u00056\t 0.0 0.2 0.4 0.6 0.8 1.0 Average Norm Success VAE CLIP R3M VC-1 SCR (Ours) SCR-FT (Ours) Overall Representation Comparison Figure 1: Left: Our paper proposes Stable Control Representations, which uses pre-trained text-to- image diffusion models as a source of language-guided visual representations for downstream policy learning. Right: Stable Control Representations enable learning control policies that achieve all-round competitive performance on a wide range of embodied control tasks, including in domains that require open-vocabulary generalization. Empirical results are provided in Section 5. In this paper, we propose Stable Control Representations (SCR): pre-trained vision- language representations from text-to-image diffusion models that can capture both high and low-level details of a scene [Rom+22; Ho+22]. While diffusion representations have seen success in downstream vision-language tasks, for example, in semantic segmenta- tion [Bar+22; Tia+23; Wan+23], they have\u2014to date\u2014not been used for control. We perform a careful empirical analysis in which we deconstruct pre-trained text-to-image dif- fusion model representations to understand the impact of different design decisions. 
In our empirical investigation, we find that diffusion representations can outperform general-purpose models like CLIP [Rad+21] across a wide variety of embodied control tasks despite not being trained for representation learning. This is the case even for purely vision-based tasks and settings that require task understanding through text prompts. A highlight of our results is the finding that diffusion model representations enable better generalization to unseen object categories in a challenging open-vocabulary navigation benchmark [Yen+23] and provide improved interpretability through attention maps [Tan+23]. Our key contributions are as follows: 1. In Section 4, we introduce a multi-step approach for extracting vision-language representations for control from text-to-image diffusion models. We show that these representations are capable of capturing both the abstract high-level and fundamental low-level details of a scene, offering an alternative to models trained specifically for representation learning. 2. In Section 5, we evaluate the representation learning capabilities of diffusion models on a broad range of embodied control tasks, ranging from purely vision-based tasks to problems that require an understanding of tasks through text prompts, thereby showcasing the versatility of diffusion model representations. 3. In Section 6, we systematically deconstruct the key features of diffusion model representations for control, elucidating different aspects of the representation design space, such as the input selection, the aggregation of intermediate features, and the impact of fine-tuning on enhancing performance. We have demonstrated that diffusion models learn versatile representations for control and can help drive progress in embodied AI. The code for our experiments can be accessed at: https://github.com/ykarmesh/stable-control-representations.",
20
+ "main_content": "We first review prior work on representation learning and diffusion models for control. Representation Learning with Diffusion Models. Diffusion models have received a lot of recent attention as flexible representation learners for computer vision tasks of varying granularity\u2014ranging from key point detection and segmentation [Tia+23; Wan+23] to image classification [YW23; Tra22]. Wang et al. [Wan+23] has shown that intermediate layers of a text-to-image diffusion model encode semantics and depth maps that are recoverable by training probes. These approaches similarly extract representations by considering a moderately noised input, and find that the choice of timestep can vary based on the granularity of prediction required for the task. Yang and Wang [YW23] train a policy to select an optimal diffusion timestep, we simply used a fixed timestep per class of task. Several works [Tia+23; Wan+23; Tan+23] observe that the cross-attention layers that attend over the text and image embeddings encode a lot of the spatial layout associated with an image and therefore focus their method around tuning, post-processing, or extracting information embedded within these layers. Visual Representation Learning for Control. Over the past decade, pre-trained representation learning approaches have been scaled for visual discrimination tasks first, and control tasks more recently. Contrastively pre-trained CLIP [Rad+21] representations were employed for embodied navigation tasks by EmbCLIP [Kha+22]. MAE representations have been used in control tasks by prior works like VC-1 [Maj+23], MVP [Xia+22] and OVRLv2 [Yad+23]. R3M [Nai+22] and Voltron [Kar+23] leverage language supervision to learn visual representations. In contrast, we investigate if powerful text-to-image diffusion models trained for image generation can provide effective representations for control. Diffusion Models for Control. Diffusion models have seen a wide range of uses in control aside from learning representations. These can broadly be categorized into three areas. First, diffusion models have been used as a class of expressive models for learning action distribution for policies [Chi+23; Pea+23; HE+23]; this can help model multimodality and richer action distributions than Gaussians. Second, off-the-shelf diffusion models have been used to augment limited robot demonstration datasets by specifying randomizations for object categories seen in the data through inpainting [KVJ23; Yu+23; Man+22]. Diffusion models trained from scratch have also been shown to be an effective method for data augmentation [Lu+23; Jac+24]. Third, planning can be cast as sequence modeling through diffusion models [Jan+22; Aja+23; Du+23]. 3 Background We briefly review diffusion models and text-conditional image generation, and then describe the control setting we consider in this work. 3.1 Diffusion Models Diffusion models [SD+15; HJA20] are a class of generative models that learn to iteratively reverse a forward noising process and generate samples from a target data distribution p(x0), starting from pure noise. 
Given p(x0) and a set of noise levels \u03c3t for t = 1,..., T, a denoising function \u03f5\u03b8(xt, t) is trained on the objective \mathcal{L}_{\mathrm{DM}}(\theta) = \mathbb{E}_{x_0,\epsilon,t}\big[\|\epsilon - \epsilon_\theta(x_t, t)\|_2^2\big] = \mathbb{E}_{x_0,\epsilon,t}\big[\|\epsilon - \epsilon_\theta(x_0 + \sigma_t \cdot \epsilon, t)\|_2^2\big], (3.1) where \epsilon \sim \mathcal{N}(0, 1), t \sim \mathrm{Unif}(1, T), and x_0 \sim p(x_0). To generate a sample x0 during inference, we first sample an initial noise vector xT \u223c N(0, \u03c3T) and then iteratively denoise this sample for t = T,...,1 by sampling from p(xt\u22121|xt), which is a function of \u03f5\u03b8(xt, t). In some settings, we may want to generate samples with a particular property. For example, we may wish to draw samples from a conditional distribution over data points, p(x0|c), where c captures some property of the sample, such as a classification label or a text description [Rom+22; Sah+22]. In these settings, we may additionally train with labels to obtain a conditioned denoiser \u03f5\u03b8(xt, t, c) and generate samples using classifier-free guidance [HS21]. 3.2 Latent Diffusion Models Latent diffusion models [Rom+22] reduce the computational cost of applying diffusion models to high-dimensional data by instead diffusing low-dimensional representations of high-dimensional data. Given an encoder E(\u00b7) and decoder D(\u00b7), (3.1) is modified to operate on latent representations, z0 := E(x0), yielding \mathcal{L}_{\mathrm{LDM}}(\theta) = \mathbb{E}_{x_0,c,\epsilon,t}\big[\|\epsilon - \epsilon_\theta(E(x_0) + \sigma_t \cdot \epsilon, t, c)\|_2^2\big], (3.2) where \epsilon \sim \mathcal{N}(0, 1), t \sim \mathrm{Unif}(1, T), and x_0, c \sim p(x_0, c). After generating a denoised latent representation z0, it can be decoded as x0 = D(z0). A popular instantiation of a conditioned latent diffusion model is the text-to-image Stable Diffusion model [SD; Rom+22]. The SD model is trained on the LAION-2B dataset [Sch+22] and operates in the latent space of a pre-trained VQ-VAE image encoder [ERO21]. The model architecture is shown at the top of Figure 1 and is based on a U-Net [RFB15], with the corresponding conditioning text prompts encoded using a CLIP language encoder [Rad+21]. 3.3 Policy Learning for Control We model our environments as Markov Decision Processes (MDPs; Sutton and Barto [SB18]), defined as a tuple M = (S, A, P, R, \u03b3), where S and A denote the state and action spaces respectively, P(s'|s, a) the transition dynamics, R(s, a) the reward function, and \u03b3 \u2208 (0,1) the discount factor. Our goal is to optimize a policy \u03c0(a|s) that maximizes the expected discounted return \mathbb{E}_{\pi,P}\big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\big]. In this paper, we consider visual control tasks that may be language-conditioned, that is, states are given by s = [simage, stext], where stext specifies the task. We are interested in pre-trained vision-language representations capable of encoding the state s as f\u03c6(simage, stext). This encoded state is then supplied to a downstream, task-specific policy network, which is trained to predict the action at. Our evaluation encompasses both supervised learning and reinforcement learning regimes for training the downstream policies. 
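A minimal PyTorch-style sketch of the training objective in Eqs. (3.1)-(3.2) is given below; unet, encode, text_emb, and sigmas are generic stand-ins rather than the API of any particular library.
import torch

def latent_diffusion_loss(unet, encode, x0, text_emb, sigmas):
    # sigmas: 1-D tensor of per-timestep noise levels sigma_1, ..., sigma_T.
    z0 = encode(x0)                              # z0 = E(x0), Section 3.2
    b = z0.shape[0]
    t = torch.randint(1, len(sigmas), (b,))      # t ~ Unif(1, T)
    eps = torch.randn_like(z0)                   # eps ~ N(0, 1)
    sigma_t = sigmas[t].view(b, 1, 1, 1)
    zt = z0 + sigma_t * eps                      # forward noising of the latents
    eps_pred = unet(zt, t, text_emb)             # eps_theta(zt, t, c)
    return ((eps - eps_pred) ** 2).mean()        # squared-error denoising objective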
We train agents through behavior cloning on a small set of demonstrations for the few-shot manipulation tasks we study in Section 5.2. For the indoor navigation tasks we study in Sections 5.3 and 5.4, we use a version of the Proximal Policy Optimization [PPO, Sch+17] algorithm for reinforcement learning. 4 Stable Control Representations In this paper, we consider extracting language-guided visual representations from the open-source Stable Diffusion model. We follow a similar protocol to Wang et al. [Wan+23], Traub [Tra22], and Yang and Wang [YW23]: given an image-text prompt, s = {simage, stext}, associated with a particular task, we use the SD VQ-VAE model as the encoder E(\u00b7) and partially noise the latents z0 := E(simage) to some diffusion timestep t. We then extract representations from the intermediate outputs of the denoiser \u03f5\u03b8(zt, t, stext). This process is illustrated in Figure 2. We refer to the extracted representations as Stable Control Representations (SCR). We will describe the design space for extracting SCR in the remainder of this section. Figure 2: Extraction of Stable Control Representations from Stable Diffusion. Given an image-text prompt, s = {simage,stext}, we encode and noise the image and feed it into the U-Net together with the language prompt. We may then aggregate features from multiple levels of the downsampling process, as described in Section 4. 4.1 Layer Selection and Aggregation We are interested in evaluating the internal representations from the denoiser network, that is, the U-Net \u03f5\u03b8(\u00b7). The first design choice we consider is which layers of \u03f5\u03b8 to aggregate intermediate outputs from. The U-Net does not have a representational bottleneck, and different layers potentially encode different levels of detail. Trading off size with fidelity, we concatenate the feature maps output by the mid and down-sampling blocks to construct the representation. This results in a representation size comparable to that of the other pre-trained models we study in Section 5. This is shown at the bottom of Figure 2, and we ablate this choice in Section 6.1. Since outputs from different layers may have different spatial dimensions, we bilinearly interpolate them so that they share a common spatial dimension and can be stacked together. We then pass them through a learnable convolutional layer to reduce the channel dimension before feeding them to downstream policies. The method used to spatially aggregate pre-trained representations can significantly affect their efficacy in downstream tasks, as we will discuss in Section 6.4. We use the best-performing spatial aggregation method for all the baselines that we re-train in Section 5. 
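The following is a hedged sketch of this extraction-and-aggregation step, not a drop-in implementation; the attribute names (down_blocks, mid_block) and the encoder_hidden_states argument follow common open-source Stable Diffusion U-Net implementations (e.g., diffusers' UNet2DConditionModel) and may differ from the exact code used for SCR.
import torch
import torch.nn.functional as F

def extract_features(unet, noisy_latents, t, text_emb, out_size=32):
    feats, hooks = [], []

    def save_output(_module, _inputs, output):
        # Some blocks return tuples; keep the primary feature map.
        feats.append(output[0] if isinstance(output, tuple) else output)

    blocks = list(unet.down_blocks) + [unet.mid_block]   # assumed module names
    for block in blocks:
        hooks.append(block.register_forward_hook(save_output))
    with torch.no_grad():
        unet(noisy_latents, t, encoder_hidden_states=text_emb)
    for h in hooks:
        h.remove()

    # Bilinearly resize each feature map to a common spatial grid and stack channels;
    # a learnable convolutional layer then reduces the channel dimension downstream.
    resized = [F.interpolate(f.float(), size=(out_size, out_size), mode="bilinear", align_corners=False) for f in feats]
    return torch.cat(resized, dim=1)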
4.2 Diffusion Timestep Selection Next, we consider the choice of extraction timestep t for the denoising network (shown on the left of Figure 2). Recall that the images we observe in control tasks are un-noised (i.e., corresponding to x0), whereas the SD U-Net expects noised latents, corresponding to zt for t \u2208[0,1000]. The choice of timestep t influences the fidelity of the encoded latents since a higher value means more noising of the inputs. Yang and Wang [YW23] have observed that there are task-dependent optimal timesteps and proposed adaptive selection of t during training, while Xu et al. [Xu+23] have used t = 0 to extract representations from un-noised inputs to do open-vocabulary segmentation. We hypothesize that control tasks that require a detailed spatial scene understanding benefit from fewer diffusion timesteps, corresponding to a later stage in the denoising process. We provide evidence consistent with this hypothesis in Section 6.2. To illustrate the effect of the timestep, we display final denoised images for various t values in different domains in Figure 9. 4.3 Prompt Specification Since text-to-image diffusion models allow conditioning on text, we investigate if we can influence the representations to be more task-specific via this conditioning mechanism. For tasks that come with a text specifier, for example, the sentence \u201cgo to object X\u201d, we simply encode this string and pass it to the U-Net. Figure 3: The Stable Diffusion model allows us to extract word-level cross-attention maps for any given text prompt. We visualize these maps in a robotic manipulation environment and observe that they are accurate at localizing objects in a scene. Since these maps are category agnostic, downstream policies should become robust to unseen objects at test time. However, some tasks are purely vision-based and in these settings, we explore whether constructing reasonable text prompts affects downstream policy learning when using the U-Net\u2019s language-guided visual representations. We present this analysis in Section 6.3. 4.4 Intermediate Attention Map Selection Recent studies [Wan+23; Tan+23] demonstrate that the Stable Diffusion model generates localized attention maps aligned with text during the combined processing of vision and language modalities. Wang et al. [Wan+23] leveraged these word-level attention maps to perform open-domain semantic segmentation. We hypothesize that these maps can also help downstream control policies to generalize to an open vocabulary of object categories by providing helpful intermediate outputs that are category-agnostic. Following Tang et al. [Tan+23], we extract the cross-attention maps between the visual features and the CLIP text embeddings within the U-Net. An example of the word-level attention maps is visualized in Figure 3. We test our hypothesis on an open-domain navigation task in Section 5.4, where we fuse the cross-attention maps with the extracted feature maps from the U-Net. We refer to this variant as SCR-ATTN. 4.5 Fine-Tuning on General Robotics Datasets Finally, we consider fine-tuning strategies to better align the base Stable Diffusion model towards generating representations for control. 
This serves to bridge the domain gap between the diffusion model\u2019s training data (e.g., LAION images) and robotics datasets\u2019 visual inputs (e.g., egocentric tabletop views in manipulation tasks or indoor settings for navigation). Crucially, we do not use any task-specific data for fine-tuning. Instead, we use a small subset of the collection of datasets used by prior works on representation learning for embodied AI [Maj+23; Xia+22]: we use subsets of the EpicKitchens [Dam+18], Something-Something-v2 [SS-v2; Goy+17], and Bridge-v2 [Wal+23] datasets. We adopt the same text-conditioned generation objective as that of the base model for the fine-tuning phase. As is standard, we fine-tune the denoiser U-Net \u03f5\u03b8 but not the VAE encoder or decoder. Image-text pairs are uniformly sampled from the video-text pairs present in these datasets. A possible limitation of this strategy is that text-video aligned pairs (a sequence of frames that correspond to a single language instruction) may define a many-to-one relation for image-text pairs. However, as we see in experiments in which we compare to the base Stable Diffusion model in Section 5, this simple approach to robotics alignment is useful in most cases. Further details related to fine-tuning are provided in Appendix B.1. We refer to the representations from this fine-tuned model as SCR-FT. 6 5 Empirical Evaluation In this work, we evaluate Stable Control Representations (SCR) on an extensive suite of tasks from 6 benchmarks covering few-shot imitation learning for manipulation in Section 5.2, reinforcement learning-based indoor navigation in Sections 5.3 and 5.4, and owing to space limitations, two tasks related to fine-grained visual prediction in Section 5.5. Together, these tasks allow us to comprehensively evaluate whether our extracted representations can encode both high and low-level semantic understanding of a scene to aid downstream policy learning. We begin this section by listing the common baselines used across tasks, followed by the description of individual task setups and results obtained. 5.1 Baselines We compare SCR and its variants (i.e., SCR-FT and SCR-FT-ATTN) to the following prior work in representation learning for control: 1. R3M [Nai+22] pre-trains a ResNet50 encoder on video-language pairs from the Ego4D dataset using time-contrastive video-language alignment learning. 2. MVP [Xia+22] and VC-1 [Maj+23] both pre-train ViT-B/L models with the masked auto-encoding (MAE) objective on egocentric data from Ego4D, Epic-Kitchens, SS-v2, and ImageNet, with VC-1 additionally pre-training on indoor navigation videos. 3. CLIP [Rad+21] trains text and ViT-based image encoders using contrastive learning on web-scale data. 4. Voltron [Kar+23] is a language-driven representation learning method that involves pre-training a ViT-B using MAE and video-captioning objectives on aligned text-video pairs from SS-v2. 5. SD-VAE [Rom+22] is the base VAE encoder used by SD to encode images into latents. To assess how well the vision-only methods would do on tasks with language specification, we concatenate their visual representations with the CLIP text embeddings of the language prompts. While we are limited by the architecture designs of the released models we are studying, to ensure a more fair comparison we try to match parameter counts as much as we can. We use the ViT-Large (307M parameters) versions of CLIP , MVP , and VC-1 since extracting SCR involves a forward pass through 400M parameters. 
5.2 Few-shot Imitation Learning We start by evaluating SCR on commonly studied representation learning benchmarks in few-shot imitation learning. Specifically, our investigation incorporates five commonly studied tasks from Meta-World [Yu+19] (same as CORTEXBENCH [Maj+23]), which includes bin picking, assembly, pick-place, drawer opening, and hammer usage; as well as five tasks from the Franka-Kitchen environments included in the RoboHive suite [Kum+23], which entail tasks such as turning a knob or opening a door. We adhere to the training and evaluation protocols adopted in their respective prior works to ensure our results are directly comparable (detailed further in Appendix C.1). Results. We report the best results of SCR and baselines in Table 1a. On Meta-World, we see that SCR outperforms most prior works, achieving 94.9% success rate. In comparison, VC-1, the visual foundation model for embodied AI and CLIP achieved 92.3 and 90.1% respectively. On Franka-Kitchen, SCR obtains 49.9% success rate, which is much higher than CLIP (36.3%) and again outperforms all other baselines except for R3M. We note that R3M\u2019s sparse representations excel in few-shot manipulation with limited demos but struggle to transfer beyond this setting [Maj+23; Kar+23]. 7 Table 1: Average Success Rate and standard error evaluated across different representations. (a) Meta-World & Franka-Kitchen. Model Meta-World Franka-Kitchen R3M 96.0 \u00b1 1.1 57.6 \u00b1 3.3 CLIP 90.1 \u00b1 3.6 36.3 \u00b1 3.2 VC-1 92.3 \u00b1 2.5 47.5 \u00b1 3.4 Voltron 72.5 \u00b1 5.2 33.5 \u00b1 3.2 SD-VAE 75.5 \u00b1 5.2 43.7 \u00b1 3.1 SCR 94.4 \u00b1 1.9 45.0 \u00b1 3.3 SCR-FT 94.9 \u00b1 2.0 49.9 \u00b1 3.4 (b) ImageNav Model Success R3M 30.6 CLIP-B 52.2 VC-1 70.3 MVP 68.1 SD-VAE 46.6 SCR 73.9 SCR-FT 69.5 (c) OVMM Model Success Oracle 77.6 Detic 36.7 CLIP 38.7 \u00b1 1.7 VC-1 40.6 \u00b1 2.2 SCR 38.7 \u00b1 1.2 SCR-FT 41.9 \u00b1 1.0 SCR-FT-ATTN 43.6 \u00b1 2.1 We see that while the SD-VAE encoder performs competitively on Franka-Kitchen, it achieves a low success rate on Meta-World. This observation allows us to gauge the improved performance of SCR from the base performance gain we may get just from operating in the latent space of this VAE. Additionally, we see that the task-agnostic fine-tuning gives SCR-FT an advantage (4%) over SCR on Franka-Kitchen while making no difference on Meta-World. Note that the other high-performing baselines (R3M and Voltron) have been developed for downstream control usage with training objectives that take temporal information into account, while VC-1 has been trained on a diverse curation of robotics-relevant data. In this context, SCR\u2019s comparable performance shows that generative foundation models hold promise for providing useful representations for control, even with relatively minimal fine-tuning on non-task-specific data. 5.3 Image-Goal Navigation We now assess SCR in more realistic visual environments, surpassing the simple tabletop scenes in manipulation benchmarks. In these complex settings, the representations derived from pre-trained foundational models are particularly effective, benefiting from their large-scale training. We study Image-Goal Navigation (ImageNav), an indoor visual navigation task that evaluates an agent\u2019s ability to navigate to the viewpoint of a provided goal image [Zhu+17]. The position reached by the agent must be within a 1-meter distance from the goal image\u2019s camera position. 
This requires the ability to differentiate between nearby or similar-looking views within a home environment. This task, along with the semantic object navigation task that we study in Section 5.4, allows for a comprehensive evaluation of a representation\u2019s ability to code both semantic and visual appearance-related features in completely novel evaluation environments. We follow the protocol for the ImageNav task used by Majumdar et al. [Maj+23] and input the pre-trained representations to an LSTM-based policy trained with DD-PPO [Wij+19] for 500 million steps on 16 A40 GPUs (further details in Appendix C.3). Given the large training requirements, we only run SCR-FT and directly compare to the results provided in Majumdar et al. [Maj+23]. Results. We evaluate our agent on 4200 episodes in 14 held-out scenes from the Gibson dataset and report the success rate in Table 1b. We find that SCR outperforms MVP and CLIP (ViT-B), and is almost on par with VC-1 (69.5% vs 70.3%), the SOTA visual representation from prior work. We also see that R3M, the best model for few-shot manipulation from Table 1a performs very poorly (30.6%) in this domain, showing its limited transferability to navigation tasks. 5.4 Open Vocabulary Mobile Manipulation We now shift our focus to evaluating how Stable Diffusion\u2019s web-scale training can enhance policy learning in open-ended domains. We consider the Open Vocabulary Mobile 8 Train Val Figure 4: Sample scenes from the Habitat environments for the ImageNav (left) and OVMM (center) tasks. Instances from training and validation datasets of the OVMM object set are shown on the right. Manipulation (OVMM) benchmark [Yen+23] that requires an agent to find, pick up, and place objects in unfamiliar environments. One of the primary challenges here is locating previously unseen object categories in novel scenes (illustrated in Figure 4 (left)). To manage this complex sparse-reward task, existing solutions [Yen+23] divide the problem into sub-tasks and design modular pipelines that use open-vocabulary object detectors such as Detic [Zho+22]. We study a modified version of the Gaze sub-task (detailed in Appendix C.2), which focuses on locating a specified object category for an abstracted grasping action. The task\u2019s success is measured by the agent\u2019s ability to precisely focus on the target object category. This category is provided as an input to the policy through its CLIP text encoder embedding. The evaluation environments cover both novel instances of object categories seen during policy learning, as well as entirely unseen categories. We compare to VC-1, the best model from Section 5.3 and CLIP , since prior work has studied it for openvocab navigation [Kha+22; Maj+22]. We also incorporate a baseline that trains a policy with ground truth object masks, evaluated using either the ground truth or Detic-generated masks (labeled as Oracle/Detic). Results. Table 1c shows SCR matches the performance of CLIP and SCR-FT surpasses VC-1 by 1.3%, beating CLIP and SCR by 3.2%. Surprisingly, VC-1\u2019s visual representation does better than CLIP\u2019s image encoder representation, given that the downstream policy has to fuse these with the CLIP text embedding of the target object category. Compared to these baselines, we can see the benefit of providing intermediate outputs in the form of textaligned attention maps to the downstream policy (+1.7%). 
These word-level cross-attention maps simultaneously improve policy performance and also aid explainability, allowing us to diagnose successes and failures. Samples of attention maps overlaid on evaluation episode images can be found in Appendix C. Interestingly, the foundation model representations (CLIP , VC-1, SCR) perform better than Detic. While object detections serve as a category-agnostic output that downstream pickand-place policies can work with, noisy detections can often lead to degraded downstream performance, as we see in this case. Nonetheless, there is still a sizeable gap to \u2018Oracle\u2019 which benefits from ground truth object masks. 5.5 Fine-Grained Visual Prediction In Sections 5.2 to 5.4, our analysis focused on the performance of various representations across an array of control tasks. We now turn our attention to two downstream tasks involving fine-grained visual prediction. The first task, Referring Expressions Grounding, is detailed within this section, while the second task, Grasp Affordance Prediction, is discussed in Appendix A.1. These tasks have been previously examined by Karamcheti et al. [Kar+23] as proxy measures to evaluate the efficacy of representations for control applications. The Referring Expressions Grounding task requires the identification and bounding box prediction of an object in an image based on its textual description. Similar to Karamcheti et al. [Kar+23], we use the OCID-Ref Dataset [Wan+21] for our experiments. We show a 9 The lemon on the rear left of the instant_noodles. Figure 5: Sample from the OCID-Ref dataset used for the Referring Expressions task. Model Average Maximum clutter Medium clutter Minimum clutter CLIP 68.1 60.3 76.6 67.0 R3M 63.3 55.3 68.3 63.3 Voltron 92.5 96.9 91.8 90.2 VC-1 94.6 93.7 96.5 93.7 SD-VAE 94.3 93.2 96.3 93.4 SCR 92.9 91.1 95.9 91.8 SCR-FT 91.8 90.1 94.8 90.8 Table 2: Referring Expression Grounding (Accuracy at threshold IoU of 0.25 with label.). sample image-text pair from the dataset to showcase the complexity of the task in Figure 5. The frozen visual representation is concatenated with a text embedding and passed to a 4-layer MLP , which predicts the bounding box coordinates. We report the bounding box accuracy at a 25% Intersection-over-Union (IoU) threshold across different scene clutter levels for SCR-variants and baselines in Table 2. Results. We see that SCR is tied with Voltron and that VC-1 and SD-VAE perform the best with a 1.5% lead. The better performance of these vision-encoder-only methods highlights that on this task, it is not a challenge for the downstream decoder to learn to associate the visual embeddings with the (CLIP) text encoding of the language specification. Since the training budget is fixed, we observed that some of the runs could potentially improve over extended training. However, we were primarily interested in this task not just to compare the downstream visual prediction performance, but to use it as a testbed for exploring the following two questions: (1) Do the performance differences between the representations we evaluated in Sections 5.2 to 5.4, stem from the absence of fine-grained spatial information encoded within the representations? We refute this claim in Section 6.4, where we present the impact of the representations\u2019 spatial aggregation method on prediction performance. (2) Additionally, we explore to what extent language prompting influences the representations from SCR on language-conditioned tasks in Section 6.3. 
6 Deconstructing Stable Control Representations In this section, we deconstruct Stable Control Representations to explain which design choices are most determinative of model robustness and downstream performance. 6.1 Layer Selection We begin our investigation by examining how the performance of SCR is influenced by the selection of layers from which we extract feature maps. We previously chose outputs from the mid and downsampling layers of the U-Net (Figure 2), because their aggregate size closely matches the representation sizes from the ViT-based models (VC-1, MVP , and CLIP). Appendix B.2 details the feature map sizes obtained for all the models we study. Table 3a lists the success rates achieved on the Franka-Kitchen domain when we use different sets of block outputs in SCR. We see that utilizing outputs from multiple layers is instrumental to SCR\u2019s high performance. This finding underscores a broader principle applicable to the design of representations across different models: Leveraging a richer set of features from multi-layer outputs should enhance performance on downstream tasks. However, it is important to acknowledge the practical challenges in applying this strategy to ViT-based models. The high dimensionality of each layer\u2019s patch-wise embeddings (16\u00d716\u00d71024 for ViT-L for images of size 224\u00d7224), may complicate the integration of multi-layer outputs. 10 Table 3: We analyze the impact of varying the denoising timestep, layers selection, and input text prompt for the performance of SCR on the Franka-Kitchen benchmark. We report the mean and standard error over 3 random seeds. (a) Denoising timestep. Timestep Success Rate 0 49.9 \u00b1 3.4 10 48.2 \u00b1 3.1 100 42.0 \u00b1 3.7 110 42.0 \u00b1 3.4 200 35.1 \u00b1 3.2 (b) Layers selection. Layers Success Rate Down[1-3] + Mid 49.9 \u00b1 3.4 Down[1-3] 43.0 \u00b1 3.4 Mid 41.6 \u00b1 3.3 Mid + Up[0] 42.1 \u00b1 3.6 Mid + Up[0-1] 48.1 \u00b1 3.6 (c) Input text prompt. Prompt Type Success Rate None 49.9 \u00b1 3.4 Relevant 49.2 \u00b1 3.5 Irrelevant 48.7 \u00b1 3.3 6.2 Sensitivity to the Noising Timestep Next, we characterize the sensitivity of task performance to the denoising step values chosen during representation extraction on the Franka-Kitchen tasks in Table 3b. We see that the performance across nearby timesteps (0 and 10 or 100 and 110) is similar, and that there is a benefit to doing a coarse grid search up to a reasonable noising level (0 vs 100 vs 200) to get the best value for a given task. 6.3 How is Language Guiding the Representations? Recall that in the OVMM experiments (Section 5.4), we concatenated the target object\u2019s CLIP text embedding to the visual representations before feeding it to the policy. For SCR and SCR-FT, we also provided the category as the text prompt to the U-Net, and additionally extracted the generated cross-attention maps for SCR-FT-ATTN. In this subsection, we seek to more closely understand how the text prompts impact the representations in SCR. We first consider the Franka-Kitchen setup from Section 5.2, which includes manipulation tasks that do not originally come with a language specification. We experiment with providing variations of task-relevant and irrelevant prompts during the representation extraction in SCR. 
Table 3c shows the downstream policy success rates for irrelevant (\u201can elephant in the jungle\u201d) and relevant (\u201ca Franka robot arm opening a microwave door\u201d) prompts, compared to the default setting of not providing a text prompt. We see that providing a prompt does not help with downstream policy performance and may even degrade performance as the prompt gets more irrelevant to the visual context of the input. We now move to the Referring Expressions Grounding task from Section 5.5, which requires grounding language in vision to do bounding box prediction. To study the role of the U-Net in shaping the visual representations guided by the text, we examine different text integration methods to generate SCR representations and compare them to the Voltron baseline in Table 4. We compared the following approaches for providing the task\u2019s text specification to the task decoder (also depicted in Figure 6): (a) No text input: Exclude the text prompt from both SCR and the task decoder by passing an empty prompt to the U-Net and using only the resulting SCR output for the decoder. (b) Prompt only: Pass the text prompt only to the U-Net. (c) Concat only: Concatenate the CLIP embedding of the text prompt with the visual representation, feeding an empty prompt to the U-Net. (d) Prompt + Concat: Combine \u201cPrompt Only\u201d and \u201cConcat Only\u201d. (e) Only text encoding: Remove visual representations completely and rely only on CLIP text embeddings. Figure 6: Illustration of different approaches to providing relevant vision-language inputs to a downstream task-decoder. Table 4: Ablating text input to SCR on the referring expressions task (Configuration, Score): (a) No text input, 14.8; (b) Prompt only, 82.7; (c) Concat only, 92.2; (d) Prompt + Concat, 92.9; (e) Only text encoding, 37.5. Investigating the results of (a) and (b) in Table 4, it is evident that incorporating the text prompt into the U-Net significantly enhances accuracy compared to ignoring the text altogether. The difference in scores between (b) and (c) indicates that directly providing text embeddings to the decoder improves performance, suggesting that certain crucial aspects of object localization are not fully captured by the representation alone. Comparing (c) to (d), we see that with concatenated text embeddings, further modulation of the visual representations does not provide significant benefits. Finally, the significant decrease in the score for (e) reveals the extent to which the task relies on text-based guesswork. 
These findings align with both intuition and recent research on controllable generation with diffusion models [Zha+23] that highlights the challenges associated with using long-form text guidance. There are ongoing research efforts, focused on training models with more detailed image descriptions or leveraging approaches to encode and integrate sub-phrases of long texts, that seek to address these challenges. 6.4 The Effect of Spatial Aggregation In this study, we refine the approach for extracting representations by integrating a convolutional layer that downsamples the spatial grid of pre-trained representations. This adjustment, referred to as a \u201ccompression layer\u201d by Yadav et al. [Yad+23], aims to reduce the high channel dimension of pre-trained model outputs without losing spatial details, facilitating more effective input processing by downstream task-specific decoders. We explore the effect of spatial aggregation methods by comparing the convolutional downsampling layer method to the multi-headed attention pooling (MAP) used for CLIP embeddings in Karamcheti et al. [Kar+23]. We find that using a compression layer significantly improves performance on the fine-grained visual prediction tasks described in Section 5.5, as reported in Table 5 (columns 3-4). This result challenges the conjecture made in prior work that CLIP representations are limited in their ability to provide accurate low-level spatial information [Kar+23] and emphasizes the critical role of appropriate representation aggregation. Table 5: We ablate the spatial aggregation method for VC-1 and CLIP. On the fine-grained visual prediction tasks, we compare the average precision between using multi-head attention pooling (MAP) and the compression layer. On the Meta-World & Franka-Kitchen tasks, we compare the average success rates (\u00b1 one standard error) between the CLS token and compression layer embeddings. Columns: Model, Aggregation Method, Refer Exp. Grounding, Grasp Affordance Prediction, Meta-World, Franka-Kitchen. VC-1, MAP/CLS, 93.2, 24.7, 88.8 \u00b1 2.2, 52.0 \u00b1 3.4; VC-1, Compression, 94.6, 83.9, 92.3 \u00b1 2.5, 47.5 \u00b1 3.4; CLIP, MAP/CLS, 68.1, 60.3, 88.8 \u00b1 3.9, 35.3 \u00b1 3.4; CLIP, Compression, 94.3, 72.9, 90.1 \u00b1 3.6, 36.3 \u00b1 3.2. Building on this result, we assess whether better spatial aggregation can improve the performance of CLIP representations on downstream control tasks. We present these results in Table 5 (columns 5-6) for VC-1 and CLIP on the MuJoCo tasks. We see that the compression layer often outperforms the use of CLS token embeddings (by 1-2%), but CLIP representations still fail to match the best-performing models. This result provides evidence that the underperformance of CLIP representations on control tasks is unlikely due to a lack of sufficiently fine-grained visual information. Finally, we note that the compression layer aggregation technique was used for all baselines in Tables 1b and 1c to ensure a strong baseline comparison. We recommend that future studies adopt this methodology to enable a fairer comparison of representations. 7 Discussion In Section 6, we deconstructed Stable Control Representations and highlighted how techniques used in our approach can be applied to other foundational control models. Our analysis in Sections 6.1 and 6.4 revealed that using multi-layer features and appropriate spatial aggregation significantly affects performance, and overlooking these factors can lead to misleading conclusions about the capabilities of previously used representations. 
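A small sketch of the compression-layer aggregation discussed in Section 6.4 is given below; it preserves the spatial grid while reducing channels (and optionally resolution) before the task decoder, in contrast to collapsing the grid with CLS/MAP pooling. The channel sizes and kernel settings are illustrative assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn

class CompressionLayer(nn.Module):
    # Strided convolution that keeps spatial structure but shrinks the representation.
    def __init__(self, in_channels, out_channels=256, stride=2):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, feature_grid):             # (B, C, H, W) pre-trained features
        x = self.act(self.conv(feature_grid))
        return x.flatten(start_dim=1)            # flat vector for the policy / decoder MLP

# Example: compress a 16x16 grid of 1024-channel ViT-style features.
print(CompressionLayer(1024)(torch.randn(1, 1024, 16, 16)).shape)  # torch.Size([1, 16384])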
Next, our investigation into how language shapes diffusion model representations uncovered nuanced results and showed that text influence on representations does not consistently increase downstream utility. This is particularly evident in tasks where text specification is not required and where training and test environments are congruent, minimizing the need for semantic generalization. In contrast, tasks like referring expressions grounding demonstrate the necessity of direct access to text embeddings for accurate object localization, even when representations are modulated to considerable success. For the OVMM task, we identified a scenario where multimodal alignment is essential and proposed a method to explicitly utilize the latent knowledge of the Stable Diffusion model through text-aligned attention maps, which is not straightforward to do for other multimodal models. Future research could design methods to derive precise text-associated attribution maps for other models. Finally, we contrasted the simplicity of fine-tuning diffusion models with that of the contrastive learning objective required to fine-tune CLIP representations. The former only requires image-text or image-only samples for conditional and unconditional generation objectives, respectively, whereas the latter requires a sophisticated negative label sampling pipeline along with large batch sizes to prevent the model from collapsing to a degenerate solution [Rad+21]. We demonstrated this phenomenon empirically on the Franka-Kitchen environment by fine-tuning CLIP similarly to SCR-FT in Appendix A.2. 8 Conclusion In this paper, we proposed Stable Control Representations, a method for leveraging representations of general-purpose, pre-trained diffusion models for control. We showed that using representations extracted from text-to-image diffusion models for policy learning can improve generalization across a wide range of tasks including manipulation, image-goal and object-goal based navigation, grasp point prediction, and referring expressions grounding. We also demonstrated the interpretability benefits of incorporating attention maps extracted from pre-trained text-to-image diffusion models, which we showed can improve performance and help identify downstream failures of the policy during development. Finally, we discussed ways in which the insights presented in this paper, for example, regarding feature aggregation and fine-tuning, may be applicable to other foundation models used for control. We hope that Stable Control Representations will help advance data-efficient control and enable open-vocabulary generalization in challenging control domains as the capabilities of diffusion models continue to improve. 13 Acknowledgments GG is funded by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (EP/S024050/1) and Toyota Europe. We gratefully acknowledge donations of computing resources by the Alan Turing Institute. The Georgia Tech effort was supported in part by ONR YIP and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor."
21
+ }
intro_8K/test_introduction_short_2405.05945v1.json ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05945v1",
3
+ "title": "Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers",
4
+ "abstract": "Sora unveils the potential of scaling Diffusion Transformer for generating\nphotorealistic images and videos at arbitrary resolutions, aspect ratios, and\ndurations, yet it still lacks sufficient implementation details. In this\ntechnical report, we introduce the Lumina-T2X family - a series of Flow-based\nLarge Diffusion Transformers (Flag-DiT) equipped with zero-initialized\nattention, as a unified framework designed to transform noise into images,\nvideos, multi-view 3D objects, and audio clips conditioned on text\ninstructions. By tokenizing the latent spatial-temporal space and incorporating\nlearnable placeholders such as [nextline] and [nextframe] tokens, Lumina-T2X\nseamlessly unifies the representations of different modalities across various\nspatial-temporal resolutions. This unified approach enables training within a\nsingle framework for different modalities and allows for flexible generation of\nmultimodal data at any resolution, aspect ratio, and length during inference.\nAdvanced techniques like RoPE, RMSNorm, and flow matching enhance the\nstability, flexibility, and scalability of Flag-DiT, enabling models of\nLumina-T2X to scale up to 7 billion parameters and extend the context window to\n128K tokens. This is particularly beneficial for creating ultra-high-definition\nimages with our Lumina-T2I model and long 720p videos with our Lumina-T2V\nmodel. Remarkably, Lumina-T2I, powered by a 5-billion-parameter Flag-DiT,\nrequires only 35% of the training computational costs of a\n600-million-parameter naive DiT. Our further comprehensive analysis underscores\nLumina-T2X's preliminary capability in resolution extrapolation,\nhigh-resolution editing, generating consistent 3D views, and synthesizing\nvideos with seamless transitions. We expect that the open-sourcing of\nLumina-T2X will further foster creativity, transparency, and diversity in the\ngenerative AI community.",
5
+ "authors": "Peng Gao, Le Zhuo, Ziyi Lin, Chris Liu, Junsong Chen, Ruoyi Du, Enze Xie, Xu Luo, Longtian Qiu, Yuhang Zhang, Chen Lin, Rongjie Huang, Shijie Geng, Renrui Zhang, Junlin Xi, Wenqi Shao, Zhengkai Jiang, Tianshuo Yang, Weicai Ye, He Tong, Jingwen He, Yu Qiao, Hongsheng Li",
6
+ "published": "2024-05-09",
7
+ "updated": "2024-05-09",
8
+ "primary_cat": "cs.CV",
9
+ "cats": [
10
+ "cs.CV"
11
+ ],
12
+ "label": "Original Paper",
13
+ "paper_cat": "Diffusion AND Model",
14
+ "gt": "Recent advancements in foundational diffusion models, such as Sora [108], Stable Diffusion 3 [44], PixArt-\u03b1 [24], and PixArt-\u03a3 [25], have yielded remarkable success in generating photorealistic im- ages and videos. These models demonstrate a paradigm shift from the classic U-Net architecture [61] to a transformer-based architecture [110] for diffusion backbones. Notably, with this improved architecture, Sora and Stable Diffusion 3 can generate samples at arbitrary resolutions and exhibit strong adherence to scaling laws, achieving significantly better results with increased parameter sizes. However, they only provide limited guidance on the design choices of their models and lack detailed implementation instructions and publicly available pre-trained checkpoints, limiting their utility for community usage and replication. Moreover, these methods are tailored to specific tasks such as image or video generation tasks, and are formulated from varying perspectives, which hinders potential cross-modality adaptation. To bridge these gaps, we present Lumina-T2X, a family of Flow-based Large Diffusion Transformers (Flag-DiT) designed to transform noise into images [114, 123], videos [14, 108], multi-views of 3D objects [131, 130], and audio clips [138] based on textual instructions. The largest model within the Lumina-T2X family comprises a Flag-DiT with 7 billion parameters and a multi-modal large language model, SPHINX [46, 85], as the text encoder, with 13 billion parameters, capable of handling 128K tokens. Specifically, the foundational text-to-image model, Lumina-T2I, utilizes the flow matching framework [92, 86, 4] and is trained on a meticulously curated dataset of high- resolution photorealistic image-text pairs, achieving remarkably realistic results with merely a small proportion of computational resources. As shown in Figure 1, Lumina-T2I can generate high- quality images at arbitrary resolutions and aspect ratios, and further enables advanced functionalities including resolution extrapolation [43, 55], high-resolution editing [57, 18, 78, 129], compositional generation [12, 162], and style-consistent generation [58, 143], all of which are seamlessly integrated into the framework in a training-free manner. In addition, to empower the generation capabilities across various modalities, Lumina-T2X is independently trained from scratch on video-text, multi- view-text, and speech-text pairs to synthesize videos, multi-view images of 3D objects, and speech from text instructions. For instance, Lumina-T2V, trained with only limited resources and time, can produce 720p videos of any aspect ratio and duration, significantly narrowing the gap between Sora and open-source models. The core contributions of Lumina-T2X are summarized as follows: Flow-based Large Diffusion Transformers (Flag-DiT): Lumina-T2X utilizes the Flag-DiT ar- chitecture inspired by the core design principles from Large Language Models (LLMs) [145, 146, 19, 117, 122, 141, 166], such as scalable architecture [19, 150, 56, 163, 136, 36] and context window extension [112, 136, 30, 3] for increasing parameter size and sequence length. The modifications, including RoPE [136], RMSNorm [163], and KQ-norm [56], over the original DiT, significantly enhance the training stability and model scalability, supporting up to 7 billion parameters and se- quences of 128K tokens. 
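As a concrete illustration of the stability modifications mentioned above (RMSNorm in place of LayerNorm and key-query normalization before the attention dot product), here is a minimal PyTorch-style sketch; the class and variable names are illustrative stand-ins and are not taken from the released Lumina-T2X code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square normalization: no mean subtraction, no bias term."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

class QKNormAttention(nn.Module):
    """Self-attention with key-query normalization to bound attention logits."""
    def __init__(self, dim, n_heads):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        self.q_norm = RMSNorm(self.head_dim)   # KQ-Norm applied per head
        self.k_norm = RMSNorm(self.head_dim)

    def forward(self, x):
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, tokens, head_dim)
        q, k, v = (t.view(b, n, self.n_heads, self.head_dim).transpose(1, 2) for t in (q, k, v))
        q, k = self.q_norm(q), self.k_norm(k)  # keeps logits in a safe range for fp16/bf16 training
        out = F.scaled_dot_product_attention(q, k, v)
        return self.proj(out.transpose(1, 2).reshape(b, n, d))
```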
Moreover, Flag-DiT improves upon the original DiT by adopting the flow matching formulation [98, 86], which builds continuous-time diffusion paths via linear interpolation between noise and data. We have thoroughly ablated these architecture improvements over the label-conditioned generation on ImageNet [38], demonstrating faster training convergence, stable training dynamics, and a simplified training and inference pipeline. Any Modalities, Resolution, and Duration within One Framework: Lumina-T2X tokenizes images, videos, multi-views of 3D objects, and spectrograms into one-dimensional sequences, similar to the way LLMs [117, 26, 19, 116] process natural language. By incorporating learnable placeholders such as [nextline] and [nextframe] tokens, Lumina-T2X can seamlessly encode any modality - regardless of resolution, aspect ratio, or even temporal duration - into a unified 1-D token sequence. The model then utilizes Flag-DiT with text conditioning to progressively transform noise into clean data across all modalities, resolutions, and durations by explicitly specifying the positions of [nextline] and [nextframe] tokens during inference. Remarkably, this flexibility even allows for resolution extrapolation, enabling the generation of resolutions surpassing those encountered during training. For instance, Lumina-T2I trained at a resolution of 1024 \u00d7 1024 pixels can generate images ranging from 768 \u00d7 768 to 1792 \u00d7 1792 pixels by simply adding more [nextline] tokens, which significantly broadens the potential applications of Lumina-T2X. 3 Table 1: We compare the training setups of Lumina-T2I with PixArt-\u03b1. Lumina-T2I is trained purely on 14 million high-quality (HQ) image-text pairs, whereas PixArt-\u03b1 benefits from an additional 11 million high-quality natural image-text pairs. Remarkably, despite having 8.3 times more parameters, Lumina-T2I only incurs 35% of the computational costs compared to PixArt-\u03b1-0.6B. PixArt-\u03b1-0.6B with T5-3B Lumina-T2I-5B with LLaMa-7B Res. #images Batch Size Learning Rate GPU days Res. #images Batch Size Learning Rate GPU days (A100) (A100) 256 1M ImageNet 1024 2\u00d710\u22125 44 - - - - - 256 10M SAM 11392 2\u00d710\u22125 336 - - - - - 256 14M HQ 11392 2\u00d710\u22125 208 256 14M HQ 512 1\u00d710\u22124 96 512 14M HQ 2560 2\u00d710\u22125 160 512 14M HQ 256 1\u00d710\u22124 96 1024 14M HQ 384 2\u00d710\u22125 80 1024 14M HQ 128 1\u00d710\u22124 96 Low Training Resources: Our empirical observations indicate that employing larger models, high-resolution images, and longer-duration video clips can significantly accelerate the convergence speed of diffusion transformers. Although increasing the token length prolongs the time of each iteration due to the quadratic complexity of transformers, it substantially reduces the overall training time before convergence by lowering the required number of iterations. Moreover, by utilizing meticulously curated text-image and text-video pairs featuring high aesthetic quality frames and detailed captions [13, 24, 25], our Lumina-T2X model is able to generate high-resolution images and coherent videos with minimal computational demands. It is worth noting that the default Lumina-T2I configuration, equipped with a 5 billion Flag-DiT and a 7 billion LLaMA [145, 146] as its text encoder, requires only 35% of the computational resources compared to PixArt-\u03b1, which builds upon a 600 million DiT backbone and 3 billion T5 [120] as its text encoder. 
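The unified 1-D representation described above can be pictured with a small sketch that flattens a latent of shape (T, H, W, C) into a single token sequence with [nextline] and [nextframe] placeholders. The function and the placeholder embeddings below are hypothetical stand-ins for the learnable tokens, not the actual implementation.

```python
import torch

def to_1d_sequence(latent, nextline, nextframe):
    """Flatten latent frames (T, H, W, C) into one token sequence.

    `nextline` and `nextframe` are placeholder embeddings of shape (C,) that
    mark row and frame boundaries, so the layout can be recovered at any
    resolution, aspect ratio, or duration.
    """
    T, H, W, C = latent.shape
    seq = []
    for t in range(T):
        for h in range(H):
            seq.append(latent[t, h])              # W tokens of one row
            seq.append(nextline.unsqueeze(0))     # [nextline] after each row
        seq.append(nextframe.unsqueeze(0))        # [nextframe] after each frame
    return torch.cat(seq, dim=0)                  # (T*(H*W + H) + T, C)

# Example: a single 2x3 "image" latent with 4 channels.
latent = torch.randn(1, 2, 3, 4)
nextline, nextframe = torch.zeros(4), torch.ones(4)
print(to_1d_sequence(latent, nextline, nextframe).shape)  # torch.Size([9, 4])
```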
A detailed comparison of computational resources between the default Lumina-T2I and PixArt-\u03b1 is provided in Table 1. In this technical report, we first introduce the architecture of Flag-DiT and its overall pipeline. We then introduce the Lumina-T2X system, which applies Flag-DiT over various modalities. Additionally, we discuss advanced inference techniques that unlock the full potential of the pretrained Lumina-T2I. Finally, we showcase the results from models in the Lumina-T2X family, accompanied by in-depth analyses. To support future research in the generative AI community, all training, inference codes, and pre-trained models of Lumina-T2X will be released.",
15
+ "main_content": "In this section, we revisit preliminary research that lays the foundation for Lumina-T2X. Building on these insights, we introduce the core architecture, Flag-DiT, along with the overall pipeline. Next, we delve into diverse configurations and discuss the application of Lumina-T2X across various modalities including images, videos, multi-view 3D objects, and speech. The discussion then extends to the advanced applications of the pretrained Lumina-T2I on resolution extrapolation, style-consistent generation, high-resolution editing, and compositional generation. 2.1 Revisiting RoPE, DiT, SiT, PixArt-\u03b1 and Sora Before introducing Lumina-T2X, we first revisit several milestone studies on leveraging diffusion transformers for text-to-image and text-to-video generation, as well as seminal research on large language models (LLMs). Rotary Position Embedding (RoPE) RoPE [136] is a type of position embedding that can encode relative positions within self-attention operations. It can be regarded as a multiplicative bias based on position \u2013 given a sequence of the query/key vectors, the n-th query and the m-th key after RoPE can be expressed as: \u02dc qm = f(qm, m) = qmeim\u0398, \u02dc kn = f(kn, n) = knein\u0398, (1) where \u0398 is the frequency matrix. Equipping with RoPE, the calculation of attention scores can be considered as taking the real part of the standard Hermitian inner product: Re[f(qm, m)f \u2217(kn, n)] = Re[qmk\u2217 nei\u0398(m\u2212n)]. (2) 4 In this way, the relative position m \u2212n between the m-th and n-th tokens can be explicitly encoded. Compared to absolute positional encoding, RoPE offers translational invariance, which can enhance the context window extrapolation potential of LLMs. Many subsequent techniques further explore and unlock this potential, e.g., position interpolation [30], NTK-aware scaled RoPE [3], Yarn [112], etc. In this work, Flag-DiT applies RoPE to the keys and queries of diffusion transformer. Notably, this simple technique endows Lumina-T2X with superior resolution extrapolation potential (i.e., generating images at out-of-domain resolutions unseen during training), as demonstrated in Section 3.2, compared to its competitors. DiT, Scalable Interpolant Transformer (SiT) and Flow Matching U-Net has been the de-facto diffusion backbone in previous Denoising Diffusion Probabilistic Models [61] (DDPM). DiT [110] explores using transformers trained on latent patches as an alternative to U-Net, achieving state-ofthe-art FID scores on class-conditional ImageNet benchmarks and demonstrating superior scaling potentials in terms of training and inference FLOPs. Furthermore, SiT [98] utilizes the stochastic interpolant framework (or flow matching) to connect different distributions in a more flexible manner than DDPM. Extensive ablation studies by SiT reveal that linearly connecting two distributions, predicting velocity fields, and employing a stochastic solver can enhance sample quality with the same DiT architecture. However, both DiT and SiT are limited in model sizes, up to 600 million parameters, and suffer from training instability when scaling up. Therefore, we borrow design choices from LLMs and validate that simple modifications can train a 7-billion-parameter diffusion transformer in mixed precision training. PixArt-\u03b1 and -\u03a3 DiT explores the potential of transformers for label-conditioned generation. 
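Returning to RoPE as defined in Equations (1) and (2), the rotation can be sketched with complex arithmetic as below. This is a generic 1-D RoPE for illustration only, not the exact variant used by Flag-DiT.

```python
import torch

def apply_rope(x, base=10000.0):
    """Rotate query/key vectors by position-dependent angles (1-D RoPE).

    x: (seq_len, dim) with even dim. Adjacent channel pairs are treated as the
    real/imaginary parts of complex numbers and multiplied by exp(i*m*theta_j),
    so the dot product q_m . k_n depends only on the relative offset m - n.
    """
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)          # theta_j
    angles = torch.outer(torch.arange(seq_len, dtype=torch.float32), freqs)    # (seq, half)
    x_complex = torch.view_as_complex(x.float().reshape(seq_len, half, 2))
    rotated = x_complex * torch.polar(torch.ones_like(angles), angles)
    return torch.view_as_real(rotated).reshape(seq_len, dim).type_as(x)

q, k = torch.randn(16, 64), torch.randn(16, 64)
scores = apply_rope(q) @ apply_rope(k).T   # attention logits now encode relative positions
```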
Built on DiT, PixArt-\u03b1 [24] unleashes this potential for generating images based on arbitrary textual instructions. PixArt-\u03b1 significantly reduces training costs compared with SDXL [114] and Raphael [159], while maintaining high sample quality. This is achieved through multi-stage progressive training, efficient text-to-image conditioning with DiT, and the use of carefully curated high-aesthetic datasets. PixArt-\u03a3 extends this approach by increasing the image generation resolution to 4K, facilitated by the collection of 4K training image-text pairs. Lumina-T2I is highly motivated by PixArt-\u03b1 and -\u03a3 yet it incorporates several key differences. Firstly, Lumina-T2I utilizes Flag-DiT with 5B parameters as the backbone, which is 8.3 times larger than the 0.6B-parameter backbone used by PixArt-\u03b1 and -\u03a3. According to studies on class-conditional ImageNet generation in Section 3.1, larger diffusion models tend to converge much faster than their smaller counterparts and excel at capturing details on high-resolution images. Secondly, unlike PixArt-\u03b1 and -\u03a3 that were pretrained on ImageNet [38] and SAM-HD [80] images, Lumina-T2I is trained directly on high-aesthetic synthetic datasets without being interfered by the domain gap between images from different domains. Thirdly, while PixArt-\u03b1 and -\u03a3 excel at generating images with the same resolution as training stages, our Lumina-T2I, through the introduction of RoPE and [nextline] token, possesses a resolution extrapolation capability, enabling generating images at a lower or higher resolution unseen during training, which offers a significant advantage in generating and transferring images across various scales. Sora Sora [108] demonstrates remarkable improvements in text-to-video generation that can create 1-minute videos with realistic or imaginative scenes spanning different durations, resolutions, and aspect ratios. In comparison, Lumina-T2V can also generate 720p videos at arbitrary aspect ratios. Although there still exists a noticeable gap in terms of video length and quality between Lumian-T2V and Sora, video samples from Lumina-T2V exhibit considerable improvements over open-source models on scene transitions and alignment with complex text instructions. We have released all codes of Lumina-T2V and believe training with more computational resources, carefully designed spatial-temporal video encoder, and meticulously curated video-text pairs will further elevate the video quality. 2.2 Architecture of Flag-DiT Flag-DiT serves as the backbone of the Lumina-T2X framework. We will introduce the architecture of Flag-DiT and present the stability, flexibility, and scalability of our framework. 5 Text\u00a0 Embedding Global Avg Time\u00a0 Embedding Zero-init.\u00a0 Gating Zero-init.\u00a0 attention LN LN N RoPE RMSNorm Noisy input Proj. Proj. Linear Proj. \\f \\f \\f \\n \\f \\n \\n \\f Next-frame token tanh \\n \\n \\n \\n \\n \\n \\n \\n \\f Next-line token An astronaut riding a horse on Mars.... Label\u00a0 Embedding Time\u00a0 Embedding scale\u00a0 RMSNorm N RoPE RMSNorm Noisy input Predicted Velocity Linear Proj. \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n Predicted Noise or Next-line token Predicted Velocity Predicted Noise or KV Proj. Proj. Proj. scale\u00a0 scale & shift\u00a0 Feedforward scale\u00a0 scale & shift\u00a0 Proj. Proj. Proj. Proj. scale & shift\u00a0 Feedforward scale\u00a0 Dog LN scale & shift\u00a0 QKV Proj. RMSNorm QKV Proj. 
LN LN Prediction head Prediction head Patch Embed Patch Embed Figure 2: A comparison of Flag-DiT with label and text conditioning. (Left) Flag-DiT with label conditioning. (Right) Text conditioning with a zero-initialized attention mechanism. Flow-based Large Diffusion Transformers (Flag-DiT) DiT is rising to be a popular generative modeling approach with great scaling potential. It operates over latent patches extracted from a pretrained VAE [79, 14], then utilizes a transformer [150, 111] as denoising backbone to predict the mean and variance according to DDPM formulation [134, 135, 61, 105] from different levels of noised latent patches conditioned on time steps and class labels. However, the largest parameter size of DiT is only limited at 600M which is far less than LLMs (e.g., PaLM-540B [35, 7], Grok-1-300B, LLaMa3-400B [145, 146]). Besides, DiT requires full precision training which doubles the GPU memory costs and training speed compared with mixed precision training [99]. Last, the design choice of DiT lacks the flexibility to generate an arbitrary number of images (i.e., videos or multiview images) with various resolutions and aspect ratios, using the fixed DDPM formulation. To remedy the mentioned problems of DiT, Flag-DiT keeps the overall framework of DiT unchanged while introducing the following modifications to improve scalability, stability, and flexibility. \u2780Stability Flag-DiT builds on top of DiT [111] and incorporates modifications from ViT-22B [36] and LLaMa [145, 146] to improve the training stability. Specifically, Flag-DiT substitutes all LayerNorm [9] with RMSNorm [163] to improve training stability. Moreover, it incorporates keyquery normalization (KQ-Norm) [36, 56, 96] before key-query dot product attention computation. The introduction of KQ-Norm aims to prevent loss divergence by eliminating extremely large values within attention logits [36]. Such simple modifications can prevent divergent loss under mixedprecision training and facilitate optimization with a substantially higher learning rate. The detailed computational flow of Flag-DiT is shown in Figure 2. \u2781Flexibility DiT only supports fixed resolution generation of a single image with simple label conditions and fixed DDPM formulation. To tackle these issues, we first examine why DiT lacks 6 the flexibility to generate samples at arbitrary resolutions and scales. We find that this stems from the design choice that DiT leverages absolute positional embedding (APE) [42, 144] and adds it to latent tokens in the first layer following vision transformers. However, APE, designed for vision recognition tasks, struggles to generalize to unseen resolutions and scales beyond training. Motivated by recent LLMs exhibiting strong context extrapolation capabilities [112, 136, 30, 3], we replace APE with RoPE [136] which injects relative position information in a layerwise manner, following Equations 1 and 2. Since the original DiT can only handle a single image at a fixed size, we further introduce learnable special tokens including the [nextline] and [nextframe] tokens to transform training samples with different scales and durations into a unified one-dimensional sequence. Besides, we add [PAD] tokens to transform 1-D sequences into the same length for better parallelism. 
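The [PAD]-based batching mentioned above amounts to right-padding each flattened sequence to a common length and tracking a mask so attention can ignore the padded positions. A minimal sketch, with a hypothetical `pad_token` embedding, is shown below.

```python
import torch

def pad_and_mask(sequences, pad_token):
    """Pad variable-length 1-D token sequences to a common length.

    `sequences` is a list of (len_i, dim) tensors produced by the flattening
    step; `pad_token` is a (dim,) embedding standing in for [PAD]. Returns the
    stacked batch and a boolean mask marking real (non-pad) positions that can
    be passed to the attention layers.
    """
    max_len = max(s.shape[0] for s in sequences)
    dim = sequences[0].shape[1]
    batch = pad_token.expand(len(sequences), max_len, dim).clone()
    mask = torch.zeros(len(sequences), max_len, dtype=torch.bool)
    for i, s in enumerate(sequences):
        batch[i, : s.shape[0]] = s
        mask[i, : s.shape[0]] = True
    return batch, mask
```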
These are the key modifications that significantly improve training and inference flexibility, allowing Lumina-T2X to train on and generate samples of arbitrary modality, resolution, aspect ratio, and duration, leading to the final design of Lumina-T2X. Next, we switch from the DDPM setting in DiT to the flow matching formulation [98, 92, 86], which offers further flexibility to Flag-DiT. It is well known that the schedule defining how data is corrupted into noise has a great impact on both the training and sampling of standard diffusion models, and many diffusion schedules have been carefully designed, including VE [135], VP [61], and EDM [77]. In contrast, flow matching [86, 5] emerges as a simple alternative that linearly interpolates between noise and data in a straight line. More specifically, given data $x \sim p(x)$ and Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$, we define an interpolation-based forward process
$$x_t = \alpha_t x + \beta_t \epsilon, \qquad (3)$$
where $\alpha_0 = 0$, $\beta_0 = 1$, $\alpha_1 = 1$, and $\beta_1 = 0$, so that the interpolation on $t \in [0, 1]$ runs from $x_0 = \epsilon$ to $x_1 = x$. Similar to the diffusion schedule, this interpolation schedule offers a flexible choice of $\alpha_t$ and $\beta_t$; for example, the VP cosine schedule corresponds to $\alpha_t = \sin(\tfrac{\pi}{2} t)$, $\beta_t = \cos(\tfrac{\pi}{2} t)$. In our framework, we adopt the linear interpolation schedule between noise and data for its simplicity, i.e.,
$$x_t = t x + (1 - t)\epsilon. \qquad (4)$$
This formulation describes a uniform transformation with constant velocity between data and noise. The corresponding time-dependent velocity field is
$$v_t(x_t) = \dot{\alpha}_t x + \dot{\beta}_t \epsilon \qquad (5)$$
$$\phantom{v_t(x_t)} = x - \epsilon, \qquad (6)$$
where $\dot{\alpha}_t$ and $\dot{\beta}_t$ denote the time derivatives of $\alpha_t$ and $\beta_t$. This time-dependent velocity field $v : [0, 1] \times \mathbb{R}^d \rightarrow \mathbb{R}^d$ defines an ordinary differential equation, the Flow ODE:
$$dx = v_t(x_t)\,dt. \qquad (7)$$
We use $\phi_t(x)$ to represent the solution of the Flow ODE with the initial condition $\phi_0(x) = x$. By solving the Flow ODE from $t = 0$ to $t = 1$, we can transform noise into a data sample using the approximated velocity field $v_\theta(x_t, t)$. During training, the flow matching objective directly regresses the target velocity,
$$L_v = \int_0^1 \mathbb{E}\big[\, \lVert v_\theta(x_t, t) - \dot{\alpha}_t x - \dot{\beta}_t \epsilon \rVert^2 \,\big]\, dt, \qquad (8)$$
which is named the Conditional Flow Matching loss [86] and is analogous to the noise- or score-prediction losses in diffusion models. Besides simple label conditioning for class-conditioned generation, Flag-DiT can flexibly support arbitrary text instructions with zero-initialized attention [165, 45, 164, 10]. As shown in Figure 2(b), Flag-DiT-T2I, a variant of Flag-DiT, leverages the queries of latent image tokens to aggregate information from the keys and values of text embeddings. We then propose a zero-initialized gating mechanism to gradually inject conditional information into the token sequences. The final attention output is formulated as
$$A = \mathrm{softmax}\!\left(\frac{\tilde{I}_q \tilde{I}_k^{T}}{\sqrt{d_k}}\right) I_v + \tanh(\alpha)\, \mathrm{softmax}\!\left(\frac{\tilde{I}_q T_k^{T}}{\sqrt{d_k}}\right) T_v, \qquad (9)$$
where $\tilde{I}_q$ and $\tilde{I}_k$ stand for applying RoPE, as defined in Equation 1, to the image queries and keys, $d_k$ is the dimension of queries and keys, and $\alpha$ is the zero-initialized learnable parameter in gated cross-attention. In the experiments, we discovered that zero-initialized attention induces gating sparsity that can turn off 90% of the text-embedding conditioning across layers and heads, which indicates the potential for designing more efficient T2I models in the future. Equipped with the above improvements, our Flag-DiT supports arbitrary-resolution generation of multiple images with arbitrary conditioning under a unified flow matching paradigm.
[Figure 3: Our Lumina-T2X framework consists of four components: frame-wise encoding, input & target construction, text encoding, and prediction based on Flag-DiT.]
\u2782Scalability After addressing the training instability of DiT and increasing its flexibility to support arbitrary resolutions conditioned on text instructions, we empirically scale up Flag-DiT with more parameters and more training samples. Specifically, we explore scaling the parameter size from 600M to 7B on the label-conditioned ImageNet generation benchmark. The detailed configurations of Flag-DiT with different parameter sizes are discussed in Appendix B. Flag-DiT can be stably trained under a mixed-precision configuration and achieves fast convergence compared with vanilla DiT, as shown in the experiment section. After verifying the scalability of Flag-DiT, we scale the token length up to 4K and expand the dataset from label-conditioned 1M ImageNet to the more challenging 17M high-resolution image-text pairs. We further verified that Flag-DiT can support the generation of long videos of up to 128 frames, equivalent to 128K tokens. As Flag-DiT is a pure transformer-based architecture, it can borrow the well-validated parallel strategies [132, 121, 169, 88, 87, 89, 72] designed for LLMs, including FSDP [169] and sequence parallelism [88, 87, 89, 72], to support larger parameter scales and longer sequences. Therefore, we conclude that Flag-DiT is a scalable generative model with respect to model parameters, sequence length, and dataset size. 2.3 The Overall Pipeline of Lumina-T2X As illustrated in Figure 3, the pipeline of Lumina-T2X consists of four main components during training, described below. Frame-wise Encoding of Different Modalities The key ingredient for unifying different modalities within our framework is treating images, videos, multi-view images, and speech spectrograms as frame sequences of length T.
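A minimal sketch of the zero-initialized gated cross-attention of Equation (9): image queries attend to text keys/values, and the result is scaled by the tanh of a per-head gate that starts at zero, so text conditioning is injected gradually as training progresses. Only the text branch is shown (the self-attention term and RoPE on the queries are omitted), and the module layout is an assumption for illustration, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroInitGatedCrossAttention(nn.Module):
    """Adds tanh(gate) * CrossAttn(image, text) to the image tokens; gate starts at 0."""
    def __init__(self, dim, n_heads):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_kv = nn.Linear(dim, 2 * dim, bias=False)
        self.gate = nn.Parameter(torch.zeros(n_heads))  # zero init: no text influence at step 0

    def forward(self, image_tokens, text_tokens):
        b, n, d = image_tokens.shape
        q = self.to_q(image_tokens).view(b, n, self.n_heads, self.head_dim).transpose(1, 2)
        k, v = self.to_kv(text_tokens).chunk(2, dim=-1)
        k = k.view(b, -1, self.n_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, -1, self.n_heads, self.head_dim).transpose(1, 2)
        attn = F.scaled_dot_product_attention(q, k, v)          # (b, heads, n, head_dim)
        attn = attn * torch.tanh(self.gate).view(1, -1, 1, 1)   # per-head learnable gate
        return image_tokens + attn.transpose(1, 2).reshape(b, n, d)
```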
We can then utilize modality-specific encoders to transform these inputs into latent frames of shape [H, W, T, C]. Specifically, for images (T = 1), videos (T = number of frames), and multi-view images (T = number of views), we use the SD 1.5 VAE to independently encode each image frame into latent space and concatenate all latent frames together, while we leave speech spectrograms unchanged using an identity mapping. Our approach establishes a universal data representation that supports diverse modalities, enabling our Flag-DiT to model them effectively. Text Encoding with Diverse Text Encoders For text-conditional generation, we encode the text prompts using pre-trained language models. Specifically, we incorporate a variety of text encoders of varying sizes, including CLIP, LLaMA, SPHINX, and Phone encoders, tailored for various needs and modalities, to optimize text conditioning. We provide a series of Lumina-T2X models trained with the different text encoders mentioned above in our model zoo, as shown in Figure 17. Input & Target Construction As described in Section 2.2, latent frames are first flattened using 2 × 2 patches into a 1-D sequence and then augmented with [nextline] and [nextframe] tokens as identifiers. Lumina-T2X adopts the linear interpolation schedule in flow matching to construct the input and target following Equations 4 and 6 for its simplicity and flexibility. Inspired by the observation that intermediate timesteps are critical for both diffusion models [77] and flow-based models [44], we adopt a time resampling strategy that samples timesteps from a log-norm distribution during training. Specifically, we first sample a timestep from a normal distribution $\mathcal{N}(0, 1)$ and map it to $[0, 1]$ using the logistic function in order to emphasize the learning of intermediate timesteps. Network Architecture & Loss We use Flag-DiT as our denoising backbone. The detailed architecture of each Flag-DiT block is depicted in Figure 2. Given the noisy input, the Flag-DiT blocks inject the diffusion timestep, added to the global text embedding, via the modulation mechanism, and further integrate text conditioning via the zero-initialized attention of Equation 9 in Section 2.2. We add RMSNorm at the start of each attention and MLP block to prevent the absolute values from growing uncontrollably and causing numerical instability. Finally, we compute the regression loss between the predicted velocity $\hat{v}_\theta$ and the ground-truth velocity $\dot{\alpha}_t x + \dot{\beta}_t \epsilon$ using the Conditional Flow Matching loss in Equation 8. 2.4 Lumina-T2X System In this section, we introduce the family of Lumina-T2X models, including Lumina-T2I, Lumina-T2V, Lumina-T2MV, and Lumina-T2Speech. For each modality, Lumina-T2X is independently trained with diverse configurations optimized for varying scenarios, such as different text encoders, VAE latent spaces, and parameter sizes. The detailed configurations are provided in Appendix B. Lumina-T2I is the key component of our Lumina-T2X system, where we utilize the T2I task as a testbed for validating the effectiveness of each component discussed in Section 3.2. Notably, our most advanced Lumina-T2I model, with a 5B Flag-DiT, a 7B LLaMA text encoder, and the SDXL latent space, demonstrates superior visual quality and accurate text-to-image alignment. We can then extend the explored architecture, hyper-parameters, and other training details to video, multi-view, and speech generation.
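Putting the input and target construction above together with the loss, one training step under the linear schedule and log-norm time resampling could look like the sketch below; the `model(x_t, t, text_emb)` signature is an assumption for illustration rather than the actual Flag-DiT interface.

```python
import torch

def flow_matching_loss(model, x1, text_emb):
    """One training step: linear path x_t = t*x1 + (1-t)*eps, target velocity x1 - eps."""
    b = x1.shape[0]
    # Time resampling: draw from N(0, 1) and squash with the logistic function,
    # which concentrates samples around intermediate timesteps.
    t = torch.sigmoid(torch.randn(b, device=x1.device))
    t_ = t.view(b, *([1] * (x1.dim() - 1)))          # broadcast over latent dims
    eps = torch.randn_like(x1)
    x_t = t_ * x1 + (1.0 - t_) * eps                 # Eq. (4)
    target_v = x1 - eps                              # Eq. (6)
    pred_v = model(x_t, t, text_emb)                 # Flag-DiT predicts the velocity field
    return ((pred_v - target_v) ** 2).mean()         # Conditional Flow Matching loss, Eq. (8)
```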
Since videos and multi-views of 3D objects usually contain up to 1 million tokens, Lumina-T2V and Lumina-T2MV adopt a 2B Flag-DiT, CLIP-L/G text encoder, and SD-1.5 latent space. Although this configuration slightly reduces visual quality, it provides an effective balance for processing long sequences and a joint latent space for images and videos. Motivated by previous approaches [62, 24], Lumina-T2I, Lumina-T2V, and Lumina-T2MV employ a multi-stage training approach, starting from low-resolution, short-duration data while ending with high-resolution, long-duration data. Such a progressive training strategy significantly improves the convergence speed of Lumina-T2X. For Lumina-T2Speech, since the feature space of the spectrogram shows a completely different distribution than images, we directly tokenize the spectrogram without using a VAE encoder and train a randomly initialized Flag-DiT conditioned on a phoneme encoder for T2Speech generation. 2.5 Advanced Applications of Lumina-T2I Beyond its basic text-to-image generation capabilities, the text-to-image Lumina-T2I supports more complex visual creations and produces innovative visual effects as a foundational model. This includes resolution extrapolation, style-consistent generation, high-resolution image editing, and compositional generation \u2013 all in a tuning-free manner. Unlike previous methods that solve these tasks with varied approaches, Lumina-T2I can uniformly tackle these problems through token operations, as illustrated in Figure 4. 9 Content VAE \\n \\n 1 2 4 5 3 6 \\n \\n \\n \\n 1 2 6 7 3 4 8 9 5 10 11 12 16 17 13 14 18 19 15 20 \\n \\n \\n \\n 1 2 4 5 3 6 7 8 10 11 9 12 0 1 \\n \\n 1 2 4 5 3 6 \\n \\n [ ] Channel-wise normalization Resolution\u00a0 Extrapolation Style-consistent\u00a0 Generation High-Res. Editing Text 1 Text 2 Mask 1 Conditioning ODE/SDE 17922 Gaussian noise Coordinate (RoPE) Text-to-image Generation \\n \\n 1 2 7 8 Text 3 10242 3 4 9 10 5 11 6 12 Text 4 Compositional\u00a0 Generation Mask 2 Figure 4: Lumina-T2I supports text-to-image generation, resolution extrapolation, style-consistent generation, compositional generation, and high-resolution editing in a unified and training-free framework. Tuning-Free Resolution Extrapolation Due to exponential growth in computational demand and data scarcity, existing T2I models are generally limited to 1K resolution. Thus, there is a significant demand for low-cost and high-resolution extrapolation approaches [55, 43, 33]. The translational invariance of RoPE enhances Lumina-T2X\u2019s potential for resolution extrapolation, allowing it to generate images at out-of-domain resolutions. Inspired by the practices in previous arts, we adopt three techniques that can help unleash Lumina-T2X\u2019s potential of test-time resolution extrapolation: (1) NTK-aware scaled RoPE [3] that rescales the rotary base of RoPE to achieve a gradual position interpolation of the low-frequency components, (2) Time Shifting [44] that reschedules the timesteps to ensure consistent SNR across denoising processes of different resolutions, and (3) Proportional Attention [75] that rescales the attention score to ensure stable attention entropy across various sequence lengths. The visualization of resolution extrapolation can be found in Figure 7, and the details about the aforementioned techniques in our implementation can be found in Appendix A.1. 
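The first two test-time extrapolation techniques listed above can be sketched as follows. The exact rules used by Lumina-T2X are given in its Appendix A.1 and are not reproduced here, so the formulas below are commonly used heuristics and should be read as assumptions rather than the official implementation.

```python
import torch

def ntk_scaled_inv_freq(head_dim, scale, base=10000.0):
    """NTK-aware scaled RoPE: enlarge the rotary base so low-frequency
    components are interpolated gradually when the token grid grows by
    `scale` at test time (widely used heuristic, assumed here)."""
    new_base = base * scale ** (head_dim / (head_dim - 2))
    return new_base ** (-torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim)

def shift_timesteps(t, shift):
    """Time shifting: remap t in [0, 1] so that sampling at higher resolution
    spends more of its step budget at high-noise levels, keeping the SNR
    schedule roughly consistent (one common parameterization, assumed)."""
    return shift * t / (1.0 + (shift - 1.0) * t)
```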
In addition to generating images with large sizes, we observe that such resolution extrapolation can even improve the quality of the generated images, serving as a free lunch (refer to Section 3.2). Style-Consistent Generation The transformer-based diffusion model architecture makes LuminaT2I naturally suitable for self-attention manipulation applications like style-consistent generation. A representative approach is shared attention [58], which enables generating style-aligned batches without specific tuning of the model. Specifically, it uses the first image in a batch as the anchor/reference image, allowing the queries from other images in the batch to access the keys and values of the first image during the self-attention operation. This kind of information leakage effectively promotes a consistent style across the images in a batch. Typically, this can be achieved by concatenating the keys and values of the first image with those of other images before self-attention. However, in diffusion transformers, it is important to note that keys from two images contain duplicated positional embeddings, which can disrupt the model\u2019s awareness of spatial structures. Therefore, we need to ensure that key/value sharing occurs before RoPE, which can be regarded as appending a reference image sequence to the target image sequence. Compositional Generation Compositional, or multi-concepts text-to-image generation [74, 12, 162], which requires the model to generate multiple subjects at different regions of a single image, is seamlessly supported by our transformer-based framework. Users can define N different prompts and N bounding boxes as masks for corresponding prompts. Our key insight is to restrict the cross-attention operation of each prompt within the corresponding region during sampling. More specifically, at each timestep, we crop the noisy data xt using each mask and reshape the resulting sub-regions into a sub-region batch {x1 t, x2 t, . . . , xN t }, corresponding to the prompt batch {y1, y2, . . . , yN}. Then, we compute cross-attention using this sub-region batch and prompt batch and manipulate the output back to the complete data sample. We only apply this operation to cross10 attention layers to ensure the text information is injected into different regions while keeping the self-attention layers unchanged to ensure the final image is coherent and harmonic. We additionally set the global text condition as the embedding of the complete prompt, i.e., concatenation of all prompts, to enhance global coherence. High-Resolution Editing Beyond high-resolution generation, our Lumina-T2I can also perform image editing [57, 18], especially for high-resolution images. Considering the distinct features of different editing types, we first classify image editing into two major categories, namely style editing and subject editing. For style editing, we aim to change or enhance the overall visual style, such as color, environment, and texture, without modifying the main object of the image, while subject editing aims to modify the content of the main object, such as addition, replacement, and removal, without affecting the overall visual style. Then, we leverage a simple yet effective method to achieve this image editing within the Lumina-T2I framework. Specifically, given an input image, we first encode it into latent space using the VAE encoder and interpolate the image latent with noise to get the intermediate noisy latent at time \u03bb. 
Then, we can solve the Flow ODE from \u03bb to 1.0 with desired prompts for editing as text conditions. Due to the powerful generation capability of our model, it can faithfully perform the ideal editing while preserving the original details in high resolution. However, in style editing, we find that the mean and variance are highly correlated with image styles. Therefore, the above method still suffers from style leakage since the interpolated noisy data still retains the style of the original image in its mean and variance. To eliminate the influence of the original image styles, we perform channel-wise normalization on input images, transforming them to zero mean and unit variance. 3 Experiments 3.1 Validating Flag-DiT on ImageNet Training Setups We perform experiments on label-conditioned 256\u00d7256 and 512\u00d7512 ImageNet [38] generation to validate the advantages of Flag-DiT over DiT [111]. Large-DiT is a specialized version of Flag-DiT, incorporating the DDPM formulation [61, 105] to enable a fair comparison with the original DiT. We exactly follow the setups of DiT but with the following modifications, including, mixed precision training, large learning rate, and architecture modifications suite (e.g. QK-Norm, RoPE, and RMSNorm). By default, we report FID-50K [109, 39] using 250 DDPM sampling steps for Large-DiT and the adaptive Dopri-5 solver for Flag-DiT. We additionally report sFID [124], Inception Score [104], and Precision/Recall [83] for an extensive evaluation. Comparison with SOTA Approaches As shown in Table 2, Large-DiT-7B significantly surpasses all approaches on FID and IS score without using Classifier-free Guidance (CFG) [60], reducing the FID score from 8.60 to 6.09. This indicates increasing the parameters of diffusion models can significantly improve the sample quality without relying on extra tricks such as CFG. When CFG is employed, both Large-DiT-3B and Flag-DiT-3B achieve slightly better FID scores but much improved IS scores than DiT-600M and SiT-600M while only requiring 24% and 14% training iterations. For 512\u00d7512 label-conditioned ImageNet generation, Large-DiT with 3B parameters can significantly surpass other SOTA approaches by reducing FID from 3.04 to 2.52 and increasing IS from 240 to 303. This validates that increased parameter scale can better capture complex high-resolution details. By comparison with SOTA approaches on label-conditioned ImageNet generation, we can conclude that Large-DiT and Flag-DiT are good at generative modeling with fast convergence, stable scalability, and strong high-resolution modeling ability. This directly motivates Lumian-T2X to employ Flag-DiT with large parameters to model more complex generative tasks for any modality, resolution, and duration generation. Comparison between Flag-DiT, Large-DiT, and SiT We compared the performance of Flag-DiT, Large-DiT, and SiT on ImageNet-conditional generation, fixing the parameter size at 600M for a fair comparison. As demonstrated in Figure 5(a), Flag-DiT consistently outperforms Large-DiT across all epochs in FID evaluation. This indicates that the flow matching formulation can improve image generation compared to the standard diffusion setting. Moreover, Flag-DiT\u2019s lower FID scores compared to SiT suggest that meta-architecture modifications, including RMSNorm, RoPE, and K-Q norm, not only stabilize training but also boost performance. 
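The style and subject editing procedure described above (channel-wise normalization, interpolation to an intermediate time λ, and solving the Flow ODE from λ to 1 under the editing prompt) can be sketched as follows, using a plain Euler solver and an assumed velocity-model signature.

```python
import torch

@torch.no_grad()
def edit_image_latent(model, z0, text_emb, lam=0.2, steps=50, style_edit=True):
    """Training-free editing sketch: start the Flow ODE from a partially noised
    latent at time `lam` instead of pure noise, then integrate to t = 1.
    `model(z, t, text_emb)` is an assumed velocity-prediction signature."""
    if style_edit:
        # Channel-wise normalization removes the per-channel mean/variance that
        # carries the original style, while keeping the spatial content.
        mean = z0.mean(dim=(-2, -1), keepdim=True)
        std = z0.std(dim=(-2, -1), keepdim=True) + 1e-6
        z0 = (z0 - mean) / std
    eps = torch.randn_like(z0)
    z = lam * z0 + (1.0 - lam) * eps                  # noisy latent at time lam (Eq. 4)
    ts = torch.linspace(lam, 1.0, steps + 1)
    for i in range(steps):                            # Euler steps for dz = v dt
        t = torch.full((z.shape[0],), float(ts[i]), device=z.device)
        v = model(z, t, text_emb)
        z = z + float(ts[i + 1] - ts[i]) * v
    return z
```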
11 ImageNet 256\u00d7256 Benchmark Models Images (M) FID \u2193 sFID \u2193 IS \u2191 P \u2191 R \u2191 BigGAN-deep [17] 6.95 7.36 171.40 0.87 0.28 MaskGIT [21] 355 6.18 182.1 0.80 0.51 StyleGAN-XL [125] 2.30 4.02 265.12 0.78 0.53 ADM [39] 507 10.94 6.02 100.98 0.69 0.63 ADM-U [39] 507 7.49 5.13 127.49 0.72 0.63 LDM-8 [123] 307 15.51 79.03 0.65 0.63 LDM-4 [123] 213 10.56 103.49 0.71 0.62 DiffuSSM-XL [160] 660 9.07 5.52 118.32 0.69 0.64 DiT-XL/2 [111] 1792 9.62 6.85 121.50 0.67 0.67 SiT-XL/2-G [98] 1792 8.60 Large-DiT-7B 256 6.09 5.59 153.32 0.70 0.68 Classifier-free Guidance ADM-G [39] 507 4.59 5.25 186.70 0.82 0.52 ADM-G, ADM-U [39] 507 3.60 247.67 0.87 0.48 LDM-8-G [123] 307 7.76 209.52 0.84 0.35 LDM-4-G [123] 213 3.95 178.2 2 0.81 0.55 U-ViT-H/2-G [11] 512 2.29 247.67 0.87 0.48 DiT-XL/2-G [111] 1792 2.27 4.60 278.24 0.83 0.57 DiffuSSM-XL-G [160] 660 2.28 4.49 259.13 0.86 0.56 SiT-XL/2-G [98] 1792 2.06 4.50 270.27 0.82 0.59 Large-DiT-3B-G 435 2.10 4.52 304.36 0.82 0.60 Flag-DiT-3B-G 256 1.96 4.43 284.80 0.82 0.61 ImageNet 512\u00d7512 Benchmark ADM [39] 1385 23.24 10.19 58.06 0.73 0.60 ADM-U [39] 1385 9.96 5.62 121.78 0.75 0.64 ADM-G [39] 1385 7.72 6.57 172.71 0.87 0.42 ADM-G, ADM-U [39] 1385 3.85 5.86 221.72 0.84 0.53 U-ViT/2-G [11] 512 4.05 8.44 261.13 0.84 0.48 DiT-XL/2-G [111] 768 3.04 5.02 240.82 0.84 0.54 DiffuSSM-XL-G [160] 302 3.41 5.84 255.06 0.85 0.49 Large-DiT-3B-G 472 2.52 5.01 303.70 0.82 0.57 Table 2: Comparison between Large-DiT and Flag-DiT with other models on ImageNet 256 \u00d7 256 and 512 \u00d7 512 label-conditional generation. P, R, and -G denote Precision, Recall, and results with classifier-free guidance, respectively. We also include the total number of images during the training stage to offer further insights into the convergence speed of different generative models. Faster Training Speed with Mixed Precision Training Flag-DiT not only improves performance but also enhances training efficiency as well as stability. Unlike DiT, which diverges under mixed precision training, Flag-DiT can be trained stably with mixed precision. Thus Flag-DiT leads to faster training speeds compared with DiT at the same parameter size. We measure the throughputs of 600M and 3B Flag-DiT and DiT on one A100 node with 256 batch size. As shown in Table 4. Flag-DiT can process 40% more images per second. Faster Convergence with LogNorm Sampling During training, Flag-DiT-600M uniformly samples time steps from 0 to 1. Previous works [77, 44] have pointed out that the learning of score function in diffusion models or velocity field in flow matching is more challenging in the middle of the schedule. To address this, we have replaced uniform sampling with log-normal sampling, which places greater emphasis on the central time steps, thereby accelerating convergence. We refer to the Flag-DiT-600M model using log-normal sampling as Flag-DiT-600M-LogNorm. As demonstrated in Figure 5(b), Flag-DiT-600M-LogNorm not only achieves faster loss convergence but also improves the FID score significantly. 
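For reference, the "-G" rows above use classifier-free guidance. The standard formulation, written here for velocity prediction and assumed to carry over unchanged to the flow matching setting, combines a conditional and an unconditional forward pass:

```python
def guided_velocity(model, x_t, t, text_emb, null_emb, cfg_scale=1.5):
    """Classifier-free guidance: push the conditional velocity prediction away
    from the unconditional one by `cfg_scale` (textbook form; the model
    signature is an assumption)."""
    v_cond = model(x_t, t, text_emb)
    v_uncond = model(x_t, t, null_emb)
    return v_uncond + cfg_scale * (v_cond - v_uncond)
```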
12 200 400 600 800 1000 1200 Training Iterations (k) 4 6 8 10 FID Score (cfg = 1.5) SiT-600M Large-DiT-600M Flag-DiT-600M (a) 400 900 1500 2000 2400 2800 Training Iterations (k) 2 3 4 5 6 7 8 FID Score (cfg = 1.5) Flag-DiT-600M w/o LogNorm Flag-DiT-600M w/ LogNorm (b) 200 400 600 800 1000 1200 1400 Training Iterations (k) 2 3 4 5 6 7 8 9 FID Score (cfg = 1.5) Large-DiT-600M Large-DiT-3B Large-DiT-7B (c) 0 100 200 300 400 500 600 700 Training Iterations (k) 0.74 0.76 0.78 0.80 0.82 0.84 0.86 0.88 0.90 Loss Random-Init. ImageNet-Init. (d) Figure 5: Training dynamics of different configurations, to explore the effects of (a) flow matching formulation and architecture modifications, (b) using LogNorm sampling, (c) scaling up model size, and (d) using ImageNet initialization. Scaling Effects of Large-DiT DiT demonstrates that the quality of generated images improves with an increase in parameters. However, the largest DiT model tested is limited to 600M parameters, significantly fewer than those used in large language models. Previous experimental sessions have validated the stability, effectiveness, and rapid convergence of Large-DiT. Building on this foundation, we have scaled the parameters of Large-DiT from 600M to 7B while maintaining the same hyperparameters. As depicted in Figure 5(c), this substantial increase in parameters significantly enhances the convergence speed of Large-DiT, indicating that larger models are more computeefficient for training. Influence of ImageNet Initialization PixArt-\u03b1 [24, 25] utilizes ImageNet-pretrained DiT, which learns pixel dependency, as an initialization for the subsequent T2I model. To validate the influence of ImageNet initialization, we compare the velocity prediction loss of Lumina-T2I with a 600M parameter model using ImageNet initialization versus training from scratch. As illustrated in Figure 5(d), training from scratch consistently results in lower loss levels and faster convergence speeds. Moreover, starting from scratch allows for a more flexible choice of configurations and architectures, without the constraints of a pretrained network. This observation also leads to the design of simple and fast training recipes shown in Table 1. 3.2 Results for Lumina-T2I Basic Setups The Lumina-T2I series is a key component of the Lumina-T2X, providing a foundational framework for the design of Lumina-T2V, Lumina-T2MV and Lumina-T2Speech. By default, all images in this technical report are generated using a 5B Flag-DiT coupled with a 7B LLaMa text encoder [145, 146]. The Lumina-T2I model zoo also supports various text encoder sizes, DiT parameters, input and target construction, and latent spaces, as shown in Appendix B. Lumina-T2I models 13 A detailed paper cut craft and illustration of a cute anime bunny girl sitting in the woods. A realistic landscape shot of the Northern Lights dancing over a snowy mountain range in Iceland, with long exposure to capture the motion and vibrant colors. A beautiful Victorian-era botanical garden featuring a charming pond and lovely daisies. A serene mountain landscape in the style of a Chinese ink painting, with a waterfall cascading down into a crystal-clear lake surrounded by ancient pines. An impressionist painting of a bustling caf\u00e9 terrace at night, with vivid colors and lively brush strokes. Batman, cute modern Disney style, ultra-detailed, gorgeous, trending on dribble Detailed pen and ink drawing of a happy pig butcher selling meat in its shop. 
A young girl\u2018s face disintegrates while beautiful colors fill her features, depicted in fluid dynamic brushwork with colorful dream-like illustrations. A serene twilight beach scene with silhouetted palm trees and bioluminescent waves, digital oil painting. a watercolor portrait of a Terrier dog, smiling and making a cute facial expression while looking at the camera, in Pixar style. An old shaman woman adorned with feathers and leather, portrayed in a photorealistic illustration with soft lighting and sharp focus. An 80s anime still illustrated, featuring a man and a woman in a city park, wearing retro clothing with muted pastel colors. An anthropomorphic Hulk, wearing glasses and smiling, is depicted in a cute and funny character design Two cute penguins in a romantic Valentine's yarn setting under the moonlight with pastel colors. A photograph showcases the beauty of desert flowers and mirrors illuminated by the soft morning light. The image, extremely photorealistic and meticulously detailed, depicts a lonely desert atmosphere with stars shining overhead. A red-haired male elf hunter with a shy expression is standing in a mystical forest, surrounded by fairy tale-like elements and vibrant spectral colors. Figure 6: Lumina-T2I is capable of generating images with arbitrary aspect ratios, delivering superior visual quality and fidelity while adhering closely to given text instructions. 14 Training\u00a0 Resolution 1K \u00a0Resolution Extrapolation to 2K 16642 16642 16642 16642 14082 17922 17922 17922 17922 20482 20482 20482 20482 14082 14082 14082 12802 12802 12802 12802 10242 10242 10242 10242 5122 5122 5122 5122 down to 512px Figure 7: Resolution extrapolation samples of Lumina-T2I. Without any additional training, LuminaT2I is capable of directly generating images with various resolutions from 5122 to 17922. are progressively trained on images with resolutions of 256, 512, and 1024. Detailed information on batch size, learning rate, and computational costs for each stage is provided in Table 1. Fundamental Text-to-Image Generation Ability We showcase the fundamental text-to-image generation capability in Figure 6. The large capacity of the diffusion backbone and text encoder allows for the generation of photorealistic, high-resolution images with accurate text comprehension, utilizing just 288 A100 GPU days. By introducing the [nextline] token during the unified spatialtemporal encoding stage, Lumina-T2I can flexibly generate images from text instructions of various sizes. This flexibility is achieved by explicitly indicating the placement of [nextline] tokens during the inference stage. Free Lunch with Resolution-Extrapolation Resolution extrapolation brings not only larger-scale images but also higher image quality along with enhanced details. As shown in Figure 7, we observe the quality of generated images and text-to-image alignments can be significantly enhanced as we perform resolution extrapolation from 1K to 1.5K. Besides, Lumina-T2I is also capable of performing extrapolation to generate images with lower resolutions, such as 512 resolution, offering additional flexibility. Conversely, Pixart-\u03b1 [24], which uses standard positional embeddings instead of RoPE [136], does not show comparable generalization capabilities at test resolutions. Further enhancing the resolution from 1.5K to 2K can gradually lead to the failure of image generation due to the large domain gap between training and inference. 
The improvement of image quality and text-to-image alignment is a free lunch of Lumina-T2I as it can improve image generation without incurring any training costs. However, as expected, the free lunch is not without its shortcomings. The discrepancy between the training and inference domains can introduce minor artifacts. We believe the artifacts can be alleviated by collecting high-quality images larger than 1K resolution and performing few-shot parameter-efficient fine-tuning. 15 \u2026colorful, macro photo. Toy train\u2026 Toy airplane\u2026 Toy bicycle\u2026 Toy car\u2026 Candles and roses\u2026 A bottle\u2026 A pizza\u2026 A chef\u2026 \u2026in Japanese ukiyo-e style. A winter forest\u2026 A sleigh\u2026 A snowman\u2026 A snowy cabin\u2026 \u2026in Scandinavian folk art style. Figure 8: Style-consistent image generation samples produced by Lumina-T2I. Given a shared style description, Lumina-T2I can generate a batch of images with diverse style-consistent contents. Style-Consistent Generation Batch generation of style-consistent content holds immense value for practical application scenarios [58, 143]. Here, we demonstrate that through simple key/value information leakage, Lumina-T2I can generate impressive style-aligned batches. As shown in Figure 8, leveraging a naive attention-sharing operation, we can observe strong consistency within the generated batches. Thanks to the full-attention model architecture, we can obtain results comparable to those in [58] without using any tricks such as Adaptive Instance Normalization (AdaIN) [68]. Furthermore, we believe that, as previous arts [58, 143] illustrate, through appropriate inversion techniques, we can achieve style/concept personalization at zero cost, which is a promising direction for future exploration. Compositional Generation As illustrated in Figure 9, we present demos of compositional generation [162, 12] using the method described in Section 2.5. We can define an arbitrary number of prompts and assign each prompt an arbitrary region. Lumina-T2I successfully generates high-quality images in various resolutions that align with complex input prompts while retaining overall visual coherence. This demonstrates that the design choice of our Lumina-T2I offers a flexible and effective method that excels in generating complex high-resolution multi-concept images. 16 Upper Left: A serene countryside landscape dotted with quaint cottages and rolling hills, bathed in the warm glow of the setting sun. Upper Right: A post-apocalyptic world where nature has reclaimed the land and mutated creatures roam. Down Left: An old man, portrayed as a retro superhero. Down Right: A mysterious portal leading to another dimension, with swirling vortexes of light and energy. Left: A majestic castle perched atop a rocky cliff, overlooking a vast kingdom below. Right: A high-detail image of a wolf mutant deer in a winter landscape with rainbow-colored snow. Upper Left: An underwater city inhabited by aquatic creatures, with colorful coral reefs and schools of fish. Upper Right: A sprawling space station, bustling with activity and interstellar travelers. Down Left: A dystopian wasteland with ruins and debris. Down Right: A lone astronaut exploring the desolate surface of a distant planet, with the vast expanse of space stretching out behind them. Left: A samurai warrior holds a nice sword in a fantasy world, portrayed in an epic and adventurous painting mod. 
Right: A futuristic cityscape with towering skyscrapers and flying cars zooming through neon-lit streets. Up: A snowy mountain. Mid: A beautiful oil painting of a steamboat in a river. Down: A tranquil garden filled with blooming flowers 1792\u00d71792 1792\u00d71792 2048x1024 1024x2048 1024x2048 Figure 9: Compositional generation samples of Lumina-T2I. Our Lumina-T2I framework can generate high-quality images with intricate compositions based on a combination of prompts and designated regions. High-Resolution Editing Following the methods outlined in Section 2.5, we perform style and subject editing on high-resolution images [57, 18, 78, 129]. As depicted in Figure 10, Lumina-T2I can seamlessly modify global styles or add subjects without the need for additional training. Furthermore, we analyze various factors such as starting time and latent feature normalization in image editing, as shown in Figure 11. By varying the starting time from 0 to 1, we find that a starting time near 0 leads to complete spatial misalignment, while a starting time near 1 results in unchanged content. Setting the starting time to 0.2 provides a good balance between adhering to the editing instructions and preserving the structure of the original image. Compared with the generated image without normalization, it is clear that channel-wise normalization can effectively remove the original style of the input image while preserving its main content. By normalizing the latent features of the original image, our approach to image editing can better handle the editing instructions. Comparison with Pixart-\u03b1 Compared to PixArt-\u03b1 [24], Lumina-T2I can generate images at resolutions ranging from 5122 pixels to 17922 pixels. As demonstrated in Figure 12, PixArt-\u03b1 struggles to produce high-quality images at both lower and higher resolutions than the size of images used during training. Lumina-T2I utilizes RoPE, the [nextline] token, as well as layer-wise relative position injection, enabling it to effectively handle a broader spectrum of resolutions. In contrast, PixArt-\u03b1 relies on absolute position embedding and limits positional information to the initial layer, leading to a degradation in performance when generating images at out-of-distribution scales. Apart from resolution extrapolation, Lumina-T2I also adopts a simplified training pipeline, as shown in Table 1. Ablation studies conducted on ImageNet indicate that training with natural image domains such as ImageNet results in higher training losses in subsequent stages. This suggests that synthetic images from JourneyDB and natural images collected online (e.g., LAION [126, 127], COYO [20], SAM [80], and ImageNet [38]) belong to distinct distributions. Motivated by this observation, LuminaT2I trains directly on high-resolution synthetic domains to reduce computational costs and avoid suboptimal initialization. Additionally, inspired by the fast convergence of the FID score observed when training on ImageNet, Lumina-T2I adopts a 5 billion Flag-DiT, which has 8.3 times more 17 Style Edit: \u2026 in the storm. 1408\u00d71408 Style Edit: \u2026 in autumn. 1408\u00d71408 Style Edit: \u2026 in the desert. 1408\u00d71408 Base: A photorealistic image of a tree in spring. 1408\u00d71408 Subject Edit: \u2026 a rabbit\u2026. 1408\u00d71408 Base: A realistic image of a groundhog peeking out of its burrow in the snow-covered winter landscape. 1408\u00d71408 Subject Edit: \u2026 wearing boots. 
1408\u00d71408 Base: A photorealistic and ultra cute 3D rendered flamingo stands in a room with a Christmas tree. 1408\u00d71408 Figure 10: Demonstrations of style editing and subject editing over high-resolution images in a training-free manner. w/ norm w/o norm t=0 t=1 t=0.2 t=0.4 t=0.6 t=0.8 Dissimilar Great No Difference Style Mismatch Input Image Figure 11: Qualitative effects of the starting time and latent feature normalization in style editing. A starting time near 0.2 yields a good balance between preserving the original content and incorporating the desired target style, while removing normalization greatly hinders the model\u2019s ability to effectively transform image styles. parameters than PixArt-\u03b1, yet incurs only 35% training costs (288 A100 GPU days compared to 828 A100 GPU days). Analysis of Gate Distribution in Zero-Initialized Attention Cross-attention [139, 14] is the defacto standard for text conditioning. Unlike previous methods, Lumina-T2I employs zero-initialized attention mechanism [45, 165], which incorporates a zero-initialized gating mechanism to adaptively control the influence of text-conditioning across various heads and layers. Surprisingly, we observe that zero-initialized attention can induce extremely high levels of sparsity in text conditioning. As shown in Figure13(a), we visualize the gating values across heads and layers, revealing that most gating values are close to zero, with only a small fraction exhibiting significant importance. Interestingly, the most crucial text-conditioning heads are predominantly found in the middle layers, suggesting that these layers play a key role in text conditioning. To consolidate this observation, we truncated gates below a certain threshold and found that 80% of the gates can be deactivated without affecting the quality of image generation, as demonstrated in Figure 13(b). This observation suggests the possibility of truncating most cross-attention operations during sampling, which can greatly reduce inference time. 18 John Wick, dressed in Mandalorian attire, is a skilled bounty hunter in a high-speed chase pixart-\u03b1 lumina-t2x 512px 1024px 1280px 1536px 1792px 2048px Figure 12: Qualitative comparison between Lumina-T2I and PixArt-\u03b1 in generating images at multiple resolutions. The samples from Lumina-T2I demonstrate better alignment with the given text and superior visual quality across all resolutions compared to those from PixArt-\u03b1. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 Head 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 Layer Heatmap of Absolute T anh Values of Gates Across Layers and Heads 0.2 0.4 0.6 0.8 (a) A high-detail image of a wolf mutant deer in a winter landscape with rainbow-colored snow. A strong man with real facial wrinkles and pores casts an imposing figure near the bridge while a young woman stands in admiration in a dramatic and photorealistic scene with perfect background characters and silky smooth bokeh. An old man, portrayed as a retro superhero, stands in the streets of New York City at night Gate threshold: Active gates: 0.0 100% 0.2 3.61% 0.4 2.05% 0.6 0.97% 0.8 0.78% 1.0 0.0% (b) Figure 13: Gated cross-attention in Lumina-T2I. (a) Absolute tanh values of all gates across all layers and heads. (b) Qualitative results of generated images under different gate thresholds. 
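The gate-sparsity analysis above can be reproduced in a few lines: take the absolute tanh of each gate per layer and head, zero out gates below a threshold, and report the surviving fraction (cf. Figure 13). The gate tensor in the example is a random stand-in, not trained values.

```python
import torch

def truncate_gates(gates, threshold):
    """Zero out cross-attention gates whose |tanh| falls below `threshold` and
    report the fraction of gates that remain active."""
    strength = torch.tanh(gates).abs()          # (num_layers, num_heads)
    keep = strength >= threshold
    truncated = torch.where(keep, gates, torch.zeros_like(gates))
    active_ratio = keep.float().mean().item()
    return truncated, active_ratio

# Example with random gates standing in for trained values (illustrative only).
gates = torch.randn(32, 32) * 0.1
_, ratio = truncate_gates(gates, threshold=0.2)
print(f"active gates: {ratio:.2%}")
```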
3.3 Results for Lumina-T2V Basic Setups Lumina-T2V shares the same architecture with Lumina-T2I except for the introduction of a [nextframe] token, which provides explicit information about temporal duration. By default, Lumina-T2V uses CLIP-L/G [118] as the text encoder and employs a Flag-DiT with 2 billion parameter as the diffusion backbone. Departing from previous approaches [51, 156, 73, 22, 23, 16, 172, 62, 15, 65, 29, 153, 167, 158, 52, 157] that rely on T2I checkpoints for T2V initialization and adopt decoupled spatial-temporal attention, Lumina-T2V takes a different route by initializing the Flag-DiT weights randomly and leveraging a full-attention mechanism that allows for interaction among all spatial-temporal tokens. Although this choice significantly slows down the training and overall inference speed, we believe that such an approach holds greater potential, particularly when ample computational resources are available. Lumina-T2V is independently trained on a subset of the Panda-70M dataset [31] and the collected Pexel dataset, comprising of 15 million and 40,000 videos, respectively. Similar to Lumina-T2I, Lumina-T2V employs a multi-stage training strategy that starts with shorter, low-resolution videos and subsequently advances to longer, higher-resolution videos. Specifically, in the initial stage, Lumina-T2V is trained on videos of a fixed size \u2013 such as 512 pixels in both height and width, and 32 frames in length for Pexel dataset, which collectively comprise approximately 32,000 tokens. During the second stage, it learns to handle videos of varying resolutions and durations, while imposing a limit of 128,000 tokens to maintain computational feasibility. 19 0 50000 100000 150000 200000 250000 300000 Steps 0.30 0.35 0.40 0.45 0.50 Loss 2B Lumina-T2V 8 GPUs 2B Lumina-T2V 128 GPUs (a) 0 500 1000 1500 2000 2500 3000 Steps 0.6 0.8 1.0 1.2 1.4 1.6 Loss Flag-DiT-600M Flag-DiT-2B Flag-DiT-5B (b) Figure 14: Training loss curve comparison between (a) 2B Flag-DiT trained on 8 GPUs and 128 GPUs, (b) different sizes of Large-DiTs. Frame 1-9 Frame 10-16 Scene Transition Figure 15: Short video generation samples of Lumina-T2V. Although the length and resolution of the generated videos are limited, these samples exhibit scene transition, indicating a promising way for long video generation. Observations of Lumina-T2V We observe that Lumina-T2V with large batch size can converge, while a small batch size struggles to converge. As shown in Figure 14(a), increasing the batch size from 32 to 1024 leads to loss convergence. On the other hand, similar to the observation in ImageNet experiments, increasing model parameters leads to faster convergence in video generation. As shown in Figure 14(b), as the parameter size increases from 600M to 5B, we consistently observe lower loss for the same number of training iterations. Samples for Video Generation As shown in Figure 15, the first stage of Lumina-T2V is able to generate short videos with scene dynamics such as scene transitions, although the generated videos are limited in terms of resolution and duration, with a maximum of 32K total tokens. After the second stage training on longer-duration and higher-resolution videos, Lumina-T2V can generate long videos with up to 128K tokens in various resolutions and durations. The generated videos, as illustrated in Figure 16, exhibit temporal consistency and richer scene dynamics, indicating a promising scaling trend when using more computational resources and data. 
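To make the [nextline]/[nextframe] mechanism more concrete, the sketch below flattens a latent video of shape (T, H, W) into a single token sequence, appending a learnable [nextline] token after each row and a [nextframe] token after each frame before full spatial-temporal attention. The exact insertion order, tensor layout, and class name are assumptions for illustration, not the Lumina-T2V implementation.

```python
import torch
import torch.nn as nn

class VideoTokenFlattener(nn.Module):
    """Sketch: flatten (T, H, W) patch tokens into one sequence, inserting a
    [nextline] token after every row and a [nextframe] token after every frame,
    so a full-attention transformer sees explicit layout/duration markers."""

    def __init__(self, dim: int):
        super().__init__()
        self.nextline = nn.Parameter(torch.randn(dim) * 0.02)
        self.nextframe = nn.Parameter(torch.randn(dim) * 0.02)

    def forward(self, video_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, T, H, W, C) patch embeddings of a latent video.
        b, t, h, w, c = video_tokens.shape
        nl = self.nextline.expand(b, t, h, 1, c)
        rows = torch.cat([video_tokens, nl], dim=3)          # [nextline] per row
        frames = rows.reshape(b, t, h * (w + 1), c)
        nf = self.nextframe.expand(b, t, 1, c)
        frames = torch.cat([frames, nf], dim=2)              # [nextframe] per frame
        return frames.reshape(b, t * (h * (w + 1) + 1), c)   # (B, N_tokens, C)

# Example token budget: 32 frames of 32x32 latent patches gives
# 32 * (32 * 33 + 1) = 33,824 tokens, on the order of the ~32K budget
# mentioned for the first training stage.
```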
3.4 Results for Lumina-T2MV Please refer to Appendix C.2. 3.5 Results for Lumina-T2Speech Please refer to Appendix C.3. 4 Related Work AI-Generated Contents (AIGCs) Generating high-dimensional perceptual data content (e.g., images, videos, audio, etc) has long been a challenge in the field of artificial intelligence. In the era of deep learning, Generative Adversarial Networks (GANs) [48, 173, 70, 155, 17, 76] stand as a pioneering method in this field due to their efficient sampling capabilities, yet they face issues of 20 Frame 1-8 Frame 9-16 Frame 17-24 Frame 25-32 Frame 33-40 Frame 41-48 Frame 49-56 Frame 57-64 Fireworks over a Disney castle Figure 16: Long video generation samples of Lumina-T2V. Lumina-T2V enables the generation of long videos with temporal consistency and rich scene dynamics. training instability and mode collapse. Meanwhile, Variational Autoencoders (VAEs) [79, 82, 6, 147, 128] and flow-based models [40, 41] demonstrate better training stability and interpretability but lag behind GANs in terms of image quality. Following this, autoregressive models (ARMs) [149, 148, 34, 26] have shown exceptional performance but come with higher computational demands, and the sequential sampling mechanism is more suited to 1-D data. Nowadays, Diffusion Models (DMs) [133], learning to invert diffusion paths from real data towards random noise, have gradually become the de-facto approach of generative AI across multiple domains, with numerous practical applications [106, 8, 49, 107, 1, 114, 44, 2]. The success of diffusion models over the past four years can be attributed to the progress in several areas, including reformulating diffusion models to predict noise instead of pixels [61], improvements in sampling methods for better efficiency [134, 94, 95, 77], the introduction of classifier-free guidance that enables direct conversion of text to images [59], and cascaded/latent space models that reduce the computational cost of high-resolution generation [63, 123, 142]. Apart from generating high-quality images following text instruction, various applications, including high-resolution generation[55, 43, 69, 170, 33, 25], compositional generation [74, 12, 162], style-consistent generation [58, 143], image editing [57, 18, 78, 102], and controllable generation [164, 103, 168, 101], have been proposed to further extend the applicability of pretrained T2I models. Additionally, pre-trained T2I models are also applied with a decoupled temporal attention to generate videos [51, 156, 73, 22, 23, 16, 172, 62, 15, 65, 29, 153, 167, 158, 52, 157] and multi-views of 3D object [131, 84, 154, 174, 32, 151, 54, 93, 140, 91, 130]. The similar framework, with suitable adjustments, has also been applied to audio generation [67, 90, 47, 161]. Although this paradigm has achieved notable success at the current model scale [114, 113, 171], subsequent works have proven the better potential of diffusion models based on vision transformers (so-called Diffusion Transformer, DiT) [111]. Afterwards, SiT [98] and SD3 [44] further demonstrate that an interpolation or flow-matching framework [92, 86, 4, 5] can better enhance the stability and scalability of DiT \u2014 pointing the way for diffusion models to scale up to the next level. Very recently, Sora [108] has demonstrated the potential for scaling DiT with its powerful joint image and video generation capabilities. However, the detailed implementations have yet to be released. 
Therefore, inspired by Sora, we introduce Lumina-T2X to push the boundaries of open-source generative models by scaling the flow-based Diffusion Transformer to generate contents across any modalities, resolutions, and durations. 21 5 Conclusion In this paper, we present Lumina-T2X, a unified framework designed to transform text instructions into any modality at arbitrary resolution and duration, including images, videos, multi-views of 3D objects, and speech. At the core of Lumina-T2X is a series of Flow-based Large Diffusion Transformers (Flag-DiT) carefully designed for scalable conditional generation. Equipped with key modifications including RoPE, RNSNorm, KQ-Norm, and zero-initialized attention for model architecture, [nextline] and [nextframe] tokens for data representation, and switching from diffusion to flow matching formulation, our Flag-DiT showcases great improvements in stability, flexibility, and scalability compared to the origin diffusion transformer. We first validate the generative capability of Flag-DiT on the ImageNet benchmark, which demonstrates superior performance and faster convergence in line with scaling-up model parameters. Given these promising findings, we further instantiate Flag-DiT in various modalities and provide a unified recipe for text-to-image, video, multiview, and speech generation. We demonstrate this framework can not only generate photorealistic images or videos at arbitrary resolutions but also unlock the potential for more complex generative tasks, such as resolution extrapolation, high-resolution editing, and compositional generation, all in a training-free manner. Overall, we hope that our attempts, findings, and open-sources of Lumina-T2X can help clarify the roadmap of generative AI and serve as a new starting point for further research into developing effective large-scale multi-modal generative models. 6 Limitations and Future Work Unified Framework but Independent Training Due to the imbalance of data quantity for different modalities and diverse latent space distribution, the current version of Lumina-T2X is separately trained to tackle the generation of images, videos, multi-views of 3D objects and speech. Therefore, without leveraging the pre-trained weights on 2D images, Lumina-T2V and Lumina-T2MV achieve preliminary results on temporalor view-consistent generation but show inferior sample qualities compared with their counterparts. Currently, we propose Lumina-T2X as a unified framework for scaling up models across any modality. In the future, we will further explore the joint training of images, videos, multi-views and audio for better generation quality and fast convergence. Fast Convergence but Inadequate Data Coverage Although the large model size enables LuminaT2X to achieve generative capabilities comparable to its counterparts with fast convergence, there remains a limitation in the inadequate coverage of the diverse data spectrum by the collected data. This leads to incomplete learning of the complex patterns and nuances of the real physical world, which can result in less robust model performance in real-world scenarios. Therefore, Lumina-T2X also faces common issues of current generative models, such as struggling with generating detailed human structures like hands or encountering artificial noises and background blurring in complex scenes, leading to less realistic images. 
We believe that higher-quality real-world data, combined with Lumina-T2X\u2019s powerful convergence capabilities, will be an effective solution to address this issue."
+ }
intro_8K/test_introduction_short_2405.05949v1.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "url": "http://arxiv.org/abs/2405.05949v1",
+ "title": "CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts",
+ "abstract": "Recent advancements in Multimodal Large Language Models (LLMs) have focused\nprimarily on scaling by increasing text-image pair data and enhancing LLMs to\nimprove performance on multimodal tasks. However, these scaling approaches are\ncomputationally expensive and overlook the significance of improving model\ncapabilities from the vision side. Inspired by the successful applications of\nMixture-of-Experts (MoE) in LLMs, which improves model scalability during\ntraining while keeping inference costs similar to those of smaller models, we\npropose CuMo. CuMo incorporates Co-upcycled Top-K sparsely-gated\nMixture-of-experts blocks into both the vision encoder and the MLP connector,\nthereby enhancing the multimodal LLMs with minimal additional activated\nparameters during inference. CuMo first pre-trains the MLP blocks and then\ninitializes each expert in the MoE block from the pre-trained MLP block during\nthe visual instruction tuning stage. Auxiliary losses are used to ensure a\nbalanced loading of experts. CuMo outperforms state-of-the-art multimodal LLMs\nacross various VQA and visual-instruction-following benchmarks using models\nwithin each model size group, all while training exclusively on open-sourced\ndatasets. The code and model weights for CuMo are open-sourced at\nhttps://github.com/SHI-Labs/CuMo.",
+ "authors": "Jiachen Li, Xinyao Wang, Sijie Zhu, Chia-Wen Kuo, Lu Xu, Fan Chen, Jitesh Jain, Humphrey Shi, Longyin Wen",
+ "published": "2024-05-09",
+ "updated": "2024-05-09",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Mixture AND of AND Experts",
+ "gt": "The advent of GPT-4V [56] has sparked excitement within open-source communities to transform large language mod- els (LLM) into multimodal LLMs. Recent multimodal LLMs [3, 13, 47] typically integrate pre-trained vision en- coders and LLMs with visual instruction tuning data to fine-tune the pre-trained LLMs, enhancing their visual un- derstanding capabilities. To further scale up multimodal * Work done during an internship at ByteDance Inc., San Jose, CA. Correspondence to X. Wang ([email protected]) and H. Shi. MM-Vet LLaVA-Wild SEED-IMG MMMU MMBench SQA-IMG 52 49 46 43 85 82 79 76 73 71 69 67 40 38 36 34 80 70 60 50 74 73 72 71 MM1 7B (Private) LLaVA-NeXT Vicuna-7B CuMo Mistral-7B (Ours) LLaVA-NeXT Mistral-7B GQA MME 65 64 63 62 1550 1525 1500 1475 Mini-Gemini Vicuna-7B Figure 1. Comparisons of CuMo Mistral-7B with state-of-the-art 7B multimodal LLMs. CuMo outperforms strong open-sourced models such as Mini-Gemini and LLaVA-NeXT, as well as the private MM1 model. LLMs, previous efforts [8, 42, 44, 46, 48, 54] primarily focus on training the model with a more extensive col- lection of text-image paired data and employing stronger LLMs, significantly increasing training efforts. On the vi- sion side, recent work concentrates on leveraging multiple vision encoders [20, 45] to enrich visual content, employ- ing larger vision encoders [10], and using advanced vision- language connectors [6] to improve performance on mul- timodal tasks. However, these techniques result in an in- creased number of additional parameters and generate addi- tional visual tokens for LLMs to process, making it ineffi- cient to scale. In terms of efficiently scaling up models, Mixture-of- Experts (MoE) has become the de-facto framework in modern large-scale neural networks, particularly in natu- 1 arXiv:2405.05949v1 [cs.CV] 9 May 2024 Top-K Router MLP 1 MLP 2 MLP 3 MLP 4 Weighted-Sum CLIP-MoE MLP-MoE LLM (dense/MoE) What is the dog doing ? Word Embedding The dog is engaging in the activity of surfing. MoE block Figure 2. Architecture of CuMo. CuMo incorporates sparse Top-K MoE blocks into the CLIP vision encoder and vision-language MLP connector, thereby improving the multimodal LLM capabilities from the vision side. Skip connections are omitted for simplicity. Further implementation details are provided in Section 3.2. ral language processing (NLP). Most large language mod- els (LLM) are built upon the transformer [68] architec- ture, wherein sparse MoE is used to replace the dense MLP block with the Top-K sparsely-gated MoE block [60]. Re- cent state-of-the-art open-sourced [30, 65] and private [58] LLMs have predominantly adopted the sparse MoE archi- tecture. These models are scaled up using the MoE design during training while maintaining relatively lower inference costs as only selected MLP experts are activated during the feed-forward process. Nevertheless, the development and optimization of MoE-based models have been largely tai- lored to LLMs, and the exploration of scaling multimodal LLMs with MoE, especially on the vision side, remains largely unexplored. Motivated by these observations, we introduce CuMo, which integrates Top-K sparsely-gated MoE blocks into the vision encoder and the MLP connector of multimodal LLMs, as depicted in Figure 2. We also explore the associ- ated training recipe and methodology for CuMo. 
Firstly, we pre-train the MLP connector and perform pre-finetuning to warm up the whole model without introducing the MoE ar- chitecture, which stabilizes the following visual instruction tuning stage with newly incorporated sparse MoE blocks. Then, we replace each MLP block with the sparse MoE block in the MLP connector and the vision encoder through co-upcycling. Each expert within the sparse MoE block is initialized from the corresponding MLP block after the pre- training and the pre-finetuning stages. Additionally, each MoE block contains a Top-K router trained from scratch to select experts during the visual instruction tuning stage with auxiliary losses on the router to maintain a balanced load- ing of experts. We conduct further comparisons between co-upcycled LLMs and pre-trained MoE-based LLMs. The results show that the pre-trained MoE-based LLMs signifi- cantly outperform the co-upcycled LLMs. As a result, the upcycling of LLMs is not included in CuMo. Our mod- els are trained fully on open-sourced datasets that are con- verted to visual instruction following formats. Experimen- tal results demonstrate that CuMo outperforms other state- of-the-art multimodal LLMs on various VQA and multi- modal instruction-following benchmarks within the same model size group, as illustrated in Figure 1. Our contri- butions can be summarized as follows: \u2022 We introduce CuMo, which integrates co-upcycled sparsely-gated MoE layers into both the MLP connector and the vision encoder, enhancing the multimodal LLM with only slightly additional activated parameters. \u2022 We outline the training methodology for CuMo, including a three-stage training process with auxiliary losses to sta- bilize training and ensure a balanced loading of experts. \u2022 We train CuMo exclusively on open-sourced datasets and pre-trained models. It outperforms state-of-the-art open- sourced and private multimodal LLMs across multiple competitive benchmarks within each model size group.",
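As a rough illustration of the co-upcycling step described above (every expert initialized from the pre-trained MLP block, with a Top-K router trained from scratch), the following sketch copies a dense connector into N experts. The function and variable names are invented for this example and do not come from the CuMo codebase.

```python
import copy
import torch.nn as nn

def upcycle_mlp_to_moe(pretrained_mlp: nn.Module, num_experts: int, hidden_dim: int):
    """Sketch of co-upcycling: every expert starts as an exact copy of the
    pre-trained dense MLP, while the Top-K router is trained from scratch."""
    experts = nn.ModuleList(copy.deepcopy(pretrained_mlp) for _ in range(num_experts))
    router = nn.Linear(hidden_dim, num_experts)  # fresh weights, trained from scratch
    return experts, router

# Example: upcycling a two-layer vision-language connector into 4 experts.
connector = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 4096))
experts, router = upcycle_mlp_to_moe(connector, num_experts=4, hidden_dim=1024)
```

In CuMo this copy happens after the pre-training and pre-finetuning stages, so the experts inherit an already-aligned block rather than random weights.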
+ "main_content": "2.1. Multimodal LLM While the ultimate goal for mulitmodal models may be generative across various modalities [4, 63, 70], mod2 ern multimodal LLMs primarily focus on integrating additional modalities, such as vision, into LLMs. InstructBLIP [13] adopts Q-Former [38] to sample from visual tokens for LLM to feed-forward and follow the instructions. Flamingo [1] and IDEFICS [25, 34] use shared decoder for visual-language understanding. Qwen-VL [3] uses three-stage training to convert QwenLM to Qwen-VL. LLaVA series [46\u201348] adopt visual instruction tuning that uses instruction-following data to convert LLM into multimodal LLM. ShareGPT4V [8] collects detailed image caption data from GPT4V to augment the LLaVA models. HoneyBee [6] investigates different designs of the MLP connector for better alignment. VILA [44] unfreezes the LLM during pre-training with interleaved image-text data. MoE-LLaVA [43] adopts the MoE design in small LLMs and reaches comparable performance to LLaVA with large LLMs. VCoder [28] adopts various vision adapters to enhance visual perception abilities. SPHINX [20, 45] adopts multiple visual encoders to enrich the visual features with scaled data and models. InternLM-Xcomposer [14, 73] is trained with interleaved text-image composition data and achieves state-of-the-art performance. InternVL [10] scales up the vision encoder to a 6B ViT model. MM1 [54] summarizes the essential steps towards building a strong multimodal LLM from a pre-trained LLM. Mini-Gemini [42] further collects guided generation into the pipeline. 2.2. Mixture-of-Experts Mixture-of-Experts [26] is proposed to utilize a set of expert networks to address specific tasks by employing a gating network to determine the selection of these experts. Recently, it has gained popularity in the design of large language models [17]. The mainstream practice [60] is to replace the dense MLP layers with Top-K sparsely-gated mixture-of-experts (MoE) layers in the transformer [68]. MoE in Language Subsequent works [18, 35] have further scaled up MoE-based large language models with improved stability and load balancing of experts. The design of gating networks often involves selecting the top-k experts for each token [35, 60]. Various routing strategies have been explored, such as choosing top-k tokens by experts [75], oneto-one matching between experts and tokens [36]. Besides routing strategies, maintaining the load balance of experts is crucial for training MoE models. ST-MoE [77] adopts loading balancing loss and router-z loss to ensure a balanced distribution of the experts. Upcycling [33] proposes training sparse experts from dense checkpoints to stabilize training and lower the cost. Recent large language models like Gemini-Pro [58] and DBRX [65] are also based on the MoE design. MoE in Vision The success of MoE extends to the vision community, particularly following the popularity of vision transformers [5, 15, 22, 23, 27, 39, 76]. V-MoE [59] reaches Top-K Router Weighted Sum Layer Norm Co-Upcycled MoE block MLP MLP 1 MLP 2 MLP 3 MLP 4 N copies Layer Norm Figure 3. Initialization of MoE blocks via Co-Upcycling. Each MLP expert within the MoE block during the visual instruction tuning stage is initialized from the corresponding pre-trained MLP. comparable performance to dense ViT while only requiring half of the compute. LIMoE [55] replaces dense MLP layers with MoE layers in CLIP and observes improvements in zero-shot image classification. 
Residual MoE [69] incorporates a residual design into the MoE transformer and saves over 30% training cost. AdaMV-MoE [9] proposes an adaptive MoE framework for multi-task learning. 3. Method In this section, we first review the sparse MoE block structure and the upcycling strategy utilized in previous studies. Subsequently, we describe how these sparsely-gated MoE blocks are integrated into each module of multimodal LLMs using co-upcycling strategies. Then, we introduce the three-stage training process and auxiliary loss functions employed to stabilize training and balance the loads of experts. 3.1. Revisit Sparse MoE Sparse MoE Structure The mainstream practice [60] is to replace the dense MLP blocks with sparsely-gated mixture-of-experts blocks. Given input $X \in \mathbb{R}^{N \times C_{in}}$ and an MLP block, $X_{out} = \mathrm{MLP}(X) \in \mathbb{R}^{N \times C_{out}}$. (1) To scale up the model with multiple MLP blocks in parallel, a sparse MoE block includes a router network to select Top-K experts out of $S$ total experts. This router network has a linear layer to compute the normalized weight matrix based on the inputs $X$ for voting, resulting in $W = \mathrm{Softmax}(\mathrm{Linear}(X)) \in \mathbb{R}^{N \times S}$. (2) The Top-K experts are selected for each token based on $W$, and the re-normalized weights $W_K \in \mathbb{R}^{N \times K}$ are computed using $W_K = \mathrm{Softmax}(\mathrm{TopK}(W)) \in \mathbb{R}^{N \times K}$. (3) Each selected expert is represented by an MLP block, and the final output is obtained through a re-weighted sum $X_{out} = \sum_{i}^{K} W_K^{i} \circ \mathrm{MLP}_i(X) \in \mathbb{R}^{N \times C_{out}}$; (4) the output $X_{out}$ maintains the same dimension as the output of a single dense MLP block. [Figure 4. Training Stages of CuMo. The first stage involves pre-training the MLP for better alignment. Subsequently, the pre-finetuning stage trains all parameters as a warm-up before the next stage. Finally, the MLP experts within each MoE block are initialized from the weights of the corresponding MLP block, followed by training all parameters in the visual instruction tuning stage.] Sparse Upcycling Training MoE-based designs from scratch can be unstable and costly. Sparse Upcycling [33] addresses this challenge by initializing the experts in each MoE block from the corresponding MLP block in pretrained dense checkpoints. This initialization approach provides a better starting point for training MoE-based models and reduces training costs compared to training from scratch. 3.2. CuMo Architecture Sparse MoE in MLP Connector The MLP connector converts visual tokens into the word embedding space, aligning dimensions between visual and text tokens. An effective architecture for the vision-language connector is an MLP block [46] that contains two linear layers. We start from a single MLP block and replace it with a Top-K sparse MoE block, incorporating a Top-K router and a set of experts for projecting visual tokens into the word embedding space. Sparse MoE in Vision Encoder Vision encoders extract image features as sequences of visual tokens for reasoning in LLMs. CLIP [57] is one of the most popular pre-trained vision encoders for multimodal LLMs since it is pre-trained on large-scale image-text pairs, which makes it suitable for processing images for multimodal usage. The visual encoding part of CLIP is a ViT [15] model, which has consecutive MLP blocks in the transformer encoder. We substitute each MLP block with a Top-K sparse MoE block, retaining skip connections alongside the MoE block outputs.
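The router and weighted-sum computation in Eqs. (2)-(4) can be sketched as a self-contained PyTorch module. This is an illustrative Top-K sparsely-gated MoE block, not CuMo's actual implementation: the layer sizes are arbitrary and, for brevity, every expert is run densely with non-selected weights zeroed out instead of dispatching tokens per expert.

```python
import torch
import torch.nn as nn

class TopKSparseMoE(nn.Module):
    """Sketch of a Top-K sparsely-gated MoE block (Eqs. (2)-(4)):
    router softmax -> pick Top-K experts per token -> re-normalize the
    selected weights -> weighted sum of expert MLP outputs."""

    def __init__(self, dim: int, hidden: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C) tokens. W = Softmax(Linear(x)), Eq. (2).
        weights = self.router(x).softmax(dim=-1)                    # (N, S)
        topk_vals, topk_idx = weights.topk(self.k, dim=-1)          # Top-K per token
        topk_weights = topk_vals.softmax(dim=-1)                    # re-normalize, Eq. (3)
        # Dense gating for clarity: all experts run, non-selected ones get zero weight.
        gate = torch.zeros_like(weights).scatter_(-1, topk_idx, topk_weights)   # (N, S)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)           # (N, S, C)
        return torch.einsum("ns,nsc->nc", gate, expert_out)          # Eq. (4)
```

In CuMo, such a block replaces the dense MLP in both the CLIP vision encoder (keeping its skip connection) and the vision-language connector, with each expert initialized by the upcycling copy shown earlier.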
Sparse MoE in LLM In terms of using MoE in LLM, we compare the co-upcycled LLM with pre-trained MoEbased LLM. We start from Mistral-7B and the upcycled Mistral-7B-MoE slightly outperforms Mistral-7B on certain benchmarks. However, considering the constrained knowledge base of upcycled experts from Mistral-7B, we compare it with the pre-trained Mixtral 8x7B with pre-trained experts of a diverse knowledge base. Experimental results reveal that pre-trained Mixtral 8x7B significantly outperforms Mistral-7B-MoE. As a result, LLM is not co-upcycled with CLIP and MLP connectors since it brings marginal improvements with great additional parameters. 3.3. Training Recipe Co-Upcycling MoE blocks We start with training the added MoE blocks from scratch while the model is struggling to converge. Attempts to address this issue with lower learning rates perform worse compared to the baseline. As a result, we adopt a co-upcycling approach, initializing each module that integrates sparsely-gated MoE blocks with pretrained MLPs to replace corresponding MLP blocks, as shown in Figure 3. This strategy consistently improves training stability and model performance. Three-Stage Training To further enhance training stability, we adopt a three-stage training strategy for CuMo models, as illustrated in Figure 4. In the first stage, we only pretrain the MLP connector, given that the vision encoder and LLM have already undergone pre-training on large-scale data. During the second pre-finetuning stage, we train all parameters using high-quality caption data to warm up the entire model before introducing MoE blocks in the subsequent stage. The third stage involves visual instruction finetuning, where the multimodal LLM is scaled up with upcycled MoE blocks and trained on visual instruction tuning 4 SQA Text MMB MM VQA LLaVA SEED MMMU Math Method LLM Act. 
IMG VQA GQA POPE MME EN CN Vet v2 Wild IMG val Vista 7B to 13B Models InstructBLIP [13] Vicuna-7B 7.9B 60.5 50.1 49.2 36.0 23.7 26.2 60.9 60.5 Qwen-VL-Chat [3] Qwen-7B 68.2 61.5 57.5 1487.5 60.6 56.7 78.2 58.2 35.9 LLaVA-v1.5 [46] Vicuna-7B 7.1B 66.8 58.2 62.0 85.9 1510.7 64.3 58.3 30.5 78.5 63.4 66.1 LLaMA-VID [41] Vicuna-7B 68.3 64.3 86.0 1521.4 65.1 79.3 59.9 VILA [44] Vicuna-7B 7.1B 68.2 64.4 62.3 85.5 1533.0 68.9 61.7 34.9 79.9 69.7 61.1 SPHINX-Intern2 [20] InternLM2-7B 70.4 58.1 56.2 86.9 1260.4 57.9 36.5 75.5 57.6 68.8 35.5 LLaVA-NeXT [48] Mistral-7B 7.6B 72.8 65.7 64.8 86.7 1498 68.7 61.2 47.3 82.2 83.2 72.2 35.3 37.7 LLaVA-NeXT [48] Vicuna-7B 7.1B 70.1 64.9 64.2 86.5 1519 67.4 60.6 43.9 81.8 81.6 70.2 35.8 34.6 LLaVA-LLaMA3 [12] LLaMA3-8B-IT 8.4B 72.9 59.0 62.6 86.4 1469 72.3 66.4 70.1 36.8 Mini-Gemini [42] Vicuna-7B 7.3B 65.2 1523 69.3 40.8 36.1 31.4 MM1 [54] MM1-7B 72.6 72.8 86.6 1529.3 79.0 42.1 82.8 81.5 69.9 37.0 35.9 InstructBLIP [13] Vicuna-13B 14.2B 63.1 50.7 49.5 78.9 1212.8 25.6 58.2 63.1 LLaVA-v1.5 [46] Vicuna-13B 13.4B 71.6 61.3 63.3 85.9 1531.3 67.7 63.6 35.4 80.0 70.7 68.2 36.4 27.6 VILA [44] Vicuna-13B 13.4B 73.7 66.6 63.3 84.2 1570.1 70.3 64.3 38.8 80.8 73.0 62.8 LLaMA-VID [41] Vicuna-13B 70.0 65.0 86.0 1542.3 66.6 80.0 62.3 SPHINX-Plus [20] LLaMA2-13B 74.2 65.7 89.1 1457.7 71.0 47.9 71.7 74.8 36.8 Mini-Gemini[42] Vicuna-13B 13.6B 65.9 1565 68.5 46.0 38.1 37.0 InternVL-Chat [10] Vicuna-13B 19B 61.5 66.6 87.6 1586.4 81.2 LLaVA-NeXT [48] Vicuna-13B 13.4B 73.6 67.1 65.4 86.2 1575 70 64.4 48.4 82.8 87.3 71.9 36.2 35.3 CuMo Mistral-7B 7.8B 73.9 67.0 64.9 86.7 1548.6 73.0 66.6 51.0\u2020 82.2 85.7\u2020 72.1 39.1 35.1\u2020 7B MoE Models SPHINX-MoE [20] Mixtral-8\u00d77B 74.5 68.0 63.8 89.6 1485.3 71.3 40.9 81.1 70.2 73.0 31.1 42.7 MM1 [54] MM1-7B-MoE 75.3 72.8 87.6 1629.0 79.7 47.0 83.4 82.0 70.4 40.9 40.9 Mini-Gemini [42] Mixtral-8\u00d77B 13.5B 69.2 1639 75.6 45.8 41.8 41.8 CuMo Mixtral-8\u00d77B 13.5B 77.9 66.0 63.8 85.7 1639.5 75.3 68.0 48.7\u2020 81.8 84.7\u2020 73.2 45.0 38.2\u2020 Private Models GPT4V [56] 78.0 77.0 74.4 60.2 56.8 49.9 Gemini 1.5 Pro [58] 73.5 73.6 74.3 64.3 73.2 58.5 52.1 Claude 3 Opus [2] 63.3 59.2 58.1 59.4 50.5 Qwen-VL-Max [64] 79.5 1790.1 77.6 75.1 66.6 51.4 51.0 Table 1. Comparisons between CuMo and other state-of-the-art multimodal LLMs on competitive benchmarks. These models are grouped by the size of the base LLM. The benchmarks are double-rowed due to limited space: SQA-IMG [50]; TextVQA [62]; GQA [24]; POPE [40]; MME [19]; MMBench [49]; MMVet [71]; VQAv2 [21]; LLaVA-Wild [47]; SEED-IMG [37]; MMMU [72]; MathVista [51]. Act.: Activated Parameters. Numbers\u2020 are averaged by three inference runs of querying GPT API. data. Loss Function To maintain a load balance between experts in each MoE block, we adopt auxiliary losses based on the language modeling cross-entropy loss. The auxiliary losses comprise loading balance loss and router z-loss [77]. Hence, the total loss is L = Lce + \u03b1bLb + \u03b1zLz (5) Here, Lce represents the language modeling loss, which computes the cross-entropy of next-token predictions. \u03b1b and \u03b1z denote coefficients for loading balance loss Lb and router z-loss Lz, set to 0.1 and 0.01, respectively, across all experiments. These auxiliary losses, abbreviated as bzloss in Section 4, are individually applied to the MLP connector, vision encoder, and LLM for simplicity. 4. 
Experiments We train the CuMo models on a mixture of open-sourced datasets, which are converted into the visual instruction tuning format. Then, we conduct comprehensive evaluations of the performance of CuMo models across various competitive VQA-based and instruction-following-based benchmarks. Additionally, we perform ablation studies on each module with upcycled MoE blocks with qualitative analysis of the results. 4.1. Implementation Details Training Datasets During pre-training, we only utilize LLaVA-558K [47] to train the MLP connector for better alignment. In the subsequent pre-finetuning stage, detailed image caption data from ALLaVA [7] is employed to warm up all parameters of the multimodal LLM. For the final visual instruction tuning stage, a mixture of datasets including LLaVA-665K [46], ShareGPT4V [8], LAION-GPT-V [16], DocVQA [66], ChartQA [52], AI2D [31], InfoVQA [53], SynDog-EN [32], ALLaVA [7], and LIMA [74] is utilized to train the CuMo models with upcycled MoE blocks. The total data size for visual instruction tuning is approximately 1.65 million, and all training data are publicly accessible. 5 SQA Text MMBench MM VQA LLaVA SEED Method LLM PT IT IMG VQA GQA POPE MME EN CN Vet v2 Wild IMG InstructBLIP [13] Vicuna-7B 129M 1.2M 60.5 50.1 49.2 36.0 23.7 26.2 60.9 60.5 InstructBLIP [13] Vicuna-13B 129M 1.2M 63.1 50.7 49.5 78.9 1212.8 25.6 58.2 63.1 IDEFICS-9B [25] LLaMA-7B 353M 1M 25.9 38.4 48.2 25.2 50.9 IDEFICS-80B [25] LLaMA-65B 353M 1M 30.9 45.2 54.5 38.1 60.0 Qwen-VL [3] Qwen-7B 1.4B 50M 67.1 63.8 59.3 38.2 7.4 78.8 56.3 Qwen-VL-Chat [3] Qwen-7B 1.4B 50M 68.2 61.5 57.5 1487.5 60.6 56.7 78.2 58.2 LLaVA-v1.5 [46] Vicuna-7B 558K 665K 66.8 58.2 62.0 85.9 1510.7 64.3 58.3 30.5 78.5 63.4 66.1 LLaVA-v1.5 [46] Vicuna-13B 558K 665K 71.6 61.3 63.3 85.9 1531.3 67.7 63.6 35.4 80.0 70.7 68.2 CuMo Mistral-7B 558K 665K 71.7 59.3 63.2 87.1 1428.6 69.6 62.6 34.3 80.6 68.8 69.6 Table 2. Comparisons between CuMo Mistral-7B and other multimodal LMM models with limited training data. Method SQA VQAT MMVet SEED Baseline on Mistral-7B 72.8 57.6 32.1 66.4 + Top 2-in-4 & Scratch 68.1 55.6 29.3 65.1 \u21ccTop 2-in-4 & Upcycle 73.7 57.2 32.3 67.1 + bzloss 73.5 57.4 33.1 67.4 \u21ccTop 2-in-8 & Upcycle 73.4 57.6 32.4 67.2 Table 3. Ablation study on the MLP-MoE module. Each row represents a different configuration, with changes or additions marked using \u21ccand + symbols, respectively. Settings highlighted with a light blue background are those adapted for the MLP-MoE module in Table 1. Method SQA VQAT MMVet SEED MLP-MoE 73.5 57.4 33.1 67.4 + Unfreeze CLIP 72.0 58.9 34.7 69.0 + Top 2-in-4 & bzloss 72.8 59.7 35.4 69.8 \u21ccTop 2-in-8 & bzloss 71.0 59.0 33.6 69.2 Table 4. Ablation study on the CLIP-MoE module. All MoE blocks in CLIP are initialized with upcycling. Method SQA VQAT MMVet SEED MLP-MoE & CLIP-MoE 71.7 59.3 34.3 69.6 + Mistral 4\u00d77B & Upcycle 72.8 57.0 35.2 69.9 \u21ccMistral 8\u00d77B & Upcycle 73.2 56.4 35.7 70.5 \u21ccMixtral 8\u00d77B 74.2 60.6 40.0 72.6 Table 5. Ablation study on the LLM-MoE module. Mixtral 8\u00d77B outperforms upcycled Mistral MoE models significantly. The detailed breakdown of the training dataset is listed in Appendix A. Evaluation Benchmarks Evaluation of CuMo models primarily focuses on academic VQA-based datasets such as VQAv2 [21], GQA [24], Science-QA [50], and TextVQA [62], as well as instruction-following-based LMM benchmarks including POPE [40], MME [19], MMBench [49], SEED-Bench [37], LLaVA-Wild [47], and MM-Vet [71]. 
Additionally, the challenging MMMU [72] and MathVista [51] datasets are evaluated to assess the visual reasoning abilities of the multimodal LLMs. Training Settings We employ the pre-trained CLIP ViTL [57] as the vision encoder, a two-layer MLP as the visionlanguage connector, and Mistral-7B [29] as the LLM to establish the baseline model following LLaVA v1.5 [46]. We only use LLaVA-558K [46] as pre-training data and LLaVA-665K [46] as visual instruction tuning data to train the baseline model and make ablation studies for comparisons. The learning rate is set to 1e-3 for pre-training the MLP connector and reduced to 2e-5 for visual instruction tuning of both the MLP connector and CLIP. To further stabilize the visual instruction tuning process after scaling up with additional data, the learning rate is lowered to 2e-6 for all parameters of the CuMo models in the final results. More hyperparameters of the training process is listed in Appendix B. Evaluation Settings During evaluation, we adhere to the settings outlined in the LLaVA series [46], employing a greedy decoding strategy for all benchmarks. The data and questions are converted into visual instructions to prompt the multimodal LLMs. For benchmarks that utilize GPT API for evaluation, we adopt gpt-4-0613 for LLaVAWild [47] and gpt-3.5-turbo for MathVista [51]. 4.2. Main Results Comparison with SoTA Multimodal LLMs In Table 1, we present a comparison of CuMo models with other stateof-the-art instruction-following-based multimodal LLMs. We categorize the models based on the size of the base LLMs, including 7B models, 13B models, and 7B MoE models. CuMo Mistral-7B outperforms other 7B-based state-of-the-art multimodal LLMs across multiple benchmarks. Moreover, the performance of the CuMo Mistral7B model is comparable to many 13B-based multimodal LLMs. In the case of Mixtral-8\u00d77B models, CuMo achieves results on par with SPHINX-MoE, MM1, and Mini-Gemini. LLaMA-based LLMs [11, 67] are not utilized in our experiments due to license constraints. Comparison under limited training data To further evaluate the effectiveness of the co-upcycled MoE blocks, we 6 1\u00d7 2\u00d7 3\u00d7 SQA VQAT MMVet SEED \u2713 71.7 59.3 34.3 69.6 \u2713 \u2713 71.7 60.6 35.0 69.7 \u2713 \u2713 72.9 61.0 37.0 69.7 \u2713 \u2713 \u2713 72.2 60.5 36.9 70.1 Table 6. Ablation study on multi-resolution image features. The combination of 3\u00d7 and 1\u00d7 is adopted for the final models in Table 1. Method SQA VQAT MMVet SEED No PFT 71.7 59.3 34.3 69.6 + ShareGPT4V 72.4 61.7 36.5 70.0 \u21ccALLaVA 73.0 62.8 37.2 70.9 Table 7. Ablation study on the pre-finetuning stage. ALLaVA is chosen for pre-finetuning due to its provision of high-quality image caption data. train the vanilla CuMo mistral-7B under limited training data in Table 2. It shows that CuMo outperforms other 7B models and reaches comparable performance to LLaVAv1.5 Vicuna-13B under the same training data. 4.3. Ablation Study Upcycle MLP connector to MLP-MoE We initiate the ablation study by replacing the MLP connector with upcycled MLP-MoE, as depicted in Table 3. We start with a Top 2-in4 router and train the MoE blocks from scratch, which leads to a clear performance drop on all benchmarks. Then, we adopt the upcycling strategy to initialize the MLP experts. We observe marginal improvements over the baseline, considering each expert comprises only two linear layers. 
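The bzloss used throughout these ablations combines a loading-balance term and a router z-loss with the cross-entropy objective (Eq. (5), coefficients 0.1 and 0.01). The sketch below shows one standard way such auxiliary terms are computed from router logits, in the spirit of ST-MoE [77]; CuMo's exact formulation may differ, so treat the function bodies as assumptions.

```python
import torch

def load_balance_loss(router_logits: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Encourage uniform expert usage: penalize the dot product between the
    fraction of tokens routed to each expert and the mean router probability."""
    probs = router_logits.softmax(dim=-1)                 # (N, S)
    num_experts = probs.shape[-1]
    topk_idx = probs.topk(k, dim=-1).indices
    dispatch = torch.zeros_like(probs).scatter_(-1, topk_idx, 1.0)
    tokens_per_expert = dispatch.mean(dim=0)              # fraction routed to each expert
    mean_prob = probs.mean(dim=0)
    return num_experts * torch.sum(tokens_per_expert * mean_prob)

def router_z_loss(router_logits: torch.Tensor) -> torch.Tensor:
    """Penalize large router logits (squared log-sum-exp), which stabilizes training."""
    return torch.logsumexp(router_logits, dim=-1).pow(2).mean()

def total_loss(ce_loss, router_logits, alpha_b: float = 0.1, alpha_z: float = 0.01):
    # L = L_ce + alpha_b * L_b + alpha_z * L_z  (Eq. (5))
    return ce_loss + alpha_b * load_balance_loss(router_logits) + alpha_z * router_z_loss(router_logits)
```

With alpha_b = 0.1 and alpha_z = 0.01 as in the paper, the auxiliary terms nudge the router toward the even expert usage later visualized in Figure 5 without dominating the language-modeling loss.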
Subsequently, the incorporation of bzloss to ensure a balanced loading of experts in the MLP-MoE yields noticeable enhancements on MMVet. However, employing a Top 2-in-8 router with upcycling and bzloss results in a slight performance decline, possibly due to the limited visual instruction tuning data to train robust and well-balanced eight experts. Empower CLIP with CLIP-MoE In Table 4, initially unfreezing CLIP based on MLP-MoE leads to noticeable improvements on TextVQA and MMVet benchmarks. However, training the added Top2-in-4 MoE blocks in CLIP from scratch proves unsuccessful, as the model fails to converge even with reduced learning rates. Consequently, adopting upcycled MoE blocks during the visual instruction tuning stage yields further enhancements on TextVQA, MMVet, and SEED benchmarks. Upcycle LLM vs Pre-trained LLM-MoE Upon replacing all MLP blocks with sparsely-gated MoE blocks in the visual part, we further investigate the utilization of the MoE architecture in the LLM. Starting from the MistralLayer 1 Layer 4 Layer 7 Layer 10 Layer 13 Layer 16 Layer 19 Layer 22 0% 25% 50% 75% 100% Expert 1 Expert 2 Expert 3 Expert 4 Figure 5. Expert distributions of MoE blocks in CLIP. We select layers from CLIP and summarize the activated experts during the feed-forward process on the MME test set. 7B model, we first lower the learning rate to 2e-6 to set the baseline and the following experiments since a learning rate of 2e-5 induces training instabilities. Then, we upcycle each MLP block with a sparsely-gated MoE block, initializing the weight of each expert from the pre-trained MLP block. As demonstrated in Table 5, the upcycled Mistral4\u00d77B and 8\u00d77B outperform the Mistral-7B model slightly except for TextVQA. However, considering that the upcycled experts significantly increase parameters without introducing new knowledge, we replace the upcycled Mistral 8\u00d77B with Mixtral 8\u00d77B [30]. In Mixtral 8\u00d77B, all expert layers are pre-trained on large-scale language data, providing superior initialization compared to upcycling. The results indicate that CuMo Mixtral-8x7B outperforms its upcycled counterparts significantly and is employed in the final models with bzloss to maintain a balanced loading of experts. Multi-Resolution Visual Features Incorporating multiresolution inputs is crucial for enhancing the understanding of image content in multimodal LLMs. Following the approach outlined in S2[61], we introduce multi-resolution inputs to CLIP and concatenate the feature maps channelwise to maintain the total number of visual tokens consistent with low-resolution inputs. As illustrated in Table 6, an empirical combination of 3\u00d7 and 1\u00d7 yields optimal performance and we adopt this configuration for the final CuMo models. Pre-FineTuning Stage Previous ablation studies were conducted directly after the pre-training of the MLP connector, leading to observed training instabilities during visual instruction tuning. To address this, we introduce a pre-finetuning stage using high-quality image caption data, wherein all parameters are unfrozen. In Table 7, we leverage caption data from ALLaVA for this stage. Results indicate that ALLaVA data proves to be a superior option, 7 can you describe what groceries are presented in this fridge CuMo-Mistral-7B The groceries presented in the fridge include a variety of fresh produce such as bell peppers, lettuce, broccoli, and cucumbers. There are also eggs, milk, and cheese, which are dairy products. 
Additionally, there are cherry tomatoes and a jar of red sauce, which could be used for cooking or as a condiment. The presence of these items suggests that the fridge is well-stocked with a balance of fresh vegetables, dairy, and condiments, indicating a household that values a healthy and varied diet. LLaVA-Yi-34B The refrigerator in the image is well-stocked with a variety of groceries. Here's a list of what I can see: on the top shelf, there are several items including what appears to be a head of lettuce, a bunch of green onions, a container of strawberries, and a few blue plastic containers that could be used for food storage or serving; the middle shelf has a few bottles, which might contain milk or juice, and a carton of eggs.; the bottom shelf is filled with fresh produce. There are bell peppers in different colors (red, yellow, and green), a bunch of spring onions, and a head of lettuce; the drawer at the bottom contains more fresh produce, including what looks like a bunch of green onions and possibly some other leafy greens. The refrigerator is well-organized and appears to be stocked with fresh and healthy food options. MiniGemini-Yi-34B The refrigerator is filled with a variety of groceries, including fruits, vegetables, dairy products, and beverages. There are several bottles, likely containing milk or other dairy products, and a carton of eggs. The fridge also contains a selection of fresh produce, such as broccoli, peppers, and lettuce. Additionally, there are some fruits, including apples and oranges. The presence of these items suggests that the person who owns the fridge prefers a balanced diet with a mix of healthy foods and beverages. how many people are in this image CuMo-Mistral-7B In the image, there are a total of 14 characters. This includes 12 characters seated around the table and 2 characlters standing on the table. The characters are a mix of human and skeletal figures, with the skeleton figures being part of the Day of the Dead celebration. LLaVA-Yi-34B The image you've provided is from the animated movie \"Coco.\" It features a variety of characters, both human and fantastical, which are not real people but animated characters. The central figure is a young boy named Miguel, who is holding a guitar. Surrounding him are other characters, including his family members and other characters from the film. The characters are depicted in a stylized, animated manner, and the image is a still from the movie. MiniGemini-Yi-34B In the image, there are a total of 12 people. This includes the boy in the foreground, the two skeletons, and the nine characters seated around the table. It's important to note that the skeletons are representations of people and are included in the count. can you introduce this movie based on this poster CuMo-Mistral-7B The poster is for the movie \"The Avengers,\" which is part of the Marvel Cinematic Universe. It features a team of superheroes, each with unique abilities and costumes, coming together to face a significant threat. The central figure, with the iconic red and gold suit, is likely the leader of the group. The presence of the Hulk and Thor suggests that the film includes a mix of science fiction and fantasy elements. The cityscape and the explosion in the background imply that the movie's plot involves a large-scale conflict that impacts the urban environment. The release date and the mention of \"3D and IMAX 3D\" indicate that the movie was available in various formats, catering to different viewing preferences. 
The poster is designed to convey the epic scale and the high stakes of the film, as well as the diverse team of heroes that viewers can expect to see on screen. LLaVA-Yi-34B This poster is for the movie \"Avengers: Age of Ultron,\" which is a 2015 American superhero film based on the Marvel Comics superhero team the Avengers. The film is the sequel to \"The Avengers\" (2012) and the 11th film in the Marvel Cinematic Universe (MCU). It was directed by Joss Whedon and produced by Marvel Studios. The poster features several of the main characters from the film, including Iron Man, Thor, Captain America, the Hulk, Black Widow, and Hawkeye, all in their superhero costumes. The setting appears to be a cityscape with a sense of destruction, suggesting a battle or conflict. The title \"Avengers: Age of Ultron\" is prominently displayed at the top of the poster, and the Marvel logo is visible at the bottom. The film's plot revolves around the Avengers attempting to stop the villain Ultron from enacting his plans for human extinction. The movie is known for its action-packed sequences, special effects, and ensemble cast of superheroes. It was a critical and commercial success, contributing to the ongoing popularity of the Marvel Cinematic Universe. MiniGemini-Yi-34B The movie poster is for \"The Avengers,\" a 2012 American superhero film produced by Marvel Studios and distributed by Walt Disney Pictures. It is the sixth film in the Marvel Cinematic Universe (MCU). The film features an ensemble cast including Robert Downey Jr., Chris Evans, Mark Ruffalo, Chris Hemsworth, Scarlett Johansson, Jeremy Renner, Tom Hiddleston, and Samuel L. Jackson. The plot revolves around Nick Fury, director of the peacekeeping organization S.H.I.E.L.D., recruiting a team of superheroes to help save the world from Loki, the adoptive brother of Thor, who is leading an alien invasion. The poster is designed to showcase the main characters and the action-packed nature of the film, with the characters in their iconic costumes and the cityscape in the background suggesting the scale of the conflict. The release date is prominently displayed, indicating the film's availability in theaters. The poster also highlights the film's availability in various formats, including 3D, which was a significant selling point at the time of its release. Figure 6. Dialogues between the user and multimodal LLMs on challenging images. We highlight the correct answers and hallucinations from the responses of the multimodal LLMs. providing fewer but higher-quality captions for training, ultimately leading to improved performance. 4.4. Qualitative Analysis Expert Distribution As shown in Figure 5, we visualize the expert distributions in the MoE block from selected layers at CLIP-MoE. The dataset analyzed is the test set of the MME 8 benchmark. The distribution indicates that the selected experts during inference are evenly spread across layers, providing further evidence of the effectiveness of the auxiliary losses in maintaining load balance. Dialogue Comparisons Presented in Figure 6, we contrast the responses from CuMo-Mistral-7B, LLaVA-Yi-34B, and MiniGemini-Yi-34B. It demonstrates that CuMo-Mistral7B can effectively follow instructions and predominantly provide correct answers to challenging questions derived from complex scenes. However, CuMo also exhibits instances of hallucinations, such as responding with \u201c2 characters standing on the table,\u201d highlighting the need for further investigation to mitigate hallucinations in CuMo. 5. 
Conclusion In this study, we introduce the sparse mixture-of-experts design into multimodal LLMs. Specifically, we replace each MLP block with a Top-K sparse MoE block in the MLP connector and the vision encoder. To enhance training stability, we employ a three-stage training approach, incorporating upcycled MoE blocks during the visual instruction tuning stage, along with auxiliary bzloss to maintain a balanced loading of experts. All CuMo models are trained and evaluated on fully open-sourced datasets and benchmarks. Through extensive experiments and ablation studies, we validate the effectiveness of the upcycled MoE blocks in each module. CuMo outperforms state-of-the-art models across multiple competitive benchmarks within the same group of model sizes. Acknowledgments We extend our gratitude to Chunyuan Li, Lei Chen, and Haibin Lin for their insightful and valuable discussions throughout this project."
+ }
intro_8K/test_introduction_short_2405.05953v1.json ADDED
@@ -0,0 +1,16 @@
+ {
+ "url": "http://arxiv.org/abs/2405.05953v1",
+ "title": "Frame Interpolation with Consecutive Brownian Bridge Diffusion",
+ "abstract": "Recent work in Video Frame Interpolation (VFI) tries to formulate VFI as a\ndiffusion-based conditional image generation problem, synthesizing the\nintermediate frame given a random noise and neighboring frames. Due to the\nrelatively high resolution of videos, Latent Diffusion Models (LDMs) are\nemployed as the conditional generation model, where the autoencoder compresses\nimages into latent representations for diffusion and then reconstructs images\nfrom these latent representations. Such a formulation poses a crucial\nchallenge: VFI expects that the output is deterministically equal to the ground\ntruth intermediate frame, but LDMs randomly generate a diverse set of different\nimages when the model runs multiple times. The reason for the diverse\ngeneration is that the cumulative variance (variance accumulated at each step\nof generation) of generated latent representations in LDMs is large. This makes\nthe sampling trajectory random, resulting in diverse rather than deterministic\ngenerations. To address this problem, we propose our unique solution: Frame\nInterpolation with Consecutive Brownian Bridge Diffusion. Specifically, we\npropose consecutive Brownian Bridge diffusion that takes a deterministic\ninitial value as input, resulting in a much smaller cumulative variance of\ngenerated latent representations. Our experiments suggest that our method can\nimprove together with the improvement of the autoencoder and achieve\nstate-of-the-art performance in VFI, leaving strong potential for further\nenhancement.",
+ "authors": "Zonglin Lyu, Ming Li, Jianbo Jiao, Chen Chen",
+ "published": "2024-05-09",
+ "updated": "2024-05-09",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Video Frame Interpolation (VFI) aims to generate high frame-per- second (fps) videos from low fps videos by estimating the inter- mediate frame given its neighboring frames. High-quality frame interpolation contributes to other practical applications such as novel view synthesis [14], video compression [58], and high-fps cartoon synthesis [47]. Current works in VFI can be divided into two folds in terms of methodologies: flow-based methods [1, 7, 12, 18, 20, 24, 29, 32, 34, 39, 42, 47, 60] and kernel-based methods [4, 5, 9, 27, 36, 37, 46]. Flow-based methods compute flows in the neighboring frames and forward warp neighboring images and features [18, 24, 34, 35, 47] or estimate flows from the intermediate frame to neighboring frames and backward warp neighboring frames and features [1, 7, 12, 20, 29, 32, 39, 42, 60]. Instead of relying on optical flows, kernel-based methods predict convolution kernels for pixels in the neighboring frames. Recent advances in flow estimation [19, 21\u201323, 51, 52, 57] make it more popular to adopt flow-based methods in VFI. arXiv:2405.05953v1 [cs.CV] 9 May 2024 Conference\u201917, July 2017, Washington, DC, USA Zonglin Lyu1, Ming Li2, Jianbo Jiao3, and Chen Chen2 Other than these two folds of methods, MCVD [55] and LD- MVFI [11] start formulating VFI as a diffusion-based image gen- eration problem. LDMVFI considers VFI as a conditional genera- tion task with Latent Diffusion Models (LDMs) [43], where LDMs contain an autoencoder that compresses images into latent rep- resentations and reconstructs images from latent representations. Diffusion models [17] run in the latent space of the autoencoder. Though diffusion models achieve excellent performance in image generation, there remain challenges in applying them to VFI. (1) The formulation of diffusion models results in a large cumula- tive variance (the variance accumulated during sampling) of generated latent representations. The sampling process starts with standard Gaussian noise and adds small Gaussian noise to the denoised output at each step based on a pre-defined distri- bution. After the sampling process, images are generated, but these noises also add up to a large cumulative variance. Though such a variance is beneficial to diversity (i.e. repeated sampling results in different outputs), VFI requires that repeated sampling returns identical results, which is the ground truth intermediate frame. Therefore, a small cumulative variance is preferred in VFI. The relation of the cumulative variance and diversity is supported by the fact that DDIM [48] tends to generate rela- tively deterministic images than DDPM [17]. DDIM removes small noises at each sampling step, so the cumulative variance in DDIM is lower. LDMVFI [11] uses conditional generation as guidance, but this does not change the nature of large cumu- lative variance. In Section 3.4, we show that our method has a much lower cumulative variance than conditional generation. (2) Videos usually have high resolution, which can be up to 4K [41], resulting in practical constraints to apply diffusion models [17] in pixel spaces. It is natural to apply Latent Diffusion Models (LDMs) [43] to sample latent representations and reconstruct them back to images. LDMs apply VQModels in VQGAN [13] to compress images into latent representations and reconstruct images from latent representations. However, it does not take advantage of neighboring frames, which can be a good guide to reconstruction. 
LDMVFI designs reconstruction models that leverage neighboring frames, but it tends to reconstruct over- laid images when there is a relatively large motion between neighboring frames, possibly due to the cross-attention with features of neighboring frames, which is shown in Figure 1. To tackle these challenges, we propose a consecutive Brownian Bridge diffusion model (in latent space) that transits among three deterministic endpoints for VFI. This method results in a much smaller cumulative variance, achieving a better estimation of the ground truth inputs. We can separate LDM-based VFI methods into two parts: autoencoder and ground truth estimation (with diffusion). It is different from the original LDMs [43] because the latent rep- resentation generated by diffusion does not aim to estimate some ground truth. It is also different from LDMVFI [11] because LD- MVFI does not consider the performance of autoencoder separately from the interpolation method. With such a two-stage separation, we evaluate them separately for specific directions of improvement. Moreover, we take advantage of flow estimation and refinement methods in recent literature [32] to improve the autoencoder. The feature pyramids from neighboring frames are warped based on estimated optical flows, aiming to alleviate the issues of reconstruct- ing overlaid images. In experiments, our method improves by a large margin when the autoencoder is improved and achieves state- of-the-art performance. Our contribution can be summarized in three parts: \u2022 We propose a new consecutive Brownian Bridge diffusion model for VFI and justify its advantages over traditional diffusion mod- els: lower cumulative variance and better ground truth estima- tion capability. Additionally, we provide a cleaner formulation of Brownian Bridges and also propose the loss weights among different times in Brownian Bridges. \u2022 We formulate the diffusion-based VFI as two stages: autoencoder and ground truth estimation. This is a novel interpretation of LDM-based VFI, which can provide specific directions for im- provements. \u2022 Through extensive experiments, we validate the effectiveness of our method. Our method estimates the ground truth better than traditional diffusion with conditional generation. Moreover, the performance of our method improves when the autoencoder improves and achieves state-of-the-art performance with a simple yet effective autoencoder, indicating its strong potential in VFI.",
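Both the flow-based methods surveyed here and the improved autoencoder rely on warping neighboring frames or their feature pyramids with estimated optical flow. As a generic point of reference (not code from this paper), backward warping is commonly implemented with torch.nn.functional.grid_sample, as in the sketch below.

```python
import torch
import torch.nn.functional as F

def backward_warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp features `feat` (B, C, H, W) with optical flow `flow`
    (B, 2, H, W), where flow[:, 0] is the horizontal and flow[:, 1] the
    vertical displacement in pixels (from target to source)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]          # sampling locations in the source frame
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid_x / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid_y / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)   # (B, H, W, 2)
    return F.grid_sample(feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```

Flow-based VFI methods apply this operation to neighboring frames and, as in the improved autoencoder described here, to their feature pyramids.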
+ "main_content": "Video Frame Interpolation can be roughly divided into two categories in terms of methodologies: flow-based methods [1, 7, 12, 18, 20, 24, 29, 32, 34, 39, 42, 47, 60] and kernel-based methods [4, 5, 9, 27, 36, 37, 46]. Flow-based methods assume certain motion types, where a few works assume non-linear types [7, 12] while others assume linear. Via such assumptions, flow-based methods estimate flows in two ways. Some estimate flows from the intermediate frame to neighboring frames (or the reverse way) and apply backward warping to neighboring frames and their features [1, 7, 12, 20, 29, 32, 39, 42, 60]. Others compute flows among the neighboring frames and apply forward splatting [18, 24, 34, 35, 47]. In addition to the basic framework, advanced details such as recurrence of inputs with different resolution level [24], cross-frame attention [60], and 4D-correlations [29] are proposed to improve performance. Kernel-based methods, introduced by [36], aim to predict the convolution kernel applied to neighboring frames to generate the intermediate frame, but it has difficulty in dealing with large displacement. Following works [5, 9, 27] alleviate such issues by introducing deformable convolution. LDMVFI [11] recently introduced a method based on Latent Diffusion Models (LDMs) [43], formulating VFI as a conditional generation task. LDMVFI uses an autoencoder introduced by LDMs to compress images into latent representations, efficiently run the diffusion process, and then reconstruct images from latent space. Instead of directly predicting image pixels during reconstruction, it takes upsampled latent representations in the autoencoder as inputs to predict convolution kernels in kernel-based methods to complete the VFI task. 2.2 Diffusion Models The diffusion model is introduced by DDPM [17] to image generation task and achieves excellent performance in high-fidelity and high-diversity image generation. The whole diffusion model Frame Interpolation with Consecutive Brownian Bridge Diffusion Conference\u201917, July 2017, Washington, DC, USA can be split into a forward diffusion process and a backward sampling process. The forward diffusion process is defined as a Markov Chain with steps \ud835\udc61= 1, ...,\ud835\udc47, and the backward sampling process aims to estimate the distribution of the reversed Markov chain. The variance of the reversed Markov chain has a closed-form solution, and and expected value of the reversed Markov chain is estimated with a deep neural network. Though achieving strong performance in image generation tasks, DDPM [17] requires \ud835\udc47= 1000 iterative steps to generate images, resulting in inefficient generation. Sampling steps cannot be skipped without largely degrading performance because the conditional distribution at step \ud835\udc61\u22122 needs to be computed with the conditional distribution at time \ud835\udc61\u22121 and \ud835\udc61due to its Markov property. To enable efficient and high-quality generation, DDIM [48] proposes a non-Markov formulation of diffusion models, where the conditional distribution at time \ud835\udc61\u2212\ud835\udc58(\ud835\udc58> 0) can be directly computed with the conditional distribution at time \ud835\udc61. Therefore, skipping steps does not largely degrade performance. 
Score-based SDEs [3, 49, 63] are also proposed as an alternative formulation of diffusion models by writing the diffusion process in terms of Stochastic Differential Equations [38], where the reversed process has a closed-form continuous-time formulation and can be solved with Euler's method in a few steps [49]. In addition, the Probability Flow ODE is proposed as the deterministic process that shares the same marginal distribution with the reversed SDE [49]. Following score-based SDEs, some works propose efficient methods to estimate the solution of the Probability Flow ODE [30, 31]. Instead of focusing on the nature of the diffusion process, DeepCache [33] proposes a feature caching and sharing mechanism in the denoising UNet, enabling parallel computation and step skipping and further improving efficiency. To deal with high-resolution images, the Latent Diffusion Model [43] proposes an autoencoder with a Vector Quantization Layer (VQ Layer) that compresses and reconstructs images, and diffusion models run on the compressed images. With such an autoencoder, high-resolution images can be generated efficiently. Other than accelerating generation, diffusion models are applied to conditional generation tasks [3, 6, 28, 43, 45, 61, 63] such as generation based on poses or skeletons, image inpainting, etc. 3 METHODOLOGY In this section, we first go through preliminaries on the Diffusion Model (DDPM) [17] and the Brownian Bridge Diffusion Model (BBDM) [28] and give an overview of our two-stage formulation: autoencoder and ground truth estimation (with consecutive Brownian Bridge diffusion). Then, we discuss the details of our autoencoder. Finally, we present our solution to the frame interpolation task: consecutive Brownian Bridge diffusion. 3.1 Preliminaries Diffusion Model. The forward diffusion process of the Diffusion Model [17] is defined as $q(x_t \mid x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I)$. (1) When $t = 1$, $x_{t-1} = x_0$ is sampled from the data (images). By iterating Eq. (1), we get the conditional marginal distribution of $x_t$ [17]: $q(x_t \mid x_0) = \mathcal{N}(x_t; \sqrt{\alpha_t}\, x_0, (1-\alpha_t) I)$, (2) where $\alpha_t = \prod_{s=1}^{t}(1-\beta_s)$. The sampling process can be derived with Bayes' theorem [17]: $p_\theta(x_{t-1} \mid x_t) = q(x_{t-1} \mid x_0, x_t) = \mathcal{N}(x_{t-1}; \tilde{\mu}_t, \tilde{\beta}_t)$, (3) where $\tilde{\mu}_t = \frac{\sqrt{\alpha_{t-1}}\,\beta_t}{1-\alpha_t} x_0 + \frac{\sqrt{1-\beta_t}\,(1-\alpha_{t-1})}{1-\alpha_t} x_t$ (4) and $\tilde{\beta}_t = \frac{1-\alpha_{t-1}}{1-\alpha_t}\beta_t$. (5) Eq. (4) can be rewritten with Eq. (2) via reparameterization: $\tilde{\mu}_t = \frac{1}{\sqrt{1-\beta_t}}\Big(x_t - \frac{\beta_t}{\sqrt{1-\alpha_t}}\,\epsilon\Big)$, where $\epsilon \sim \mathcal{N}(0, I)$. (6) By Eq. (4) and (6), we only need to estimate $\epsilon$ to estimate $p_\theta(x_{t-1} \mid x_t)$. Therefore, the training objective is $\mathbb{E}_{x_0, \epsilon}\big[\|\epsilon_\theta(x_t, t) - \epsilon\|_2^2\big]$. (7) It suffices to train a neural network $\epsilon_\theta(x_t, t)$ predicting $\epsilon$. Brownian Bridge Diffusion Model. A Brownian Bridge [44] is a stochastic process that transits between two fixed endpoints, formulated as $X_t = W_t \mid (W_{t_1}, W_{t_2})$, where $W_t$ is a standard Wiener process with distribution $\mathcal{N}(0, t)$. We can write a Brownian Bridge as $X_t = W_t \mid (W_0, W_T)$ to define a diffusion process. When $W_0 = a$ and $W_T = b$, it follows a normal distribution: $X_t \sim \mathcal{N}\big((1-\tfrac{t}{T})a + \tfrac{t}{T}b,\ \tfrac{t(T-t)}{T}\big)$. (8) BBDM [28] develops an image-to-image translation method based on the Brownian Bridge process by treating $a$ and $b$ as two images. The forward diffusion process is defined as $q(x_t \mid x_0, y) = \mathcal{N}(x_t; (1-m_t)x_0 + m_t y, \delta_t)$, (9) where $m_t = \tfrac{t}{T}$ and $\delta_t = 2s(m_t - m_t^2)$. (10) Here $x_0$ and $y$ are two images, and $s$ is a constant that controls the maximum variance of the Brownian Bridge. The sampling process is derived with Bayes' theorem [28]: $p_\theta(x_{t-1} \mid x_t, y) = q(x_{t-1} \mid x_0, x_t, y) = \frac{q(x_t \mid x_{t-1}, y)\, q(x_{t-1} \mid x_0, y)}{q(x_t \mid x_0, y)} = \mathcal{N}(\tilde{\mu}_t, \tilde{\delta}_t I)$, (11) where $\tilde{\mu}_t = c_{x t} x_t + c_{y t} y + c_{\epsilon t}\big(m_t(y - x_0) + \sqrt{\delta_t}\,\epsilon\big)$, $c_{x t} = \frac{\delta_{t-1}}{\delta_t}\frac{1-m_t}{1-m_{t-1}} + \frac{\delta_{t|t-1}}{\delta_t}(1-m_t)$, $c_{y t} = m_{t-1} - m_t\frac{1-m_t}{1-m_{t-1}}\frac{\delta_{t-1}}{\delta_t}$, $c_{\epsilon t} = (1-m_{t-1})\frac{\delta_{t|t-1}}{\delta_t}$, and $\delta_{t|t-1} = \delta_t - \delta_{t-1}\frac{(1-m_t)^2}{(1-m_{t-1})^2}$.
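For concreteness, the BBDM forward kernel in Eq. (9)-(10) can be sampled in a few lines via reparameterization. The following is a minimal, hypothetical NumPy sketch (not the authors' code); the function name, array shapes, and defaults are illustrative only.

```python
import numpy as np

def bbdm_forward_sample(x0, y, t, T, s=1.0, rng=None):
    """Draw x_t ~ q(x_t | x_0, y) from the BBDM forward kernel, Eq. (9)-(10).

    x0, y : arrays of identical shape (e.g. two images or latents)
    t, T  : current diffusion step and total number of steps
    s     : constant controlling the maximum variance of the bridge
    """
    rng = rng or np.random.default_rng()
    m_t = t / T                          # Eq. (10): m_t = t / T
    delta_t = 2.0 * s * (m_t - m_t**2)   # Eq. (10): delta_t = 2s(m_t - m_t^2)
    eps = rng.standard_normal(x0.shape)
    # Reparameterized sample from N((1 - m_t) x_0 + m_t y, delta_t I)
    return (1.0 - m_t) * x0 + m_t * y + np.sqrt(delta_t) * eps

# Example with two toy 3x64x64 latents
x0 = np.zeros((3, 64, 64)); y = np.ones((3, 64, 64))
xt = bbdm_forward_sample(x0, y, t=500, T=1000)
```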
It suffices to train a deep neural network $\epsilon_\theta$ to estimate the term $c_{\epsilon t}\big(m_t(y - x_0) + \sqrt{\delta_t}\,\epsilon\big)$, and therefore the training objective is $\mathbb{E}_{x_0, y, \epsilon}\big[c_{\epsilon t}\,\|m_t(y - x_0) + \sqrt{\delta_t}\,\epsilon - \epsilon_\theta(x_t, t)\|_2^2\big]$. Figure 2: The illustration of our two-stage method. The encoder is shared for all frames. (a) The autoencoder stage. In this stage, the previous frame $I_0$, intermediate frame $I_n$, and next frame $I_1$ are encoded by the encoder to y, x, z respectively. Then x is fed to the decoder, together with the encoder features of $I_0$, $I_1$ at different down-sampling factors. The decoder predicts the intermediate frame as $\hat{I}_n$. The encoder and decoder are trained in this stage. (b) The ground truth estimation stage. In this stage, y, x, z are fed to the consecutive Brownian Bridge diffusion as three endpoints, where we sample two states that move time step s from x in both directions. The UNet predicts the difference between the current state and x. The autoencoder is well-trained and frozen in this stage. (c) Inference. $\hat{x}$ is sampled from y, z to estimate x (details in Section 3.4). The decoder receives $\hat{x}$ and encoder features of $I_0$, $I_1$ at different down-sampling factors to interpolate the intermediate frame. 3.2 Formulation of Diffusion-based VFI The goal of video frame interpolation is to estimate the intermediate frame $I_n$ given the previous frame $I_0$ and the next frame $I_1$; n is set to 0.5 to interpolate the frame in the middle of $I_0$ and $I_1$. In latent diffusion models [43], an autoencoder encodes images to latent representations and decodes images from latent representations. The diffusion model is given standard Gaussian noise, denoises it according to the sampling process, and decodes the denoised latent representation back to an image. 
Since the initial noise is random, the decoded images are diverse images when they are sampled repetitively with the same conditions such as poses. Instead of diversity, VFI looks for a deterministic ground truth, which is the intermediate frame. Such a ground truth frame is encoded to a ground truth latent representation by the encoder, and only the ground truth latent representation needs to be estimated since the decoder will decode it back to the frame. Therefore, LDMbased VFI can be split into two stages: autoencoder and ground truth estimation. The two stages are defined as: (1) Autoencoder. The primary function of the autoencoder is similar to image compression: compressing images to latent representations so that the diffusion model can be efficiently implemented. We denote x, y, z as encoded latent representations of \ud835\udc3c\ud835\udc5b, \ud835\udc3c0, \ud835\udc3c1. In this stage, the goal is to compress \ud835\udc3c\ud835\udc5bto x with an encoder and then reconstruct \ud835\udc3c\ud835\udc5bfrom x with a decoder. x is provided to the decoder together with neighboring frames \ud835\udc3c0, \ud835\udc3c1 and their features in the encoder at different down-sampling factors. The overview of this stage is shown in Figure 2 (a). However, to interpolate the intermediate frame, x is unknown, so we need to estimate this ground truth. (2) Ground truth estimation. In this stage, the goal is to accurately estimate x with a diffusion model. The diffusion model converts x to y, z with the diffusion process, and we train a UNet to predict the difference between the current diffusion state and x, shown in Figure 2 (b). The sampling process of the diffusion model will convert y, z to x with the UNet output. The autoencoder is modeled with VQModel [43] in Section 3.3, and the ground truth estimation is accomplished by our proposed (latent) consecutive Brownian Bridge diffusion in Section 3.4. During inference, both stages are combined as shown in Figure 2 (c), where we decode diffusion-generated latent representation \u02c6 x. Via such formulation, we can have a more specific direction to improve VFI quality. If images decoded from x (Figure 2 (a)) have similar visual quality to images decoded from \u02c6 x (Figure 2 (c)), then the diffusion model achieves a strong performance in ground truth estimation, so it will be good to develop a good autoencoder. On the other way round, the performance of ground truth estimation can be potentially improved by redesigning the diffusion model. 3.3 Autoencoder Diffusion models running in pixel space are extremely inefficient in video interpolation because videos can be up to 4K in real life [41]. Therefore, we can encode images into a latent space with encoder E and decode images from the latent space with decoder D. Features of \ud835\udc3c0, \ud835\udc3c1 are included because detailed information may be lost when images are encoded to latent representations [11]. We incorporate feature pyramids of neighboring frames into the decoder stage as guidance because neighboring frames contain a large number of shared details. 
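The two-stage split described in Section 3.2 amounts, at inference time, to a simple pipeline: encode the neighboring frames, estimate the ground-truth latent with diffusion, then decode it with the neighbor features. The sketch below is a hypothetical interface, with trivial stand-ins so that it runs; the names are illustrative and not the authors' API.

```python
import numpy as np
from types import SimpleNamespace

def interpolate_middle_frame(I0, I1, encoder, decoder, diffusion):
    """Two-stage LDM-based VFI sketch: autoencoder + ground-truth estimation."""
    y, feats0 = encoder(I0)                 # stage 1: compress neighbors to latents
    z, feats1 = encoder(I1)
    x_hat = diffusion.sample(y, z)          # stage 2: estimate the middle latent
    return decoder(x_hat, feats0, feats1)   # decode with (warped) neighbor features

# Trivial stand-ins, only to make the sketch executable; they are not the method.
encoder = lambda I: (I, [I])
decoder = lambda x, f0, f1: x
diffusion = SimpleNamespace(sample=lambda y, z: 0.5 * (y + z))
frame = interpolate_middle_frame(np.zeros((3, 64, 64)), np.ones((3, 64, 64)),
                                 encoder, decoder, diffusion)
```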
Given \ud835\udc3c\ud835\udc5b, \ud835\udc3c0, \ud835\udc3c1, the encoder E will output encoded Frame Interpolation with Consecutive Brownian Bridge Diffusion Conference\u201917, July 2017, Washington, DC, USA Conv3x3 RAR Block VQ Layer Conv3x3 Conv3x3 H Head Flow estimation Back Warp \ud835\udeff Head Self Attention Down Sample Block x5 Up Sample Block x5 Back Warp Encoder Decoder ResNet Block Conv3x3 Sride = 2 Down Sample Block Conv3x3+ ReLU x3 Conv3x3 \ud835\udeff: 2*Sigmoid 1 \ud835\udc3b: Sigmoid H head and \ud835\udeff Head ResNet Block Self attention ResNet Block ResNet-Attention-ResNet (RAR) block Up Sample Block ResNet Block Self/Cross attention Transpose Conv \ud835\udc3c0,\ud835\udc3c1 \ud835\udc3c\ud835\udc5b \u1218 \ud835\udc3c\ud835\udc5b details of different blocks Figure 3: Architecture of the autoencoder. The encoder is in green dashed boxes, and the decoder contains all remaining parts. The output of consecutive Brownian Bridge diffusion will be fed to the VQ layer. The features of \ud835\udc3c0, \ud835\udc3c1 at different down-sampling rate will be sent to the cross-attention module at Up Sample Block in the Decoder. latent representation x, y, z for diffusion models and feature pyramids of \ud835\udc3c0, \ud835\udc3c1 in different down-sampling rates, denoted {\ud835\udc53\ud835\udc58 \ud835\udc66}, {\ud835\udc53\ud835\udc58 \ud835\udc67}, where \ud835\udc58is down-sampling factor. When \ud835\udc58= 1, {\ud835\udc53\ud835\udc58 \ud835\udc66} and {\ud835\udc53\ud835\udc58 \ud835\udc67} represent original images. The decoder D will take sampled latent representation \u02c6 x (output of diffusion model that estimates x) and feature pyramids {\ud835\udc53\ud835\udc58 \ud835\udc66}, {\ud835\udc53\ud835\udc58 \ud835\udc67} to reconstruct \ud835\udc3c\ud835\udc5b. In lines of equations, these can be expressed as: x, y, {\ud835\udc53\ud835\udc58 \ud835\udc66}, z, {\ud835\udc53\ud835\udc58 \ud835\udc67} = E(\ud835\udc3c\ud835\udc5b, \ud835\udc3c0, \ud835\udc3c1), \u02c6 \ud835\udc3c\ud835\udc5b= D \u0010 x, {\ud835\udc53\ud835\udc58 \ud835\udc66}, {\ud835\udc53\ud835\udc58 \ud835\udc67} \u0011 . (12) Our encoder shares an identical structure with that in LDMVFI [11], and we slightly modify the decoder to better fit the VFI task. Decoding with Warped Features. LDMVFI [11] apply crossattention [54] to up-sampled \u02c6 x and \ud835\udc53\ud835\udc58 \ud835\udc65, \ud835\udc53\ud835\udc58 \ud835\udc66, but keeping feature of neighboring frames may preserve their original information (i.e. motion in previous and next frames). This is problematic since motion changes may be drastic in different frames. Therefore, we estimate optical flows from \ud835\udc3c\ud835\udc5bto \ud835\udc3c0, \ud835\udc3c1 with a flow estimation module and apply backward warping to the feature pyramids. Suppose \u02c6 \ud835\udc65is generated by our consecutive Brownian Bridge diffusion, and it is up-sampled to \u210e\ud835\udc58where \ud835\udc58denotes the downsampling factor compared to the original image. 
Then, we apply \ud835\udc36\ud835\udc34 \u0010 \u210e\ud835\udc58,\ud835\udc36\ud835\udc4e\ud835\udc61(\ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc5d(\ud835\udc53\ud835\udc58 \ud835\udc66),\ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc5d(\ud835\udc53\ud835\udc58 \ud835\udc67)) \u0011 for \ud835\udc58> 1 to fuse the latent representation \u210e\ud835\udc58and feature pyramids \ud835\udc53\ud835\udc58 \ud835\udc66and \ud835\udc53\ud835\udc58 \ud835\udc67, where \ud835\udc36\ud835\udc34(\u00b7, \u00b7), \ud835\udc36\ud835\udc4e\ud835\udc61(\u00b7, \u00b7), and \ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc5d(\u00b7) denotes cross attention, channelwise concatenation, and backward warping with estimated optical flows respectively. Finally, we apply convolution layers to \u210e1 to predict soft mask \ud835\udc3band residual \ud835\udeff. The interpolation output is \u02c6 \ud835\udc3c\ud835\udc5b= \ud835\udc3b\u2217\ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc5d(\ud835\udc3c0) + (1 \u2212\ud835\udc3b) \u2217\ud835\udc64\ud835\udc4e\ud835\udc5f\ud835\udc5d(\ud835\udc3c1) + \ud835\udeff, where \u2217holds for Hadamard product, and \u02c6 \ud835\udc3c\ud835\udc5bis the reconstructed image. The detailed illustration of the architecture is shown in Figure 3. The VQ layer is connected with the encoder during training, but it is disconnected from the encoder and receives the sampled latent representation from the diffusion model. 3.4 Consecutive Brownian Bridge Diffusion Brownian Bridge diffusion model (BBDM) [28] is designed for translation between image pairs, connecting two deterministic points, which seems to be a good solution to estimate the ground truth intermediate frame. However, it does not fit the VFI task. In VFI, images are provided as triplets because we aim to reconstruct intermediate frames giving neighboring frames, resulting in three points that need to be connected. If we construct a Brownian Bridge between the intermediate frame and the next frame, then the previous frame is ignored, and so is the other way round. This is problematic because we do not know what \"intermediate\" is if we lose one of its neighbors. Therefore, we need a process that transits among three images. Given two neighboring images \ud835\udc3c0, \ud835\udc3c1, we aim to construct a Brownian Bridge process with endpoints \ud835\udc3c0, \ud835\udc3c1 and additionally condition its middle stage on the intermediate frame \ud835\udc3c\ud835\udc5b(\ud835\udc5b= 0.5 for 2\u00d7 interpolation). To achieve this, the process starts at \ud835\udc61= 0 with value y, passes \ud835\udc61= \ud835\udc47with value x, and ends at \ud835\udc61= 2\ud835\udc47with value z. To be consistent with the notation in diffusion models, x, y, z are used to represent latent representations of \ud835\udc3c\ud835\udc5b, \ud835\udc3c0, \ud835\udc3c1 respectively. It is therefore defined as \ud835\udc4b\ud835\udc61= \ud835\udc4a\ud835\udc61|\ud835\udc4a0 = y,\ud835\udc4a\ud835\udc47= x,\ud835\udc4a2\ud835\udc47= z. The sampling process starts from time 0 and 2\ud835\udc47and goes to time \ud835\udc47. Such a process indeed consists of two Brownian Bridges, where the first one ends at x and the second one starts at x. 
We can easily verify that for 0 < \ud835\udc61< \u210e: \ud835\udc4a\ud835\udc60|(\ud835\udc4a0,\ud835\udc4a\ud835\udc61,\ud835\udc4a\u210e) = ( \ud835\udc4a\ud835\udc60|(\ud835\udc4a0,\ud835\udc4a\ud835\udc61) if \ud835\udc60< \ud835\udc61 \ud835\udc4a\ud835\udc60|(\ud835\udc4a\ud835\udc61,\ud835\udc4a\u210e) if \ud835\udc60> \ud835\udc61. (13) According to Eq. (13), we can derive the distribution of our consecutive Brownian Bridge diffusion (details shown in Appendix A.1): \ud835\udc5e(x\ud835\udc61|y, x, z) = ( N ( \ud835\udc60 \ud835\udc47x + (1 \u2212\ud835\udc60 \ud835\udc47)y, \ud835\udc60(\ud835\udc47\u2212\ud835\udc60) \ud835\udc47 I) \ud835\udc60= \ud835\udc47\u2212\ud835\udc61, \ud835\udc61< \ud835\udc47 N ( \ud835\udc60 \ud835\udc47x + (1 \u2212\ud835\udc60 \ud835\udc47)z, \ud835\udc60(\ud835\udc47\u2212\ud835\udc60) \ud835\udc47 I) \ud835\udc60= \ud835\udc61\u2212\ud835\udc47, \ud835\udc61> \ud835\udc47 . (14) Conference\u201917, July 2017, Washington, DC, USA Zonglin Lyu1, Ming Li2, Jianbo Jiao3, and Chen Chen2 Algorithm 1 Training 1: repeat 2: sample triplet x, y, z from dataset 3: \ud835\udc60\u2190\ud835\udc48\ud835\udc5b\ud835\udc56\ud835\udc53\ud835\udc5c\ud835\udc5f\ud835\udc5a(0,\ud835\udc47) 4: \ud835\udc64\ud835\udc60\u2190\ud835\udc5a\ud835\udc56\ud835\udc5b{ 1 \ud835\udeff\ud835\udc61,\ud835\udefe} \u22b2\ud835\udefeis a pre-defined constant 5: \ud835\udf16\u2190N (0, I) 6: xs1 \u2190\ud835\udc60 \ud835\udc47x + (1 \u2212\ud835\udc60 \ud835\udc47)y + \u221a\ufe03 \ud835\udc60(\ud835\udc47\u2212\ud835\udc60) \ud835\udc47 \ud835\udf16 7: xs2 \u2190\ud835\udc60 \ud835\udc47x + (1 \u2212\ud835\udc60 \ud835\udc47)z + \u221a\ufe03 \ud835\udc60(\ud835\udc47\u2212\ud835\udc60) \ud835\udc47 \ud835\udf16 8: r \u2190\ud835\udc48\ud835\udc5b\ud835\udc56\ud835\udc53\ud835\udc5c\ud835\udc5f\ud835\udc5a(0, 1) 9: if r < 0.5 then take a gradient step on 10: \u2207\ud835\udf03||\ud835\udf16\ud835\udf03(x\ud835\udc601,\ud835\udc47\u2212\ud835\udc60, y, z) \u2212(x\ud835\udc601 \u2212x)||2 2 11: else take a gradient step on 12: \u2207\ud835\udf03||\ud835\udf16\ud835\udf03(x\ud835\udc602,\ud835\udc47+ \ud835\udc60, y, z) \u2212(x\ud835\udc602 \u2212x)||2 2 13: end if 14: until convergence Algorithm 2 Sampling 1: \ud835\udc611,\ud835\udc612 \u2190\ud835\udc47, \u0394\ud835\udc61\u2190 \ud835\udc47 sampling steps, x\ud835\udc471 = y, x\ud835\udc472 = z 2: repeat 3: \ud835\udc601,\ud835\udc602 \u2190\ud835\udc611 \u2212\u0394\ud835\udc61,\ud835\udc612 \u2212\u0394\ud835\udc61 4: \ud835\udf16\u2190N (0, I) 5: xs1 \u2190\ud835\udc65\ud835\udc611 \u2212\u0394\ud835\udc61 \ud835\udc611 \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc611,\ud835\udc47\u2212\ud835\udc611, y, z) + \u221a\ufe03 \ud835\udc601\u0394\ud835\udc61 \ud835\udc611 \ud835\udf16 6: xs2 \u2190\ud835\udc65\ud835\udc612 \u2212\u0394\ud835\udc61 \ud835\udc612 \ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc612,\ud835\udc47\u2212\ud835\udc612, y, z) + \u221a\ufe03 \ud835\udc602\u0394\ud835\udc61 \ud835\udc612 \ud835\udf16 7: \ud835\udc611,\ud835\udc612 \u2190\ud835\udc601,\ud835\udc602 8: until \ud835\udc611,\ud835\udc612 = 0 Cleaner Formulation. Eq. (11) is in a discrete setup (i.e. time = 0, 1, ...,\ud835\udc47), and the sampling process is derived via Bayes\u2019 theorem, resulting in a complicated formulation. To preserve the maximum variance, it suffices to have \ud835\udc47= 2\ud835\udc60in Eq. (8) and discretize T for training and sampling. Our forward diffusion is defined as Eq. (14). 
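As a companion to Eq. (14) and lines 3-7 of Algorithm 1, the two symmetric bridge states used for training can be drawn as below, assuming x, y, z are the encoded latents of the intermediate and the two neighboring frames. This is an illustrative NumPy sketch rather than the released implementation.

```python
import numpy as np

def sample_bridge_states(x, y, z, T=2.0, rng=None):
    """Draw the two training states x_s1, x_s2 of the consecutive Brownian
    Bridge (Eq. (14) / Algorithm 1, lines 3-7), sharing a single noise draw."""
    rng = rng or np.random.default_rng()
    s = rng.uniform(0.0, T)                  # diffusion time s ~ Uniform(0, T)
    eps = rng.standard_normal(x.shape)       # one epsilon reused for both states
    std = np.sqrt(s * (T - s) / T)           # bridge standard deviation at s
    x_s1 = (s / T) * x + (1.0 - s / T) * y + std * eps   # state on the y-side
    x_s2 = (s / T) * x + (1.0 - s / T) * z + std * eps   # state on the z-side
    return s, x_s1, x_s2

x = np.zeros((8, 16, 16)); y = np.ones_like(x); z = -np.ones_like(x)
s, x_s1, x_s2 = sample_bridge_states(x, y, z)
```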
To sample from time \ud835\udc60from \ud835\udc61(\ud835\udc60< \ud835\udc61), we rewrite Eq. (11) according to Eq. (13): \ud835\udc5d\ud835\udf03(x\ud835\udc60|x\ud835\udc61, y) = \ud835\udc5e(x\ud835\udc60|x, x\ud835\udc61, y) = \ud835\udc5e(x\ud835\udc60|x, x\ud835\udc61) = N \u0012 x\ud835\udc60; \ud835\udc60 \ud835\udc61x\ud835\udc61+ (1 \u2212\ud835\udc60 \ud835\udc61)x, \ud835\udc60(\ud835\udc61\u2212\ud835\udc60) \ud835\udc61 I \u0013 = N \u0012 x\ud835\udc60; x\ud835\udc61\u2212\ud835\udc61\u2212\ud835\udc60 \ud835\udc61 (x\ud835\udc61\u2212x), \ud835\udc60(\ud835\udc61\u2212\ud835\udc60) \ud835\udc61 I \u0013 . (15) Note that Eq. (11) is slightly different from ours in that it uses x0 to represent x, but we directly use x. Since we have a closed-form solution of \ud835\udc5d\ud835\udf03(x\ud835\udc60|x\ud835\udc61, y) for 0 < \ud835\udc60< \ud835\udc61< \ud835\udc47, our method does not need DDIM [48] sampling for acceleration. Training and Sampling. According to Eq. (15), it suffices to have a neural network \ud835\udf16\ud835\udf03estimating x\ud835\udc61\u2212x0. Moreover, based on Eq. (14), we can sample \ud835\udc60from \ud835\udc48\ud835\udc5b\ud835\udc56\ud835\udc53\ud835\udc5c\ud835\udc5f\ud835\udc5a(0,\ud835\udc47) and compute \ud835\udc61= \ud835\udc47\u00b1 \ud835\udc60for \ud835\udc61> \ud835\udc47and \ud835\udc47< \ud835\udc61. With one sample of \ud835\udc60, we can obtain two samples at each side of our consecutive Brownian bridge diffusion symmetric at T. y, z are added to the denoising UNet as extra conditions. Therefore, the training objective becomes: E{y,x,z},\ud835\udf16[||\ud835\udf16\ud835\udf03(x\ud835\udc601,\ud835\udc47\u2212\ud835\udc60, y, z) \u2212(x\ud835\udc601 \u2212x)||2 2] + E{y,x,z},\ud835\udf16[||\ud835\udf16\ud835\udf03(x\ud835\udc602,\ud835\udc47+ \ud835\udc60, y, z) \u2212(x\ud835\udc602 \u2212x)||2 2]. (16) where xs1 = \ud835\udc60 \ud835\udc47x + (1 \u2212\ud835\udc60 \ud835\udc47)y + \u221a\ufe02 \ud835\udc60(\ud835\udc47\u2212\ud835\udc60) \ud835\udc47 \ud835\udf16, xs2 = \ud835\udc60 \ud835\udc47x + (1 \u2212\ud835\udc60 \ud835\udc47)z + \u221a\ufe02 \ud835\udc60(\ud835\udc47\u2212\ud835\udc60) \ud835\udc47 \ud835\udf16, \ud835\udf16\u223cN (0, I). (17) Optimizing Eq. (16) requires two forward calls of the denoising UNet, so to be more efficient in computation, we randomly select one of them to optimize during training. Moreover, [15] proposes \ud835\udc5a\ud835\udc56\ud835\udc5b\u2212\ud835\udc46\ud835\udc41\ud835\udc45\u2212\ud835\udefeweighting for different time steps during training based on the signal-to-noise ratio, defined as \ud835\udc5a\ud835\udc56\ud835\udc5b{\ud835\udc46\ud835\udc41\ud835\udc45(\ud835\udc61),\ud835\udefe}. In DDPM [17], we have \ud835\udc46\ud835\udc41\ud835\udc45(\ud835\udc61) = \ud835\udefc\ud835\udc61 1\u2212\ud835\udefc\ud835\udc61because the mean and standard deviation are scaled by \u221a\ud835\udefc\ud835\udc61and \u221a1 \u2212\ud835\udefc\ud835\udc61respectively in the diffusion process. However, in our formulation, consecutive frames \ud835\udc3c0, \ud835\udc3c1 share almost identical mean, and so as their encoded latent representations. Therefore, the mean is never scaled down. The SNR is defined as 1 \ud835\udeff\ud835\udc61, where \ud835\udeff\ud835\udc61is the standard deviation of the diffusion process at time \ud835\udc61. 
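Eq. (15) gives a closed-form jump from any time t to any earlier time s, so a single sampling update only needs the network's prediction of (x_t - x). A minimal NumPy sketch of one such update is given below; the conditioning arguments and time convention of the denoising UNet are abstracted into a pre-computed prediction, and all names are illustrative.

```python
import numpy as np

def bridge_reverse_step(x_t, pred_diff, t, s, rng=None):
    """One reverse step of Eq. (15): sample x_s ~ p(x_s | x_t) for 0 <= s < t,
    where `pred_diff` stands in for the network estimate of (x_t - x)."""
    rng = rng or np.random.default_rng()
    mean = x_t - ((t - s) / t) * pred_diff           # x_t - (t-s)/t * (x_t - x)
    std = np.sqrt(s * (t - s) / t)                   # variance s(t-s)/t
    return mean + std * rng.standard_normal(x_t.shape)
```

Setting s = t - Δt and iterating until s = 0 (where the standard deviation vanishes and the mean collapses onto the estimated x) recovers the update used in the sampling loop of Algorithm 2.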
With the \ud835\udc5a\ud835\udc56\ud835\udc5b\u2212\ud835\udc46\ud835\udc41\ud835\udc45\u2212\ud835\udefeweighting, the weighting of loss is defined as \ud835\udc64\ud835\udc61= \ud835\udc5a\ud835\udc56\ud835\udc5b{ 1 \ud835\udeff\ud835\udc61,\ud835\udefe}. The training algorithm is shown in Algorithm 1. To sample from neighboring frames, we can sample from either of the two endpoints y, z with Eq. (14) and (15), shown in Algorithm 2. After sampling, we replace x in Eq (12) with the sampled latent representations to decode the interpolated frame. Cumulative Variance. As we claimed, diffusion model [17] with conditional generation has a large cumulative variance while ours is much smaller. The cumulative variance for traditional conditional generation is larger than 1 + \u00cd \ud835\udc61\u02c6 \ud835\udefd\ud835\udc61, which corresponds to 11.036 in experiments. However, in our method, such a cumulative variance is smaller than \ud835\udc47= 2 in our experiments, resulting in a more deterministic estimation of the ground truth latent representations. The detailed justification is in Appendix A.1. 4 EXPERIMENTS 4.1 Implementations Autoencoder. The down-sampling factor is set to be \ud835\udc53= 16 for our autoencoder, which follows the setup of LDMVFI [11]. The flow estimation and refinement modules are initialized from pretrained VFIformer [32] and frozen for better efficiency. The codebook size and embedding dimension of the VQ Layer are set to 16384 and 3 respectively. The number of channels in the compact latent space (encoder output) is set to 8. A self-attention [54] is applied at 32\u00d7 down-sampling latent representation (both encoder and decoder), and cross attentions [54] with warped features are applied on 2\u00d7, 4\u00d7, 8\u00d7, 16\u00d7, and 32\u00d7 down-sampling factors in the decoder. Following LDMVFI, max-attention [53] is applied in all attention layers for better efficiency. The model is trained with Adam optimizer [25] with a learning rate of 10\u22125 for 100 epochs with a batch size of 16. The autoencoder is still slowly converging after 100 epochs, but we stopped training to evaluate it. Consecutive Brownian Bridge Diffusion. We set \ud835\udc47= 2 (corresponding to maximum variance 1 2) and discretize 1000 steps for Frame Interpolation with Consecutive Brownian Bridge Diffusion Conference\u201917, July 2017, Washington, DC, USA Table 1: Quantitative results (LPIPS/FloLPIPS/FID, the lower the better) on test datasets. \u2020 means we evaluate our consecutive Brownian Bridge diffusion (trained on Vimeo 90K triplets [59]) with autoencoder provided by LDMVFI [11]. The best performances are boldfaced, and the second best performances are underlined. 
Methods Middlebury UCF-101 DAVIS SNU-FILM easy medium hard extreme LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID ABME\u201921 [40] 0.027/0.040/11.393 0.058/0.069/37.066 0.151/0.209/16.931 0.022/0.034/6.363 0.042/0.076/15.159 0.092/0.168/34.236 0.182/0.300/63.561 MCVD\u201922 [55] 0.123/0.138/41.053 0.155/0.169/102.054 0.247/0.293/28.002 0.199/0.230/32.246 0.213/0.243/37.474 0.250/0.292/51.529 0.320/0.385/83.156 VFIformer\u201922 [32] 0.015/0.024/9.439 0.033/0.040/22.513 0.127/0.184/14.407 0.018/0.029/5.918 0.033/0.053/11.271 0.061/0.100/22.775 0.119/0.185/40.586 IFRNet\u201922 [26] 0.015/0.030/10.029 0.029/0.034/20.589 0.106/0.156/12.422 0.021/0.031/6.863 0.034/0.050/12.197 0.059/0.093/23.254 0.116/0.182/42.824 AMT\u201923 [29] 0.015/0.023/7.895 0.032/0.039/21.915 0.109/0.145/13.018 0.022/0.034/6.139 0.035/0.055/11.039 0.060/0.092/20.810 0.112/0.177/40.075 UPR-Net\u201923 [24] 0.015/0.024/7.935 0.032/0.039/21.970 0.134/0.172/15.002 0.018/0.029/5.669 0.034/0.052/10.983 0.062/0.097/22.127 0.112/0.176/40.098 EMA-VFI\u201923 [60] 0.015/0.025/8.358 0.032/0.038/21.395 0.132/0.166/15.186 0.019/0.038/5.882 0.033/0.053/11.051 0.060/0.091/20.679 0.114/0.170/39.051 LDMVFI\u201924 [11] 0.019/0.044/16.167 0.026/0.035/26.301 0.107 0.153/12.554 0.014/0.024/5.752 0.028/0.053/12.485 0.060/0.114/26.520 0.123/0.204/47.042 Ours\u2020 0.012/0.011/14.447 0.030/0.029/15.335 0.097/0.145/12.623 0.011/0.011/5.737 0.028/0.028/12.569 0.051/0.053/25.567 0.099/0.103/46.088 Ours 0.005/0.007/7.470 0.019/0.024/14.000 0.050/0.085/9.220 0.011/0.009/4.791 0.027/0.023/9.039 0.043/0.038/18.589 0.087/0.079/36.631 training and 50 steps for sampling. The denoising UNet takes the concatenation of x\ud835\udc61, y, z as input and is trained with Adam optimizer [25] with 10\u22124 learning rate for 30 epochs with a batch size of 64. \ud835\udefeis set to be 5 in the \ud835\udc5a\ud835\udc56\ud835\udc5b\u2212\ud835\udc46\ud835\udc41\ud835\udc45\u2212\ud835\udefeweighting. 4.2 Datasets and Evaluation Metrics Training Sets. To ensure a fair comparison with most recent works [1, 7, 12, 18, 20, 24, 32, 34, 42, 47], we train our models in Vimeo 90K triplets dataset [59], which contains 51,312 triplets. We apply random flipping, random cropping to 256 \u00d7 256, temporal order reversing, and random rotation with multiples of 90 degrees as data augmentation. Test Sets. We select UCF-101 [50], DAVIS [41], SNU-FILM [8], and Middlebury [2] to evaluate our method. UCF-101 and Middlebury consist of relatively low-resolution videos (less than 1K), whereas DAVIS and SNU-FILM consist of relatively high-resolution videos (up to 4K). SNU-FILM consists of four categories with increasing levels of difficulties (i.e. larger motion changes): easy, medium, hard, and extreme. Evaluation Metrics. Recent works [10, 11, 62] reveal that PSNR and SSIM [56] are sometimes unreliable because they have relatively low correlation with humans\u2019 visual judgments. However, deep-learning-based metrics such as FID [16], LPIPS [62], and FloLPIPS [10] are shown to have a higher correlation with humans\u2019 visual judgments in [11, 62]. Moreover, in our experiments, we also find such inconsistencies between PSNR/SSIM and visual quality, which will be discussed in Section 4.3. Therefore, we select FID, LPIPS, and FloLPIPS as our main evaluation metrics. LPIPS and FID measure distances in the space of deep learning features. 
FloLPIPS is based on LPIPS but takes the motion in the frames into consideration. Our methods evaluated with PSNR and SSIM will be included in Appendix C.1. 4.3 Experimental Results Quantitative Results. Our method is compared with recent opensource state-of-the-art VFI methods, including ABME [40], MCVD [55], VFIformer [32], IFRNet [26], AMT [29], UPR-Net [24], EMA-VFI [60], and LDMVFI [11]. The evaluation is reported in LPIPS/FloLPIPS/FID (lower the better), shown in Table 1. We evaluate VFIformer, IFRNet, AMT, UPR-Net, and EMA-VFI with their trained weights, and other results are provided in the appendix of LDMVFI [11]. Models with different versions in the number of parameters are all chosen to be the largest ones. With the same autoencoder as LDMVFI [11], our method (denoted as ours\u2020) generally achieves better performance than LDMVFI, indicating the effectiveness of our consecutive Brownian Bridge diffusion. Moreover, with an improved autoencoder, our method (denoted as ours) generally achieves state-of-the-art performance. It is important to note that we achieve much better FloLPIPS than other SOTAs, indicating our interpolated results achieve stronger motion consistency. In a few datasets, our method does not achieve the best performance in FID or LPIPS because our autoencoder is still converging. Ours LDMVFI GT Figure 4: The reconstruction quality of our autoencoder and LDMVFI\u2019s autoencoder (decoding with ground truth latent representation x). Images are cropped within green boxes for detailed comparisons. Red circles highlight the details that we have better reconstruction quality. LDMVFI usually outputs overlaid images while ours does not. Qualitative Results. In Table 1, our consecutive Brownian Bridge diffusion with the autoencoder in LDMVFI [11] (denoted as our\u2020) generally achieves better quantitative results than LDMVFI, showing our method is effective. We include qualitative visualization in Figure 5 to support this result. Moreover, as mentioned in Section 1, we find that the autoencoder in [11] usually reconstructs overlaid Conference\u201917, July 2017, Washington, DC, USA Zonglin Lyu1, Ming Li2, Jianbo Jiao3, and Chen Chen2 Table 2: Ablation studies of autoencoder and ground truth estimation. + GT means we input ground truth x to the decoder part of autoencoder. + BB indicates our consecutive Brownian Bridge diffusion trained with autoencoder of LDMVFI. With our consecutive Brownian Bridge diffusion, the interpolated frame has almost the same performance as the interpolated frame with ground truth latent representation, indicating the strong ground truth estimation capability. Our autoencoder also has better performance than LDMVFI [11]. 
Methods Middlebury UCF-101 DAVIS SNU-FILM easy medium hard extreme LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LPIPS/FloLPIPS/FID LDMVFI\u201924 [11] 0.019/0.044/16.167 0.026/0.035/26.301 0.107 0.153/12.554 0.014/0.024/5.752 0.028/0.053/12.485 0.060/0.114/26.520 0.123 0.204/47.042 LDMVFI\u201924 [11] + BB 0.012/0.011/14.447 0.030/0.029/15.335 0.097/0.145/12.623 0.011/0.011/5.737 0.028/0.028/12.569 0.051/0.053/25.567 0.099/0.103/46.088 LDMVFI\u201924 [11] + GT 0.012/0.011/14.492 0.030/0.029/15.338 0.097/0.145/12.670 0.011/0.011/5.738 0.028/0.028/12.574 0.051/0.053/25.655 0.099/0.103/46.080 Ours 0.005/0.007/7.470 0.019/0.024/14.000 0.050/0.085/9.220 0.011/0.009/4.791 0.027/0.023/9.039 0.043/0.038/18.589 0.087/0.079/36.631 Ours + GT 0.005/0.007/7.468 0.019/0.024/14.000 0.050/0.085/9.220 0.011/0.009/4.791 0.027/0.023/9.039 0.043/0.038/18.591 0.087/0.079/36.633 Ours LDMVFI GT Figure 5: The visual comparison of interpolated results of LDMVFI [11] vs our method with the same autoencoder in LDMVFI (LDMVFI vs our\u2020 in Table 1). With the same autoencoder, our method can still achieve better visual quality than LDMVFI, demonstrating the superiority of our proposed consecutive Brownian Bridge diffusion. images, and therefore we propose a new method of reconstruction. We provide examples to visualize the reconstruction results with our autoencoder and LDMVFI\u2019s autoencoder for comparison, shown in Figure 4. All examples are from SNU-FILM extreme [8], which contains relatively large motion changes in neighboring frames. We have provided some visual comparisons of our method and recent SOTAs in Figure 1. Our method achieves better visual quality because we have clearer details such as dog skins, cloth with folds, and fences with nets. However, UPR-Net [24] achieves better PSNR/SSIM in all the cropped regions (5 \u221210% better) than ours, which is highly inconsistent with the visual quality. More qualitative results are provided in Appendix C.2. 4.4 Ablation Studies As we discussed in Section 3.2, latent-diffusion-based VFI can be broken down into two stages, so we conduct an ablation study on the ground truth estimation capability of our consecutive Brownian Bridge diffusion. We compare the LPIPS/FloLPIPS/FID of decoded images with diffusion-generated latent representation \u02c6 x and ground truth x, which is encoded \ud835\udc3c\ud835\udc5b. The results are shown in Table 2. It is important to note that, fixing inputs as the ground truth, our autoencoder achieves a stronger performance than the autoencoder in LDMVFI [11], indicating the effectiveness of our autoencoder. Also, fixing the autoencoder, our consecutive Brownian Bridge diffusion achieves almost identical performance with the ground truth, indicating its strong capability of ground truth estimation. However, the conditional generation model in LDMVFI [11] usually underperforms the autoencoder with ground truth inputs. Therefore, our method has a stronger ability in both the autoencoder and ground truth estimation stages. More ablation study is provided in Appendix C.3. 5 CONCLUSION In this study, we formulate the latent-diffusion-based VFI as a twostage problem: autoencoder ground truth estimation. With this formulation, it is easy to figure out which part needs enhancements, guiding future research. We propose our consecutive Brownian Bridge diffusion that better estimates the ground truth latent representation due to its low cumulative variance. 
This method improves as the autoencoder improves and achieves state-of-the-art performance with a simple yet effective autoencoder design, demonstrating its strong potential in VFI, since a carefully designed autoencoder could potentially boost the performance by a large margin. Therefore, we believe our work provides a unique research direction for diffusion-based frame interpolation. Limitations and Future Research. Our method uses a bisection-like procedure for multi-frame interpolation: we can interpolate t = 0.5 between t = 0, 1 and then interpolate t = 0.25, 0.75. However, our method cannot directly interpolate t = 0.1 from t = 0, 1. Future research can be conducted to resolve the limitations mentioned above or to improve autoencoders or diffusion models for better interpolation quality."
16
+ }
intro_8K/test_introduction_short_2405.05959v1.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "url": "http://arxiv.org/abs/2405.05959v1",
3
+ "title": "Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask",
4
+ "abstract": "Time Series Representation Learning (TSRL) focuses on generating informative\nrepresentations for various Time Series (TS) modeling tasks. Traditional\nSelf-Supervised Learning (SSL) methods in TSRL fall into four main categories:\nreconstructive, adversarial, contrastive, and predictive, each with a common\nchallenge of sensitivity to noise and intricate data nuances. Recently,\ndiffusion-based methods have shown advanced generative capabilities. However,\nthey primarily target specific application scenarios like imputation and\nforecasting, leaving a gap in leveraging diffusion models for generic TSRL. Our\nwork, Time Series Diffusion Embedding (TSDE), bridges this gap as the first\ndiffusion-based SSL TSRL approach. TSDE segments TS data into observed and\nmasked parts using an Imputation-Interpolation-Forecasting (IIF) mask. It\napplies a trainable embedding function, featuring dual-orthogonal Transformer\nencoders with a crossover mechanism, to the observed part. We train a reverse\ndiffusion process conditioned on the embeddings, designed to predict noise\nadded to the masked part. Extensive experiments demonstrate TSDE's superiority\nin imputation, interpolation, forecasting, anomaly detection, classification,\nand clustering. We also conduct an ablation study, present embedding\nvisualizations, and compare inference speed, further substantiating TSDE's\nefficiency and validity in learning representations of TS data.",
5
+ "authors": "Zineb Senane, Lele Cao, Valentin Leonhard Buchner, Yusuke Tashiro, Lei You, Pawel Herman, Mats Nordahl, Ruibo Tu, Vilhelm von Ehrenheim",
6
+ "published": "2024-05-09",
7
+ "updated": "2024-05-09",
8
+ "primary_cat": "cs.LG",
9
+ "cats": [
10
+ "cs.LG",
11
+ "cs.AI",
12
+ "G.3; I.6.5; I.2.4"
13
+ ],
14
+ "label": "Original Paper",
15
+ "paper_cat": "Diffusion AND Model",
16
+ "gt": "Time Series (TS) data is a sequence of data points collected at regular time intervals. It is prevalent in various real-world applications, such as understanding human behavioral patterns [9], conducting in-depth financial market analyses [5], predicting meteorological \u2217Zineb Senane and Lele Cao contributed equally. For correspondence, please reach out to either of them. The source code and models for reproduction purposes are available at https://github.com/EQTPartners/tsde 1 EQT Group, Stockholm, Sweden 2 KTH Royal Institute of Technology, Stockholm, Sweden 3 T\u00e9l\u00e9com Paris, Palaiseau, France 4 Eurecom, Biot, France 5 Mitsubishi UFJ Trust Investment Technology Institute, Tokyo, Japan 6 Technical University of Denmark, Ballerup, Denmark 7 QA.Tech, Stockholm, Sweden phenomena [34], and enhancing healthcare diagnostics [46]. In this work, we focus on Multivariate TS (MTS) data, which refers to a TS with multiple variables or features recorded at each time point, where these variables may have inter-dependencies. This is in contrast to Univariate TS (UTS), which only involves a single variable. It should be noted that Multiple TS (Multi-TS) differs from MTS as it pertains to the simultaneous monitoring of several UTSs, each operating independently without any interrelation among them. While this paper primarily concentrates on MTS data, our methodology and insights are also applicable to UTS and Multi-TS, ensuring the versatility and broad applicability of our approach. To effectively extract and interpret valuable information from intricate raw MTS data, the field of Time Series Representation Learning (TSRL) has become increasingly pivotal. TSRL focuses on learning latent representations that encapsulate critical information within the time series, thereby uncovering the intrinsic dynamics of the associated systems or phenomena [52]. Furthermore, the learned representations are crucial for a variety of downstream ap- plications, such as time series imputation, interpolation, forecasting, classification, clustering and anomaly detection. TSRL can be con- ducted in a supervised manner; however, the need for extensive and accurate labeling of vast time series data presents a significant bot- tleneck, often resulting in inefficiencies and potential inaccuracies. Consequently, our focus lies in unsupervised learning techniques, which excel in extracting high-quality MTS representations without the constraints of manual labeling. Self-Supervised Learning (SSL), a subset of unsupervised learn- ing, has emerged as a highly effective methodology for TSRL. SSL utilizes innovative pretext tasks1 to generate supervision signals from unlabeled TS data, thereby facilitating the model\u2019s ability to autonomously learn valuable representations without relying on external labels. The four main designs of SSL pretext tasks \u2013 recon- structive, adversarial, contrastive, and predictive [18, 42, 52, 101] \u2013 will be elaborated in Section 2. These designs have demonstrated notable success in addressing TSRL across a diverse range of appli- cations, yet they often struggle with capturing the full complexity of MTS data, particularly in modeling intricate long-term depen- dencies and handling high-dimensional, noisy datasets. 
1A pretext task in SSL is a self-generated learning challenge designed to facilitate the extraction of informative representations for downstream tasks, encompassing various methods such as transformation prediction, masked prediction, instance discrimination, and clustering, tailored to the specific data modality involved [21, 33, 101]. arXiv:2405.05959v1 [cs.LG] 9 May 2024 Senane and Cao, et al. Due to their advanced generative capabilities, diffusion models [28, 71, 73\u201376] have emerged as a promising solution for TS model- ing, adept at handling the complexities and long-term dependencies often found in MTS data. While these methods have shown success in specific tasks like forecasting [63] and imputation [80], their adoption in SSL TSRL remains largely unexplored, leaving a gap in the related research literature. Our work, TSDE (Time Series Diffusion Embedding), pioneers in this area by integrating condi- tional diffusion processes with crossover Transformer encoders and introducing a Imputation-Interpolation-Forecasting (IIF) mask strat- egy. This unique combination allows TSDE to generate versatile representations that are applicable to a wide range of tasks, includ- ing imputation, interpolation, forecasting, classification, anomaly detection, and clustering. Our main contributions are: \u2022 We propose a novel SSL TSRL framework named TSDE, which optimizes a denoising (reverse diffusion) process, conditioned on a learnable MTS embedding function. \u2022 We develop dual-orthogonal Transformer encoders integrated with a crossover mechanism, which learns MTS embeddings by capturing temporal dynamics and feature-specific dependencies. \u2022 We design a novel SSL pretext task, the IIF masking strategy, which creates pseudo observation masks designed to simulate the typical imputation, interpolation, and forecasting tasks. \u2022 We experimentally show that TSDE achieves superior perfor- mance over existing methods across a wide range of MTS tasks, thereby validating the universality of the learned embeddings.",
17
+ "main_content": "This research addresses the problem of TSRL using a SSL approach. Inspired by the taxonomies adopted by [18, 42, 52, 101], we structure our review of SSL-based TSRL around four primary methodologies: reconstructive, adversarial, contrastive, and predictive methods. Reconstructive methods focus on minimizing the discrepancy between original and reconstructed MTS data, mostly using an encoder-decoder Neural Network (NN) architecture to emphasize salient features and filter out noise, thereby training the NN to learn meaningful representations [27]. Recent mainstream methods in this category predominantly employ CNN (Convolutional NN) [72, 100], RNN (Recurrent NN) [47, 66] or Transformer [15, 103] as their architectural backbone. In this category, deep clustering stands out by simultaneously optimizing clustering and reconstruction objectives. It has been implemented through various clustering algorithms, including \ud835\udc58-means[7, 89], Gaussian Mixture Model (GMM) [4, 32], and spectral clustering [79]. Reconstructive methods might face limitations in addressing long-term dependencies and adequately representing complex features such as seasonality, trends, and noise in extensive, high-dimensional datasets. Adversarial methods utilize Generative Adversarial Network (GAN) to learn TS representations by differentiating between real and generated data [50, 59]. These methods often integrate advanced NN architectures or autoregressive models to effectively capture temporal dependencies and generate realistic TS data. For instance, TimeGAN [94] combines GANs with autoregressive models for temporal dynamics replication, while RGAN [22] uses RNN to enhance the realism of generated TS. Furthermore, approaches like TimeVAE [16] and DIVERSIFY [44] innovate in data generation, with the former tailoring outputs to user-specified distributions and latter employing adversarial strategies to maximize distributional diversity in generated TS data. However, the intricate training process of GANs, potential for mode collapse, and reliance on high-quality datasets are notable drawbacks of adversarial methods, potentially generating inconsistent or abnormal samples [101]. Contrastive methods distinguish themselves by optimizing self-discrimination tasks, contrasting positive samples with similar characteristics against negative samples with different ones [107]. These methods learn representations by generating augmented views of TS data and leveraging the inherent similarities and variations within the data [101]. They include instance-level models [11, 12, 78, 92] that treat each sample independently, using data augmentations to form positive and negative pairs. Prototype-level models [8, 37, 51, 99], on the other hand, break this independence by clustering semantically similar samples, thereby capturing higherlevel semantic information. Additionally, temporal-level models [19, 78, 91, 96] address TS-specific challenges by focusing on scaleinvariant representations at individual timestamps, enhancing the understanding of complex temporal dynamics. However, a common disadvantage across these contrastive methods is their potential to overlook higher-level semantic information, especially when not integrating explicit semantic labels, leading to the generation of potentially misleading negative samples. Predictive methods excel in capturing shared information from TS data by maximizing mutual information from various data slices or augmented views. 
These methods, like TST [98], wave2vec [68], CaSS [14] and SAITS [17], focus on predicting future, missing, or contextual information, thereby bypassing the need for full input reconstruction. Most recent advancements in this category, such as TEMPO [3] and TimeGPT [25], leverage LLM (Large Language Model) architectures to effectively decompose and predict complex TS components. TimeGPT, in particular, stands out as a foundation model specifically for TS forecasting, yet it only treats MTS as MultiTS. Lag-Llama [62], another notable predictive model, demonstrates strong univariate probabilistic forecasting, trained on a vast corpus of TS data. However, the challenge in these methods is their focus on local information, which can limit their capacity to capture longterm dependencies and make them susceptible to noise and outliers, thus affecting their generalization ability. Diffusion-based methods in TS modeling have recently gained traction, leveraging the unique abilities of diffusion models to model the data distribution through a process of injecting and reversing noise [101]. These models, like TimeGrad [63] and CSDI [80], have been effectively applied to tasks such as forecasting and imputation, employing innovative techniques like RNN-conditioned diffusion and multiple Transformer encoders. Recent developments like SSSD [2] have further evolved the field by integrating structured state space models [26] with diffusion processes. These advancements have showcased the flexibility and potential of diffusion models in handling diverse TS data, with applications ranging from electrical load forecasting with DiffLoad [84] to predicting spatio-temporal graph evolutions using DiffSTG [85]. Despite these significant advancements, a notable gap remains in the application of diffusion models for TSRL. While a recent study [13] demonstrates the efficacy of diffusion models as robust visual representation extractors, their specific adaptation and optimization for TSRL have not Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask been explored. Our work aims to fill this gap with the innovative TSDE framework, synergistically integrating conditional diffusion processes and crossover Transformer encoders, coupled with an innovative IIF mask strategy, to effectively tackle a wide range of downstream tasks. 3 THE APPROACH The task entails learning general-purpose embeddings for MTS that has \ud835\udc3efeatures/variables and \ud835\udc3ftime steps. Formally, given a multivariate time series x: x = {\ud835\udc651:\ud835\udc3e,1:\ud835\udc3f} = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 \ud835\udc651,1 \ud835\udc651,2 . . . \ud835\udc651,\ud835\udc3f \ud835\udc652,1 \ud835\udc652,2 . . . \ud835\udc652,\ud835\udc3f . . . . . . ... . . . \ud835\udc65\ud835\udc3e,1 \ud835\udc65\ud835\udc3e,2 . . . \ud835\udc65\ud835\udc3e,\ud835\udc3f \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb \u2208R\ud835\udc3e\u00d7\ud835\udc3f, (1) we aim to learn a \ud835\udf53-parameterized embedding function f\ud835\udf53(\u00b7) that maps the input MTS x to a latent representation Z: Z = {z1:\ud835\udc3e,1:\ud835\udc3f} = f\ud835\udf53(x) = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 z1,1 . . . z1,\ud835\udc3f . . . ... . . . z\ud835\udc3e,1 . . . 
z\ud835\udc3e,\ud835\udc3f \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb \u2208R\ud835\udc3e\u00d7\ud835\udc3f\u00d7\ud835\udc36, (2) where each element z\ud835\udc58,\ud835\udc59\u2208R\ud835\udc36represents the embedding vector for the \ud835\udc58-th feature and \ud835\udc59-th step, with \ud835\udc36denoting the dimensionality of the embedding space. We propose to learn f\ud835\udf53by leveraging a conditional diffusion process trained in a self-supervised fashion. 3.1 Unconditional Diffusion Process The unconditional diffusion process assumes a sequence of latent variables x\ud835\udc61(\ud835\udc61\u2208Z\u2229[1,\ud835\udc47]) in the same space as x. For unification, we will denote x as x0 henceforth. The objective is to approximate the ground-truth MTS distribution\ud835\udc5e(x0) by learning a \ud835\udf3d-parameterized model distribution \ud835\udc5d\ud835\udf3d(x0). The entire process comprises both forward and reverse processes. 3.1.1 Forward process. In this process, Gaussian noise is gradually injected to x0 in \ud835\udc47steps until \ud835\udc65\ud835\udc47is close enough to a standard Gaussian distribution, which can be expressed as a Markov chain: \ud835\udc5e(x1:\ud835\udc47|x0) = \u00ce\ud835\udc47 \ud835\udc61=1\ud835\udc5e(x\ud835\udc61|x\ud835\udc61\u22121), (3) where \ud835\udc5e(x\ud835\udc61|x\ud835\udc61\u22121) is a diffusion transition kernel, and is defined as \ud835\udc5e(x\ud835\udc61|x\ud835\udc61\u22121) := N (x\ud835\udc61; \u221a\ufe01 1 \u2212\ud835\udefd\ud835\udc61x\ud835\udc61\u22121, \ud835\udefd\ud835\udc61I), (4) which is a conditional Gaussian distribution with a mean of \u221a\ufe01 1 \u2212\ud835\udefd\ud835\udc61x\ud835\udc61\u22121 and a covariance matrix of \ud835\udefd\ud835\udc61I, and \ud835\udefd\ud835\udc61\u2208(0, 1) indicates the noise level at each diffusion step \ud835\udc61. Because of the properties of Gaussian kernels, we can sample any x\ud835\udc61from x0 directly with \ud835\udc5e(x\ud835\udc61|x0) := N (x\ud835\udc61; \u221a\ufe01 \u02dc \ud835\udefc\ud835\udc61x0, (1\u2212\u02dc \ud835\udefc\ud835\udc61)I), where \u02dc \ud835\udefc\ud835\udc61:= \u00ce\ud835\udc61 \ud835\udc56=1(1\u2212\ud835\udefd\ud835\udc56), (5) and x\ud835\udc61= \u221a\u02dc \ud835\udefc\ud835\udc61x0 + \u221a1 \u2212\u02dc \ud835\udefc\ud835\udc61\ud835\udf50, and \ud835\udf50\u223cN (0, I). 3.1.2 Reverse process. This process, modeled by a NN parameterized with \ud835\udf3d, recovers x0 by progressively denoising x\ud835\udc47: \ud835\udc5d\ud835\udf3d(x0:\ud835\udc47) = \ud835\udc5d(x\ud835\udc47)\u00ce\ud835\udc47 \ud835\udc61=1\ud835\udc5d\ud835\udf3d(x\ud835\udc61\u22121|x\ud835\udc61), (6) where \ud835\udc5d\ud835\udf3d(x\ud835\udc61\u22121|x\ud835\udc61) is the reverse transition kernel with a form of \ud835\udc5d\ud835\udf3d(x\ud835\udc61\u22121|x\ud835\udc61) := N (x\ud835\udc61\u22121; \ud835\udf41\ud835\udf3d(x\ud835\udc61,\ud835\udc61), \ud835\udeba\ud835\udf3d(x\ud835\udc61,\ud835\udc61)). (7) To approximate the reverse transition kernel, Ho et al. 
3.2 Imputation-Interpolation-Forecasting Mask

The reverse process of unconditional diffusion facilitates the generation of MTS from noise. However, our objective is to create general-purpose embeddings for unlabeled MTS, which can be leveraged in many popular downstream tasks such as imputation, interpolation, and forecasting. Consequently, we propose an Imputation-Interpolation-Forecasting (IIF) mask strategy, producing a pseudo observation mask $\mathbf{m}^{\text{IIF}} = \{m^{\text{IIF}}_{1:K,1:L}\} \in \{0,1\}^{K\times L}$, where $m^{\text{IIF}}_{k,l}=1$ if $x_{k,l}$ in Equation (1) is observable and $m^{\text{IIF}}_{k,l}=0$ otherwise. Algorithm 1 details the implementation and combination of the imputation, interpolation, and forecasting masks². During training, given any original MTS $\mathbf{x}_0$, we extract the observed ($\mathbf{x}^{\text{obs}}_0$) and masked ($\mathbf{x}^{\text{msk}}_0$) segments by
$$\mathbf{x}^{\text{obs}}_0 := \mathbf{x}_0 \odot \mathbf{m}^{\text{IIF}} \quad \text{and} \quad \mathbf{x}^{\text{msk}}_0 := \mathbf{x}_0 \odot (\mathbf{m} - \mathbf{m}^{\text{IIF}}), \quad (10)$$
where $\odot$ is the element-wise product, and $\mathbf{m} = \{m_{1:K,1:L}\} \in \{0,1\}^{K\times L}$ is a mask whose zeros indicate originally missing values in $\mathbf{x}_0$. We now reformulate our self-supervised learning objective: generate the masked part of the MTS, $\mathbf{x}^{\text{msk}}_0$, from a corrupted input $\mathbf{x}^{\text{msk}}_t$ through a diffusion process conditioned on the embedding of the observed MTS, i.e., $f_\phi(\mathbf{x}^{\text{obs}}_0)$. Both the diffusion process (parameterized by $\theta$) and the embedding function (parameterized by $\phi$) are approximated with a trainable NN.

3.3 Conditional Reverse Diffusion Process

Our conditional diffusion process estimates the ground-truth conditional probability $q(\mathbf{x}^{\text{msk}}_0 \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0))$ by reformulating (6) as
$$p_\theta(\mathbf{x}^{\text{msk}}_{0:T} \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0)) := p(\mathbf{x}^{\text{msk}}_T) \prod_{t=1}^{T} p_\theta(\mathbf{x}^{\text{msk}}_{t-1} \,|\, \mathbf{x}^{\text{msk}}_t, f_\phi(\mathbf{x}^{\text{obs}}_0)). \quad (11)$$
Similar to (7), the reverse kernel is
$$p_\theta(\mathbf{x}^{\text{msk}}_{t-1} \,|\, \mathbf{x}^{\text{msk}}_t, f_\phi(\mathbf{x}^{\text{obs}}_0)) := \mathcal{N}\big(\mathbf{x}^{\text{msk}}_{t-1}; \mu_\theta(\mathbf{x}^{\text{msk}}_t, t, f_\phi(\mathbf{x}^{\text{obs}}_0)), \Sigma_\theta(\mathbf{x}^{\text{msk}}_t, t, f_\phi(\mathbf{x}^{\text{obs}}_0))\big). \quad (12)$$
According to DDPM [29], the variance $\Sigma_\theta(\mathbf{x}^{\text{msk}}_t, t, f_\phi(\mathbf{x}^{\text{obs}}_0))$ can be formulated in the same way as (9), i.e., $\sigma_\theta(\mathbf{x}^{\text{msk}}_t, t, f_\phi(\mathbf{x}^{\text{obs}}_0))\,\mathbf{I} = \sigma_t^2 \mathbf{I}$. Similar to Equation (8), the conditional mean is
$$\mu_\theta(\mathbf{x}^{\text{msk}}_t, t, f_\phi(\mathbf{x}^{\text{obs}}_0)) := (1-\beta_t)^{-\frac{1}{2}} \big(\mathbf{x}^{\text{msk}}_t - \beta_t (1-\tilde{\alpha}_t)^{-\frac{1}{2}}\, \epsilon_\theta(\mathbf{x}^{\text{msk}}_t, t \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0))\big). \quad (13)$$
²The imputation mask simulates random missing values; the interpolation mask mimics MTS interpolation tasks by masking all values at a randomly selected timestamp; and the forecasting mask treats all values after a specified timestamp as unknown.
Figure 1: The TSDE architecture comprises an embedding function (left) and a conditional reverse diffusion block (right); the temporal and spatial (feature) encoders are implemented as one-layer Transformers.
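As a concrete illustration of one conditional denoising step under Equations (12)-(13), here is a minimal sketch. `eps_theta` stands in for the trained conditional noise-prediction network and `z_cond` for the embedding f_phi(x_0^obs); both are assumptions replacing the actual TSDE modules described later.

```python
import torch

def p_sample_step(eps_theta, x_msk_t, t, z_cond, betas, alpha_tilde):
    """One conditional reverse step x_t -> x_{t-1}, following Eqs. (8)-(9) and (12)-(13).

    eps_theta   : callable (x_t, t, cond) -> predicted noise, same shape as x_t
    x_msk_t     : (K, L) noisy masked segment at step t
    t           : current diffusion step (1-indexed)
    z_cond      : conditioning embedding f_phi(x_obs_0)
    betas       : (T,) noise levels beta_t
    alpha_tilde : (T,) cumulative products prod_{i<=t} (1 - beta_i)
    """
    beta_t = betas[t - 1]
    a_t = alpha_tilde[t - 1]

    eps_hat = eps_theta(x_msk_t, t, z_cond)
    # Conditional mean, Eq. (13).
    mean = (1.0 - beta_t).rsqrt() * (x_msk_t - beta_t * (1.0 - a_t).rsqrt() * eps_hat)

    if t == 1:
        return mean  # commonly no noise is added at the final step
    # Variance sigma_t^2 = beta_t * (1 - tilde_alpha_{t-1}) / (1 - tilde_alpha_t), Eq. (9).
    sigma2 = beta_t * (1.0 - alpha_tilde[t - 2]) / (1.0 - a_t)
    return mean + sigma2.sqrt() * torch.randn_like(x_msk_t)
```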
Algorithm 1: Imputation-Interpolation-Forecasting Mask
Input: mask $\mathbf{m} = \{m_{1:K,1:L}\} \in \{0,1\}^{K\times L}$ indicating the missing values in $\mathbf{x}_0$
Output: a pseudo observation mask $\mathbf{m}^{\text{IIF}} \in \{0,1\}^{K\times L}$
1: $r \leftarrow$ random value from the range $[0.1, 0.9]$  // imputation mask ratio
2: $N \leftarrow \sum_{k=1}^{K}\sum_{l=1}^{L} m_{k,l}$  // total number of observed values
3: $\mathbf{m}^{\text{IIF}} \leftarrow \mathbf{m}$, then randomly set $\lfloor N \times r \rfloor$ ones to 0  // apply imputation mask
4: sample a probability $p$ uniformly from $[0, 1]$
5: if $1/3 < p < 2/3$ then
6:   $l' \leftarrow$ uniformly sample a time step from $\mathbb{Z} \cap [1, L]$
7:   $\mathbf{m}^{\text{IIF}}[:, l'] \leftarrow 0$  // mix with interpolation mask
8: else if $p \geq 2/3$ then
9:   $l' \leftarrow$ uniformly sample a time-window length from $\mathbb{Z} \cap [1, \lceil L/3 \rceil]$
10:  $\mathbf{m}^{\text{IIF}}[:, -l':] \leftarrow 0$  // mix with forecasting mask
11: return $\mathbf{m}^{\text{IIF}}$
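A minimal NumPy sketch of Algorithm 1 follows; it is an illustrative re-implementation rather than the authors' code, with the random ranges taken directly from the listing above (indices are 0-based here, unlike the 1-based listing).

```python
import numpy as np

def iif_mask(m: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Build a pseudo observation mask following Algorithm 1.

    m : (K, L) binary mask, 1 = observed in x_0, 0 = originally missing.
    """
    K, L = m.shape
    m_iif = m.copy()

    # Imputation mask: hide a random fraction r of the observed entries.
    r = rng.uniform(0.1, 0.9)
    obs_idx = np.flatnonzero(m_iif)                    # indices of observed values
    hide = rng.choice(obs_idx, size=int(len(obs_idx) * r), replace=False)
    m_iif.flat[hide] = 0

    # With prob. 1/3 mix in an interpolation mask, with prob. 1/3 a forecasting mask.
    p = rng.uniform(0.0, 1.0)
    if 1 / 3 < p < 2 / 3:
        l = rng.integers(0, L)                         # one random timestamp
        m_iif[:, l] = 0
    elif p >= 2 / 3:
        w = rng.integers(1, int(np.ceil(L / 3)) + 1)   # forecasting horizon
        m_iif[:, -w:] = 0
    return m_iif

# Example usage with a fully observed 4-feature, 24-step series.
rng = np.random.default_rng(0)
mask = iif_mask(np.ones((4, 24), dtype=int), rng)
```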
3.4 Training Loss and Procedure

It has been shown in [29] that the reverse process of unconditional diffusion can be trained by minimizing the loss
$$\mathcal{L}(\theta) := \mathbb{E}_{\mathbf{x}_0 \sim q(\mathbf{x}_0),\, \epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{I}),\, t}\, \|\epsilon - \epsilon_\theta(\mathbf{x}_t, t)\|_2^2. \quad (14)$$
Inspired by [80], we replace the noise-prediction network $\epsilon_\theta(\mathbf{x}_t, t)$ with the conditioned version $\epsilon_\theta(\mathbf{x}^{\text{msk}}_t, t \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0))$ in (14), obtaining
$$\mathcal{L}(\theta, \phi) := \mathbb{E}_{\mathbf{x}_0 \sim q(\mathbf{x}_0),\, \epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{I}),\, t}\, \|\epsilon - \epsilon_\theta(\mathbf{x}^{\text{msk}}_t, t \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0))\|_2^2. \quad (15)$$
Since training focuses solely on predicting the noise at the non-missing yet masked locations, we actually minimize
$$\tilde{\mathcal{L}}(\theta, \phi) := \mathbb{E}_{\mathbf{x}_0 \sim q(\mathbf{x}_0),\, \epsilon \sim \mathcal{N}(\mathbf{0},\mathbf{I}),\, t}\, \|(\epsilon - \epsilon_\theta(\mathbf{x}^{\text{msk}}_t, t \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0))) \odot (\mathbf{m} - \mathbf{m}^{\text{IIF}})\|_2^2. \quad (16)$$
The self-supervised, mini-batch training procedure, detailed in Algorithm 2, essentially solves $\min_{\theta,\phi} \tilde{\mathcal{L}}(\theta, \phi)$. In each training iteration $i$, a random diffusion step $t$ is chosen, at which the denoising operation is applied.

Algorithm 2: TSDE Training Procedure
Input: ground-truth MTS data distribution $q(\mathbf{x}_0)$; noise scheduler $\{\tilde{\alpha}_t\}$; the denoising and embedding functions (approximated by NNs) $\epsilon_\theta(\cdot)$ and $f_\phi(\cdot)$
Output: the trained NN parameters $\theta$ and $\phi$
Parameters: the total number of training iterations $N_{\text{train}}$ and the learning rate $\tau$
1: for $i = 1, \dots, N_{\text{train}}$ do
2:   sample a diffusion step $t \sim \text{Uniform}(\{1, \dots, T\})$ and an MTS $\mathbf{x}_0 \sim q(\mathbf{x}_0)$
3:   obtain the IIF mask $\mathbf{m}^{\text{IIF}}$ by following Algorithm 1
4:   obtain the observed ($\mathbf{x}^{\text{obs}}_0$) and masked ($\mathbf{x}^{\text{msk}}_0$) parts using Equation (10)
5:   sample a noise matrix $\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ with the same shape as $\mathbf{x}^{\text{msk}}_0$
6:   compute $\mathbf{x}^{\text{msk}}_t \leftarrow \sqrt{\tilde{\alpha}_t}\,\mathbf{x}^{\text{msk}}_0 + \sqrt{1-\tilde{\alpha}_t}\,\epsilon$
7:   compute the loss $\tilde{\mathcal{L}} := \|(\epsilon - \epsilon_\theta(\mathbf{x}^{\text{msk}}_t, t \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0))) \odot (\mathbf{m} - \mathbf{m}^{\text{IIF}})\|_2^2$, cf. (16)
8:   $\theta := \theta - \tau\, \partial\tilde{\mathcal{L}}/\partial\theta$ and $\phi := \phi - \tau\, \partial\tilde{\mathcal{L}}/\partial\phi$
9: return $\theta$ and $\phi$
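The inner loop of Algorithm 2 can be sketched in PyTorch as below. `eps_theta` and `f_phi` are placeholders for the TSDE denoiser and embedding function (assumptions, since the actual modules are only specified in Sections 3.5-3.6), and `iif_mask`/`alpha_tilde` refer to the illustrative helpers sketched earlier.

```python
import torch

def training_step(x0, m, eps_theta, f_phi, optimizer, T, alpha_tilde, rng):
    """One self-supervised TSDE training step (Algorithm 2, lines 2-8).

    x0 : (K, L) float tensor, the raw MTS (missing entries set to 0)
    m  : (K, L) {0,1} tensor, 1 where x0 is actually observed
    """
    t = int(rng.integers(1, T + 1))                        # line 2: t ~ Uniform({1..T})
    m_iif = torch.as_tensor(iif_mask(m.numpy(), rng))      # line 3: IIF mask (Algorithm 1)

    x_obs = x0 * m_iif                                     # line 4: Eq. (10)
    x_msk = x0 * (m - m_iif)

    eps = torch.randn_like(x_msk)                          # line 5
    a_t = alpha_tilde[t - 1]
    x_msk_t = a_t.sqrt() * x_msk + (1 - a_t).sqrt() * eps  # line 6: Eq. (5)

    eps_hat = eps_theta(x_msk_t, t, f_phi(x_obs, m_iif))   # conditioned noise prediction
    loss = (((eps - eps_hat) * (m - m_iif)) ** 2).sum()    # line 7: Eq. (16)

    optimizer.zero_grad()
    loss.backward()                                        # line 8: update theta and phi
    optimizer.step()
    return loss.item()
```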
3.5 Embedding Function

The left part of Figure 1 illustrates the architectural design of the embedding function $f_\phi(\mathbf{x}^{\text{obs}}_0)$. The function not only processes the input $\mathbf{x}^{\text{obs}}_0$ but also incorporates additional side information, namely the time embedding $\mathbf{s}^{\text{time}}(l)$, the feature embedding $\mathbf{s}^{\text{feat}}(k)$, and the mask $\mathbf{m}^{\text{IIF}}$. Consequently, the notation $f_\phi(\mathbf{x}^{\text{obs}}_0)$ is used as shorthand for the more extensive formulation $f_\phi(\mathbf{x}^{\text{obs}}_0, \mathbf{s}^{\text{time}}, \mathbf{s}^{\text{feat}}, \mathbf{m}^{\text{IIF}})$, which accounts for all the inputs processed by the function. To obtain the 128-dimensional $\mathbf{s}^{\text{time}}(l)$, we largely follow [80, 108]:
$$\mathbf{s}^{\text{time}}(l) = \big(\sin(l/\tau^{0/64}), \dots, \sin(l/\tau^{63/64}), \cos(l/\tau^{0/64}), \dots, \cos(l/\tau^{63/64})\big), \quad (17)$$
where $\tau = 10{,}000$ and $l \in \mathbb{Z} \cap [1, L]$. For $\mathbf{s}^{\text{feat}}(k)$, a 16-dimensional feature embedding is obtained with the categorical feature embedding layer available in PyTorch. The observable segment $\mathbf{x}^{\text{obs}}_0$ undergoes a nonlinear transformation and is then concatenated with the time and feature embeddings, resulting in $\tilde{\mathbf{x}}^{\text{obs}}_0 \in \mathbb{R}^{K\times L\times 160}$:
$$\tilde{\mathbf{x}}^{\text{obs}}_0 = \text{Concat}\big(\text{ReLU}(\text{Conv}(\mathbf{x}^{\text{obs}}_0)), \mathbf{s}^{\text{time}}, \mathbf{s}^{\text{feat}}\big), \quad (18)$$
where $\text{Concat}(\cdot)$, $\text{ReLU}(\cdot)$ and $\text{Conv}(\cdot)$ denote concatenation, ReLU activation, and the 1×1 convolution operation [39], respectively. To accurately capture the inherent temporal dependencies and feature correlations in MTS data, thereby enabling clearer data interpretation and a customizable, modular design, we devise separate temporal and feature embedding functions $g_\gamma(\tilde{\mathbf{x}}^{\text{obs}}_0)$ and $h_\delta(\tilde{\mathbf{x}}^{\text{obs}}_0)$, parameterized by $\gamma$ and $\delta$ respectively. Inspired by [80], both the temporal encoder $g_\gamma(\cdot)$ and the feature encoder $h_\delta(\cdot)$ are implemented as one-layer Transformer encoders that take an input tensor shaped $K\times L\times 160$, as shown in Figure 1. Specifically, the temporal encoder operates on tensors shaped $1\times L\times 160$, representing one feature across all timestamps, while the feature encoder handles tensors shaped $K\times 1\times 160$, representing the feature vector at a single timestamp. To integrate the temporal and feature embeddings in varying orders without adding trainable parameters, we devise a crossover mechanism, depicted by the red and blue arrows in Figure 1. It produces $g_\gamma(h_\delta(\tilde{\mathbf{x}}^{\text{obs}}_0))$ and $h_\delta(g_\gamma(\tilde{\mathbf{x}}^{\text{obs}}_0))$, which are subsequently transformed and concatenated along with $\mathbf{m}^{\text{IIF}}$, yielding the final embedding
$$\mathbf{Z} = f_\phi(\mathbf{x}^{\text{obs}}_0) := \text{SiLU}\Big(\text{Concat}\big(\text{Conv}(g_\gamma(h_\delta(\tilde{\mathbf{x}}^{\text{obs}}_0))), \text{Conv}(h_\delta(g_\gamma(\tilde{\mathbf{x}}^{\text{obs}}_0))), \mathbf{m}^{\text{IIF}}\big)\Big), \quad (19)$$
where $\text{SiLU}(\cdot)$ is the Sigmoid-weighted Linear Unit activation function [20]. Once the model is trained, the embedding for any MTS $\mathbf{x}_0$ is computed following Equations (18) and (19), with $\mathbf{x}^{\text{obs}}_0$ and $\mathbf{m}^{\text{IIF}}$ replaced by $\mathbf{x}_0$ and $\mathbf{m}$, respectively.

3.6 The Overall Architecture

Figure 1 provides a comprehensive depiction of the components of the TSDE architecture. The process begins by applying the IIF mask $\mathbf{m}^{\text{IIF}}$ to partition the input MTS into observable ($\mathbf{x}^{\text{obs}}_0$) and masked ($\mathbf{x}^{\text{msk}}_0$) segments. The architecture consists of two key elements: (1) the embedding function $f_\phi(\mathbf{x}^{\text{obs}}_0)$ introduced in Section 3.5, and (2) a conditional reverse diffusion module, illustrated on the right side of Figure 1. The conditional reverse diffusion module, introduced in Section 3.3, functions as a noise predictor, implementing $\epsilon_\theta(\mathbf{x}^{\text{msk}}_t, t \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0))$. During the $i$-th training step, as outlined in Algorithm 2, the sampled diffusion step $t$ is first transformed into a 128-dimensional vector
$$\mathbf{s}^{\text{diff}}(t) := \big(\sin(10^{0\cdot 4/63}\, t), \dots, \sin(10^{63\cdot 4/63}\, t), \cos(10^{0\cdot 4/63}\, t), \dots, \cos(10^{63\cdot 4/63}\, t)\big). \quad (20)$$
Subsequently, the MTS embedding $\mathbf{Z}$, along with $\mathbf{s}^{\text{diff}}(t)$ and $\mathbf{x}^{\text{msk}}_0$, is fed into a residual block composed of $N$ residual layers. The outputs of these layers are aggregated by summation, passed through further transformations, and combined with $\mathbf{x}^{\text{msk}}_t$. This yields $\epsilon_\theta(\mathbf{x}^{\text{msk}}_t, t \,|\, f_\phi(\mathbf{x}^{\text{obs}}_0)) \odot (\mathbf{m} - \mathbf{m}^{\text{IIF}})$, which is then used to compute the loss $\tilde{\mathcal{L}}(\theta, \phi)$ formulated in Equation (16).

3.7 Downstream Tasks and Model Efficiency

The trained model can be utilized in two scenarios: (1) the embedding function, as a standalone component, can be used to generate comprehensive MTS representations suitable for various downstream applications including anomaly detection, clustering, and classification, as demonstrated in Sections 4.2, 4.3, and 4.4, respectively.
(2) When combined with the trained conditional reverse diffusion process, the model is capable of predicting missing values (for imputation and interpolation) as well as future values (for forecasting) in MTS data. In the second scenario, a notable increase in speed can be achieved compared to the existing diffusion-based methods such as those in [63, 80]. This efficiency, confirmed in Section 4.1.5, stems from simplifying the conditional reverse diffusion (the right block of Figure 1, i.e., \ud835\udf50\ud835\udf3d) to use only Conv1\u00d71 operators. This streamlining significantly accelerates the \ud835\udc47=50 steps reverse diffusion process. 4 EXPERIMENTS Our evaluation of the TSDE framework includes thorough experiments across six tasks (imputation, interpolation, forecasting, anomaly detection, classification, and clustering) accompanied by additional analyses on inference efficiency, ablation study, and embedding visualization. For experiment details, dataset specifications, hyperparameters, and metric formulas, refer to Appendix A. 4.1 Imputation, Interpolation and Forecasting 4.1.1 Imputation. We carry out imputation experiments on PhysioNet3 [70] and PM2.54 [93]. TSDE is benchmarked against several state-of-the-art TS imputation models. These include BRITS [6], a deterministic method using bi-directional RNN for correlation capture; V-RIN [53], employing variational-recurrent networks with feature and temporal correlations for uncertainty-based imputation; GP-VAE [24], integrating Gaussian Processes with VAEs; and CSDI [80], the top-performing model among the diffusion-based TS imputation models. The model performance is evaluated using CRPS (continuous ranked probability score) to assess the fit of predicted outcomes with original data distributions, and two deterministic metrics \u2013 MAE (mean absolute error) and the RMSE (root mean square error). Deterministic metrics are calculated using the median across all samples, and CRPS value is reported as the normalized average score for all missing values distributions (approximated with 100 samples). The imputation results, as detailed in the upper part of Table 1, highlight TSDE\u2019s superior performance over almost all metrics, outperforming all baselines. Notably, the pretraining-only variant (i.e., \u201cTSDE\u201d) excels on the PhysioNet dataset, underpinning its robustness and enhanced generalization capability, even without the need of any imputation-specific finetuning. For the PM2.5 dataset, finetuning TSDE (i.e., \u201cTSDE+ft\u201d) yields improved outcomes, likely attributable to its capability to adapt to the dataset\u2019s structured missing value patterns. Overall, TSDE\u2019s improvement in CRPS by 4.2%-6.5% over CSDI, a leading diffusion-based TS imputation model, signifies a notable advancement in the field. For a qualitative illustration of imputation results, refer to Figure 2(a). 4.1.2 Interpolation. For interpolation analysis, we utilized the same PhysioNet dataset [70], adopting the processing methods from 3PhysioNet, a healthcare dataset with 4,000 records of 35 variables over 48 hours, is processed and hourly sampled as [67, 80], leading to \u223c80% missing rate. For testing, we randomly mask 10%, 50%, and 90% of observed values to create ground-truth scenarios. On this dataset, we pretrain TSDE for 2,000 epochs, followed by a 200-epoch finetuning with an imputation mask. 
4PM2.5, an air quality dataset, features hourly readings from 36 Beijing stations over 12 months with artificially generated missing patterns. Adapting [80], each series spans 36 consecutive timestamps. On this dataset, we pretrain for 1,500 epochs and finetune for 100 epochs using a history mask as detailed in Algorithm 5. Senane and Cao, et al. Table 1: Probabilistic MTS imputation and interpolation benchmarking results, featuring TSDE\u2019s pretraining-only and task-specific finetuned (TSDE+ft) models against established baselines. We present mean and standard deviation (SD) from three iterations, with baseline results primarily derived or reproduced according to [80]. Models PhysioNet PM2.5 10% masking ratio 50% masking ratio 90% masking ratio CRPS MAE RMSE CRPS MAE RMSE CRPS MAE RMSE CRPS MAE RMSE Imputation BRITS [6] 0.284(0.001) 0.619(0.022) 0.368(0.002) 0.693(0.023) 0.517(0.002) 0.836(0.015) 14.11(0.26) 24.47(0.73) V-RIN [53] 0.808(0.008) 0.271(0.001) 0.628(0.025) 0.831(0.005) 0.365(0.002) 0.693(0.022) 0.922(0.003) 0.606(0.006) 0.928(0.013) 0.526(0.025) 25.4(0.062) 40.11(1.14) GP-VAE [24] 0.558(0.001)* 0.449(0.002)* 0.739(0.001)* 0.642(0.003)* 0.566(0.004)* 0.898(0.005)* 0.748(0.002)* 0.690(0.002)* 1.008(0.002)* 0.397(0.009) unc. CSDI [80] 0.360(0.007) 0.326(0.008) 0.621(0.020) 0.458(0.008) 0.417(0.010) 0.734(0.024) 0.671(0.007) 0.625(0.010) 0.940(0.018) 0.135(0.001) 12.13(0.07) 22.58(0.23) CSDI [80] 0.238(0.001) 0.217(0.001) 0.498(0.020) 0.330(0.002) 0.301(0.002) 0.614(0.017) 0.522(0.002) 0.481(0.003) 0.803(0.012) 0.108(0.001) 9.60(0.04) 19.30(0.13) TSDE 0.226(0.002) 0.208(0.001) 0.446(0.003) 0.316(0.000) 0.290(0.000) 0.641(0.007) 0.488(0.001) 0.450(0.001) 0.801(0.001) 0.13(0.001) 11.41(0.60) 27.02(2.91) TSDE+ft 0.230(0.001) 0.211(0.001) 0.4718(0.013) 0.318(0.001) 0.292(0.001) 0.644(0.001) 0.490(0.001) 0.452(0.001) 0.803(0.001) 0.107 (0.000) 9.71(0.04) 18.76(0.02) Interpolation Latent ODE [65] 0.700(0.002) 0.522(0.002) 0.799(0.012) 0.676(0.003) 0.506(0.003) 0.783(0.012) 0.761(0.010) 0.578(0.009) 0.865(0.017) * Results reproduced using GP-VAE mTANs [69] 0.526(0.004) 0.389(0.003) 0.749(0.037) 0.567(0.003) 0.422(0.003) 0.721(0.014) 0.689(0.015) 0.533(0.005) 0.836(0.018) original implementation available at CSDI [80] 0.380(0.002) 0.362(0.001) 0.722(0.043) 0.418(0.001) 0.394(0.002) 0.700(0.013) 0.556(0.003) 0.518(0.003) 0.839(0.009) https://github.com/ratschlab/GP-VAE. TSDE 0.365(0.001) 0.331(0.001) 0.597(0.002) 0.403(0.001) 0.371(0.001) 0.657(0.001) 0.517(0.001) 0.476(0.001) 0.775(0.001) We report the mean and standard TSDE+ft 0.374(0.001) 0.338(0.001) 0.610(0.003) 0.421(0.001) 0.385(0.001) 0.677(0.003) 0.570(0.004) 0.522(0.006) 0.821(0.006) deviation of three runs. 
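The probabilistic scores in Table 1 (and in the forecasting results later) use CRPS estimated from generated samples. A minimal sketch of a quantile-based CRPS approximation is given below; it follows the common pinball-loss estimate (as used, e.g., in [80]) and is an illustrative assumption rather than the paper's exact evaluation code.

```python
import numpy as np

def crps_from_samples(samples: np.ndarray, target: np.ndarray, eval_mask: np.ndarray) -> float:
    """Normalized CRPS over masked targets, approximated with quantile (pinball) losses.

    samples   : (S, K, L) generated imputations/forecasts
    target    : (K, L) ground-truth values
    eval_mask : (K, L) {0,1}, 1 where the score should be evaluated
    """
    quantiles = np.arange(0.05, 1.0, 0.05)              # 0.05, 0.10, ..., 0.95
    denom = np.abs(target * eval_mask).sum() + 1e-8     # normalization term
    score = 0.0
    for q in quantiles:
        pred_q = np.quantile(samples, q, axis=0)        # empirical q-quantile per entry
        diff = (target - pred_q) * eval_mask
        pinball = np.maximum(q * diff, (q - 1.0) * diff).sum()
        score += 2.0 * pinball / denom
    return score / len(quantiles)
```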
Figure 2: Comparison of predicted and ground-truth values for (a) imputation on PhysioNet (10% missing), (b) interpolation on Electricity, and (c) forecasting on Electricity. The line is the median of the predictions and the red shade indicates the 5%-95% quantile for missing/future values. See Appendix E for more results.

Table 2: Inference time comparison for forecasting tasks between TSDE and CSDI.
| Datasets | CSDI* (sec.) | TSDE (sec.) |
| Electricity | 1,997 | 163 |
| Solar | 608 | 62 |
| Taxi | 27,533 | 1,730 |
| Traffic | 7,569 | 422 |
| Wiki | 9,138 | 391 |
* For a fair comparison, the linear Transformer encoders in CSDI [80] are replaced with the TransformerEncoder [57] implementation in PyTorch.

[65, 69, 80]. Ground-truth scenarios were created by masking all values at randomly selected timestamps, sampled at rates of 10%, 50% and 90%. TSDE is pretrained for 2,000 epochs and then finetuned with an interpolation-only mask for another 200 epochs. In our benchmarking, TSDE is compared against three TS interpolation methods: (1) Latent ODE [65], an RNN-based model leveraging ODEs (ordinary differential equations) to handle dynamic, continuous, and irregular TS; (2) mTANs [69], which utilizes time embeddings and attention mechanisms and is noted for its strong performance on irregular TS interpolation; and (3) CSDI [80], which has also reported competitive interpolation results. The results in the lower section of Table 1 demonstrate TSDE's exceptional performance in interpolation, outperforming CSDI by 3.6%-7.0% in CRPS, 5.8%-8.6% in MAE, and 6.1%-17.3% in RMSE.
These findings highlight TSDE\u2019s adeptness in managing irregular timestamp gaps, a likely factor behind the observation that finetuning does not enhance the pretraining-only TSDE\u2019s performance. Comparatively, while CSDI also operates on a similar diffusion model backbone, TSDE\u2019s edge lies in its unique embedding learning ability via IIF masking, adeptly capturing intricate TS characteristics and dynamics for improved results. A qualitative illustration of interpolation results can be found in Figure 2(b). 4.1.3 Forecasting. Our forecasting experiments employ five realworld datasets: (1) Electricity, tracking hourly consumption across 370 customers; (2) Solar, detailing photovoltaic production at 137 Alabama stations; (3) Taxi, recording half-hourly traffic from 1,214 New York locations; (4) Traffic, covering hourly occupancy rates of 963 San Francisco car lanes; and (5) Wiki, monitoring daily views of 2,000 Wikipedia pages. Adapting the practices from [55, 67, 80], each dataset is converted into a series of multivariate sequences, with \ud835\udc3f1 historical timestamps followed by \ud835\udc3f2 timestamps for forecasting. Training data apply a rolling window approach with a stride of 1, while validation and testing data employ a stride of \ud835\udc3f2, ensuring distinct, non-overlapping series for evaluation. Specific \ud835\udc3f1 and \ud835\udc3f2 values are outlined in Table 8. For evaluation metrics, we use CRPS and MSE, supplemented by CRPS-Sum, as introduced in [67]. CRPS-Sum is computed by summing across different features, capturing the joint impact of feature distributions. As of benchmarking baselines, we include several state-of-the-art probabilistic MTS forecasting models: GP-copula [67], TransMAF [64] and TLAE [55]. Additionally, in the realm of diffusion-based methods, we include CSDI [80] and TimeGrad [63]. The forecasting results, as detailed in Table 3, showcase TSDE\u2019s robust performance, especially when finetuned with a forecasting mask. Its effectiveness is notable when compared to CSDI, which is the most closely related method, sharing a diffusion backbone. TSDE particularly excels in the Electricity, Taxi, and Wiki datasets, especially as evaluated by the CRPS metric. However, it is important to note a discrepancy in the Solar dataset performance between TSDE/CSDI and other baselines, likely due to a data split issue: the actual test set, per the source code, is identical to the training set, which contradicts the details reported in the corresponding paper. For a qualitative illustration, refer to Figure 2(c). Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask Table 3: Probabilistic MTS forecasting results embodying both TSDE (pretraining-only) and finetuned (TSDE+ft) variants. Baseline results are either sourced or reproduced from [55, 64, 67]. For TSDE-related experiments, we report the mean and SD across three iterations. 
Models Electricity Solar Taxi Traffic Wiki CRPS GP-copula 0.056(0.002) 0.371(0.022) 0.360(0.201) 0.133(0.001) 0.236(0.000) TransMAF 0.052(0.000) 0.368(0.001) 0.377(0.002) 0.134(0.001) 0.274(0.007) TLAE 0.058(0.003) 0.335(0.044) 0.369(0.011) 0.097(0.002) 0.298(0.002) CSDI 0.043(0.001)* 0.396(0.021)*\u2020 0.277(0.006)* 0.076(0.000)* 0.232(0.006)* TSDE 0.043(0.000) 0.400(0.025)\u2020 0.277(0.001) 0.091(0.001) 0.222(0.003) TSDE+ft 0.042(0.000) 0.375(0.013)\u2020 0.282(0.001) 0.081(0.001) 0.226(0.003) CRPS-sum GP-copula 0.024(0.002) 0.337(0.024) 0.208(0.183) 0.078(0.002) 0.086(0.004) TransMAF 0.021(0.000) 0.301(0.014) 0.179(0.002) 0.056(0.001) 0.063(0.003) TimeGrad 0.021(0.001) 0.287(0.020) 0.114(0.020) 0.044(0.006) 0.049(0.002) TLAE 0.040(0.003) 0.124(0.057) 0.130(0.010) 0.069(0.002) 0.241(0.001) CSDI 0.019(0.001)* 0.345(0.029)*\u2020 0.138(0.008)* 0.020(0.000)* 0.084(0.013)* TSDE 0.020(0.001) 0.453(0.026)\u2020 0.136(0.003) 0.038(0.003) 0.064(0.002) TSDE+ft 0.017(0.001) 0.345(0.012)\u2020 0.153(0.006) 0.025(0.001) 0.059(0.003) MSE GP-copula 2.4e5(5.5e4) 9.8e2(5.2e1) 3.1e1(1.4e0) 6.9e-4(2.2e-5) 4.0e7(1.6e9) TransMAF 2.0e5 9.3e2 4.5e1 5.0e-4 3.1e7 TLAE 2.0e5(1.6e4) 6.8e2(1.3e2) 2.6e1(1.4e0) 4.0e-4(5.0e-6) 3.8e7(7.2e4) CSDI 1.23e5(9.7e3)* 1.12e3(1.2e2)*\u2020 1.82e1(7.8e-1)*3.64e-4(0.0e0)*4.43e7(1.0e7)* TSDE 1.20e5(3.5e3) 1.07e3(9.8e1)\u2020 1.89e1(3.7e-1) 4.34e-4(0.0e0) 3.59e7(7.2e4) TSDE+ft 1.16e5(6.0e3) 9.25e2(4.9e1)\u2020 1.92e1(2.4e-1) 3.88e-4(0.0e0) 3.62e7(1.8e5) * We replace the linear Transformers [83] in CSDI with the Pytorch TransformerEncoder [57]. \u2020 We take the training MTS dataset and split it into training, validation and testing sets. 4.1.4 Ablation Study. In an ablation study on TSDE across imputation, interpolation, and forecasting, evaluated on PhysioNet (10% missing ratio) and Electricity datasets, two configurations were tested: one without crossover, and another without IIF mask (replaced by an imputation mask detailed in Algorithm 4). Table 4 underscores the positive contribution of the crossover mechanism across all three tasks. The impact of IIF masking, while less pronounced for imputation and interpolation, becomes noticeable in the forecasting task. This can be attributed to the random PhysioNet missing values, which are distributed fundamentally differently from a typical forecasting scenario. Thus, IIF strategy is important for TSDE to gain a generalization ability across various settings. The contrast between \u201cTSDE\u201d and \u201cTSDE+ft\u201d in Tables 1 and 3 serves as an ablation study for finetuning; it reveals that pretrained TSDE can achieve competitive results without the necessity of finetuning. Table 4: Ablation study on PhysioNet (imputation and interpolation) and Electricity (forecasting) datasets. Ablation Configuration Imputation (MAE/CRPS) Interpolation (MAE/CRPS) Forecasting (CRPS-sum/CRPS) w/o crossover 0.252(0.001)/0.274(0.001) 0.339(0.000)/0.373(0.000) 0.021(0.001)/0.046(0.001) w/o IIF mask 0.207(0.001)/0.225(0.001) 0.330(0.001)/0.364(0.001) 0.028(0.004)/0.053(0.003) TSDE 0.208(0.001)/0.226(0.002) 0.331(0.001)/0.365(0.001) 0.020(0.001)/0.043(0.000) 4.1.5 Inference Efficiency. Similar to CSDI [80], TSDE performs inference by gradual denoising from the last diffusion step \ud835\udc47=50 to the initial step \ud835\udc61=1, to approximate the true data distribution of missing or future values for imputation/interpolation/forecasting tasks. Typically, this iterative process can become computationally expensive. 
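For reference, this iterative inference amounts to T forward passes through the noise predictor per generated sample; a minimal sketch (reusing the illustrative `p_sample_step`, `eps_theta` and `f_phi` stand-ins from earlier) is shown below, which makes clear why reducing the per-step cost matters.

```python
import torch

@torch.no_grad()
def impute_or_forecast(eps_theta, f_phi, x0, m_cond, betas, alpha_tilde, T=50):
    """Generate one sample of the unobserved part by reverse diffusion (t = T, ..., 1).

    x0     : (K, L) observed MTS (unobserved entries arbitrary, e.g. 0)
    m_cond : (K, L) {0,1} conditioning mask (1 = value given to the model)
    """
    z_cond = f_phi(x0 * m_cond, m_cond)   # embedding of the observed segment
    x_t = torch.randn_like(x0)            # start the chain from pure Gaussian noise
    for t in range(T, 0, -1):             # T noise-predictor calls in total
        x_t = p_sample_step(eps_theta, x_t, t, z_cond, betas, alpha_tilde)
    # Keep given values, fill the rest with the generated sample.
    return x0 * m_cond + x_t * (1 - m_cond)
```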
TSDE achieves a substantial acceleration in this process as illustrated in Table 2, where TSDE is ten times faster than CSDI under the same experimental setup. This is primarily owing to its globally shared, efficient dual-orthogonal Transformer encoders with a crossover mechanism, merely requiring approximately a quarter of the parameters used by CSDI for MTS encoding. 4.2 Anomaly Detection For anomaly detection, we adopt an unsupervised approach using reconstruction error as the anomaly criterion, aligning with [87, 106]. We evaluate TSDE on five benchmark datasets: SMD [77], MSL [31], SMAP [31], SWaT [48] and PSM [1]. Once TSDE is pretrained, a projection layer, designed to reconstruct MTS from TSDE embeddings, is finetuned by minimizing MSE reconstruction loss. Our anomaly detection experiments align with TimesNet [106], utilizing preprocessed datasets from [90]. Following their method, we segment datasets into non-overlapping MTS instances of 100 timestamps each, labeling timestamps as anomalous based on a MSE threshold. This threshold is set according to the anomaly proportion in the validation dataset, ensuring consistency with baseline anomaly ratios for a fair comparison. In this task, TSDE is benchmarked against an extensive set of baselines featuring diverse backbones, including a) Frozen pretrained LLM-based models: GPT4TS [106]; b) Task-agnostic foundation models: TimesNet [87]; c) MLP (multi-layer perceptron) based models: LightTS [102] and DLinear [97]; and finally d) Transformerbased models: Transformer [82], Reformer [36], Informer [104], Autoformer [88], Pyraformer [41], LogSparse Transformer [38], FEDformer [105], Non-stationary Transformer [43], ETSformer [86], PatchTST [56] and Anomaly Transformer [90]. The results in Table 5 reveal that TSDE\u2019s anomaly detection performance surpasses nearly all baselines, with less than a 1% F1 score difference from GPT4TS. Notably, while TSDE doesn\u2019t outperform GPT4TS, it\u2019s important to consider that GPT4TS benefits from a pretrained LLM (GPT-2) with about 1.5 billion model parameters. TSDE, in contrast, relies on just two single-layer Transformer encoders (<0.3 million parameters), demonstrating its competitive edge despite having significantly fewer model parameters. 4.3 Classification To further inspect the discriminative power of the pretrained TSDE embedding, we utilize the labeled PhysioNet dataset to evaluate TSDE\u2019s performance on a binary classification downstream task. This dataset, marked by in-hospital mortality labels for each patient, features MTS with over 80% missing values. To address this, we pretrain TSDE for 2,000 epochs to impute the raw MTS. Subsequently, we train a simple MLP for 40 epochs to perform mortality classification. Given the imbalanced nature of PhysioNet labels, we assess our model\u2019s efficacy with AUROC as in [6, 24]. We benchmark TSDE against a diverse range of established MTS classification methods, categorized into 3 groups with a total of 12 methods: (1) heuristic methods: mean/forward imputation [24, 40], (2) GP/VAE based models: GP [61], VAE [35], HI-VAE [54], GP-VAE [24], and (3) RNN based models: GRUI-GAN [45], GRU-D [10], M-RNN [95] and BRITS variants [6]. As shown in Table 6, TSDE surpasses all existing baselines and is on par with the state-of-the-art BRITS baseline. It is worth noting that BRITS achieves that performance by employing a sophisticated multi-task learning mechanism tailored for classification tasks. 
In contrast, our method achieves top-tier results by simply finetuning a simple MLP. TSDE\u2019s remarkable performance, especially in challenging classification scenarios with significant class imbalance (\u223c10% positive classes), highlights its ability to learn generic embeddings well-suited for downstream MTS classification tasks. Senane and Cao, et al. Table 5: Anomaly detection: baseline results are cited from Table 27 of [106]; higher scores indicate better performance; the best and second best results are in bold and underlined, respectively. Models SMD MSL SMAP SWaT PSM Avg. F1 P R F1 P R F1 P R F1 P R F1 P R F1 Transformer 83.6 76.1 79.6 71.6 87.4 78.7 89.4 57.1 69.7 68.8 96.5 80.4 62.7 96.6 76.1 76.9 LogSparseT. 83.5 70.1 76.2 73.0 87.4 79.6 89.1 57.6 70.0 68.7 97.3 80.5 63.1 98.0 76.7 76.6 Reformer 82.6 69.2 75.3 85.5 83.3 84.4 90.9 57.4 70.4 72.5 96.5 82.8 59.9 95.4 73.6 77.3 Informer 86.6 77.2 81.6 81.8 86.5 84.1 90.1 57.1 69.9 70.3 96.7 81.4 64.3 96.3 77.1 78.8 AnomalyT.\u2020 88.9 82.2 85.5 79.6 87.4 83.3 91.8 58.1 71.2 72.5 97.3 83.1 68.3 94.7 79.4 80.5 Pyraformer 85.6 80.6 83.0 83.8 85.9 84.9 92.5 57.7 71.1 87.9 96.0 91.8 71.7 96.0 82.1 82.6 Autoformer 88.1 82.3 85.1 77.3 80.9 79.0 90.4 58.6 71.1 89.8 95.8 92.7 99.1 88.1 93.3 84.3 NonStation. 88.3 81.2 84.6 68.5 89.1 77.5 89.4 59.0 71.1 68.0 96.7 79.9 97.8 96.8 97.3 82.1 DLinear 83.6 71.5 77.1 84.3 85.4 84.9 92.3 55.4 69.3 80.9 95.3 87.5 98.3 89.3 93.5 82.5 LightTS 87.1 78.4 82.5 82.4 75.8 78.9 92.6 55.3 69.2 92.0 94.7 93.3 98.4 96.0 97.1 84.2 FEDformer 87.9 82.4 85.1 77.1 80.1 78.6 90.5 58.1 70.8 90.2 96.4 93.2 97.3 97.2 97.2 85.0 ETSformer 87.4 79.2 83.1 85.1 84.9 85.0 92.2 55.7 69.5 90.0 80.4 84.9 99.3 85.3 91.8 82.9 PatchTS. 87.3 82.1 84.6 88.3 71.0 78.7 90.6 55.5 68.8 91.1 80.9 85.7 98.8 93.5 96.1 82.8 TimesNet* 87.9 81.5 84.6 89.5 75.4 81.8 90.1 56.4 69.4 90.7 95.4 93.0 98.5 96.2 97.3 85.2 GPT4TS\u2021 88.9 85.0 86.9 82.0 82.9 82.4 90.6 60.9 72.9 92.2 96.3 94.2 98.6 95.7 97.1 86.7 TSDE\u2021 87.5 82.2 84.8 90.1 84.5 87.2 91.4 56.9 70.1 98.2 92.9 92.5 98.6 90.7 94.5 85.8 * Reproduced with https://github.com/thuml/Time-Series-Library. \u2020 Reconstruction error is used as joint criterion for fair comparison. \u2021 GPT4TS leverage a pretrained LLM (GPT-2) with 1.5B parameters, while TSDE merely uses two single-layer Transformer encoders. Table 6: Classification performance on PhysioNet measured with AUROC. The baseline results are sourced from Table 2 of [6] and Table 3 of [24]. Models AUROC Mean imp. [24, 40] 0.70 \u00b1 0.000 Forward imp. [24, 40] 0.71 \u00b1 0.000 GP [61] 0.70 \u00b1 0.007 VAE [35, 40] 0.68 \u00b1 0.002 HI-VAE [54] 0.69 \u00b1 0.010 GRUI-GAN [45] 0.70 \u00b1 0.009 GP-VAE [24] 0.73 \u00b1 0.006 GRU-D [10] 0.83 \u00b1 0.002 M-RNN [95] 0.82 \u00b1 0.003 BRITS-LR [6]\u2020 0.74 \u00b1 0.008 BRITS-RF [6]* 0.81 \u00b1 (N/A) BRITS [6] 0.85 \u00b1 0.002 TSDE 0.85 \u00b1 0.001 \u2020 Logistic Regression (LR) on imputed PhysioNet data. * Train Random Forest (RF) on imputed PhysioNet data. 4.4 Clustering MTS data often lack annotations, making supervised learning inapplicable. In such scenarios, unsupervised clustering is a valuable method for uncovering intrinsic patterns and classes. We utilize the same pretrained TSDE model from our classification experiments (trained on PhysioNet with a 10% missing ratio) to evaluate the clustering performance of TSDE embeddings. Initially, we generate MTS embeddings using TSDE\u2019s pretrained embedding function. 
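The unsupervised clustering evaluation described next (2D projection followed by density-based clustering, scored with RI/ARI) can be sketched as below; `tsde_embed` is a placeholder for the pretrained embedding function, and the UMAP/DBSCAN hyperparameters are illustrative defaults rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import rand_score, adjusted_rand_score
import umap

def cluster_embeddings(series, labels, tsde_embed):
    """Project per-series TSDE embeddings to 2D with UMAP, cluster with DBSCAN,
    and score the clusters against ground-truth labels with RI / ARI."""
    # One flat embedding vector per series (K x L x C -> vector).
    emb = np.stack([tsde_embed(x).reshape(-1) for x in series])

    proj = umap.UMAP(n_components=2, random_state=0).fit_transform(emb)
    clusters = DBSCAN(eps=0.5, min_samples=5).fit_predict(proj)

    return {
        "RI": rand_score(labels, clusters),
        "ARI": adjusted_rand_score(labels, clusters),
        "projection": proj,
        "clusters": clusters,
    }
```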
For simplicity and visual clarity, these embeddings are projected into a 2D space using UMAP (uniform manifold approximation and projection) [49]. Subsequently, DBSCAN (density-based spatial clustering of applications with noise) [23] is applied to the 2D projections to obtain clusters.

Figure 3: Clustering of (a) raw MTS (ARI=0.010, RI=0.502), (b) the TSDE embedding of raw MTS (ARI=0.426, RI=0.737), and (c) the TSDE embedding of TSDE-imputed MTS (ARI=0.302, RI=0.684). Marker shapes denote ground-truth binary labels; colors indicate DBSCAN [23] clusters after UMAP [49] dimension reduction.

As shown in Figure 3, clustering quality is assessed across three data types: (a) raw MTS, (b) TSDE embeddings of raw MTS, and (c) TSDE embeddings of TSDE-imputed MTS. The ground-truth binary labels are indicated by two distinct marker shapes: circles and triangles. With raw MTS, as seen in Figure 3(a), the clusters are unfavourably intertwined, with data points from both classes intermingling. The TSDE embeddings, whether derived from raw or imputed MTS, exhibit substantially improved cluster differentiation. They align more precisely with the ground-truth classes, indicating TSDE's capability to capture data nuances. Furthermore, the negligible performance disparity between Figures 3(b) and 3(c) suggests that TSDE embeddings can be used directly for MTS clustering without the need for imputation. This consistency is likely because our encoders encapsulate missing-data traits and seamlessly integrate these subtleties into the embeddings. To quantify the clustering, given the presence of labels, we calculate the RI (Rand index) [60] and ARI (adjusted Rand index) [30]; these metrics are reported for each setup in Figure 3. The RI and ARI values align with the qualitative observations discussed above, further substantiating our findings.

4.5 Embedding Visualization

Figure 4: TSDE embedding visualization of (a) trend, (b) seasonal, and (c) noise components from a synthetic MTS.

To substantiate the representational efficacy of TSDE embeddings, we undertake a visualization experiment on synthetic MTS data, as showcased in Figure 4. The data comprises three distinct UTS: (a) a consistently ascending trend, (b) a cyclical seasonal signal, and (c) a white-noise component. Each UTS embedding has two dimensions (L×33); for a lucid depiction, we cluster the second dimension by treating it as 33 samples of length L each, and visualize the centroids of these clusters. The embeddings, pretrained on the entire synthetic MTS, vividly encapsulate the joint encoding effects of all series. The trend's embedding delineates the series' progression, evident from the gradual color-saturation changes that embody its steady evolution. The seasonal signal's embedding mirrors its inherent cyclicity, with color oscillations reflecting its periodic nature. Finally, the noise component's embedding exhibits sporadic color-band patterns (with subtle traces of seasonal patterns), capturing its inherent randomness.

5 CONCLUSION

In this paper, we propose TSDE, a novel SSL framework for TSRL.
TSDE, the first of its kind, effectively harnesses a diffusion process, conditioned on an innovative dual-orthogonal Transformer encoder architecture with a crossover mechanism, and employs a unique IIF mask strategy. Our comprehensive experiments across diverse TS analysis tasks, including imputation, interpolation, forecasting, anomaly detection, classification, and clustering, demonstrate TSDE\u2019s superior performance compared to state-of-the-art models. Specifically, TSDE shows remarkable results in handling MTS data with high missing rates and various complexities, thus validating its effectiveness in capturing the intricate MTS dynamics. Moreover, TSDE not only significantly accelerates inference speed but also showcases its versatile embeddings through qualitative visualizations, encapsulating key MTS characteristics. This positions TSDE as a robust, efficient, and versatile advancement in MTS representation learning, suitable for a wide range of MTS tasks. Future work will focus on enabling the pretrained TSDE to execute IIF tasks independently, without the need for the diffusion block."
+ }